Wrong Side of Memphis

Brexit

I know politics is not the usual subject of this blog, but I want to make an exception. It is right now the 31st of January 2020, the day the United Kingdom leaves the EU.


A bit of background first: I am a Spaniard who has been living in Ireland for the last 10 years. I’ve always been influenced by British culture, I guess through music and literature, and at some point I considered moving to the UK for work.

When I read the results of the referendum, back in 2016, I was utterly in shock. I think it’s the most disbelief I’ve ever felt about a political event. It felt completely surreal. So much so that I changed my Internet avatar to include an EU flag. This may seem silly to you, but I keep my avatar as my persona, and I’m very protective of it. To put things in context, the only other time I added a flag to my avatar was after the train bombing attacks in Madrid in 2004. I feel deeply European, and feel “at home” not only in Ireland, but also when travelling to France, Italy, Portugal or Germany…

And this whole Brexit process is so, so frustrating. I’ve been keeping a close eye on UK politics over the last few years, and I’ve been surprised by the sheer misunderstanding of, well, mostly everything.

From the point of view of Ireland, it also has troubling implications. The Irish border is an incredibly complicated problem, but it was dismissed for a long time, and even now I’m not sure most English politicians properly understand the issue.

This post sat in draft for months while I tried to capture my thoughts and feelings, but it’s just so difficult to write something that doesn’t feel incredibly silly, outdated or redundant.

During these almost four years, I think we collectively passed through a lot of states, from incredulity to relief, to acceptance to Schadenfreude, sometimes all at the same time. This long and complicated process has also exacerbated nationalistic feelings, something that I don’t like and that even scares me. Probably at this point everyone is tired and happy to see it move to the next stage.

There’s still emotion attached. This rare moment in the European Parliament was particularly moving:

:___(

Still, inside me there’s that shock, and the nagging feeling that this is a step in the opposite direction from where I’d like to move… It’s also the start of another part of the process, full of painful negotiations and likely disappointments…

This is not over yet. And won’t be for a while…

A decade working in software development in Ireland

Ten years ago today, I moved to Dublin to start a new job.

I can’t guarantee these views from the office

It’s been ten years of a lot of activity, and of great personal and professional development, across four different jobs. It also coincides almost exactly with “the 2010s”. Some details:

  • Naturally, the English I have now is not the English of ten years ago. Although it surprises me that there are still moments when you don’t understand something, or sentences that trip you up. This is me being whiny more than anything else.
  • I’ve been developing mostly in Python all this time, which was my initial goal. It’s a language I still love, and it has done nothing but grow during this decade. In all this time I have, obviously, learned a lot about how to use it. My code is much more Pythonic now than it was before.
  • I’ve been relatively active in the local Python community, and I’ve given talks both at monthly meetups and at the PyCons of these years. I only missed the last one (2019) because I had another commitment that weekend.
  • I’ve learned a huge amount about topics like web services, scalability, databases (SQL and NoSQL), systems architecture, microservices, DevOps, monitoring, and a thousand more… And not just pure, hard technology: I’ve had fabulous colleagues from whom to learn things like conflict management, customer service, how to treat other developers, etc.
  • Speaking of colleagues, I’ve been on truly international teams. Not just people from all over Europe, but from all over the world. For a few months I worked on a team with members from five continents. You learn a lot, and the world becomes a little smaller; you associate remote places with people you know.
  • I’ve worked at companies that handled millions of concurrent users. The preparation and management work behind that is impressive.
Ten years from one to the other
Don’t they look nice
  • I definitively moved over to the dark side of using Macs and Apple products at the beginning of the decade, and I’m still there… Their being “Unix with a nice interface” is wonderful for development.
  • Looking back at these old articles about the things I use for work, it’s funny that I still use practically the same tools. Now that I have quite a lot of practice with Vim, it’s going to be hard to change…
  • I’ve travelled quite a bit, for work and outside of it. To many places, like Canada, the United States, the Emirates, Germany, Italy, Portugal… And many places in Spain.

TECHNOLOGIES

In terms of technologies, the two that have marked me the most, in the sense of being a revolution in how I work, are Git and the Amazon web services (AWS) at the beginning of the decade, and Docker/Kubernetes towards the end.

  • Git has become the de facto standard for version control, in great part thanks to the great work of GitHub.
  • AWS is incredibly relevant, and it lets you use infrastructure (mostly, but not limited to, cloud servers and associated elements like storage) in a very simple way. You can set up your own data centre with 50 servers in one afternoon, if you want, and tear it down the next day. The downside is that it’s pricey, a bit like the difference between taking a taxi and buying a car. On top of that, the naming of everything is arcane and confusing, and it’s extremely hard to know in advance how much anything is going to cost.
  • Docker lets you work with containers, which are small self-contained processes that are easy to run in a standard way. The first idea you get when using them is that they’re a kind of lightweight virtual machine, but the more accurate way to see them is as processes that have a filesystem just for themselves. Working with containers simplifies many traditional problems of production deployments, where control of the environment is crucial (and always caused problems of one kind or another).
  • Kubernetes is the evolution of containers, making it easier to use several at once and to coordinate them with each other. It’s a hot and fashionable topic right now. Kubernetes is not the only option, but it’s the one getting the most attention. Using Kubernetes and containers, you can abstract services from the hardware they run on, separating the management of the cluster (a collection of servers, physical or virtual, that provide the resources) from the services that operate on them. All of this can be done through configuration files, so changing the logical infrastructure (which services are deployed and how they connect to each other) becomes a matter of changing a few configuration files, instead of managing servers directly and wiring them together.
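For example, a minimal Kubernetes Deployment file could look like this (a sketch; the names and the image are hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 3                        # scale by editing this number
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
      - name: example-service
        image: example/service:1.0   # hypothetical container image
        ports:
        - containerPort: 8000

Applying it with kubectl apply -f deployment.yaml creates or updates the service; changing the logical infrastructure is just a matter of editing the file and applying it again.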

All these technologies have their learning curve and need time to really understand them.

Truth be told, I’m enormously satisfied with these last ten years… On to the next decade!

Interviewed about microservices

I got interviewed about microservices and talked a bit about my last book, Hands-On Docker for Microservices with Python.

It was an interesting view on what the most important areas of microservices are, and on when migrating from a monolith architecture is a good idea. We also talked about related tools like Python, Docker or Kubernetes.

Check out the interview here.

Hands-On Docker for Microservices with Python Book

Last year I published a book, and I liked the experience, so I wrote another!

I like the cover. Look at all those microservices!

The book is called Hands-On Docker for Microservices with Python, and it goes through the different steps to move from a monolith architecture towards a microservices one.

It is written from a very practical standpoint, and aims to cover all the different elements involved: from implementing a single RESTful web microservice programmed in Python, to containerising it in Docker, creating a CI pipeline to ensure that the code is always high quality, and deploying it along with other microservices in a Kubernetes cluster.

Most of the examples are meant to be run locally, but a chapter is included on creating a Kubernetes cluster in the cloud using AWS services. There are also other chapters dealing with production-related issues, like observability or handling secrets.

Other than talking about technologies like Python, Docker and Kubernetes, or techniques like Continuous Integration or GitOps, I also talk about the different challenges that teams and organisations face in adopting microservices, and how to structure the work properly to reduce problems.

I think the book will be useful for people dealing with these problems, or thinking of making the move. Kubernetes, in particular, is a new tool, and there are not that many books dealing with it with a “start to finish” approach, looking at the whole software lifecycle, rather than only “I want to learn this piece of tech in isolation”.

Writing it also took a lot of time that I could have been using to write in this blog, I guess. Writing a book is a lot of hard work, but I’m proud of the result. I’m very excited to have it finally released!

You can check the book out on the Packt website and on Amazon. Let me know what you think!

Python Automation Cookbook on sale for Black Friday

Black Friday offer!

For this week only, you can get the book on the Packt web page from $10 in ebook format.

Python Automation Cookbook

You can get more information about the book on that page or in this post.

Package and deploy a Python module in PyPI with Poetry, tox and Travis

I’ve been working for the last couple of days on a small command line tool in Python, and I took the opportunity to check out Poetry a little bit, which seems to help with packaging and distributing Python modules.

Enter pyproject.toml

A very promising development in the Python ecosystem is the new pyproject.toml file, introduced in PEP 518. This file aims to replace the old setup.py with a config file, to avoid executing arbitrary code, as well as to clarify its usage.


Poetry in no motion.

Poetry generates a new project, and includes the corresponding pyproject.toml.


[tool.poetry]
name = "std_encode"
version = "0.2.1"
description = "Encode and decode files through the standard input/output"
homepage = "https://github.com/jaimebuelta/std_encode"
repository = "https://github.com/jaimebuelta/std_encode"
authors = ["Jaime Buelta "]
license = "MIT"
readme = "README.md"


[tool.poetry.dependencies]
python = ">=2.7, !=3.0, !=3.1, !=3.2, !=3.3, !=3.4, <4"

[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"

Most of it is generated automatically by Poetry, but there are a couple of interesting bits:

Python compatibility

[tool.poetry.dependencies]
python = ">=2.7, !=3.0, !=3.1, !=3.2, !=3.3, !=3.4, <4"

This makes it compatible with Python 2.7 and Python 3.5 and later.

Including documentation automatically

The added README.md will be automatically included in the package.

Easy dependencies management

The dependencies are clearly stated; in particular, there is a distinction between dev dependencies and regular dependencies. Poetry also creates a poetry.lock file that pins the exact versions.
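For example, dependencies are added through Poetry itself, which updates both pyproject.toml and poetry.lock (the package names here are just illustrative):

$ poetry add requests
$ poetry add --dev pytest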

Scripts and entry points

This package creates command line tools. This is easy to do by describing scripts:


[tool.poetry.scripts]
std_decode = 'std_encode:console.run_sd'
std_encode = 'std_encode:console.run_se'

These will call the functions run_sd and run_se in the console.py file.
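As a reference, a minimal sketch of what those entry points in console.py could look like (hypothetical and simplified, not the real std_encode code):

import sys


def run_se():
    # Entry point for the std_encode script: read stdin, write to stdout
    for line in sys.stdin:
        sys.stdout.write(line)  # the real tool encodes the content here


def run_sd():
    # Entry point for the std_decode script: read stdin, write to stdout
    for line in sys.stdin:
        sys.stdout.write(line)  # the real tool decodes the content here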

Testing the code with cram and tox

Cramtastic!

As the module is aimed at being a command line tool, the best way of testing it is through command line actions. A great tool for that is cram, which lets you describe a test file as a series of command line actions and the expected standard output. For example:


Setup

  $ . $TESTDIR/setup.sh

Run test

  $ echo 'test line' | std_decode
  test line

Any line starting with $ is a command, and any lines that follow are the expected result, as it will appear in the console. There’s a plugin for pytest, so it can be integrated into a bigger test suite with other Python tests.
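For example, with the pytest-cram plugin installed (an assumption here; it collects the .t cram files automatically), the cram suite runs as part of a regular pytest call:

$ pip install pytest-cram
$ pytest tests/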

Ensuring installation and tests with tox

To run the tests, the process should be:

  • Generate a package with your changes.
  • Install it in a virtual environment.
  • Run all the cram tests, which will call the installed command line scripts.

The best way of doing this is to use tox, which also adds the possibility of running everything over different Python versions.


All in all they’re just another cog in the tox.

To do so, we create a tox.ini file


[tox]
isolated_build = true
envlist = py37,py27

[testenv]
whitelist_externals = poetry
commands =
  poetry install -v
  poetry run pytest {posargs} tests/

This defines two environments to run the tests, Python 2.7 and 3.7; for each one, poetry installs the package and then runs the tests using pytest.

Running

$ tox

runs the whole suite, but while testing, to speed up development, you can instead do

$ tox -e py37 -- -k my_test

The -e parameter runs only one environment, and anything after the -- is passed to pytest, to select only a subset of tests or any other option.

Locally, this allows running and iterating on the package. But we also want to run the tests remotely, in CI fashion.

CI and deployment with Travis

Travis-CI is a great tool to set up in your open source repo. Enabling it for your GitHub repo can be done very quickly. After enabling it, we need to define the .travis.yml file:

language: python
python:
- '2.7'
- '3.5'
- '3.6'
matrix:
  include:
  - python: 3.7
    dist: xenial
before_install:
- pip install poetry
install:
- poetry install -v
- pip install tox-travis
script:
- tox
before_deploy:
- poetry config http-basic.pypi $PYPI_USER $PYPI_PASSWORD
- poetry build
deploy:
  provider: script
  script: poetry publish
  on:
    tags: true
    condition: "$TRAVIS_PYTHON_VERSION == 3.7"
env:
  global:
  - secure: [REDACTED]
  - secure: [REDACTED]

The first part defines the different builds to run, for Python versions 2.7, 3.5, 3.6 and 3.7.

Version 3.7 needs to be executed on Ubuntu Xenial (16.04), as by default Travis uses Trusty (14.04), which doesn’t support Python 3.7.

language: python
python:
- '2.7'
- '3.5'
- '3.6'
matrix:
  include:
    - python: 3.7
      dist: xenial

The next part describes how to run the tests. The tox-travis package is installed to integrate both seamlessly. This makes Travis run the versions of Python that are not included in tox.ini.

before_install:
- pip install poetry
install:
- poetry install -v
- pip install tox-travis
script:
- tox

Finally, a deployment part is added.

before_deploy:
- poetry config http-basic.pypi $PYPI_USER $PYPI_PASSWORD
- poetry build
deploy:
  provider: script
  script: poetry publish
  on:
    tags: true
    condition: "$TRAVIS_PYTHON_VERSION == 3.7"

The deploy is configured to happen only if a git tag is set, and if the build is using Python 3.7. The last condition can be removed, but then the package would be uploaded several times; Poetry ignores the upload in that case, but it’s just wasteful.

The package is built before being deployed with poetry publish.

To properly configure access to PyPI, we need to store our login and password in secure variables. To do so, install the travis command line tool, and encrypt the secrets, including the variable name:

$ travis encrypt PYPI_PASSWORD=<PASSWORD> --add env.global
$ travis encrypt PYPI_USER=<USER> --add env.global

The line


poetry config http-basic.pypi $PYPI_USER $PYPI_PASSWORD

will configure poetry to use these credentials and upload the packages correctly.

Release flow


I imagine the builds entering an assembly line while Raymond Scott’s Powerhouse plays

With all this in place, to prepare a new release of the package, the flow looks like this (a command sketch follows the list):

  1. Develop the new functionality and its commits. Travis will run the tests to ensure that the build works as expected. This may include bumping the dependencies with poetry update.
  2. Once everything is ready, create a new commit with the new version information. This normally includes:
    1. Running poetry version {patch|minor|major|...} to bump the version.
    2. Making any manual changes, like release notes, documentation updates or internal version references.
  3. Commit and verify that the build is green in Travis.
  4. Create a new tag (or GitHub release) with the version. Remember to push the tag to GitHub.
  5. Travis will upload the new version automatically to PyPI.
  6. Spread the word! Your package deserves to be known!
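Condensed into commands, a release could look like this sketch (the bump rule and the tag name are illustrative):

$ poetry version minor             # bump the version in pyproject.toml
$ git commit -a -m 'Release v3.4'  # commit the release changes
$ git push                         # wait for a green build in Travis
$ git tag v3.4
$ git push --tags                  # Travis builds the tag and publishes to PyPI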

The future, suggestions and things to keep an eye on

There are a couple of elements that could be a little bit easier in the process. As pyproject.toml and Poetry are quite new, there are some rough edges that could be improved.

Tags, versions and releases

Poetry has a version command to bump the version, but its only effect is to change the pyproject.toml file. I’d love to see an integration that updates more elements, including internal versions like the one in the automatically generated __init__.py, or that asks for release notes and appends them to a standard document.
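In the meantime, a small helper script can cover part of that gap. A minimal sketch, assuming the toml package and a src/package/__init__.py layout (both hypothetical here):

import re

import toml

# Read the canonical version from pyproject.toml
pyproject = toml.load('pyproject.toml')
version = pyproject['tool']['poetry']['version']

# Rewrite __version__ in the package to match
path = 'src/package/__init__.py'
with open(path) as stream:
    source = stream.read()
source = re.sub(r'__version__ = ".*"',
                '__version__ = "{}"'.format(version), source)
with open(path, 'w') as stream:
    stream.write(source)

print('Synced __version__ to {}'.format(version))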

There’s also no integration to generate a git tag or GitHub release in the same command. You need to perform all these steps manually, while it seems like they should be part of the same action.

Something like:

$ poetry version
Generating version 3.4
Append release notes? [y/n]:
Opening editor to add release notes
Saved
A new git commit and tag (v3.4) will be generated with the following changes:
pyproject.toml
- version: 3.3
+ version: 3.4
src/package/__init__.py
- __version__ = "3.3"
+ __version__ = "3.4"
RELEASE_NOTES.md
+ Version 3.4 [2018-10-28]
+ ===
+ New features and bugfixes
Continue? [y/n/c(change tag name)]
Creating and pushing...
Waiting for CI confirmation
CI build is green
Creating new tag. Done.
Create a new release in GitHub [y/n]

This is a wishlist, obviously, but I think it would fit the flow of a lot of GitHub releases to PyPI.


Ready for release!

Travis support for Python 3.7 and Poetry

I’m pretty sure that Travis will update its support for Python 3.7 quite soon. Having to define a different environment feels awkward, though I understand the underlying technical issues. It’s not a big deal, but I imagine they’ll fix it so the definition is the same whether you work on 3.6 or 3.7; 3.7 was released four months ago at the time of writing.

The other possible improvement is to add pyproject.toml support. At the moment, uploads to PyPI from setup.py are natively supported, so adding support for pyproject.toml would be amazing. I imagine it will be added as more and more projects use this way of packaging.

Final words

Having a CI running properly and a deployment flow in place is actually a lot of work. Even doing it with great tools like the ones discussed here, there are a lot of details to take into account and bits that need polishing. It took me around a full day of experimentation to get this setup, even though I had worked with Travis previously (I configured it for ffind some time ago).

Poetry is also a very promising tool, and I’ll keep checking it. The packaging world in Python is complicated, but there has been a lot of work recently to improve it.

Python Automation Cookbook

So, great news, I wrote a book and it’s available!


Receiving your own physical book is very exciting!

It’s called Python Automation Cookbook, and it’s aimed at people who already know a bit of Python (not necessarily developers only), but would like to use it to automate common tasks like searching files, creating different kinds of documents, adding graphs, sending emails or text messages, etc. It’s written in the cookbook format, so it’s a collection of recipes that can be read independently, though there are always references showing how to combine them to create more complex flows.

The book is available both on the Packt website and on Amazon. There’s more information about the book there, like previews and samples, in case anyone is interested…

This is my first written book, so it’s all very exciting. The process itself was a lot of work, but not without its fun parts. I’m also quite proud of having written it in English, which is not my first language.

A Django project template for a RESTful Application using Docker – PyCon IE slides

Just putting here the slides for my presentation at PyCon Ireland, which is a follow-up to this blog post. I’ll try to include the video if/when it’s available.

I hand-drew all the slides myself, so it was a lot of fun work!

Enjoy!

Notes about ShipItCon 2017

Disclaimer: I personally know, and have worked with, a good portion of the conference organizers and speakers. I’ve labelled them with an asterisk*.

ShipItCon finally took place last Friday. I think it’s quite impressive, given the short time since it was announced and this being the first edition, that it was so well organized. The venue was very good (and fairly unusual for a tech conference), and all the usual things that are easy to take for granted (food, space, projector, sound, etc.) worked like clockwork. Kudos to the organizers.

The conference was oriented towards releasing online services, with special emphasis on Continuous Integration/Delivery. I think that focusing a conference on this kind of topic is challenging, as talks need to be generic enough in terms of tools, but narrow enough to be useful. Conferences about a specific technology (like PyCon, RubyConf or LinuxCon) are more focused by concept.

The following are some notes, ideas and follow-up articles that I took. Obviously, they are biased towards the kind of things I find more interesting. I’ll try to link the presentation slides if/once they’re available.

  • The keynote by the Romero family was a great story and addressed a lot of points specific to the game industry (like the design challenges). It was also the exception in talking about shipping something other than a service: a game (on Steam and iOS). I played a little Gunman Taco Truck over the weekend!
    • “Ship a game while on a ship”. They released part of the game while on the Queen Elizabeth cruise ship, crossing the Atlantic.
  • Release often and use feature toggles, detaching the code release from the feature release. This point was made in Frederick Meyer’s talk, and I’ve heard it recently in other places too.
    • Friday night releases make me cringe, but they can make sense if the weekend is the lowest activity point of your customers.
    • Dependency trees grow more and more complex, to the point where no one understands them anymore and only automated tools can plot them.
    • Challenges in handling data in CI: use production data? A subset? Fake data? Redacted data? Performance analysis can be tricky.
    • Automate what you care about
  • The need for early testing, including integration/system/performance, was the theme of Chloe Condon’s talk. Typically, a lot of testing is performed at the “main branch” level (after a feature is merged back) that could be prepared in advance, giving better and faster feedback to developers. Test early, test often. She presented Codefresh, which seems an interesting cloud CI tool aimed at working with containers.
  • Lauri Apple talked about communication and how important READMEs and documentation are for projects, both internal and external. WHAT to build is a key aspect that shouldn’t be overlooked.
    • READMEs should include a roadmap, as well as info about installing, running and configuring the code.
    • This project offers help, review and advice for READMEs. I’ll definitely submit ffind for a review (after I review it and polish it a little bit myself).
    • She talked about the Open Organization Maturity Model, a framework for assessing how open organizations are.
    • A couple of projects at Zalando that caught my eye:
      • Patroni, an HA template for PostgreSQL
      • Zalenium, which distributes a Selenium Grid over Docker to speed up Selenium tests.
      • External DNS, to help configure external DNS access (like AWS Route 53 or Cloudflare) to a Kubernetes cluster.
  • If it hurts, do it more frequently. A great quote for Continuous Delivery and automated pipelines. Darin Egan talked about mindfulness principles, how the status quo gets challenged, and how driving change opposes inertia.
  • The main point of Ingrid Epure’s talk was the integration of security practices into the development process, and the differences between academia and engineering practices.
    • Linters can play a part in enforcing security practices, as well as automating formatting to keep format differences out of the review process.
    • Standardizing the logs is also a great idea: Using Canonical Log Lines for Online Visibility. I’ve talked before about the need to increase logging and to generate logs during the development process.
  • Eric Maxwell talked about the need to standardise the “upper levels” of the apps, mainly related to logging and metrics, and about making applications (“Modern Applications”) more aware of their environment (choreography vs orchestration) and abstracted from the underlying infrastructure.
    • He presented habitat.sh, a tool aimed at working with these principles.
    • Packaging the application code and letting the tool do the heavy lifting on the “plumbing”.
  • The pipeline at Intercom was discussed by Eugene Kenny, along with the differences between “the ideal pipeline” and “the reality” of making dozens of deployments every day.
    • For example, fully testing and deploying only the latest change in the pipeline, speeding up deployments at the expense of less separation between changes.
    • Or allowing the pipeline to be locked when things are broken.
    • Follow up article: Continuous Deployment at Instagram
  • Observability is an indispensable property of online services: the ability to check what’s going on in production systems. Damien Marshall* had this concept of graphulsion, which I can only share.

https://twitter.com/damo_marshall/status/614165915987480576

He gave some nice ideas on observability through the whole life cycle:

Development:

  • Make reporting logs and metrics simple
  • Account for the effort to do observability work
  • Standardize what to report. The three most useful metrics are Request Rate, Error Rate and Duration per Request (see the sketch right after this list).
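A minimal sketch of reporting those three metrics in Python, using the prometheus_client library (my own illustration, not from the talk):

from prometheus_client import Counter, Histogram, start_http_server

# The three basic metrics: request rate, error rate and duration
REQUESTS = Counter('requests_total', 'Total requests', ['endpoint'])
ERRORS = Counter('errors_total', 'Total errors', ['endpoint'])
DURATION = Histogram('request_duration_seconds', 'Request duration',
                     ['endpoint'])


def handle_request(endpoint):
    REQUESTS.labels(endpoint).inc()
    with DURATION.labels(endpoint).time():
        try:
            pass  # real request handling goes here
        except Exception:
            ERRORS.labels(endpoint).inc()
            raise


# Expose the metrics on port 8000 for scraping
start_http_server(8000)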

Deployment:

  • Do capacity planning. Know approximately the limits of your system and calculate its utilization (% of that limit)
  • Ship the observability

Production:

  • Make metrics easy to use
  • Centralise dashboard views across different systems
  • Good alerting is hard. Start simple and keep it simple.

  • Riot Games uses custom service generation to create skeletons, standardise good practices and reduce development time. Adam Comeford talked about those practices and how they implemented them.
    • Thinking inside the container.
    • Docker-gc is a tool to reduce the size of image repos, as they tend to grow very large very quickly.
  • Jacopo Scrinzi talked about defining Infrastructure as Code, making infrastructure changes through the same process as code (reviews, source control, etc.). In particular, using Terraform and Atlas (now Terraform Enterprise) to make automatic deployments, following CI practices for infrastructure.
    • Using modules in Terraform simplifies and standardises common systems.
  • The last keynote was about Skypilot, an initiative inside Demonware to deploy a game fully using Docker containers over Marathon/Mesos, in the cloud. It was given by Tom Shaw*, and the game was last year’s release of Skylanders. As I’ve worked at Demonware, I know how big an undertaking it is to prepare the launch of a game previously on dedicated hardware (and how much it is underused to avoid risks), so this is a huge improvement.

As evidenced by the amount of notes I took, I found the conference very interesting and full of ideas that are worth following up. I really hope for a ShipItCon 2018 full of great content.

ffind v1.2.0 released!

The new version of ffind, v1.2.0, is available on GitHub and PyPI. This version includes the ability to configure defaults through environment variables, and to force case-insensitive searches.

You can upgrade with

    pip install ffind --upgrade

This will be the latest version to support Python 2.6.


Happy searching!