Interviewed about microservices

I got interviewed about microservices and talked a bit about my last book, Hands-on Docker for Microservices with Python.

It was an interesting conversation about what the most important areas of microservices are and when migrating from a monolith architecture is a good idea, also touching on related tools like Python, Docker or Kubernetes.

Check it out here.

“Hands-On Docker for Microservices with Python” is now available!

Last year I published a book, and I liked the experience, so I wrote another!

I like the cover. Look at all those microservices!

The book is called Hands-On Docker for Microservices with Python, and it goes through the different steps to move from a monolith architecture towards a microservices one.

It is written from a very practical standpoint, and aims to cover all the different elements involved: from implementing a single RESTful web microservice in Python, to containerising it in Docker, creating a CI pipeline to ensure that the code is always high quality, and deploying it along with other microservices in a Kubernetes cluster.

Most of the examples are meant to be run locally, but a chapter is included on creating a Kubernetes cluster in the cloud using AWS services. There are also chapters dealing with production-related issues, like observability or handling secrets.

Other than talking about technologies, like Python, Docker and Kubernetes, or techniques like Continuous Integration or GitOps, I also talk about the different challenges that teams and organisations face when adopting microservices, and how to structure the work properly to reduce problems.

I think the book will be useful for people dealing with these problems, or thinking of making the move. Kubernetes, in particular, is a new tool, and there are not that many books approaching it from a “start to finish” perspective, looking at the whole software lifecycle, rather than only “I want to learn this piece of tech in isolation”.

Writing it also took a lot of time that I could have used writing in this blog, I guess. Writing a book is a lot of hard work, but I’m proud of the result. I’m very excited to have it finally released!

You can check the book out on the Packt website and on Amazon. Let me know what you think!

Python Automation Cookbook on sale for Black Friday

Black Friday offer!

For this week, you can get the book on the Packt web page from $10 in ebook format.

Python Automation Cookbook

You can get more information about the book in the page or in this post.

Package and deploy a Python module in PyPI with Poetry, tox and Travis

I’ve been working for the last couple of days on a small command-line tool in Python, and I took the opportunity to check out Poetry a little, which helps to package and distribute Python modules.

Enter pyproject.toml

A very promising development in the Python ecosystem is the new pyproject.toml file, presented in PEP 518. This file aims to replace the old setup.py with a config file, avoiding the execution of arbitrary code, as well as clarifying its usage.

Poetry in no motion. Photo by Pixabay on Pexels.com

Poetry generates a new project, and includes the corresponding pyproject.toml.


[tool.poetry]
name = "std_encode"
version = "0.2.1"
description = "Encode and decode files through the standard input/output"
homepage = "https://github.com/jaimebuelta/std_encode"
repository = "https://github.com/jaimebuelta/std_encode"
authors = ["Jaime Buelta "]
license = "MIT"
readme = "README.md"


[tool.poetry.dependencies]
python = ">=2.7, !=3.0, !=3.1, !=3.2, !=3.3, !=3.4, <4"

[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"

Most of it is generated automatically by Poetry, but there are a couple of interesting bits:

Python compatibility

[tool.poetry.dependencies]
python = ">=2.7, !=3.0, !=3.1, !=3.2, !=3.3, !=3.4, <4"

This makes it compatible with Python 2.7 and Python 3.5 and later.
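To double-check what a specifier like this accepts, you can evaluate it with the third-party packaging library (the same machinery pip uses for PEP 440 specifiers); this is just a sanity check on my part, not something Poetry requires:

```python
# Evaluate the version constraint with the 'packaging' library
# (pip install packaging); SpecifierSet implements PEP 440 rules.
from packaging.specifiers import SpecifierSet

spec = SpecifierSet(">=2.7, !=3.0, !=3.1, !=3.2, !=3.3, !=3.4, <4")

print("2.7" in spec)  # True: Python 2.7 is allowed
print("3.4" in spec)  # False: explicitly excluded
print("3.7" in spec)  # True: 3.5 and later are allowed
```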

Including documentation automatically

The added README.md will be automatically included in the package.

Easy dependencies management

The dependencies are clearly stated, and in particular the difference between dev dependencies and regular dependencies. Poetry also creates a poetry.lock file that pins the exact versions.
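For instance, a dev-dependencies section could look like this (the packages and version numbers here are illustrative, not copied from the real project):

```toml
[tool.poetry.dev-dependencies]
pytest = "^3.0"
pytest-cram = "^0.2"
tox = "^3.0"
```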

Scripts and entry points

This package creates command-line tools. This is easy to do by describing scripts:


[tool.poetry.scripts]
std_decode = 'std_encode:console.run_sd'
std_encode = 'std_encode:console.run_se'

They’ll call the functions run_sd and run_se in the console.py file.

Testing the code with cram and tox

Cramtastic!

As the module is aimed at being a command-line tool, the best way of testing it is through command-line actions. A great tool for that is cram. It allows describing a test file as a series of command-line actions and their expected standard output. For example:


Setup

  $ . $TESTDIR/setup.sh

Run test

  $ echo 'test line' | std_decode
  test line

Any line starting with $ is a command, and the lines following it are the expected result, as it would appear in the console. There’s a plugin for pytest, so it can be integrated into a bigger test suite with other Python tests.
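Under the hood, the idea is simply “run the command, capture its stdout, compare with the expected lines”. A toy version of that check (purely illustrative; this is not cram’s actual API) could be:

```python
import subprocess


def check(command, expected_output):
    """Run a shell command and compare its stdout with the expected text."""
    result = subprocess.run(command, shell=True,
                            capture_output=True, text=True)
    return result.stdout == expected_output


# Mirrors the cram example above: echo a line, expect it back.
print(check("echo 'test line'", "test line\n"))  # True
```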

Ensuring installation and tests with tox

To run the tests, the process should be:

  • Generate a package with your changes.
  • Install it in a virtual environment.
  • Run all the cram tests, which will call the installed command-line scripts.

The best way of doing this is to use tox, which also adds the possibility of running it over different Python versions.

All and all they’re just another cog in the tox. Photo by Pixabay on Pexels.com

To do so, we create a tox.ini file:


[tox]
isolated_build = true
envlist = py37,py27

[testenv]
whitelist_externals = poetry
commands =
  poetry install -v
  poetry run pytest {posargs} tests/

This defines two environments to run the tests, Python 2.7 and 3.7; for each one, Poetry installs the package and then runs the tests using pytest.

Running

$ tox

This runs the whole suite, but to speed up development while testing, you can instead do

$ tox -e py37 -- -k my_test

The parameter -e runs only one environment, and anything after the -- will be passed to pytest, to select only a subset of tests or set any other option.

Locally, this allows running and iterating on the package. But we also want to run the tests remotely, in CI fashion.

CI and deployment with Travis

Travis CI is a great tool to set up in your open source repo. Enabling it for your GitHub repo can be done very quickly. After enabling it, we need to define the .travis.yml file with the configuration.

language: python
python:
- '2.7'
- '3.5'
- '3.6'
matrix:
  include:
  - python: 3.7
    dist: xenial
before_install:
- pip install poetry
install:
- poetry install -v
- pip install tox-travis
script:
- tox
before_deploy:
- poetry config http-basic.pypi $PYPI_USER $PYPI_PASSWORD
- poetry build
deploy:
  provider: script
  script: poetry publish
  on:
    tags: true
    condition: "$TRAVIS_PYTHON_VERSION == 3.7"
env:
  global:
  - secure: [REDACTED]
  - secure: [REDACTED]

The first part defines the different builds to run, for Python versions 2.7, 3.5, 3.6 and 3.7.

Version 3.7 needs to be executed on Ubuntu Xenial (16.04), as by default Travis uses Trusty (14.04), which doesn’t support Python 3.7.

language: python
python:
- '2.7'
- '3.5'
- '3.6'
matrix:
  include:
    - python: 3.7
      dist: xenial

The next part describes how to run the tests. The package tox-travis is installed to seamlessly integrate both. This makes Travis run versions of Python that are not included in tox.ini.

before_install:
- pip install poetry
install:
- poetry install -v
- pip install tox-travis
script:
- tox

Finally, a deployment part is added.

before_deploy:
- poetry config http-basic.pypi $PYPI_USER $PYPI_PASSWORD
- poetry build
deploy:
  provider: script
  script: poetry publish
  on:
    tags: true
    condition: "$TRAVIS_PYTHON_VERSION == 3.7"

The deploy is configured to happen only if a git tag is set, and only on the build using Python 3.7. The last condition can be removed, but then the package would be uploaded several times; Poetry ignores the duplicates, but it’s just wasteful.

The package is built before deploying it with poetry publish.

To properly configure access to PyPI, we need to store our login and password in secure variables. To do so, install the travis command-line tool, and encrypt the secrets, including the variable names:

$ travis encrypt PYPI_PASSWORD=<PASSWORD> --add env.global
$ travis encrypt PYPI_USER=<USER> --add env.global

The line


poetry config http-basic.pypi $PYPI_USER $PYPI_PASSWORD

will configure poetry to use these credentials and upload the packages correctly.

Release flow

I imagine the builds entering an assembly line while Raymond Scott’s Powerhouse plays. Photo by Pixabay on Pexels.com

With all this in place, the flow to prepare a new release of the package will be like this:

  1. Develop the new functionality and commit it. Travis will run the tests to ensure that the build works as expected. This may include bumping the dependencies with poetry update.
  2. Once everything is ready, create a new commit with the new version information. This normally includes:
    1. Run poetry version {patch|minor|major|...} to bump the version.
    2. Set up any manual changes, like release notes, documentation updates or internal version references.
  3. Commit and verify that the build is green in travis.
  4. Create a new tag (or GitHub release) with the version. Remember to push the tag to GitHub.
  5. Travis will upload the new version automatically to PyPI.
  6. Spread the word! Your package deserves to be known!

The future, suggestions and things to keep an eye to

There are a couple of elements that could be a little bit easier in the process. As pyproject.toml and Poetry are quite new, there are a few rough edges that could be improved.

Tags, versions and releases

Poetry has a version command to bump the version, but its only effect is to change the pyproject.toml file. I’d love to see it update more elements, including internal versions like the one in the automatically generated __init__.py, or ask for release notes and append them to a standard document.
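In the meantime, keeping the internal version in sync can be scripted; here is a minimal sketch (the file layout and regular expressions are my assumptions, not a Poetry feature):

```python
import re


def sync_version(pyproject_text, init_text):
    """Copy the version declared in pyproject.toml content into the
    __version__ assignment of an __init__.py content."""
    match = re.search(r'^version\s*=\s*"([^"]+)"', pyproject_text,
                      re.MULTILINE)
    version = match.group(1)
    return re.sub(r'__version__\s*=\s*"[^"]*"',
                  '__version__ = "{}"'.format(version), init_text)


print(sync_version('version = "0.2.1"', '__version__ = "0.0.0"'))
# __version__ = "0.2.1"
```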

There’s also no integration to generate a git tag or GitHub release in the same command. You need to perform all these steps manually, while it seems like they should be part of the same action.

Something like:

$ poetry version
Generating version 3.4
Append release notes? [y/n]:
Opening editor to add release notes
Saved
A new git commit and tag (v3.4) will be generated with the following changes:
pyproject.toml
- version: 3.3
+ version: 3.4
src/package/__init__.py
- __version__ = "3.3"
+ __version__ = "3.4"
RELEASE_NOTES.md
+ Version 3.4 [2018-10-28]
+ ===
+ New features and bugfixes
Continue? [y/n/c(change tag name)]
Creating and pushing...
Waiting for CI confirmation
CI build is green
Creating new tag. Done.
Create a new release in GitHub [y/n]

This is a wishlist, obviously, but I think it would fit the flow of a lot of GitHub-to-PyPI releases.

Ready for release! Photo by rawpixel.com on Pexels.com

Travis support for Python 3.7 and Poetry

I’m pretty sure that Travis will update its support for Python 3.7 quite soon. Having to define a different environment feels awkward, though I understand the underlying technical issues. It’s not a big deal, but I imagine they’ll fix it so the definition is the same whether you work on 3.6 or 3.7; 3.7 was released four months ago at this point.

The other possible improvement is to add pyproject.toml support. At the moment, setup.py uploads to PyPI are natively supported, so adding support for pyproject.toml would be amazing. I imagine it will be added as more and more projects use this way of packaging.

Final words

Having a CI running properly, plus a deployment flow, is actually a lot of work. Even doing it with great tools like the ones discussed here, there are a lot of details to take into account and bits that need polishing. It took me around a full day of experimentation to get this setup, even though I had worked with Travis before (I configured it for ffind some time ago).

Poetry is also a very promising tool, and I’ll keep checking it out. The packaging world in Python is complicated, but there has been a lot of work recently to improve it.

I wrote a Python book!

So, great news, I wrote a book and it’s available!

Receiving your own physical book is very exciting!

It’s called Python Automation Cookbook, and it’s aimed at people who already know a bit of Python (not necessarily developers only), but would like to use it to automate common tasks like searching files, creating different kinds of documents, adding graphs, sending emails or text messages, etc. It’s written in the cookbook format, so it’s a collection of recipes that can be read independently, though there are always references showing how to combine them to create more complex flows.

The book is available both on the Packt website and on Amazon. There’s more information about the book there, like previews and samples, in case anyone is interested.

This is my first book, so it’s all very exciting. The process itself has been a lot of work, but not without its fun parts. I’m also quite proud of having written it in English, which is not my first language.

A Django project template for a RESTful Application using Docker – PyCon IE slides

Just putting here the slides for my presentation at PyCon Ireland, which is a follow-up to this blog post. I’ll try to include the video if/when it’s available.

I hand-drew all the slides myself, so it was a lot of fun work!

Enjoy!

Notes about ShipItCon 2017

Disclaimer: I personally know and have worked with a good portion of the conference organizers and speakers. I label them with an asterisk*.

The ShipItCon finally took place last Friday. I think it’s quite impressive, given the short time since it was announced and this being the first edition, that it was so well organized. The venue was very good (and fairly unusual for a tech conference), and all the usual things that are easy to take for granted (food, space, projector, sound, etc.) worked like clockwork. Kudos to the organizers.

The conference was oriented towards releasing online services, with special emphasis on Continuous Integration/Delivery. I think that focusing a conference on this kind of topic is challenging, as talks need to be generic enough in terms of tools, but narrow enough to be useful. Conferences about a specific technology (like PyCon, RubyConf or LinuxCon) are more focused by concept.

The following are some notes, ideas and follow-up articles that I took. Obviously, they are biased towards the kind of things I find most interesting. I’ll try to link the presentation slides if/once they’re available.

  • The keynote by the Romero family was a great story and addressed a lot of points specific to the game industry (like the design challenges). It was also the exception in talking about shipping something other than a service: a game (on Steam and iOS). I played a little Gunman Taco Truck over the weekend!
    • “Ship a game while on a ship”. They released part of the game while on the Queen Elizabeth cruise, crossing the Atlantic.
  • Release often and use feature toggles, detaching the code release from the feature release. This is a point made in Frederick Meyer’s talk that I’ve heard recently in other places as well.
    • Friday night releases make me cringe, but they can make sense if the weekend is the lowest activity point for your customers.
    • Dependency trees grow more and more complex, to the point that no one understands them anymore and only automated tools can plot them.
    • Challenges in treating data in CI. Use production data? A subset? Fake data? Redacted data? Performance analysis can be tricky.
    • Automate what you care about
  • The need for early testing, including integration/system/performance, was the theme of Chloe Condon’s talk. Typically, a lot of testing is performed at the “main branch” level (after a feature is merged back) that could be prepared in advance, giving better and faster feedback to developers. Test early, test often.
    • She presented Codefresh, which seems an interesting cloud CI tool aimed at working with containers.
  • Lauri Apple talked about communication and how important READMEs and documentation are for projects, both internal and external. The WHAT to build is a key aspect that shouldn’t be overlooked.
    • READMEs should include a roadmap, as well as info about how to install, run and configure the code.
    • This project offers help, review and advice for READMEs. I’ll definitely submit ffind for a review (after I review and polish it a little bit myself).
    • She talked about the Open Organization Maturity Model, a framework for measuring how open organizations are.
    • A couple of projects at Zalando caught my eye:
      • Patroni, an HA template for PostgreSQL
      • Zalenium, which distributes a Selenium Grid over Docker to speed up Selenium tests.
      • External DNS, to help configure external DNS access (like AWS Route 53 or Cloudflare) for Kubernetes clusters.
  • If it hurts, do it more frequently. A great quote for Continuous Delivery and automated pipelines. Darin Egan talked about mindfulness principles, how the status quo gets challenged, and how driving change opposes inertia.
  • The main point of Ingrid Epure‘s talk was the integration of security practices into the development process and the differences between academia and engineering practices.
    • Linters can play a part in enforcing security practices, as well as automating formatting to keep formatting differences out of the review process.
    • Standardizing the logs is also a great idea, using Canonical Log Lines for online visibility. I’ve talked before about the need to increase logs and generate them during the development process.
  • Eric Maxwell talked about the need to standardise the “upper levels” of the apps, mainly related to logging and metrics, and about making applications (“Modern Applications”) more aware of their environment (choreography vs orchestration) and abstracted from the underlying infrastructure.
    • He presented habitat.sh, a tool aimed at working with these principles.
    • Packaging the application code and letting the tool do the heavy lifting on the “plumbing”.
  • The pipeline at Intercom was discussed by Eugene Kenny, along with the differences between “the ideal pipeline” and “the reality” of making dozens of deployments every day.
    • For example, fully testing and deploying only the latest change in the pipeline, speeding up deployments at the expense of less separation between changes.
    • Or allowing the pipeline to be locked when things are broken.
    • Follow up article: Continuous Deployment at Instagram
  • Observability is an indispensable property of online services: the ability to check what’s going on in production systems. Damien Marshall* had this concept of graphulsion that I can only share.

He gave some nice ideas on observability through the whole life cycle:

Development:

  • Make reporting logs and metrics simple
  • Account for the effort to do observability work
  • Standardize what to report. The three most useful metrics are Request Rate, Error Rate and Duration per Request.

Deployment:

  • Do capacity planning. Know approximately the limits of your system and calculate the utilization of the system (% of that limit)
  • Ship the observability

Production:

  • Make metrics easy to use
  • Centralise dashboard views across different systems
  • Good alerting is hard. Start and keep it simple.
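As a toy illustration of those three metrics (the function and data shape below are mine, not from the talk):

```python
def red_metrics(requests):
    """Compute request rate, error rate and average duration from a list of
    (status_code, duration_ms) tuples covering a fixed time window."""
    total = len(requests)
    if not total:
        return {"request_rate": 0, "error_rate": 0.0, "average_duration": 0.0}
    errors = sum(1 for status, _ in requests if status >= 500)
    return {
        "request_rate": total,
        "error_rate": errors / total,
        "average_duration": sum(d for _, d in requests) / total,
    }


sample = [(200, 100), (200, 300), (500, 200), (200, 200)]
print(red_metrics(sample))
# {'request_rate': 4, 'error_rate': 0.25, 'average_duration': 200.0}
```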


  • Riot Games uses custom service generation to create skeletons, standardise good practices and reduce development time. Adam Comeford talked about those practices and how they implemented them.
    • Thinking inside the container.
    • Docker-gc is a tool to reduce the size of image repos, as they tend to grow very quickly.
  • Jacopo Scrinzi talked about defining Infrastructure as Code, making infrastructure changes through the same process as code (review, source control, etc.), in particular using Terraform and Atlas (now Terraform Enterprise) to make automatic deployments, following CI practices for infrastructure.
    • Using modules in Terraform simplifies and standardises common systems.
  • The last keynote was about Skypilot, an initiative inside Demonware to deploy a game fully using Docker containers over Marathon/Mesos, in the cloud. It was given by Tom Shaw*, and the game was last year’s release of Skylanders. As I’ve worked at Demonware, I know how big an undertaking it is to prepare the launch of a game on dedicated hardware (and how much of it is underused to avoid risks), so this is a huge improvement.


As the amount of notes I took shows, I found the conference very interesting and full of ideas worth following up on. I really hope for a ShipItCon 2018 full of great content.