These are the times of miracle and wonder


It all started when I was a kid. Well, not the exact model, but this is more iconic, isn’t it?

My first computer was a second-hand ZX Spectrum+. This says a lot about my age, I guess. I got it from my uncle, who had bought himself a more powerful computer. I really loved that computer, and used it for quite a long time. It seemed so magical that you could play a tape, which sounded weird, and load a game. There was also the possibility of programming from the command line, which I tried, but I never quite “got” how to get from very basic stuff to anywhere.

A few years later, after the Spectrum broke, I got a PC. At first it had no sound card, so it was strangely silent compared to my friends’ computers. But the change to a hard drive, where load times were almost instantaneous, was astonishing. Yes, there were floppy disks, but even loading something from disk was extremely fast compared to the 15 minutes it took to load from tape. Using MS-DOS was also magical: learning all the commands, messing around with configuration options (the differences between extended and expanded memory) in autoexec.bat and config.sys, and even physically changing port jumpers on the cards to resolve problems.

When Plug and Play arrived with Windows 95, most of the pain seemed to disappear and configuration worked fine most of the time. Having multitasking and a GUI was also amazing. Around the same time I had my first experience with the Internet. Suddenly, there was a way to obtain information not from disks (or CDs), but from a network. It was a slap in the face, and I immediately understood that it was going to be a basic part of the future, as it is today. I think that was obvious to anyone interested in computers. It took a considerable amount of time to reach the point where it was something common, as initially it was billed by connection time and was expensive.

I started college and learned more interesting, wonderful stuff. For example, UNIX, a “new” (for me, at least) operating system that seemed to have the crazy idea of being usable by more than one user at the same time. Or understanding the internals of computers, which gave me a lot of “aahhh, now I get it” moments about previous experiences. Including the programming part, which I discovered was much more powerful and interesting than the small scripts I had written before.

When I started my first job, I developed for systems that were also computers, but weren’t shaped like a box, a screen and a keyboard. Systems where you compile machine code on one machine to run on another. I also learned how powerful and productive it is to properly use development tools like IDEs and the Unix command line.

After that, I spent a few years not working as a developer, and when I came back, it looked like I had missed things. Like how incredibly easy to use Linux had become, thanks to Ubuntu. So much so that, after a problem updating my personal computer, I installed Ubuntu at home and never looked back at Windows (I use a Mac these days). But the thing that impressed me most: Virtual Machines. So you’re saying that I can run a full computer inside my computer? That’s amazing!

I also learned Python (and other scripting languages) and, coming from writing mostly C and C++, you can imagine how wonderfully productive it felt. It also has a really great environment, with modules to do anything you can imagine. One of my first uses of it was to create an application on an OpenOffice spreadsheet, including dialogs to input information. I fell so in love with Python that I decided to build my career around it.

It’s Virtual Machines all the way down

I was really impressed by the iPad presentation. It really was (and is) a magical device I had dreamed of a long time ago. I have an iPad, I use it every day, and it is probably my favourite device I have ever owned.

The thing that surprises me is that I still have this sense of wonder, of enthusiasm, after living through all those things. I have seen a lot, but somehow I keep that kid inside me who is amazed by technology, by how far we have come, and by how great the next thing is. It’s not easy to perceive on a day-by-day basis if you work in this field, but looking back, even as little as 5 years, things were quite different in the lands of technology. The change has also been accelerating. Software, in particular, seems to have flourished in ways that seemed impossible. There are better tools to produce it, which allow complex projects to be achieved by small teams in very short amounts of time. I know, there are also complaints about how exactly this technological progress is happening, and how 50 years ago we thought we were going to live on Mars and wear jetpacks, but I think that having permanent access to the greatest library in our pockets, on devices getting faster and more capable every year, is no small achievement. We live in the future.

I remember all this from time to time, when I am tempted to be cynical about new products, as I’m sure you’ve read these days regarding iOS 7, the PS4 and/or the Xbox One. There seem to be a lot of people who put on their best “not impressed” face for almost every new release, and that’s not a good thing. Of course, there are things that I don’t particularly like or am not fond of, for example the iPad mini (a smaller iPad? I’d love a bigger one that keeps the weight), but I try to remember that there are people who will love all these things just as I loved previous ones. I am not necessarily the ideal customer for everything, and I appreciate when a review is about describing the product and its strong and weak points, and not so much about stating a (usually predetermined) opinion.

We have truly been living in days of miracle and wonder for almost 26 years now. I hope you’re enjoying the ride. I certainly am.

Commenting the code


I always find it surprising to come across opinions like that about code comments. I can understand someone arguing that writing comments is boring, or that you forget about it, or whatever. But saying that code shouldn’t be commented at all looks a little dangerous to me.

That doesn’t mean that you have to comment everything. Or that adding a comment is an excuse not to be clear directly in the code, or that the comment should repeat what is in the code. You have to keep a balance, and I agree that it’s difficult and everyone can have their own opinion about when to comment and when not to.

Also, each language has its own “comment flow”, and you’ll definitely write more comments in a low-level language like C than in a higher-level language like Python, as the latter is more descriptive and readable. Ohhh, you have to comment so many things in C if you want to be able to understand what a function does in less than a couple of days… (the declaration of variables, for example)

As everyone has their own style when it comes to commenting, I’m going to describe some of my personal habits when commenting code, to open the discussion and compare with your opinions (with some example Python code):

  • I put comments summarizing code blocks. That way, when I have to locate a specific section of the code, I can go faster by reading the comments and ignoring the code until I get to the relevant part. I also tend to mark those blocks with newlines.
# Obtain the list of elements from the DB
.... [some lines of code]

# Filter and aggregate the list to obtain the statistics
...  [some lines of code]

UPDATED: Some clarification here, as I think I probably chose the wrong example. Of course, if a block of code grows past a few lines and/or is used in more than one place, it will need a function (and a function should ALWAYS get a docstring/comment/whatever). But sometimes I think a function is not needed, yet a clarification is good to quickly know what that code is about. The original example will remain, to my shame, but maybe this other example works better (I have copy-pasted some code I am working on right now and changed a couple of things).
It’s probably not the cleanest code in the world, and that’s why I have to comment it. Later on, maybe I will refactor it (or not, depending on the time).

# Some code obtaining elements from a web request ....

# Delete existing layers and requisites
update = Update.all().filter(Update.item == update).one()
UpdateLayer.all().filter(UpdateLayer.update_id == update.item_id).delete()
ShopItemRequisite.all().filter(ShopItemRequisite.item == update).delete()

# Create the new ones
for key, value in request.params.items():
    if key == 'layers':
        slayer = Layer.all().filter(Layer.layer_number == int(value)).one()
        new_up_lay = UpdateLayer(update=update, layer=slayer)
        new_up_lay.save()
    if key == 'requisites':
        req = ShopItem.all().filter(ShopItem.internal_name == value).one()
        new_req = ShopItemRequisite(item=update, requisite=req)
        new_req.save()
  • I briefly describe every non-trivial operation, especially mathematical properties or “clever tricks”. Optimization features usually need some extra description explaining why a particular technique is used (and how it’s used).
# Store found primes to increase performance through memoization
# Also, store the first primes
found_primes = [2, 3]

def prime(number):
    '''Find recursively if the number is prime. Returns True or False'''

    # Check memoized results
    if number in found_primes:
        return True

    # By definition, 1 is not prime
    if number == 1:
        return False

    # Any even number is not prime (except 2, checked before)
    if number % 2 == 0:
        return False

    # Divide the number by all its lower odd prime numbers
    # Use this function recursively
    lower_primes = (i for i in range(3, number, 2) if prime(i))
    if any(number % p == 0 for p in lower_primes):
        return False

    # The number is not divisible, so it's a prime number
    # Store it to memoize
    found_primes.append(number)
    return True

(Dealing with prime numbers is something that deserves lots of comments!) EDIT: As stated by Álvaro, 1 is not prime. Code updated.

  • I put TODOs, caveats and any indication of further work, planned or possible.
# TODO: Change the hardcoded IP with a dynamic import from the config file on production.
...
# TODO: The decision about which one to use is based only on getting the shorter one. Maybe a more complex algorithm has to be implemented?
...
# Careful here! We are assuming that the DB is MySQL. If not, this code will probably not work.
...

UPDATE: That is probably also related to the tools I use. S.Lott talks about Sphinx notations, which is even better. I use Eclipse to develop, which automatically picks up any “TODO” in the code and makes a list of them. Curiously, I find myself using “ack-grep” for that more and more…
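What those tools do can be sketched in a few lines of Python. This is just an illustrative sketch (the function name collect_todos and the marker list are mine, not from any tool): scan source lines for markers inside comments and collect them with their line numbers.

```python
# Minimal sketch of a TODO collector, similar in spirit to what
# Eclipse's task list or an ack-grep search over the code does.

def collect_todos(lines, markers=('TODO', 'FIXME')):
    '''Return (line_number, comment_text) pairs for marker comments.'''
    found = []
    for number, line in enumerate(lines, start=1):
        # Keep only the part of the line after the first '#'
        comment = line.partition('#')[2]
        if any(marker in comment for marker in markers):
            found.append((number, comment.strip()))
    return found

source = [
    "ip = '123.123.123.123'  # TODO: read this from the config file\n",
    "result = process(ip)\n",
    "# FIXME: handle the timeout case\n",
]
for number, text in collect_todos(source):
    print(number, text)
```

Real tools are smarter (they ignore markers inside strings, for example), but the idea is the same: the convention of a fixed marker word makes pending work greppable.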

  • I try to comment structures as soon as they have more than a couple of elements. For example, in Python I make extensive use of lists/dictionaries to initialize static parameters in a table-like format, so I use a comment as a header to describe the elements.
# Init params in format: param_name, value
init_params = (('origin_ip','123.123.123.123'),
               ('destiny_ip','456.456.456.456'),
               ('timeout',5000),
              )
for param_name, value in init_params:
    store_param(param_name, value)
  • The size of the comment is important: it should be short, but clarity comes first. So I try to avoid shortening words or using acronyms (unless they are widely used). Multiline comments are welcome, but I try to avoid them as much as possible.
  • Finally, when in doubt, comment. If at any point I have the slightest suspicion that I’m going to spend more than 30 seconds understanding a piece of code, I put a comment. I can always remove it later, the next time I read that code and see that it is clear enough (which I do a lot of times). Both being bad, I prefer an unnecessary comment to a missing necessary one.
  • I think I tend to comment slightly more than my fellow programmers. That’s just a personal, completely unmeasured impression.

What are your ideas about the use of comments?

UPDATE: Wow, I got a reference on S.Lott’s blog, a REALLY good blog that every developer should follow. That’s an honor, even if he disagrees with me on half the post ;-)

On one of my first projects in C, we followed a quality standard that required that 30% of the code lines (not counting blank ones) be comments.
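Out of curiosity, that kind of standard is easy to check automatically. Here is a minimal sketch (the function comment_ratio is mine; it only counts Python-style "#" lines and ignores docstrings, unlike the original C standard, which would count /* */ blocks):

```python
# Hypothetical checker for a "comments must be X% of the code" standard.

def comment_ratio(lines):
    '''Return the fraction of non-blank lines that are comments.'''
    non_blank = [line.strip() for line in lines if line.strip()]
    if not non_blank:
        return 0.0
    comments = [line for line in non_blank if line.startswith('#')]
    return len(comments) / len(non_blank)

source = [
    "# Store found primes to increase performance\n",
    "found_primes = [2, 3]\n",
    "\n",
    "# By definition, 1 is not prime\n",
    "if number == 1:\n",
    "    return False\n",
]
print(f"{100 * comment_ratio(source):.0f}% of lines are comments")
# → 40% of lines are comments
```

A metric like this is crude (it says nothing about comment quality), which is probably why such standards feel so arbitrary in practice.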