This blog post by Dan Crosta is interesting. It discusses how to optimise Python code for operations that are called many times, avoiding object orientation and using closures instead.
While the closures get the highlight, the main idea is a little more general: avoid repeating work that is not necessary for the operation.
The difference between the first proposed code, written in an OOP style,
class PageCategoryFilter(object):
    def __init__(self, config):
        self.mode = config["mode"]
        self.categories = config["categories"]

    def filter(self, bid_request):
        if self.mode == "whitelist":
            return bool(
                bid_request["categories"] & self.categories
            )
        else:
            return bool(
                self.categories and
                not bid_request["categories"] & self.categories
            )
and the last one
def make_page_category_filter(config):
    categories = config["categories"]
    mode = config["mode"]

    def page_category_filter(bid_request):
        if mode == "whitelist":
            return bool(bid_request["categories"] & categories)
        else:
            return bool(
                categories and
                not bid_request["categories"] & categories
            )

    return page_category_filter
The main difference is that neither the config dictionary nor the instance attributes (which are also stored in a dictionary) are accessed on each call. We keep a direct reference to the values (categories and mode) instead of making the Python interpreter look them up on self over and over.
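To see the effect in isolation, here is a minimal sketch of the lookup difference. The names (Checker, make_checker) are mine for illustration, not from the original post:

```python
# A minimal sketch: attribute lookup on self vs a value captured in a closure.
import timeit

class Checker:
    def __init__(self):
        self.mode = "whitelist"

    def check(self):
        # Every call looks up "mode" through self, i.e. a dictionary access.
        return self.mode == "whitelist"

def make_checker():
    mode = "whitelist"  # captured once; resolved as a fast closure lookup

    def check():
        return mode == "whitelist"

    return check

obj = Checker()
closure = make_checker()

# Both answer the same question; the closure just skips the self lookup.
assert obj.check() == closure() == True

print("method :", timeit.timeit(obj.check, number=1_000_000))
print("closure:", timeit.timeit(closure, number=1_000_000))
```

On CPython the closure version typically shaves off a noticeable fraction of the time; the exact numbers depend on the interpreter and machine.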
This generates a significant increase in performance, as described in the post (around 20%).
But why stop there? There is another clear win in terms of access, assuming the filter doesn’t change: the mode, which we compare against whitelist or blacklist on each call. We can create a different closure depending on the mode value.
def make_page_category_filter2(config):
    categories = config["categories"]
    if config['mode'] == "whitelist":
        def whitelist_filter(bid_request):
            return bool(bid_request["categories"] & categories)
        return whitelist_filter
    else:
        def blacklist_filter(bid_request):
            return bool(
                categories and
                not bid_request["categories"] & categories
            )
        return blacklist_filter
There are another couple of details. The first is to transform the config categories into a frozenset. Assuming the config doesn’t change, a frozenset is more efficient than a regular mutable set. This is hinted at in the post, but maybe it didn’t make the final review (or was left out for simplicity).
Also, we are calculating the full intersection of two sets (the & operator) only to reduce it to a bool. There is a set operation, isdisjoint, that gets the same answer without computing the whole intersection.
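A quick sketch of the equivalence (the category values here are made up for illustration):

```python
# Illustrative values, not from the original post.
categories = frozenset({"sports", "news"})

# One shared element: both expressions say "there is an overlap".
overlapping = {"news", "weather"}
assert bool(overlapping & categories) is True
assert (not categories.isdisjoint(overlapping)) is True

# No shared elements: both say "no overlap". isdisjoint can reach this
# conclusion without building an intermediate intersection set.
disjoint = {"weather", "traffic"}
assert bool(disjoint & categories) is False
assert (not categories.isdisjoint(disjoint)) is False

print("bool(a & b) and not a.isdisjoint(b) always agree")
```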
The same basic principle applies to the bool(categories) check in the blacklist filter. We can calculate it only once, as it’s there just to short-circuit the result when the configured categories are empty.
def make_page_category_filter2(config):
    categories = frozenset(config["categories"])
    bool_cat = bool(categories)
    if config['mode'] == "whitelist":
        def whitelist_filter(bid_request):
            return not categories.isdisjoint(bid_request["categories"])
        return whitelist_filter
    else:
        def blacklist_filter(bid_request):
            return (bool_cat and
                    categories.isdisjoint(bid_request["categories"]))
        return blacklist_filter
Even if all of this falls under the definition of micro-optimisation (which should be applied with care, and only after a hot spot has been found), it actually makes a significant difference, reducing the time by around 35% compared with the closure implementation and ~50% compared with the initial reference implementation.
All these elements are totally applicable to the OOP implementation, by the way. Python is quite flexible about assigning methods. No closures needed!
class PageCategoryFilter2(object):
    ''' Keep the interface of the object '''

    def __init__(self, config):
        self.mode = config["mode"]
        self.categories = frozenset(config["categories"])
        self.bool_cat = bool(self.categories)
        if self.mode == "whitelist":
            self.filter = self.filter_whitelist
        else:
            self.filter = self.filter_blacklist

    def filter_whitelist(self, bid_request):
        return not bid_request["categories"].isdisjoint(self.categories)

    def filter_blacklist(self, bid_request):
        return (self.bool_cat and
                bid_request["categories"].isdisjoint(self.categories))
Show me the time!
Here is the updated code, adding these implementations to the test.
The results in my desktop (2011 iMac 2.7GHz i5) are
         total time (sec)    time per iteration
class    9.59787607193       6.39858404795e-07
func     8.38110518456       5.58740345637e-07
closure  7.96493911743       5.30995941162e-07
class2   6.00997519493       4.00665012995e-07
closur2  5.09431600571       3.39621067047e-07
The new class performs better than the initial closure! The optimised closure still comes out on top, saving a big chunk compared with the slowest implementation. The PyPy results are all very close to each other, and PyPy speeds up the code 10x, which is an amazing feat.
Of course, a word of caution: the configuration is assumed not to change for a given filter, which I think is reasonable.
Today John Siracusa announced that he won’t be making more OS X reviews.
Typically, journalists or reviewers don’t announce that they are going to stop doing something. They just stop, and maybe explain it later if someone asks.
But John’s reviews were something truly special, and a lot of people in the tech world have lamented the announcement.
I think the Mac community has always been quite vibrant and passionate, allowing detailed discussion, sometimes crossing the line into obsession. In other tech circles, the discussion is colder and more rational, even aseptic. Apple discussion has always been more emotional, sort of aiming for greatness. Years ago the distinction was even more pronounced, when Apple had a small and passionate group of followers.
Tech reviews tend toward two extremes: either too much enumeration of specs and features, which makes them arid and boring, or too much focus on fitting a narrative, debating whether something is the best or the worst ever.
But these OS X reviews combined the best of both approaches, and they remain the gold standard of tech literature.
John’s reviews are extremely detailed, obsessively so, yet they are easy to read and understand.
They present a clear exposition of new features, but also add historical context for decisions and compromises, and forecast areas of improvement.
Siracusa has strong opinions in quite a few areas, but he lays out all the facts and explains his biases.
I can’t stress enough how difficult it is to keep readers hooked and eager to read another 8,000 words when you’re talking at length about filesystems, pixel alignment, background process handling or Finder performance.
I haven’t found anything on par with them, though there are great writers covering technology these days. I can read better reviews now than 5 or 10 years ago.
While I understand his reasons for stopping, and think we have been lucky to have them around for so long, I can’t help but feel a little sad.
I’ll keep following him on his blog, on Twitter, and of course on his weekly podcast, as well as his collaborations. At least John Siracusa keeps up a healthy output that we can consume. I’m sure we’ll keep reading and listening to great stuff.
The importance of Spock as an icon of the 20th century cannot be overstated.
Let me go back for a second. When I was a child in Spain, access to Star Trek was pretty limited. There were no reruns of the original series; I think it was only broadcast during the 70s, in black and white. The Next Generation aired with a four or five year delay, and it stopped for several years right at the end of the third season. It took me years to find out what happened with Picard and Locutus! The movies weren’t big hits either.
But everyone knew who Spock was. A lot of people told my uncle he looked like “Dr” Spock. He bore a clear resemblance to Leonard Nimoy when he was young.
In a lot of ways I’ve always considered Star Trek my “prime geeky love”. That iconic status and mystery made me search for it. After getting access to the Internet, I devoured everything about it. It was “my thing”, more than other stuff I love, because it was less known among my circles. I bought the whole DS9 series on VHS from the UK over the course of several years. I like to say that I learned spoken English through Star Trek, and written English through RPG manuals.
Spock is the most genuine example of the “cool scientist”, in opposition to the “mad scientist” trope. Probably the first example, and the one who got the spotlight in the series. He was the character to follow.
He is iconic to a level that only a handful of other icons can match. I’ve seen him on party posters and on t-shirts in big stores. He’s known by people who don’t even know what Star Trek is.
And none of this would have been possible without the extraordinary performance by Leonard Nimoy. It is clear that he contributed heavily to a lot of the things that define Spock, from the Vulcan nerve pinch to the Vulcan salute. But his biggest legacy is the fact that his performance is replicated constantly. Every single Vulcan in Star Trek is impersonating Leonard Nimoy playing Spock, and tons of other media have similar interpretations of “emotionless aliens/outsiders”. He created an archetype, which is a huge achievement. It is also a heavy burden, one that Leonard Nimoy seemed to embrace only after years of struggle.
There is also an overlooked idea in Vulcan philosophy that has always appealed to me: IDIC, Infinite Diversity in Infinite Combinations. It resonates with me on a lot of different levels. On the personal level, in how important it is to embrace the diversity that we humans are capable of producing.
And on the software level, in how wonderful it is when different pieces of code are combined and assembled to produce something magical together.
Leonard Nimoy will be greatly missed. Rest In Peace.
Live long and prosper.
Isn’t it quite absurd that we haven’t nailed this yet?
We keep hearing about all the great advances in image quality, 4K, bending screens… Yet controlling a TV still feels clunky and awkward.
Even worse, given the ever-growing number of devices connected to the TV (DVDs, Blu-rays, AppleTVs, consoles…), juggling different remote controls is painful unless a universal remote is used. But even then, the process of selecting an activity is awkward for very common operations.
For example, if you want to play a movie on an AppleTV (or DVD player, or TiVo, etc.), you may have to turn it on, then (using the remote) turn on the TV, select the input (which may involve cycling through a lot of unused inputs, like Composite), adjust the volume, and then grab the AppleTV controller.
If at any point we need to adjust the volume, we’ll need to grab the TV remote again.
All this is very weird and not the best user experience…
So, just for the sake of discussion, here are a few ideas for what I think could be a TV with a much better interface:
– Everything is a channel. Why is HDMI “an input” while TV channels get numbers? Letting the HDMI input be selected as, say, channel 3 allows easy access to it and simplifies the interface.
If there is no signal on an input (e.g. the DVD player is off), the channel is skipped for channel-surfing purposes, unless the user specifically requests the channel number.
– A good remote. I don’t think there’s any excuse for it not being a universal, programmable remote. The included TV remote should aim to control more things than just the TV. It should have as few buttons as possible; more on that later. And, another important feature: it should be possible to make it beep if lost (e.g. by clicking a button on the TV).
– Programmable channels. When a channel is selected, all the settings (mainly image controls, but maybe also a volume adjustment) change accordingly. The remote also knows that its controls now relate to this channel. For example, if the DVD channel is selected, most of the keys map to the DVD remote (except the volume keys); if the AppleTV channel is selected, the keys refer to the AppleTV controls.
A list of channels (ideally showing their current input) that allows easy rearrangement and enabling or disabling them would be great for navigation and for getting an overview.
– A companion app that allows configuring all the parameters easily from a PC or a mobile app.
I understand that the mix of different devices in the living room is complex, and that some problems may be impossible to fix (for example, an HDMI switch may still cause trouble, given that more than one “channel” would be coming from the same “input”).
But I find it quite ridiculous that we are still dealing with such terrible TV interfaces these days…
A few weeks ago I came across a couple of articles by Marco Arment that share a theme: the current accelerated pace of change within the development community, how stressful it is, and how difficult it makes staying up to date. After all, one gets tired of learning a new framework or language every six months. It gets to a point where it’s not fun or interesting anymore.
It seems like two different options are presented as available to developers after some time:
- Keep up, meaning that you adopt rapidly each new technology
- Move to other areas, typically management
Both are totally valid options, though, as I’ve already said on this blog, I don’t like it when good developers move to different areas (to me it’s a bit like a surgeon deciding she’s had enough after a few years and moving on to manage the hospital). Obviously, each person has every right to choose their own career path.
But I think it’s all mostly based on a biased and incorrect view of the field of technology and the real pace of change.
In recent years there has been an explosion of technologies, particularly for the web. Ruby on Rails almost feels like it was introduced at the same time as COBOL. NodeJS seemed to be in fashion for a while. The same with MongoDB or jQuery.
In the last 6 or 7 years there has been an incredible explosion of open source fragmentation. Probably because of GitHub (and other online repos) and the increase in communication through the Internet, the bar to create a web framework and offer it to the world has been lowered so much that a lot of projects that previously would have had no exposure now get plenty. The general effect is positive, but it comes with a downside: every year there is a revolution in technologies, forcing everyone to catch up and learn the brand-new tool that is best for current development, increasing the churn of buzzwords.
But all this is nothing but an illusion. We developers tend to laugh at the typical “minimum 3+ years of experience in Swift”, but we still buy into the notion that we should be experts in a particular language, DB or framework from day one. And of course in the one in demand today, or we are just outdated dinosaurs who should retire.
Software development is a young field, full of young people. That’s great in a lot of aspects, but we need to appreciate experience, even if it comes from using a different technology. It may not look like it, but there are still a lot of projects done in “not-so-fancy” technologies. That includes really old stuff like Fortran or COBOL, but also C++, Java, Perl, PHP or Ruby.
Technologies get established by a combination of features, maturity, community and a little luck. But once they are established, they’re quite resilient and don’t go away easily; they remain useful for quite a long time. Right now it’s not that difficult to pick a tool that is almost guaranteed to be around for the next 10-15 years. Also, most of the really important stuff is totally technology agnostic: writing clean code, structure, debuggability, communication, teamwork, transforming abstract ideas into concrete implementations, etc. That simply does not go away.
Think about this: iOS development started in 2008. Smartphones are radically different beasts from the ones available 6 years ago; it’s probably the environment that has changed the most. The basics are the same, though. And even though Swift was introduced this year, it’s based on the same principles. Every year there have been tweaks, changing APIs, new functionality, but the basic ideas are still the same. Today a new web development project using LAMP is totally viable. Video games still rely on C++ and OpenGL. Java is still heavily used. I constantly use ideas mainly developed in the 70s, like the UNIX command line or Vim.
Just because every day we get tons of news about new startups building applications on new paradigms doesn’t mean those paradigms don’t coexist with “older” technologies.
Of course, there are new tricks to learn, but it’s a day-by-day, additive effort. A real revolution, a change of paradigm, is rare, and normally not a good sign. Changing from MySQL to PostgreSQL shouldn’t be considered a major career change. Seeking a certain stability in the tools you use should be seen as a good move.
We developers love to stress the part about learning something new every day and constantly challenging ourselves, but that should be balanced with allowing ourselves time to breathe. We’ve created a lot of pressure on ourselves in terms of constantly pushing new ideas, investigating side projects and devoting ourselves to software 100% of the time. That’s not only unrealistic. It’s not good.
So just breathe. And worry only about doing good work and enjoying what you learn.