Agents and Agency

There’s a lot of talk about the capabilities of AI Agents, and plenty of promises about what they may (or may not) be able to achieve. Alongside the excitement, there’s a lot of skepticism, if not outright rejection.

The term “AI Agent” is still a bit in flux, but it refers to a program that runs independently toward a particular goal and is capable of taking actions. The instructions given to it are either a specific task to be done right now (for example: “Go to this codebase and implement this feature I’m describing”) or a reaction to the environment (for example: “Monitor this email inbox. If you receive an email from a customer describing a problem, create a ticket and notify the right team”).
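That second, reactive mode can be pictured as a simple loop: observe the environment, let a model decide on actions, execute them. Here is a minimal sketch in Python; the “model” is a stub and every name in it (the tool names, the fake decision function) is hypothetical, purely for illustration:

```python
def fake_llm(goal, observation):
    """Stand-in for a real model: given the goal and an observation,
    decide which actions (if any) to take. Purely illustrative logic."""
    if "problem" in observation.lower():
        return [("create_ticket", observation), ("notify_team", "support")]
    return []  # nothing to do for this observation


def run_agent(goal, inbox):
    """The agent loop: react to each item in the environment (here, emails)
    and collect the actions the 'model' chooses."""
    actions_taken = []
    for email in inbox:
        for action, argument in fake_llm(goal, email):
            actions_taken.append((action, argument))  # a real agent would execute tools here
    return actions_taken


actions = run_agent(
    "Monitor this inbox; open tickets for customer problems",
    ["Hi, the invoice is attached", "We have a problem with the checkout page"],
)
```

A real agent replaces `fake_llm` with calls to an actual model and actually executes the chosen tools, but the observe-decide-act shape is the same.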

AI Agents are growing in complexity and capabilities, though it is still a field full of experimentation, and we probably don’t yet have a killer app that has been massively adopted. Software development is probably the leading field, with AI Agents being used to produce code.

While Agents are capable of somewhat independent action, they do so based on instructions. These instructions are less precise and more abstract than the traditional instructions required for computers, it’s true. But their field of action is still limited. Agents are diligent workers, but with very little initiative. They won’t push back, and they won’t make suggestions.

Ideators and implementers

I think this creates a very fundamental distinction in the kind of person who approaches AI Agents and in how they see them, even philosophically. Let’s call them ideators and implementers1.

  • Ideators. They like to ideate something, and see it implemented, normally in an experimental way. They like to work alone, or with only a small number of people. They see GenAI as “wow, I don’t need anyone else to do things” and “I can create obedient AI Agents that do exactly what I tell them, and they are very capable!”. Previously, they required implementers to get their ideas into the real world, and they love the possibility of removing that dependency, as finding the right people to work with is difficult, and having to communicate their brilliant ideas is frustrating.
    Ideators have high agency. They are constantly looking for more things that can be done, and producing ideas. They are not worried about lacking the capabilities to actually make their ideas possible. Those are details to be worked out later, and probably not by them.
  • Implementers. They tend to work in more established or complex environments, alongside more people. They have to manage tech debt or legacy systems. They sweat the details, and know that a “brilliant idea” is actually very hard to implement successfully, with a ton of small decisions that need to be made. They’ve seen many “brilliant ideas” turn out to be impossible, stupid, or outright fraudulent. When dealing with ideators, they are the ones who push back on technical considerations and repeatedly ask for clarification.
    Implementers have lower agency. They don’t produce as many ideas, and normally follow others. They like to build things and develop their technique. They don’t like to delegate that much, and tend to think about what’s possible within their area of expertise.
The implementer is more concentrated than angry, but everything is possible

Ideators are extremely hyped about AI Agents. They are going to be able to produce their ideas faster, cheaper, and without resistance from other humans. They will be able to experiment more. They see a glorious future; AI is the Real Thing™.

Implementers are far more skeptical about AI Agents. They have seen hyped technologies before, and many have faded. They are experts in their fields, and probably have a bit of tunnel vision. They’ve tried GenAI, but they see problems and limitations. They are also worried about being replaced, as they are more dependent on having a job and being told what to do.

Seniority

I think that the distinction is, in big part, one of personality. We all know people who are full of ideas from an early age.

But as you grow in your career, you are sort of forced to move a bit more into the ideator field2. Even if it doesn’t come naturally, the responsibility and ownership will push you to produce ideas to implement and to be more proactive, as well as to learn to delegate and not be as involved in sweating the details as before.

Climbing the corporate ladder means moving toward both higher abstraction and a higher degree of freedom on the specifics, as well as more proactivity. You need to come up with ideas to implement in your area, big or small. You need to think out-of-the-box, and the instructions will be more and more abstract. The degrees of freedom are not the same in “implement this feature in your module” as in “increase the revenue of your division by 20%”.

AI Agents as implementers

Because of the capabilities of AI Agents, it’s possible that we will all end up moving a bit higher up the ideator ladder. We will all need to learn how to delegate, oversee, and take a higher-level approach: to think about what’s possible to do, even if we are not able to implement it ourselves.

Depending on how capable AI Agents end up being3, that shift may range from “a little bit” to “a lot”, especially in software development, which seems to be the most affected sector.

“A little bit” would mean that GenAI and AI Agents are useful, but they don’t change how we generally work at the moment.

“A lot” would mean that the way we work is massively affected. In the most extreme scenario, any programmer may end up being forcefully “promoted” to a CEO of sorts, with power over every decision in the system, including sales, as AI will be able to perform all those duties efficiently. This is not a comfortable position for many people, including experienced ones, who like a certain degree of certainty and prefer to make decisions within some frame4.

Cute little Agents, obliterating employment levels

The shift in responsibility in that case would be huge, and it would be such a big change that it would probably drive many people out of the industry.

At the same time, this promise makes a lot of already-CEO-type people really excited, as they’ll be empowered to a huge degree, without the inconvenience of dealing with (and paying!) a team of people that is difficult to recruit, to manage, and to pivot in case different skills are required5.

Reality check

So far, we haven’t seen the kind of massive leverage from AI Agents, or even LLMs, that corresponds to the full AI promise. My personal opinion is that, so far, they are useful tools with a real, impactful effect on the software development world, so they probably cannot be compared with other overhyped technologies like NFTs, which ended up being mostly pointless.

But at the same time, I find it very difficult to believe that they are going to totally reshape the industry landscape or put the majority of developers out of work.

My main philosophical point is that software development is a discovery problem far more than a construction one. It is not enough to have a general idea; the idea needs to be challenged, shaped and moulded. Outside of very small projects, it requires the interaction of multiple people. Real people with real agency, who can push back6, add their own ideas and perspectives to the project, and keep abstract ideas in check. Generating a pilot or PoC is one thing. Transforming that into a viable project is another.

But, on the other hand, I’m aware that my instincts lean toward the implementer side, which makes me a bit biased in all of this.

  1. Both are an oversimplification, obviously. There are two wolves inside everyone… ↩︎
  2. Probably “true” ideators look for roles that play to their advantage, like sales or being entrepreneurial, from the beginning. A strong ideator may have a bad time following instructions. ↩︎
  3. This is still under heavy debate, and I personally think it is still too early to find out. ↩︎
  4. Which can be described as “being told what to do” ↩︎
  5. It would also probably change a lot of skills required for the CEO-type person themselves, which will create interesting second-order effects. Perhaps your ideas weren’t as great outside of the ecosystem you had them! ↩︎
  6. Saying “no” is a critically important ability ↩︎
