LLMs and Context

Let’s start with something simple. LLMs don’t have memory. Every time you make a call to an LLM, you need to provide all the context related to the request as input. The LLM picks up this context and produces a result. But when you use AI tools, for example a chat bot, you see that you have a conversation: it continues the conversation and references things said before. That is because the chat bot itself maintains the context and passes it back and forth. In a naive way, it…
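
To make that concrete, here is a minimal sketch in Python of a naive chat loop. The `call_llm` function is a hypothetical placeholder standing in for whatever model API you actually use; the point is that the bot, not the model, keeps the history and resends it in full on every call:

```python
# Minimal sketch of a naive chat loop: the LLM itself is stateless,
# so the full conversation history is sent with every request.
# `call_llm` is a hypothetical stand-in for a real model API.

def call_llm(messages: list[dict]) -> str:
    # Placeholder: a real bot would send `messages` to a model API here.
    return f"(model reply to: {messages[-1]['content']})"

def chat() -> None:
    history = [{"role": "system", "content": "You are a helpful assistant."}]
    while True:
        user_input = input("> ")
        if user_input.lower() in {"quit", "exit"}:
            break
        # Append the new message to the growing history...
        history.append({"role": "user", "content": user_input})
        # ...and pass the *entire* history as context on each call.
        reply = call_llm(history)
        history.append({"role": "assistant", "content": reply})
        print(reply)

if __name__ == "__main__":
    chat()
```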

Agents and Agency

There’s a lot of talk about the capabilities of AI Agents, and a lot of promise about what they may be able to achieve. Or not achieve. Alongside the excitement there’s a lot of skepticism, if not outright rejection. The term AI Agent is still a bit in flux, but it refers to a program that runs independently with a particular goal and is capable of taking actions. The instructions given to it are either specific things to be done right now (for example: “Go to this…
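
As a rough illustration of that definition, here is a minimal sketch in Python of the kind of loop such a program runs. Everything in it (`call_llm`, the tools, the decision format) is a hypothetical placeholder, not a real framework; it only shows the shape: given a goal, the agent repeatedly asks the model for the next action, executes it, and stops when the goal is considered done:

```python
# Minimal sketch of an agent loop: a program pursuing a goal on its own
# by repeatedly choosing and executing actions.
# `call_llm` and the tools below are hypothetical placeholders.

def call_llm(prompt: str) -> str:
    # Placeholder: a real agent would send this prompt to a model API
    # and parse the response into an action like "search: weekly reports".
    return "done: goal handled (placeholder decision)"

TOOLS = {
    "search": lambda arg: f"search results for {arg!r}",
    "write_file": lambda arg: f"wrote {arg!r}",
}

def run_agent(goal: str, max_steps: int = 5) -> None:
    observations: list[str] = []
    for _ in range(max_steps):
        # Ask the model for the next action, given the goal and results so far.
        decision = call_llm(f"Goal: {goal}\nSo far: {observations}\nNext action?")
        if decision.startswith("done"):
            print(decision)
            return
        tool_name, _, arg = decision.partition(":")
        action = TOOLS.get(tool_name.strip(), lambda a: f"unknown tool {tool_name!r}")
        observations.append(action(arg.strip()))
    print("Stopped: step limit reached without finishing the goal.")

run_agent("Summarise this week's reports")
```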

Interacting with GenAI Models

The evolving landscape of AI features a growing interconnectivity between GenAI models and frontend tools. Companies are rapidly developing both open-source and proprietary models, challenging the dominance of established players like OpenAI. While backend models are costly to create, both the models and the frontends may become commoditised, leading to innovative collaboration through APIs and agents.