Stop Pretending AI Agents Aren't Just Cron Jobs
Are we building artificial intelligence or intelligent tech debt?
Agentic flows, or AI agents, are the latest buzzword coming from the AI-pilled community in Silicon Valley. The concept represents the next application of AI, since we seem to have plateaued on chatbots. Agents are pitched as adaptable digital workers that can handle impromptu tasks after receiving commands from humans or applications. The agent is our little corporate minion that responds to emails and formats documents. It is the clanker everyone should fear as a potential job replacer. Except for the fact that we have already had these "agents" in software engineering for decades.
In software engineering we have the concepts of a "job", a "worker", and even an "agent". It is typically a simple, standalone application that waits for a stimulus, in the form of an application request or user input, and performs a discrete task. These jobs can also operate on a schedule, in the form of a "cron job". They can execute a specific task or trigger other downstream jobs. For years these jobs have been used to perform quick calculations, move data around, and, yes, respond to emails.
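To make the comparison concrete, here is a minimal sketch of that kind of worker, the sort of thing cron has been running for decades. Everything in it (check_inbox, send_reply, the canned response) is hypothetical and stubbed purely for illustration:

```python
# reply_worker.py -- a classic scheduled job, run e.g. via cron:
# */5 * * * * /usr/bin/python3 /opt/jobs/reply_worker.py

def check_inbox() -> list[dict]:
    """Fetch unread messages. Stubbed for illustration."""
    return [{"from": "boss@example.com", "subject": "Status?", "body": "Any update?"}]

def send_reply(to: str, body: str) -> None:
    """Send a reply. Stubbed for illustration."""
    print(f"Replying to {to}: {body}")

def run() -> None:
    for message in check_inbox():
        # The job's entire "intelligence": a canned template.
        send_reply(message["from"], "Received, will follow up shortly.")

if __name__ == "__main__":
    run()
```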
All AI agents do is introduce a connection to an LLM API. Now your job doesn't just interact with your database; it also passes some data to OpenAI's API.
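Concretely, the "upgrade" from worker to agent is a one-function change to the sketch above. This version assumes the official openai Python client, reuses the stubbed check_inbox and send_reply, and uses an arbitrary placeholder model name:

```python
from openai import OpenAI  # assumes the official openai client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_reply(body: str) -> str:
    """Replace the canned template with a round trip to an LLM API."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Draft a short, polite email reply."},
            {"role": "user", "content": body},
        ],
    )
    return response.choices[0].message.content

def run() -> None:
    for message in check_inbox():  # same cron-triggered loop as before
        send_reply(message["from"], draft_reply(message["body"]))
```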
I can already hear the proponents. "The AI is adaptive, it can respond to new inputs in novel ways." "It's intelligent, it can decide that an email shouldn't just be replied to, but also forwarded to your manager!" No, it cannot. The AI agent still needs to land in predictable "states". The AI agent cannot truly go rogue, because that would be totally useless to its creator. It cannot interact with APIs or networks that it hasn't been granted access to. To be useful, the agent still needs to act as a finite state machine: a limited set of actions and conditions it can find itself in.
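That constraint shows up directly in the code. Continuing the same hypothetical sketch, the model's free-form output gets coerced into one of a few enumerated actions, and anything outside that set falls back to a safe default:

```python
from enum import Enum

class Action(Enum):
    """The complete universe of things this 'agent' is allowed to do."""
    REPLY = "reply"
    FORWARD_TO_MANAGER = "forward_to_manager"
    IGNORE = "ignore"

def classify(model_output: str) -> Action:
    """Coerce free-form LLM text into one of the allowed states."""
    try:
        return Action(model_output.strip().lower())
    except ValueError:
        # The model cannot invent a new action; unrecognized
        # output lands in a safe, predictable state.
        return Action.IGNORE

def handle(action: Action, message: dict) -> None:
    # Each state maps to a hand-written transition, just like any worker.
    if action is Action.REPLY:
        send_reply(message["from"], draft_reply(message["body"]))
    elif action is Action.FORWARD_TO_MANAGER:
        send_reply("manager@example.com", message["body"])  # hypothetical address
    # Action.IGNORE falls through: do nothing.
```

However clever the model, the reachable behavior is exactly the enum: a finite state machine with a very expensive transition function.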
If we ignore that we are still building software that needs to behave predictably, then we will magnify the worst burden that comes with any software agent: maintenance. At every software company I have joined, I was greeted by an army of alerts, monitoring jobs, and workers that nobody quite knew were functioning properly. An alert would go off for a broken system, but upon inspection engineers would find a perfectly working one. What happens with all these jobs, agentic or otherwise, is that people forget to support them. They grow stale. The code for the job doesn't get updated along with the larger applications around it. The workers don't get properly decommissioned and instead haunt naive new employees who are unsure how to interpret them. Countless days are burned trying to debug or determine the relevance of dated software jobs.
We are about to do the exact same thing with all these "AI Agents". Novice engineers who do not think in terms of systems, and product managers concerned with pushing the latest buzzword, are about to introduce a slew of new tech debt. A decade from now, I fully expect to be spending several months deprecating many of these agents.
LLMs and AI certainly have their practical use cases, and there are many applications for incorporating them into existing software jobs. But we need to stop pretending that they are a novel and revolutionary technology. Sam Altman would have you believe that AI agents are like Tony Stark's Jarvis, computing millions of calculations while Tony fights bad guys. They are not. Agentic systems should be designed and built with the same rigor as any other software job or worker. Otherwise, we are simply building an enormous amount of legacy code.