Agents are Minions

I had an idea today: AI agents remind me of minions from "Despicable Me".

They are eager to help and quite often are surprisingly helpful. But at the same time they are chaotic creatures who only understand the narrow context you’ve explicitly provided to them. They always need clear, specific instructions and constant supervision to avoid going off track. And they are dangerous if left unsupervised (especially when given access to things that can blow up).

But they excel at repetitive tasks and will persistently try different approaches until they achieve the desired outcome. And that's their main strength: we can automate many things that we couldn't before.

I often see people expecting agents to be smart in the human way (and I kind of understand why). People expect an agent that "knows" what to do with a little guidance. They want agents to succeed at 100% of the tasks they're given. But in reality agents fail, and often in distinctly non-human ways. And then people get frustrated with the results.

In the end, maybe the real question isn't how to make them less like minions, but how to get better at managing them? Give your agents more specific tasks. Help them plan their work. Don't just hope they won't break anything in the process. Make sure they have everything they need to succeed: context, tools, and clear boundaries.

Maybe "AGI" will turn out to be billions of agents working together rather than a single superintelligent model (an LLM or whatever comes next)?