Coding with AI Agents

Why trust AI agents with your code? Because, when well-handled, they make you a faster software engineer.

Any coding agent is merely a tool. It’s important to manage your expectations. While LLMs are good at imitating humans, they aren’t humans and sometimes fail spectacularly.

Never forget that the engineer is always responsible for the final result. Don’t treat an agent’s output as something produced by someone else; it’s still the product of your work, and you should treat it as your code. If you don’t feel that the generated code represents you, customize and refine your setup until you’re happy with the results.

At a high level, the process consists of the following steps:

  1. Provide coding standards and style

  2. Describe the architecture

  3. Break the task into sub-tasks

  4. Create a detailed prompt

  5. Review the generated output

  6. Prepare feedback based on the output

  7. Refine and jump back to the appropriate earlier step

This loop is fractal. It scales from tiny tools to enterprise systems.
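
Here is a minimal sketch of that loop in Python. The `call_agent` helper is a placeholder for whichever agent you actually use (CLI tool, API client, or IDE integration), and steps 1–3 are assumed to happen before the call: the reusable context and the subtask arrive as plain strings.

```python
def call_agent(prompt: str) -> str:
    """Placeholder: send the prompt to your coding agent and return its reply."""
    raise NotImplementedError("Wire this up to the agent you actually use.")


def refine(context: str, subtask: str, max_rounds: int = 3) -> str:
    """Steps 4-7: prompt the agent, review the output, give feedback, repeat."""
    output, feedback = "", ""
    for _ in range(max_rounds):
        prompt = f"{context}\n\n## Task\n{subtask}"                    # step 4: detailed prompt
        if feedback:
            prompt += f"\n\n## Feedback on the previous attempt\n{feedback}"
        output = call_agent(prompt)
        print(output)                                                  # step 5: review the output
        feedback = input("Feedback (empty line to accept): ").strip()  # step 6: prepare feedback
        if not feedback:
            break                                                      # accepted; otherwise refine (step 7)
    return output
```

The plumbing is trivial on purpose: the point is that a human reviews every round and decides whether to accept, refine, or step back to an earlier stage.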

LLMs shine when the input is larger than the output (e.g. preparing summaries). For instance, give the agent a twenty-page API specification and instruct it to return a lean, two-hundred-line implementation skeleton; the surplus context keeps the model grounded and curbs hallucination.
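
As a rough illustration (the file path and the line budget below are made up), such a prompt states the output constraints first and then pastes in the large input:

```python
from pathlib import Path

# Large input: the full specification. Small output: a constrained skeleton.
spec = Path("docs/payment-api-spec.md").read_text()  # illustrative path

prompt = (
    "Using the specification below, produce an implementation skeleton only: "
    "module layout, public signatures, and docstrings. No function bodies. "
    "Keep it under 200 lines.\n\n"
    + spec
)
```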

A large portion of the input can be reused; extract the common knowledge and make sure it’s accessible to your coding agent.
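
One simple way to do this, sketched below with illustrative file names, is to keep the shared knowledge in a few slowly changing documents and prepend them to every task-specific prompt:

```python
from pathlib import Path

# Reusable knowledge (steps 1 and 2): written once, changes rarely,
# and is prepended to every task-specific prompt. File names are illustrative.
SHARED_DOCS = [
    Path("docs/coding-standards.md"),
    Path("docs/architecture.md"),
]

def shared_context() -> str:
    return "\n\n".join(doc.read_text() for doc in SHARED_DOCS)

# Only the task part changes between requests, e.g.:
# refine(shared_context(), "Add retry logic to the HTTP client.")
```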

Always expect failures. Any agent will sometimes fail to produce meaningful, useful output. Use such cases to learn and to refine your process and setup.

Use AI agents as amplifiers, not autopilots. A disciplined loop turns their speed into code you trust and own.
