Stephan Schmidt - January 2, 2026
Reasons I Found Why AIs Struggle With Coding
Why AIs fall into doom loops on some coding tasks
Coding is over. I have been a proud code monkey for 45 years, but at the end of 2025 I didn’t write a single line of code. Claude Code was doing that for me in several side projects, greenfield and brownfield. Overall, I’m very happy: Claude Code writes the code, and the feature works. Sometimes, though, Claude Code gets confused and falls into a doom loop it can’t get out of.
In a doom loop, it makes a mistake, wants to correct it, but makes it worse, struggles, and ends up where it started, sometimes deleting all changes in the process and declaring the work done. There might be failing tests; Claude Code struggles to fix them, deletes all changes, and declares that the tests pass again.
To understand what is going on, first understand why Claude Code mostly works for me:
Go I use Go for my projects. Go is very explicit, with not a lot of magic going on, and the magic that is there is simple, like defer. Go as a language and API does not change a lot, Go is backwards compatible, and what was written in tutorials five years ago still works. Go has a very fast compiler, so the LLM agent can quickly compile the code to check whether it works, with fast turnaround times.
Moduliths I use Moduliths. All functionality is split into vertical modules; each module contains its SQL (GORM), templating, controllers, business logic, and domain models. The context of each module is limited, and the side effects of changing code are limited, in contrast to layers, where a change in one layer has an impact on many different areas of the code.
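To make the module idea concrete, here is a minimal sketch of what one such vertical module can look like; the package name, fields, and handler are illustrative, not taken from one of my projects:

```go
// Package billing is one vertical module: it owns its domain model,
// its persistence (GORM), its business logic, and its controller.
// Other modules never reach into its internals.
package billing

import (
	"net/http"

	"gorm.io/gorm"
)

// Invoice is the module's domain model, persisted via GORM.
type Invoice struct {
	ID          uint `gorm:"primaryKey"`
	CustomerID  uint
	AmountCents int64
	Paid        bool
}

// Service holds the module's business logic.
type Service struct {
	db *gorm.DB
}

func NewService(db *gorm.DB) *Service {
	return &Service{db: db}
}

// MarkPaid only touches this module's own tables.
func (s *Service) MarkPaid(id uint) error {
	return s.db.Model(&Invoice{}).Where("id = ?", id).Update("paid", true).Error
}

// InvoicesHandler is the module's controller; the module's templates
// would be rendered here as well.
func (s *Service) InvoicesHandler(w http.ResponseWriter, r *http.Request) {
	// ... load invoices, render this module's templates ...
}
```

A change to invoicing stays inside this one package, which keeps the context Claude Code has to reason about small.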
CLAUDE.md and claude_mistakes.md Using a good CLAUDE.md and a claude_mistakes.md file. Every time Claude struggles, I let it add to the mistakes file so it does not make the same mistakes again.
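For illustration, entries in such a mistakes file can be as simple as this (the entries below are made up, not copied from my file):

```markdown
# claude_mistakes.md

- Hugo: partials and shortcodes are different things. Shortcodes are used in
  content files and read their arguments via .Get; partials are called from
  templates with `partial` and receive the context passed to them.
- CustomerID is the internal ID; CustomerNo is the number the customer sees.
  Never use one in place of the other.
```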
Linters I use all the linters I can get my hands on, and Claude runs all of them before declaring it is done. This catches problems that Claude can easily fix; the result is more stable code with fewer bugs.
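For a Go project, the “before you declare it done” check can be as simple as a few commands (which linters you add on top of golangci-lint is a matter of taste):

```sh
go build ./...           # it has to compile
go vet ./...             # standard static analysis
golangci-lint run ./...  # meta-linter that bundles many individual linters
go test ./...            # and the tests have to pass
```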
Plan, plan, plan My workflow is: Plan, plan, plan. I never let Claude Code write code without a plan. I let it create a plan in planning mode, explore the plan, explore the edge cases, then ask Claude to check: (1) whether the plan works with the code base, (2) whether the plan still works after rereading all API documentation, and (3) whether it is confident in the plan (it often finds problems here before it is “confident” the plan works). Only then can Claude start to write the code.
Overall, Claude Code with Sonnet/Opus 4.5 does great work. From time to time, it struggles. Looking at those times, I found the reason is always the same.
Similarities.
Claude Code gets confused by APIs, both web and programming language APIs, and by domain models, when they are too similar. When two different things in one domain, one API, or one project are too similar, it confuses the concepts and uses the wrong one in the code.
Examples:
Implicit knowledge - Hugo I run my website with Hugo. Hugo has lots of magic happening in how partials and templates interact to build the site. Partials and shortcodes are similar; quite often Claude Code creates a shortcode with the API of a partial.
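The two are easy to mix up because they look similar but are called differently and expose different data (simplified; see the Hugo docs for the details):

```html
<!-- A partial is called from a layout template and just uses whatever
     context it is passed (here ".", the current page): -->
{{ partial "header.html" . }}

<!-- layouts/partials/header.html -->
<header>{{ .Site.Title }}</header>

<!-- A shortcode is called from a content (Markdown) file, and its template
     reads its own arguments via .Get and .Inner: -->
{{< note "Tip" >}}Remember this.{{< /note >}}

<!-- layouts/shortcodes/note.html -->
<div class="note"><strong>{{ .Get 0 }}</strong> {{ .Inner }}</div>
```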
Zig I recently experimented with translating Go projects to Zig and Rust. And while Rust worked fine, the AI struggled with Zig. The problem: Zig has changed its API a lot over the last few years, especially around async IO. Claude wanted to implement IO the old way; it did not compile, it tried to change it, that did not work, it changed it back and failed again, entering a doom loop it did not get out of.
Zoom API For my coaching operating system, which helps me manage the many CTO clients I have, I integrated the Zoom API. In Zoom there are different calls that do the same thing, and concepts that are similar, like meeting transcripts and meeting summaries. Claude was confused and couldn’t make it work. Because of the density of the API it also started to hallucinate non-existent API calls, ones that are not there but should be. Having not seen Claude hallucinate for a long time, I was surprised how quickly it started to hallucinate when the concepts were too similar.
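The two concepts even sit right next to each other in the API; roughly (the paths below are quoted from memory and should be treated as assumptions, not exact Zoom endpoints):

```go
// Package zoom illustrates how close the two concepts sit in the API.
package zoom

const (
	// The AI-generated meeting summary lives under the meeting itself.
	meetingSummaryPath = "/v2/meetings/%s/meeting_summary"
	// The transcript is one of the files returned by the cloud
	// recordings call (a recording file of type "TRANSCRIPT").
	meetingRecordingsPath = "/v2/meetings/%s/recordings"
)
```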
CustomerID and CustomerNo In a project, I have CustomerID and CustomerNo, which are different things. The ID is the internal ID, and the customer number is the official number for the customer to see. When integrating with a payment API, which also has these concepts, Claude was confused and tried to use the ID in places where the customer number should be used, and vice versa.
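One way to make this mix-up harder, for Claude and for humans, is to give the two values distinct types so the compiler complains when they are swapped. A sketch, not code from the actual project:

```go
package customer

// CustomerID is the internal database identifier.
type CustomerID int64

// CustomerNo is the official customer number shown to the customer
// and expected by the payment API.
type CustomerNo string

// ChargeRequest goes to the payment provider and must carry the
// customer number, never the internal ID.
type ChargeRequest struct {
	No          CustomerNo
	AmountCents int64
}

// With distinct types, the compiler catches the swap:
//
//	var id CustomerID = 42
//	_ = ChargeRequest{No: id} // compile error: cannot use id (CustomerID) as CustomerNo
```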
Similar (templating) languages One project, for various reasons, uses three different templating engines: JET for headers and footers, a-h/templ for component templating, and Go text templates for sending emails. Claude Code consistently confuses those three, using Go syntax in JET templates or JET syntax in a-h/templ templates.
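Seen side by side, it is easy to understand why; roughly how the same kind of thing looks in each engine (the JET line is from memory, so take it as an approximation):

```text
Go text/template (emails):     Hello {{ .Name }}
                               {{ block "header" . }} ... {{ end }}

JET (headers and footers):     {{ block header() }} ... {{ end }}

a-h/templ (components):        templ Hello(name string) {
                                   <div>Hello, { name }</div>
                               }
```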
What can we learn from this? The technologies you select for your project can’t be too similar to each other. The way LLMs work, they get easily confused when two different things in one domain, one API, or one project are too similar. Concepts need to be clear-cut. What I have seen in the past, the same name used for two different things (something developers could work with), or different names for the same thing (User, Customer, etc.) used by different departments, no longer works.
About me: Hey, I'm Stephan. I help CTOs with coaching, bringing 40+ years of software development and 25+ years of engineering management experience. I've coached and mentored 80+ CTOs and founders. I've founded 3 startups. 1 nice exit. I help CTOs and engineering leaders grow, scale their teams, gain clarity, lead with confidence, and navigate the challenges of fast-growing companies.
