
Want to know how to build a well-architected 60,000-line project from scratch in 6 weeks, without writing a single line of code by hand? This was my second AI-first project, and it went smoothly, using techniques and prompts from the first one. (See Postscript below if you're curious about how things turned out.)

This approach relies heavily on human software engineering expertise & judgment, and has been optimised for: a greenfield project, a solo experienced developer, an ideal architecture for AI-first coding, and speedy progress, as of June 2025 (Claude 4, o3, Gemini 2.5).

Of course, if you're developing as part of a team, on a larger legacy codebase, with users in production, and place a premium on reliability, then you have a completely different set of constraints. If so, then human-first, AI-assisted programming makes more sense for now (i.e. treating AI like a junior pair programmer).

But as models improve, AI-first will eventually become the dominant mode of development. The AI will write pretty much all the code, with the human as engineering & product manager.

Let go emotionally of the idea that you are there to write code.

Your job (as developer) is to be a manager of an AI team of developers.

There's an old joke: "In the future, all factories will have two employees - a person and a dog. The person is there to feed the dog, and the dog is there to bite the person if they try to touch the machines."

I found it very hard to resist the temptation to obsess over every line of code, especially for the bits I like building. To break that habit, it helped enormously at first to be using TypeScript & React for this project, because I don't know either of them very well! It forced me to apply generalised best practices and solve meta-problems (how to instruct the AI to make good decisions itself). But there have been lots of moments where it would have helped a lot if I knew React better.

Take joy in the pace at which your vision becomes real, in product and architectural decisions, in meta-building (building the factory that in turn builds the product), and in the value people get from what you've built.

AI-first development is a skill

AI-first coding is a skill that can be learned. It is also a toolset that you can build or adopt. It's immature, and changing fast. It's a bazooka strapped to a chainsaw, and the default is that you cut off your arm rather than build a house with it. Steve Yegge puts this well.

Work as though AI is free, instantaneous, and smarter than you.

This will be true in the not-so-distant future, and we can live in that future today by working around the ways in which we're not there yet.

"If I have six hours to chop down a tree, I will spend the first four sharpening my axe".

A lot of the software engineering best practices that we've known about for a long time are quite effortful. With AI, the effort/cost has dropped enormously, because you're not the one who actually has to write the tests or update the docs. And if anything, the benefits are even greater, because a good doc makes it much more likely that the AI will follow the right coding style, reuse existing machinery, avoid gotchas, and generally make good decisions. So the cost-benefit ratio of good practices is suddenly much higher for AI-first coding.

For big features, use the following workflow

Sounding board mode

Write a planning doc.

Execute the planning doc, stage-by-stage.

Invest in your evergreen documentation

(Obviously don't write the docs yourself.)

Pick AI-friendly architecture

What makes for an AI-friendly choice?

Optimise for correctness, throughput, and minimal human intervention/blocking time, rather than latency, wall-clock time, or AI effort

Multiple models

Invest in your rules file

I usually create a RULES.md, and then symlink CLAUDE.md, GEMINI.md, .cursorrules, etc to it, because these get fed in automatically in every conversation. This is the file I pay the most attention to in the whole codebase.
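Concretely, that setup is one canonical file plus tool-specific aliases (assuming a Unix-ish environment with `ln`):

```shell
# One canonical rules file for the whole repo.
touch RULES.md

# The tool-specific names are just symlinks to it, so every AI
# assistant automatically reads the same rules in every conversation.
ln -sf RULES.md CLAUDE.md
ln -sf RULES.md GEMINI.md
ln -sf RULES.md .cursorrules
```

Edit RULES.md once, and every tool picks up the change.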

Give it access to the same information that a human developer has, and depends on, to succeed at their job

For example:

Prefer Claude Code if you can put up with the UI

Mainly because of a) subagents; b) /compact; and c) I trust Anthropic to get the best out of their own models, with fewer weird gotchas and misaligned incentives.

That said, o3 is phenomenal for writing and debugging code. And the Claude Code CLI interface is painful for editing: it doesn't stream the output, it's hard to see the changes that have been made, it's a bit buggy, and it's hard to scan through a long conversation 🙁 So you'll also need Cursor (or perhaps Windsurf) for editing by hand, and for non-Claude models.

Context management is key

The models need all and only the relevant context.

Be as lax with permissions as you can, but no more

For models I trust, allow them to run most commands, except Git commits and anything destructive or hard to reverse. Provide guidance, especially in your rules file. But for this to work, you also need to plan for the worst, and have good safety nets. Here's a more detailed set of recommendations:
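As one sketch of what lax-but-bounded permissions can look like in practice, assuming Claude Code's `.claude/settings.json` permissions format (the specific rules here are illustrative, not an endorsed list):

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run lint)",
      "Bash(npm run test:*)"
    ],
    "deny": [
      "Bash(git commit:*)",
      "Bash(git push:*)",
      "Bash(rm -rf:*)"
    ]
  }
}
```

The intent is that routine commands run without interruption, while commits and destructive commands stay gated behind human approval.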

"Don't say you're sorry - just don't do it again"

Your job is to build a factory that in turn builds the product. "Fix inputs, not outputs" (thanks to John Rush for this framing!). In other words, when your 'factory' produces the wrong output, improve the inputs (i.e. the setup/automatic scripts/instructions/prompts) rather than fixing the code by hand yourself.

Get used to a different cadence

Progress will feel very bursty. There will be troughs, and things will sometimes feel out of control. Your bicycle has been upgraded to a broomstick, your lightsaber has become a bazooka. On a good day, you’ll co-produce literally dozens of features and many thousands of production-ready lines of code. But there will be days when nothing works, and you don't feel you understand your own codebase. If you get stuck in a broken state:

Quality control and automated testing

Overall, blended productivity is driven by the number of times we get stuck in quicksand, rather than how fast we can go in bursts at top speed.

And the AI still drives into quicksand pretty regularly, getting into a broken state that takes a little while of both AI and human time to unwind. And it still does dumb things and papers over the cracks with bandaids. Even taking all this into account, the rate of progress is worth it. But it's painful sometimes, and I'm sure we can do better here.

Automated testing is part of the answer, though I'm still experimenting with the right approach.

Here's my current approach:

In practice, I find that I'm still doing a lot of manual review & testing at the end of each planning doc stage. This is a nuisance, but as the AIs get better, they can correctly do larger and larger chunks of work, and fix bugs quickly. So my main goal for automated testing is to notice regressions, i.e. so that the AI will catch new bugs that it accidentally introduces as it changes code.

Invest in handy scripts

MCP is great. But it's complicated, hungry for context, non-deterministic, and often overkill. So if you find yourself wanting to do the same thing often, you're much better off asking the AI to write a script (referring to your code guidelines doc for command-line libraries and patterns), with nice documentation, optional arguments, examples in your rules file, etc. Then you can just tell it to call that script later.

e.g. to sync across your Git worktrees, generate date/time prefixes for your file names, restart the dev webserver in the background, etc.
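For instance, the date/time-prefix helper might be nothing more than this (the name and format are illustrative, not from the original post):

```shell
# Hypothetical helper: print a sortable date/time prefix for file
# names, e.g. 250730_1947_planning-doc.md
datetime_prefix() {
  date +%y%m%d_%H%M_
}

datetime_prefix
```

Trivial, but once it's a named script with documented behaviour, the AI can be told "use `datetime_prefix` for new doc file names" instead of re-deriving the convention each time.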

Run multiple Git worktrees

Each worktree should have its own port, dev server, desktop, etc - but share the same dev database.
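A minimal sketch of that setup (repo, branch, and env values are illustrative; assumes Git with `git worktree` support):

```shell
# Throwaway repo for demonstration; in practice, run this in your project.
git init -q demo && cd demo
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "init"

# Second worktree, checked out on its own branch in a sibling directory.
git worktree add ../demo-feature-x -b feature-x

# Per-worktree port so both dev servers can run at once, but the same
# DATABASE_URL so they share one dev database.
printf 'PORT=3001\nDATABASE_URL=postgres://localhost/app_dev\n' \
    > ../demo-feature-x/.env.local
```

Each worktree then gets its own AI session, dev server, and desktop, without the sessions stepping on each other's uncommitted changes.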

Misc tips

These look promising but I haven't properly tried them yet

Postscript

I'm about to launch Spideryarn in alpha (a tool for researchers to get more from what they read), where every line of code was written by AI. And before that, AI did the late-stage work on Hello Zenno.

If you found this useful, subscribe for monthly posts on software, AI, and management.

Or drop me a line at consulting@gregdetre.com if you'd like help training your software engineering team on AI techniques or building products with AI.


Acknowledgements

Thank you to Marc Zao-Sanders, Johnnie Ball, Joshua Wohle, Ed Dowding, Ian Broom, David Hathiramani, Peter Nixey, Glenn Smith, Ian Strang, Martijn Verburg, for ideas and comments.

Useful references

On the importance of managing context

Other field guides:

Prompt engineering:

Comments welcome here

www.makingdatamistakes.com/ai-first-dev...

Greg Detre (@gregdetre.bsky.social) 2025-07-30T19:47:57.246Z