The 70% Problem: Why AI-Assisted Coding Is Amazing — and Frustrating
AI-based coding tools have become astonishingly good at certain tasks. They excel at generating boilerplate code, writing routine functions, and moving projects most of the way toward completion. In fact, many developers find that an AI assistant can quickly implement an initial solution that covers about 70% of the requirements.
A tweet from Peter Yang captures this experience perfectly:
Honest reflections from coding with AI so far as a non-engineer:
It can get you 70% of the way there, but that last 30% is frustrating. It keeps taking one step forward and two steps backward with new bugs, issues, etc.
If I knew how the code worked, I could probably fix it myself. But since I don’t, I question if I’m actually learning that much.
Non-engineers (and even some engineers) using AI find themselves hitting the same wall:
AI gets you surprisingly far, surprisingly fast—but that final 30% becomes a grueling battle.
The Magic — and the Reality Check
The “70% problem” reveals a deeper truth about AI-assisted development.
The first 70% feels magical:
You describe what you want, and tools like v0 or Bolt produce working prototypes that look impressive. But then reality sets in.
That 70% is usually the straightforward, patterned work—code that follows well-trodden paths or common frameworks.
As one commenter on Hacker News put it:
AI is superb at handling the accidental complexity of software—the repetitive, mechanical parts—while the essential complexity, the deeper problem-solving and architectural thinking, still rests on human shoulders.
In Fred Brooks’ classic language:
AI tackles the accidental difficulties, but not the essential ones.
Where AI Falls Short
The trouble begins in the last mile.
AI tools can create plausible solutions, but the final 30%—covering edge cases, maintainability, performance, and scalability—still requires serious human expertise.
An AI-generated function might technically “work” for basic scenarios, but it often:
- Fails to handle unusual inputs,
- Ignores race conditions,
- Misses performance constraints,
- Doesn’t anticipate future requirements.
Without explicit human intervention, AI misses these subtleties.
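To make that concrete, here is a hypothetical sketch (the function, inputs, and failure cases are invented for illustration): the kind of helper an assistant happily generates, which passes the obvious demo yet embodies exactly the gaps listed above.

```typescript
// Hypothetical illustration: a price parser that "works" for the demo input.
function parsePrice(input: string): number {
  return parseFloat(input.replace("$", ""));
}

parsePrice("$19.99");  // 19.99, the happy path the demo exercised

// Unusual inputs it mishandles, silently:
parsePrice("19,99 €"); // 19, comma-decimal locales lose the cents
parsePrice("");        // NaN, no validation and no error signal
parsePrice("$1e309");  // Infinity, no range check
```

None of these failures shows up in a quick demo; they surface in production, which is exactly where the last 30% lives.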
The Reliability Trap
Another major problem:
AI often generates convincing but incorrect output.
It might introduce subtle bugs or “hallucinate” functions and libraries that don’t actually exist.
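Here is a hedged sketch of what a hallucination can look like, using lodash as the example library; the hallucinated name is invented, while the working call is the real API.

```typescript
import _ from "lodash";

const defaults = { retries: 3, timeout: { connect: 5 } };
const overrides = { timeout: { read: 10 } };

// A model might confidently emit this; lodash has no `deepMerge`,
// so it fails at runtime with "_.deepMerge is not a function":
// const config = _.deepMerge(defaults, overrides);

// The call that actually exists (and does merge recursively):
const config = _.merge({}, defaults, overrides);
// config: { retries: 3, timeout: { connect: 5, read: 10 } }
```

The invented name is plausible precisely because it resembles real APIs, which is what makes this failure mode so easy to miss in review.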
Steve Yegge famously described today’s LLMs as:
Wildly productive junior developers — incredibly fast and enthusiastic, but potentially whacked out on mind-altering drugs, prone to inventing crazy or unworkable approaches.
At a glance, the code may look polished.
But without experienced eyes on it, the flaws, and sometimes outright disasters, may not surface until weeks later.
Simon Willison also warned about AI proposing clever-looking designs that only a senior engineer could immediately recognize as deeply flawed.
Key lesson:
AI’s confidence far exceeds its reliability.
AI’s Fundamental Limits
Current AI systems:
- Remix known patterns — they don’t invent new abstractions or algorithms.
- Offer no true strategic thinking — they can’t design new architectures or solutions.
- Take no responsibility for decisions.
As one engineer succinctly put it:
AIs don’t have better ideas than their training data. And they don’t stand by their work.
Human Judgment Still Rules
This leaves the true creative and analytical work—deciding what to build, how to structure it, and why—firmly in human hands.
To summarize:
- AI is a productivity booster, a turbocharger for the repetitive 70% of the work.
- But it’s not a silver bullet.
- The hard parts of software engineering—the final 30%—still require skilled, thoughtful developers.
As one discussion concluded:
AI is a powerful tool, but it’s not a magic bullet… human judgment and good software engineering practices are still essential.
Two Ways Teams Are Winning with AI
Watching how teams actually use these tools, two dominant patterns have emerged:
The Bootstrappers and The Iterators.
1. The Bootstrappers
These teams are using AI to go from idea to MVP incredibly fast.
Their typical workflow:
- Start with a design or rough concept,
- Use AI (like Bolt, v0, or screenshot-to-code tools) to generate a complete initial codebase,
- Get a working prototype in hours or days instead of weeks,
- Focus on rapid validation and iteration.
Example:
I recently watched a solo developer use Bolt to turn a Figma design into a live web app within hours.
It wasn’t ready for production, but it was perfect for gathering initial user feedback.
2. The Iterators
This group integrates AI deeply into their daily development flow, using tools like:
- Cursor
- Cline
- Copilot
- Windsurf
Their typical usage:
- Code completion and intelligent suggestions,
- Automated refactoring,
- Test and documentation generation,
- Using AI as a “pair programmer” for complex problem-solving.
But There’s a Catch
When you watch senior engineers using AI tools, it looks almost magical:
They scaffold entire features in minutes, complete with tests and documentation.
But if you pay close attention, you’ll notice:
- They don’t blindly accept AI’s suggestions.
- They refactor aggressively into smaller, focused modules.
- They strengthen error handling and expand edge case coverage.
- They question architectural decisions made by the AI.
- They enhance types and interfaces meticulously.
In short:
AI accelerates their work, but experience and judgment keep the output solid, maintainable, and future-proof, as the before-and-after sketch below shows.
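A hypothetical before-and-after, assuming a `/api/users/:id` endpoint; both functions are invented for illustration. The first version is the kind of draft an assistant produces, the second is what it looks like after the review pass described above.

```typescript
// Draft: works on the happy path, assumes everything succeeds.
async function getUserName(id: string): Promise<string> {
  const res = await fetch("/api/users/" + id);
  const data = await res.json();
  return data.name;
}

// After review: explicit types, error handling, and a validated contract.
interface User {
  id: string;
  name: string;
}

async function getUserNameReviewed(id: string): Promise<string> {
  const res = await fetch(`/api/users/${encodeURIComponent(id)}`);
  if (!res.ok) {
    throw new Error(`Failed to load user ${id}: HTTP ${res.status}`);
  }
  const data = (await res.json()) as User;
  if (typeof data.name !== "string") {
    throw new Error(`Malformed user payload for ${id}`);
  }
  return data.name;
}
```

The diff is small, but it is precisely the part the AI did not write on its own.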
Final Thought
AI isn’t making good developers obsolete.
It’s making good developers faster—and making great developers absolutely unstoppable.
But without that human expertise, relying on AI alone is like flying an airplane on autopilot with no one in the cockpit.
The future belongs to those who can master both:
Harness the 70% boost — and conquer the crucial final 30%.