What AI Coding Got Right and What It Still Can’t Replace


Last Tuesday, your AI assistant fixed a bug in ten seconds that would've taken you an hour to track down manually. It generated clean code, handled the edge cases, and you shipped the feature before lunch.

This morning, that same AI confidently suggested code using a library that doesn't exist. You wasted twenty minutes before realizing it had completely fabricated the solution.

Welcome to coding in 2025, where 82% of developers use AI assistants daily or weekly, yet nobody's quite sure whether these tools are revolutionary partners or sophisticated autocomplete with confidence issues.

The truth is more nuanced than either extreme. AI coding has transformed how we build software, but not in the ways the hype suggested it would. After working with these tools across hundreds of projects, I've seen a clear pattern: AI excels at solving known problems with established patterns, but struggles the moment you step outside those boundaries.

Let me walk you through what AI coding actually got right versus what still requires human judgment—and why understanding this distinction matters more than ever as these tools become standard parts of developer workflows.

The genuine breakthroughs: where AI coding delivers

Speed that actually changes workflows

The productivity gains aren't marketing fluff. Research from MIT, Princeton, and the University of Pennsylvania analyzing over 4,800 developers found a 26% increase in completed tasks when using AI assistants. A separate controlled experiment found that developers finished a benchmark coding task 55.8% faster with an AI assistant than without one.

But hold on—this isn't about typing faster. The real transformation happens in how these tools compress the feedback loop between idea and working code. When you're prototyping a new feature, AI assistants let you test five different approaches in the time it used to take to implement one.

In my personal experience, this speed advantage is most dramatic for boilerplate-heavy tasks. Setting up API endpoints, writing test scaffolding, generating database migrations—the kind of code you've written dozens of times but still needs to be done correctly. AI tools handle this in seconds while you focus on the interesting problems.
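To make that concrete, here is a minimal sketch of the kind of boilerplate AI reliably produces: a request-validation helper for a hypothetical user-creation endpoint. Every name here (`CreateUserInput`, `validateCreateUser`) is illustrative, not from any real framework.

```typescript
// Illustrative example of validation boilerplate: mechanical code that
// needs to be correct but holds no interesting design decisions.

interface CreateUserInput {
  email: string;
  name: string;
}

// Returns a list of validation errors; an empty list means the input is valid.
function validateCreateUser(body: Record<string, unknown>): string[] {
  const errors: string[] = [];
  if (typeof body.email !== "string" || !body.email.includes("@")) {
    errors.push("email must be a valid address");
  }
  if (typeof body.name !== "string" || body.name.trim().length === 0) {
    errors.push("name is required");
  }
  return errors;
}
```

Nothing in this function is hard; it's just tedious to write and easy to get subtly wrong when you've done it fifty times before. That's the sweet spot for generation.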

Context awareness that keeps improving

Modern AI assistants don't just generate code in isolation. They understand your entire codebase, your naming conventions, your architectural patterns, and even your team's coding style. When you're working in a Next.js project using Stripe, the AI suggests routes and payment handlers specific to those technologies.

This contextual understanding eliminates the constant context-switching that used to slow development. You don't need to search documentation, check existing implementations, or remember syntax details. The AI provides suggestions that match your project's existing patterns.

Let me elaborate on why this matters: developers with better context report 1.3× higher likelihood of seeing code quality gains. When AI suggestions align with your codebase architecture and team conventions, you spend less time reviewing and more time building.

Learning acceleration for developers at all levels

Here's something that surprised me: AI assistants have become powerful learning tools. Younger developers aged 18-34 are twice as likely to use AI coding assistants, but not just for productivity—they're using them to learn new frameworks and languages faster.

The key insight: AI can show you how experienced developers approach problems you haven't solved before. When you're learning Go's concurrency patterns or React's new hooks system, AI assistants provide working examples that follow current best practices.

For context, junior developers see disproportionate benefits from AI tools. The MIT research found that less experienced developers gained more from AI assistance than their senior counterparts, helping close experience gaps within teams.

The boilerplate elimination breakthrough

This is where AI coding truly shines. Authentication flows, CRUD operations, form validation, API route handlers—the foundational code that every application needs but nobody wants to write for the thousandth time. By some estimates, AI now generates 41% of all code globally, with much of that being exactly this kind of repetitive infrastructure work.

To be clear, this isn't about lowering code quality. Good AI-generated boilerplate follows established patterns, includes proper error handling, and implements security best practices. For SaaS founders particularly, this means you can focus development resources on features that differentiate your product rather than rebuilding authentication systems.

Speaking of SaaS foundations, this is exactly why modern SaaS boilerplates combine human-built architecture with AI-accelerated development. You get the structural decisions from experienced developers while AI handles the implementation details.

The critical gaps: what AI still gets wrong

The hallucination problem that won't disappear

Let's address the elephant in the room: 25% of developers estimate that 1 in 5 AI-generated suggestions contain factual errors or misleading code. These aren't minor syntax issues—they're confident suggestions for functions that don't exist, packages that were never published, or APIs that work completely differently than the AI claims.

You might be wondering why this happens. AI models predict what code "should" look like based on patterns, but they don't actually verify that the packages, methods, or APIs they reference are real. They generate plausible-looking code that can pass syntax checks but fails when you try to run it.

The insidious part? Research analyzing 16 LLMs found that 21.7% of package names recommended by open-source models were complete hallucinations. Even commercial models showed 5.2% hallucination rates. What makes this dangerous for software supply chains is that attackers can create malicious packages matching these hallucinated names—a threat researchers are calling "slopsquatting."
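A lightweight defense against slopsquatting is mechanical: before installing anything an AI suggests, check the name against dependencies your team has already vetted. A minimal sketch, with a hard-coded allowlist standing in for what would really be your lockfile or an internal registry:

```typescript
// Pre-install guard sketch: flag AI-suggested dependencies the team has
// never vetted. The allowlist is hard-coded here for illustration only;
// a real version would read it from your lockfile or internal registry.

const knownPackages = new Set(["react", "next", "stripe", "zod"]);

// Returns the subset of suggested package names that are not on the allowlist.
function unvettedPackages(suggested: string[]): string[] {
  return suggested.filter((name) => !knownPackages.has(name));
}
```

A check this simple won't catch a malicious package that shares a name with a real one, but it does stop the specific failure mode above: installing a name that only ever existed in a model's output.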

I can see the skepticism: "Just verify the code before using it." That's absolutely the right approach, but it fundamentally changes the productivity equation. When you need to carefully review and test every AI suggestion, you're not saving as much time as the raw generation speed suggests.

Architecture decisions that need human judgment

AI can write individual functions brilliantly. What it struggles with is determining whether those functions should exist in the first place.

For context, architectural decisions involve tradeoffs between competing concerns: performance versus maintainability, flexibility versus simplicity, current needs versus future scalability. These aren't pattern-matching problems—they require understanding business context, team capabilities, and long-term strategy.

Let me walk you through a real scenario: You're building a SaaS product with both individual and team accounts. AI can generate perfect code for either model. What it can't do is analyze your target market, evaluate your pricing strategy, assess your support capacity, and decide which architecture supports your business model better.

This being said, the most successful developers in 2025 use AI for tactical implementation while maintaining strategic control. They make the architectural decisions, then delegate the mechanical work to AI assistants.

Security vulnerabilities hiding in clean code

Here's what concerns security teams: research found vulnerabilities in 29.1% of AI-generated Python code, with 6.4% containing secret leakage issues. The code looks clean, passes basic security scans, and might even follow security best practices for simple cases—but misses the subtle edge cases that create real vulnerabilities.

And this is where AI coding reveals its pattern-matching nature. It generates code that looks like secure implementations, but doesn't truly understand the security model. SQL injection protection might work for simple queries but fail for complex joins. Authentication middleware might handle standard flows but miss crucial edge cases.
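The classic illustration of this gap is query construction. Here is a sketch contrasting the pattern AI sometimes produces, splicing input directly into SQL text, with the parameterized form that drivers like node-postgres expect; both functions are hypothetical:

```typescript
// Unsafe: user input becomes part of the SQL text itself, so a crafted
// email value can rewrite the query (classic SQL injection).
function unsafeFindUser(email: string): string {
  return `SELECT * FROM users WHERE email = '${email}'`;
}

// Safer: the input travels as a bound parameter and is never interpreted
// as SQL. The { text, values } shape mirrors node-postgres query configs.
function safeFindUser(email: string): { text: string; values: string[] } {
  return { text: "SELECT * FROM users WHERE email = $1", values: [email] };
}
```

Both versions "work" on happy-path input, which is exactly why the unsafe one can survive a casual review of generated code.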

The practical implication: every line of AI-generated code needs security review with the same rigor you'd apply to code from a junior developer who's never worked in production systems. For projects requiring enterprise-grade security, that human review becomes non-negotiable.

Business logic that doesn't understand business

AI assistants can implement algorithms perfectly. What they struggle with is understanding why you need that algorithm and whether it solves the actual business problem.

In my personal experience, this gap is most obvious when building domain-specific features. AI can generate beautiful code for processing subscription proration, but it doesn't understand your pricing strategy, your customer segments, or whether proration aligns with how you want to handle plan changes.

For SaaS founders specifically, your competitive advantage lives in business logic that reflects deep understanding of customer problems. AI can accelerate implementation once you've defined the requirements, but it can't determine what those requirements should be. That's why successful SaaS development approaches focus on clearly defining your unique value proposition before writing code.

Complex system integration where context matters

Modern applications rarely exist in isolation. They integrate with payment processors, CRM systems, analytics platforms, authentication providers, and dozens of other services. Each integration involves nuances that AI struggles to grasp.

Let me elaborate: When integrating Stripe payments, AI can generate code that handles successful charges. But does it handle failed payments correctly? Does it process webhook events in the right order? Does it implement proper idempotency? Does it account for network failures and retry logic?
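Idempotency is a good example of the detail that gets missed. A minimal sketch, assuming events carry a unique id (as Stripe webhook events do), with an in-memory Set standing in for the durable store a real system would need:

```typescript
// Webhook idempotency sketch: apply each event's side effect at most once,
// even when the provider redelivers the same event after a timeout or retry.
// A real implementation would persist processed ids in a database, not a Set.

const processedEvents = new Set<string>();

// Returns true if the event was applied, false if it was a duplicate delivery.
function handleWebhook(eventId: string, apply: () => void): boolean {
  if (processedEvents.has(eventId)) {
    return false; // already processed; skip the side effect
  }
  processedEvents.add(eventId);
  apply();
  return true;
}
```

Generated webhook handlers often skip this check entirely, which works fine in testing and then double-charges or double-provisions the first time a delivery is retried in production.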

These integration details require understanding how systems interact over time, often across network boundaries with unpredictable failure modes. AI assistants struggle with the subtleties of human collaboration and aligning with project-specific conventions—challenges that multiply when dealing with external system integrations.

The reality for production systems: You need developers who understand distributed systems, failure modes, eventual consistency, and other concepts that don't emerge from pattern-matching code examples.

The collaboration gap: working with teams and stakeholders

Communication that requires human nuance

Software development isn't just about writing code—it's about understanding requirements, clarifying ambiguity, negotiating tradeoffs, and explaining technical decisions to non-technical stakeholders. AI can't participate in these conversations meaningfully.

You get the idea: When a product manager asks whether a feature is feasible, they're not asking for code—they're asking for judgment. They want to understand effort, risk, alternatives, and implications. AI can't assess whether a technical approach aligns with business strategy or team capabilities.

Now this might have been obvious, but it's worth stating explicitly: developers have shifted from being pure producers to becoming curators and orchestrators who shape how AI intelligence is applied to real business problems. The communication skills matter more than ever, not less.

Code review and quality standards

Here's something every developer knows: code quality isn't just about working functionality. It's about maintainability, readability, consistency, performance, and a dozen other concerns that vary by context.

AI can generate code that works. What it struggles with is understanding your team's quality standards. Should this function be split into smaller pieces? Does this naming convention match your codebase? Is this abstraction appropriate for the complexity level? These questions require judgment based on team context.

Based on my experience, the most effective use of AI in team environments involves establishing clear code standards that both humans and AI follow. Senior developers define patterns and conventions, then AI helps implement them consistently. But the standard-setting itself remains a human responsibility.

The economic reality: what this means for developers

Job transformation, not elimination

The data contradicts the replacement narrative. 76% of developers either use AI coding tools or plan to adopt them soon, yet hiring demand for developers remains strong. Why? Because AI increases what each developer can accomplish; it doesn't eliminate the need for developers.

For context, Google reports that 25% of their code is now AI-assisted, yet CEO Sundar Pichai stated they plan to hire more engineers because AI has expanded what's possible. The opportunity space is growing faster than AI can compress development timelines.

This being said, the skills developers need are evolving. Pattern-matching code production becomes less valuable as AI handles more implementation. Strategic thinking, architectural design, business understanding, and communication skills become more critical.

The value of specialized knowledge

AI assistants work brilliantly with common patterns and popular frameworks. They struggle with specialized domains, proprietary systems, and unique business logic. This creates opportunities for developers who build deep expertise in specific areas.

Let me walk you through why this matters: If you're building standard CRUD applications with common frameworks, AI compresses your competitive advantage. But if you're building industry-specific solutions that require domain expertise—healthcare systems, financial platforms, regulatory compliance tools—your specialized knowledge becomes more valuable, not less.

The strategic implication for developers: invest in deep expertise that complements AI capabilities rather than competing with them. Understand complex business domains, master system architecture, develop security expertise. These skills amplify AI's strengths while compensating for its weaknesses.

What changes for founders and businesses

For SaaS founders particularly, AI coding changes the build versus buy calculation. Tasks that used to require full development teams can now be handled by smaller teams moving faster. But the strategic work—defining what to build and why—requires just as much human judgment as before.

You might be wondering how this affects development approaches. The answer: it makes starting with proven foundations even smarter. When AI can accelerate custom development significantly, beginning with a battle-tested SaaS boilerplate that handles authentication, payments, and infrastructure lets you focus AI-accelerated development on features that differentiate your product.

The economics shift from "can we afford to build this?" to "what's the fastest path to validating this with customers?" AI doesn't eliminate the need for strategic thinking—it makes strategic thinking more important because you can execute faster once you've determined the right direction.

Working effectively with AI assistants in 2025

Treating AI as a junior developer who needs review

The most productive mental model I've found: treat AI assistants like talented junior developers who write code quickly but need senior oversight. They're excellent at implementation once you've provided clear direction, but they need guidance on approach, architecture, and edge cases.

This means establishing review processes for AI-generated code that match what you'd do for human team members. Does the code handle errors properly? Does it follow your security standards? Does it integrate cleanly with existing systems? Teams that review AI output report quality improvements far more often (81% in one survey) than teams that skip review to move fast.

And this is where AI coding reveals its true value: when you treat it as a collaboration tool rather than a replacement for thinking. The AI handles mechanical work while you maintain strategic control and ensure everything fits together coherently.

Prompting strategies that get better results

AI coding quality depends heavily on how you interact with it. Vague prompts produce generic code. Specific prompts with clear context generate much better results.

For context, effective prompts include: the specific technology stack you're using, the architectural patterns you follow, the edge cases you need handled, the error conditions to consider, and the integration points with existing code. The more context you provide, the more likely the AI generates code that actually fits your needs.

Let me elaborate on a practical example: Instead of "write a function to process payments," try "write a TypeScript function that processes Stripe payments for our subscription SaaS, handles webhook verification, manages idempotency keys, and integrates with our existing user management system that uses Supabase for data storage." The second prompt leverages AI's context understanding to generate much more appropriate code.

Leveraging AI for documentation and testing

Here's where AI assistants consistently deliver value with less risk: generating documentation, writing test cases, and creating code comments. 41% of engineers now use AI tools for documentation generation, reducing time spent on non-coding tasks.

The key advantage: documentation and tests don't directly affect production behavior. If AI generates imperfect documentation, you can refine it. If test cases miss edge cases, you can add more. The risk of AI errors in these areas is lower than in production code, while the time savings remain substantial.

In my personal experience, using AI for test scaffolding, API documentation, and inline comments has saved countless hours while maintaining code quality. You still review everything, but you're editing rather than writing from scratch—a much faster process.

When to trust AI versus when to write manually

Pattern recognition helps here. Trust AI for: standard CRUD operations, boilerplate code, common algorithmic implementations, framework-specific patterns, and API integrations with well-documented services.

Write manually when dealing with: complex business logic specific to your domain, security-critical code paths, architectural decisions that affect system design, novel algorithms or approaches, and edge cases unique to your application.

Based on my experience working with AI tools extensively, this division of labor maximizes productivity while maintaining quality. You're not trying to do everything with AI or nothing with AI—you're strategically choosing where AI adds value versus where human expertise is essential.

The path forward: building with AI in 2025

Hybrid workflows that combine strengths

The most effective development approaches in 2025 treat AI as one tool among many, not the solution to everything. You use AI for rapid prototyping and boilerplate generation. You use human judgment for architectural decisions and business logic. You use automated testing to verify AI-generated code. You use code review to catch subtle issues.

For SaaS development specifically, this hybrid approach means starting with proven infrastructure—whether through carefully selected boilerplates or custom foundations built by experienced developers—then using AI to accelerate the custom feature development that differentiates your product.

This might be surprising: 59% of developers use three or more AI tools regularly, mixing different assistants for better results. The future isn't about finding one perfect AI solution; it's about orchestrating multiple tools effectively.

Skills developers actually need

As AI handles more implementation work, developer value increasingly comes from: understanding business requirements deeply enough to define what to build, making architectural decisions that AI can't evaluate, reviewing and debugging AI-generated code effectively, communicating technical tradeoffs to non-technical stakeholders, and integrating AI capabilities into effective development workflows.

Let me walk you through why this matters for your career: developers who can translate business problems into technical solutions, architect systems that scale, and work effectively with both AI tools and human teams remain in high demand. The implementation details become less differentiating as AI improves, but the strategic and communication skills become more valuable.

What changes for how we learn development

Here's something that surprised me: developers use AI assistants primarily to learn and practice, not just boost productivity. This has major implications for how people develop technical skills.

The concern: if you rely on AI too heavily early in your development journey, you may not build the foundational understanding needed for complex problem-solving. But the opportunity: AI can accelerate learning by providing immediate feedback and examples that help you understand patterns faster.

The balanced approach involves using AI as a learning tool that shows you how things work, then building similar functionality manually until you understand the patterns deeply. You're not choosing between AI assistance and skill development—you're using AI to accelerate skill development while ensuring you develop genuine expertise.

Making the right choice for your project

When AI acceleration makes the biggest difference

AI coding delivers maximum value in specific contexts: rapid prototyping where speed matters more than perfection, early-stage SaaS development where you're validating product-market fit, projects with significant boilerplate or standard patterns, and teams that lack specialized expertise in certain technical areas.

For context, this is exactly why many founders find success with SaaS boilerplates combined with AI-accelerated custom development. The boilerplate provides battle-tested infrastructure built by experienced developers, while AI helps quickly implement the unique features that define your product.

You get the idea: start with a solid foundation, use AI to accelerate custom development, maintain human oversight for quality and architecture. This approach combines the best of proven infrastructure, AI speed, and human judgment.

Where human expertise remains essential

Certain project types still require significant human expertise: systems with complex business logic specific to your industry, applications with stringent security or compliance requirements, platforms requiring sophisticated scalability architecture, and products where performance optimization is critical to user experience.

This being said, even in these contexts, AI can accelerate development—you just need more experienced developers making decisions and reviewing output. The relationship between AI capabilities and required human expertise isn't inverse; they're complementary. Better AI tools let skilled developers accomplish more, not replace the need for skilled developers.

Evaluating AI capabilities honestly

Marketing claims suggest AI can handle everything. Reality shows AI excels at specific tasks while struggling with others. Making good decisions requires honest assessment of what AI actually delivers versus what it promises.

Based on my experience, the best question isn't "can AI do this?" but rather "what's the most effective division of labor between AI and human developers for this specific project?" Sometimes that means heavy AI usage for standard functionality. Sometimes it means AI primarily handles documentation while humans write production code. The answer depends on your specific context, not generic capability claims.

Where we actually stand with AI coding

After working with AI coding tools across hundreds of projects, here's what I've learned: AI has transformed software development, but not by replacing developers. It's accelerated implementation while making strategic thinking more important, compressed timelines while making architectural decisions more critical, and automated routine work while elevating the value of specialized expertise.

The developers succeeding in 2025 aren't those who resist AI or those who trust it blindly. They're the ones who understand what AI does well versus what requires human judgment, who use AI to accelerate execution while maintaining strategic control, and who treat AI as a powerful tool that amplifies human capabilities rather than replaces them.

For SaaS founders and development teams, this means rethinking how you approach projects: starting with proven infrastructure through quality SaaS boilerplates, using AI to accelerate custom feature development, maintaining human expertise for architecture and business logic, and implementing rigorous review processes for AI-generated code.

The future of development isn't humans versus AI. It's humans and AI working together effectively, with clear understanding of where each adds value. Get that relationship right, and you'll build better software faster than either could alone.

Skip Months of Infrastructure Work

Building a SaaS? Stop rebuilding authentication, payments, and billing from scratch. Two Cents Software provides a production-ready SaaS boilerplate with everything you need to launch in weeks, not months.

About the Author

Katerina Tomislav

I design and build digital products with a focus on clean UX, scalability, and real impact. Sharing what I learn along the way is part of the process — great experiences are built together.
