Alright, settle in. We need to talk about The Robots. Specifically, the code-generating ones. The hype says they're taking over, turning us into prompt-jockeys while they build the future. But hold up. GitHub CEO Thomas Dohmke just dropped a dose of reality that hits home for anyone who's wrestled with real-world codebases: manual coding skills are still damn important.
In a recent chat, Dohmke warned that relying too much on AI-generated code without the ability to tweak and understand it manually is a recipe for disaster. He talked about the inefficiency of trying to explain simple code changes in natural language instead of just... writing the code. And he nodded to Andrej Karpathy's term, "vibe coding"—that slightly scary feeling where you're just vibing with the AI's output, hoping it works, without truly knowing why.
This isn't just corporate talk; it's a core challenge we face daily.
The Context Conundrum Strikes Again #
Why isn't "vibe coding" enough? Because, as I've said before (like in "LLMs vs. Junior Devs in 2025: Context Is King (and Still a Pain)"), Context is King. LLMs are wizards on greenfield projects, but drop them into a tangled legacy system? They struggle because they lack the deep, interconnected context of your specific codebase.
"Vibe coding" happens when you don't provide that context, or worse, when you can't fully grasp the context yourself to even know what context the AI needs. The AI might generate plausible-looking code, but if it doesn't understand the surrounding system, the dependencies, the implicit business rules baked into years of legacy spaghetti, you get code that looks right but fundamentally misses the mark. It hallucinates not just facts, but system compatibility.
This is where manual coding skills kick in. Not just writing new code, but the ability to read, understand, and modify existing code, even the gnarly stuff.
The Hybrid Developer: Architect, Editor, Context Crafter #
Dohmke's vision, and what we're seeing work in practice, is a hybrid model. AI proposes, we dispose (or refine). Deloitte sees this too – developers are using AI for specific tasks (boilerplate, first drafts) but keeping the human in the loop for oversight and refinement. Those daily 10-20 minutes saved? They come from offloading the tedious, not the critical.
This isn't replacing developers; it's evolving the role. As I discussed in the junior dev post, the future isn't "LLM vs. Human," it's "Human with LLM." The human becomes the architect, the critical reviewer, the one who understands the system context the AI lacks.
And how do we give the AI better context? We build tools and processes. My exploration into "Code Maps: Blueprint Your Codebase for LLMs Without Hitting Token Limits" is all about this. By generating structural maps instead of dumping raw code, we provide LLMs with the architectural context they need to reason more effectively about our specific systems, reducing "vibe coding" based on generic internet patterns and encouraging output that fits our code.
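The core idea fits in a few lines. Here's a minimal sketch using Python's stdlib `ast` module (the real tooling is more elaborate; the `code_map` helper and its output format are just illustrative): summarize structure and signatures, skip the bodies.

```python
import ast

def code_map(source: str, module_name: str) -> str:
    """Flat structural summary of a module: names and signatures, no bodies."""
    tree = ast.parse(source)
    lines = [f"module {module_name}:"]
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            lines.append(f"  class {node.name}")
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"  def {node.name}({args})")
    return "\n".join(lines)

src = """
class Greeter:
    def hello(self, name):
        return f"hi {name}"

def add(a, b):
    return a + b
"""
print(code_map(src, "greetings"))
# module greetings:
#   class Greeter
#   def add(a, b)
#   def hello(self, name)
```

Four lines of map instead of the full source: that's the token budget math that lets an LLM see the whole architecture instead of one file at a time.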
Beyond Code Generation: AI in the Workflow #
The hybrid model isn't limited to just generating new functions. Think about refactoring legacy code – a task I know many of you love to hate. As explored in "That Legacy Monster? Tame It (And Test It!) With Your LLM – If You Know The Secret Handshake," LLMs can be better at refactoring existing code and generating tests for it than writing new code, provided you give them sufficient context (Contextual Lockdown!). Again, this isn't hands-off automation; it requires the developer to curate the context and critically evaluate the output.
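To make "sufficient context" concrete, here's a hedged sketch of what curating it might look like. The `lockdown_prompt` helper and its section layout are hypothetical, not the actual technique from that post: the point is that the human gathers the target code, its real call sites, and the implicit business rules before the LLM sees anything.

```python
def lockdown_prompt(target_src: str, call_sites: list[str], invariants: list[str]) -> str:
    """Bundle the target code with its real call sites and business invariants,
    so the model refactors against *your* system, not generic internet patterns."""
    sections = [
        "Refactor the function below WITHOUT changing observable behavior.",
        "=== Target ===", target_src,
        "=== Call sites (must keep working) ===", *call_sites,
        "=== Invariants (implicit business rules) ===", *invariants,
        "Also generate pytest tests that pass against the ORIGINAL code first.",
    ]
    return "\n\n".join(sections)

prompt = lockdown_prompt(
    "def price(qty): return qty * 9.99 if qty < 10 else qty * 8.99",
    ["invoice.total() calls price() once per line item"],
    ["bulk discount kicks in at exactly 10 units"],
)
```

Notice the last instruction: tests against the original code first. Characterization tests are your safety net; the refactor only ships once they pass on both versions.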
It's also about building new tools to enhance the workflow. "Terminaut: A CLI Coding Agent Forged with AI" is an example of creating an agentic tool that you control, with crucial human-in-the-loop guardrails (like explicit approval for bash commands). It’s about leveraging AI's capabilities within a framework you design and oversee, rather than just blindly accepting its suggestions.
Even something like automating documentation, as discussed in "Automating Documentation with LLMs: A Smarter Way Forward?", relies on building custom tooling to extract necessary context (like Code Maps and diagrams) before the LLM even starts generating text. It's humans building systems to feed AI the right context for complex tasks.
The Real Productivity Picture (It's Not About Pay... Yet) #
Does all this AI assistance mean higher paychecks or fewer hours right now? Not necessarily. As the research I touched upon in "My AI Tools Save Me Time, But My Paycheck Looks the Same" suggests, while developers perceive time savings and quality improvements, these haven't translated into significant changes in earnings or hours worked across the board yet. This points to the "jagged frontier" – AI is brilliant at some things, useless at others, and integrating it effectively across an entire economy (or even just a large company) is complex. Productivity gains at the micro level don't instantly restructure the labor market.
What is changing is the nature of the work. Less boilerplate, more critical analysis, more context crafting, more building and integrating AI tools into our workflows.
Sharpen Your Axe (And Learn How to Point the Robot) #
So, Dohmke is right. Manual coding skills aren't obsolete; they're foundational. They enable you to:
- Understand the context the AI needs.
- Critically evaluate the AI's output ("vibe coding" is insufficient).
- Refine and integrate the AI's suggestions into your complex, often legacy, systems.
- Build the tools (like Code Mappers or custom agents) that make AI truly effective in your specific environment.
The future belongs to developers who can wield AI effectively – not just as prompt users, but as architects of human-AI workflows. Keep honing your craft, learn to see the system, and for the love of prod, don't just "vibe code."