It’s 2025. Every dev on your team—yes, even the new grads—has an LLM at their fingertips. You’re juggling a shiny new microservice and a legacy monolith that’s seen more migrations than your company’s Slack channels. A bug pops up. Who do you trust: the LLM, the junior dev, or some tag-team combo?
This isn’t just a thought experiment. It’s the new normal in the trenches.
LLMs: Atomic Wizards, but Context-Challenged #
LLMs are absolute monsters in greenfield projects. Give them a clean, modular codebase and a well-scoped prompt, and they’ll crank out code that’s not just correct—it’s idiomatic, documented, and sometimes even clever.
But drop that same LLM into a legacy codebase? Good luck.
Context is everything, and legacy code is where context goes to die. LLMs don’t magically “get” your system. You (or your tools) have to assemble the right snippets, configs, and business logic for them. Even with slick retrieval plugins, you’ll spend more time wrangling context windows than writing code. The model isn’t any smarter about your weird edge cases than it was last year—it just has better scaffolding.
Context Crafting: The Skill That Separates the Pros #
Here’s the real unlock: The best LLM users aren’t just prompt engineers—they’re context crafters.
What does that mean? It’s about knowing what information the LLM needs, how to frame your requirements, and when to zoom out for the big picture. Even with “smart” agentic tools, you still need to ask the right questions and spell out your needs clearly. Garbage in, garbage out—just faster and with more confidence.
Most real-world problems aren’t neatly scoped. Legacy code, cross-cutting concerns, business logic that’s been duct-taped for years—if you don’t understand how the pieces fit, you can’t give the LLM what it needs to help you.
Example:
- Minimal context: “Fix this bug.” (Paste a random function.)
- Useful context: “Here’s the error log, the function where it happens, and the config file. The bug only appears when X and Y are true. What’s likely going wrong?”
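That second style of request can even be mechanized. Here’s a minimal sketch (all names and fields are hypothetical, not any particular tool’s API) of bundling the error log, code, config, and known constraints into one structured prompt before calling a model:

```python
from dataclasses import dataclass, field

@dataclass
class ContextBundle:
    """Collects the pieces an LLM needs to reason about a bug."""
    error_log: str
    code_snippets: list[str] = field(default_factory=list)
    configs: list[str] = field(default_factory=list)
    constraints: list[str] = field(default_factory=list)  # e.g. "only when X and Y are true"

    def to_prompt(self, question: str) -> str:
        # Assemble labeled sections so the model can tell log from code from config.
        parts = [f"## Error log\n{self.error_log}"]
        if self.code_snippets:
            parts.append("## Relevant code\n" + "\n\n".join(self.code_snippets))
        if self.configs:
            parts.append("## Config\n" + "\n\n".join(self.configs))
        if self.constraints:
            parts.append("## Known constraints\n" + "\n".join(f"- {c}" for c in self.constraints))
        parts.append(f"## Question\n{question}")
        return "\n\n".join(parts)

# Illustrative usage: the bug from the example above.
bundle = ContextBundle(
    error_log="KeyError: 'region' at checkout.py:42",
    code_snippets=["def apply_discount(cart, user): ..."],
    configs=["FEATURE_REGIONAL_PRICING = true"],
    constraints=["Only reproduces when regional pricing is on AND the user has no saved address"],
)
prompt = bundle.to_prompt("What's likely going wrong?")
```

The point isn’t the dataclass; it’s the discipline. Forcing yourself to fill in each section is exactly the “assemble just enough of the right context” habit described below.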
The folks who get the most out of LLMs know how to assemble just enough of the right code, docs, and requirements to get useful, accurate output. That’s a skill that takes experience, curiosity, and a willingness to see the whole system—not just the line in front of you.
Junior Devs in 2025: LLMs in Hand, Still Building Judgment #
Junior devs aren’t competing against LLMs—they’re using them, too. The real question is: how well do they wield the tool?
On the plus side, juniors can ship features faster, get unstuck on syntax, and even ask the LLM for code reviews or test scaffolds. But here’s where things get tricky:
- Prompting is an art. Juniors often don’t know what context to provide, or how to ask the right questions.
- Quality assurance is shaky. It’s easy to trust the LLM’s output a bit too much, skipping the “does this actually work?” step.
- Testing gets neglected. LLMs can write tests, but juniors may not know how to validate them—or spot what’s missing.
The gap isn’t “LLM vs. junior”—it’s between those who can critically use LLMs, and those who just copy-paste.
Head-to-Head: Greenfield vs. Legacy #
Greenfield Glory #
- LLM: Give it a clean module and a clear spec, and it’ll deliver production-ready code in minutes.
- Junior: With LLM help, they’re shipping features at a pace that would’ve made a 2020-era senior jealous.
Legacy Labyrinth #
- LLM: Struggles unless you spoon-feed it the exact context. Miss a dependency, and you get plausible nonsense.
- Junior: Can (sometimes) piece together the story by reading code, asking teammates, and—crucially—knowing when something “smells off.”
Quality Control #
- LLM: Will happily generate code and tests, but won’t tell you if the tests are garbage.
- Junior: May run the tests, but often lacks the experience to spot subtle bugs or missing cases—especially if they trust the LLM too much.
So, Who Gets the Keys? #
Trust LLMs for atomic, well-scoped tasks in clean codebases. Trust juniors to learn, adapt, and (eventually) grok the messy, interconnected reality of your systems. But the real magic happens when you combine the two—critically.
Want to future-proof your team (and yourself)? Double down on context crafting:
- Practice breaking down problems and assembling the minimum viable context for an LLM.
- Build workflows and tooling that help surface the right code, docs, and requirements—especially in legacy code.
- Encourage curiosity, skepticism, and a healthy dose of “trust, but verify.”
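What might that tooling look like? A hedged sketch, assuming nothing fancier than a keyword search: a small helper that greps a repo for the symbols in an error message and collects the surrounding lines into one pasteable context bundle. Real tooling would use an index or an AST, but even this surfaces candidate files.

```python
import re
from pathlib import Path

def gather_context(repo_root: str, symbols: list[str], window: int = 5) -> str:
    """Find lines mentioning any symbol and capture the surrounding context.

    Naive by design: one match per file, a few lines of context each,
    so the result stays small enough to paste into a prompt.
    """
    pattern = re.compile("|".join(re.escape(s) for s in symbols))
    chunks = []
    for path in Path(repo_root).rglob("*.py"):
        lines = path.read_text(errors="ignore").splitlines()
        for i, line in enumerate(lines):
            if pattern.search(line):
                start, end = max(0, i - window), min(len(lines), i + window + 1)
                snippet = "\n".join(lines[start:end])
                chunks.append(f"# {path}:{i + 1}\n{snippet}")
                break  # one hit per file keeps the bundle small
    return "\n\n".join(chunks)
```

Run it with the names from a stack trace (e.g. `gather_context("src", ["apply_discount"])`) and you get a labeled digest of where those symbols live, which is a far better starting prompt than a lone pasted function.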
Don’t trust LLMs blindly. Don’t throw juniors into the deep end without a life jacket. Pair them up, give them guardrails, and teach everyone to become a context crafter. That’s how you get the best of both worlds.