LLM App Porting: From Vue 2 Relic to React Reality – An Engineered Gambit


Alright, let's talk legacy code. You know the drill. Monday morning, the coffee's brewing, and the boss slides up with that familiar, slightly mischievous grin. "Got a little project for you," he says. The 'little project'? A Vue 2 application, practically an artifact from a previous geological epoch of JavaScript. The request: "Think we can nudge this into Vue 3? And, here's a thought – can an LLM handle the heavy lifting?"

My brain immediately flashed red: Vue 2 to Vue 3? With its minefield of breaking changes? And an LLM quarterbacking the play? This isn't just interesting; it's a potential comedy of errors. But I'm Origo. I've wrestled Erlang systems into submission and navigated the wilder parts of the JavaScript ecosystem. Challenge flagged, and, naturally, accepted.

The Vue 3 Migration Maze: Not Exactly LLM Plug-and-Play #

First things first. Before unleashing any AI, I did my homework. A quick consult with Perplexity and my own gray matter confirmed it: porting Vue 2 to Vue 3 is a beast. We're talking significant API overhauls, a new composition model, and an ecosystem that's done a full 180. Could an LLM, say Gemini 1.0 Pro, just digest the old code and spit out pristine Vue 3?

The initial dream was seductive: feed the Vue 2 codebase, utter the magic incantation "make it Vue 3," and watch the digital elves get to work.

Spoiler: The elves were on a coffee break. Or possibly lost in the syntax.

I threw a few representative components at the LLM. The results? Let's call them "creatively divergent." Syntactically plausible, sure, but riddled with subtle (and not-so-subtle) misunderstandings of Vue's new reactivity and lifecycle. It quickly became apparent that a brute-force LLM port would mean trading coding time for an even more frustrating cycle of debugging, coaxing, and correcting the AI's well-intentioned blunders. Not the efficiency slam dunk we were hoping for.
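To give a flavor of the gap it had to bridge: the same trivial logic has a completely different shape in the two worlds. A simplified TypeScript sketch (illustrative, not from the actual app):

import { defineComponent, ref, computed } from 'vue';

// Options API: state and computeds hang off `this` (how the legacy app reads).
export const GreeterOptions = defineComponent({
  data() {
    return { name: '' };
  },
  computed: {
    greeting(): string {
      return this.name ? `Hello, ${this.name}!` : 'Please enter your name.';
    },
  },
});

// Composition API: explicit, importable reactivity primitives, no `this`.
export function useGreeter() {
  const name = ref('');
  const greeting = computed(() =>
    name.value ? `Hello, ${name.value}!` : 'Please enter your name.'
  );
  return { name, greeting };
}

Every `this.x` becomes `x.value`, lifecycle hooks get renamed, mixins dissolve into composables, and filters are gone from Vue 3 entirely. That's a lot of surface area for a model to get subtly wrong.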

The Strategic Pivot: If You Can't Automate the Port, Automate the Understanding #

This is where experience kicks in. Plan A (direct LLM port) was a non-starter. Time for Plan B – and this wasn't just trying Plan A with more enthusiasm. It was time to rethink the LLM's role.

LLMs are language virtuosos. And code? It's just another language, albeit a very structured one. If direct translation was messy, what about using the LLM for deep comprehension and documentation of the existing Vue 2 application?

New game plan:

  1. Feed the entire Vue 2 codebase to a capable LLM (think Gemini, with a beefy context window).
  2. The mission: "Generate a comprehensive, structured, deeply detailed technical specification of this application. I want component breakdowns, state management logic, prop flows, event handling – the works."

I wasn't asking it to change code. I was asking it to explain the code, in meticulous detail.
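Mechanically, this is little more than concatenating the sources into one big prompt. A minimal sketch of the idea, assuming Node and the @google/generative-ai SDK (the model name and file layout here are illustrative):

// generate-spec.ts: feed the Vue 2 sources to an LLM and ask for a spec.
import { readdirSync, readFileSync, statSync, writeFileSync } from 'fs';
import { join } from 'path';
import { GoogleGenerativeAI } from '@google/generative-ai';

// Recursively collect the legacy app's source files.
function collectSources(dir: string, exts = ['.vue', '.js']): string[] {
  return readdirSync(dir).flatMap((entry) => {
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) return collectSources(full, exts);
    return exts.some((e) => full.endsWith(e)) ? [full] : [];
  });
}

async function main() {
  const files = collectSources('./legacy-app/src');
  const corpus = files
    .map((f) => `--- FILE: ${f} ---\n${readFileSync(f, 'utf8')}`)
    .join('\n\n');

  const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
  const model = genAI.getGenerativeModel({ model: 'gemini-1.5-pro' });

  const result = await model.generateContent(
    'Generate a comprehensive, structured, deeply detailed technical ' +
      'specification of this Vue 2 application: component breakdowns, state ' +
      `management logic, prop flows, event handling.\n\nSources:\n\n${corpus}`
  );
  writeFileSync('SPEC.md', result.response.text());
}

main().catch(console.error);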

And you know what? It delivered. Spectacularly. What I got back was a beautifully organized document, like having a senior architect who'd spent weeks reverse-engineering the app, ready to give me a guided tour of every nut and bolt. This, I knew, was the real leverage.

From Spec, to Plan, to App: The Engineered Gambit Takes Shape #

Armed with this pristine technical specification, the landscape shifted. The original target was Vue 3. But with such a clear blueprint, why not aim for a stack with even broader LLM training data and a robust ecosystem? TypeScript, React, and Redux (or Zustand, depending on the day) felt like a solid choice.

But here's where the strategy evolved further. Simply handing the spec to an LLM with a "build this in React" prompt felt like a recipe for another round of "almost-but-not-quite." How could I make the generation process more reliable, more controlled? How could I guide the LLM like a lead architect mentoring a promising but green developer?

The answer: Get the LLM to create the build plan first.

  1. The "Blueprint" Phase: I took my LLM-generated spec and fed it to another LLM instance (say, Gemini 2.5 Pro, for its advanced coding skills and larger context capacity). The prompt was surgical: "Based on this technical specification, draft a detailed, step-by-step blueprint for building this project in React with TypeScript. Break it down into small, iterative chunks. For each chunk, create a specific, actionable prompt for a code-generation LLM. These prompts MUST include instructions to write relevant tests (unit, component using Jest and React Testing Library) and specify any prerequisite code files or context needed from previous steps."

  2. The "IKEA Instructions for Code": The output was nothing short of remarkable. It wasn't just a task list. It was a sequence of fully-formed prompts, each designed to build a tiny, verifiable piece of the application, complete with testing mandates and dependency lists. It was like receiving a hyper-detailed IKEA manual for my app, co-authored by an AI architect with an obsession for detail.

  3. The Iterative, Test-Driven Build: Then, the "assembly" began. I took each prompt from this LLM-generated plan and fed it, one by one, to a code-generation-focused LLM. For each step:

    • Provide the prompt and the specified context (e.g., "Here's UserService.ts from the previous step... now build OrderService.ts that uses it, along with its unit tests...").
    • The LLM generated the code and the tests.
    • I'd review, run the tests (critical!), commit, and then move to the next prompt.
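To make the mechanics concrete, here's a minimal sketch of how a plan step could be typed and the assembly loop scripted. Everything in it is illustrative: the PlanStep shape is my own invention, and generateCode stands in for whichever coding LLM you call (declared here, not implemented):

import { execSync } from 'child_process';
import { readFileSync, writeFileSync } from 'fs';

// One step of the LLM-generated build plan (illustrative shape).
interface PlanStep {
  id: number;              // position in the build order
  title: string;           // e.g. "OrderService with unit tests"
  prompt: string;          // the full prompt for the coding LLM
  contextFiles: string[];  // files from earlier steps to paste in as context
}

// Stand-in for the coding LLM; returns { filePath: source } pairs.
declare function generateCode(prompt: string): Promise<Record<string, string>>;

async function executePlan(plan: PlanStep[]): Promise<void> {
  for (const step of plan) {
    // Hand over exactly the prior files this step depends on: no more, no less.
    const context = step.contextFiles
      .map((f) => `--- ${f} ---\n${readFileSync(f, 'utf8')}`)
      .join('\n\n');

    const files = await generateCode(`${step.prompt}\n\nContext:\n${context}`);
    for (const [path, source] of Object.entries(files)) {
      writeFileSync(path, source);
    }

    // The checkpoint: the generated tests must pass before we commit and move on.
    execSync('npx jest', { stdio: 'inherit' });
    execSync(`git add -A && git commit -m "step ${step.id}: ${step.title}"`, {
      stdio: 'inherit',
    });
  }
}

In reality I ran this loop by hand, reviewing each diff before committing; the point is that the plan's explicit context lists make every iteration mechanical and repeatable.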

This wasn't a "two-hour miracle" conjured by a single magic prompt, but it transformed a potentially chaotic migration into a remarkably efficient and controlled process. Each step was small, testable, and built solidly on the foundation of the last. The real "aha!" moment wasn't just the final React app, but the elegance and predictability of this structured, LLM-assisted workflow.

And here's the kicker: in less than a day, I had a functioning React app—even with a localStorage mock for the API calls for local testing. It didn't have all the bells and whistles, but we were at 85%, and React developers could easily fill in the remaining details. This was almost a two-hour miracle, just stretched out to a full workday for a non-trivial app.
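A note on that localStorage mock: putting the API calls behind an interface is what made it cheap, since the mock and a future real client are interchangeable. A minimal sketch, with a hypothetical UserApi shape:

// A thin API abstraction so the app can run without a backend.
export interface User {
  id: string;
  name: string;
}

export interface UserApi {
  getUsers(): Promise<User[]>;
  saveUser(user: User): Promise<void>;
}

// localStorage-backed stand-in used for local testing.
export class LocalStorageUserApi implements UserApi {
  async getUsers(): Promise<User[]> {
    return JSON.parse(localStorage.getItem('users') ?? '[]');
  }

  async saveUser(user: User): Promise<void> {
    const users = await this.getUsers();
    localStorage.setItem('users', JSON.stringify([...users, user]));
  }
}

Swapping in the real HTTP client later is then a one-line change wherever the app wires up its UserApi.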

Show, Don't Just Tell: A Micro-Example #

Let's make this concrete. Imagine a tiny piece of our legacy Vue 2 app:

// Legacy Vue 2: SimpleGreeter.vue
<template>
  <div>
    <input v-model="name" placeholder="Enter your name" />
    <p>{{ greeting }}</p>
  </div>
</template>

<script>
export default {
  name: 'SimpleGreeter',
  data() {
    return {
      name: ''
    };
  },
  computed: {
    greeting() {
      return this.name ? `Hello, ${this.name}!` : 'Please enter your name.';
    }
  }
};
</script>

(Disclaimer: This is a simplified, original example for illustrative purposes.)

Step 1: LLM-Generated Spec Snippet (Conceptual)

An LLM, after analyzing SimpleGreeter.vue, might produce a spec fragment like:

Component: SimpleGreeter
Purpose: Allows user to input their name and displays a personalized greeting.
State:
  - name (String): Stores the user's input. Initialized as empty.
Computed Properties:
  - greeting (String): Generates a greeting. If 'name' is present, "Hello, [name]!". Otherwise, "Please enter your name."
User Interaction:
  - Input field updates 'name' state.
Display:
  - Shows an input field for 'name'.
  - Shows the 'greeting' message.

Step 2: LLM-Generated Prompt (from the Blueprint)

Based on the spec, the "Blueprint LLM" might generate a prompt for the "Coding LLM":

Generate a React functional component named 'SimpleGreeter' using TypeScript and React Hooks.
- It should accept no props.
- It should manage one piece of state: 'name' (string, initial value: '').
- It should display an input field. Typing in this field updates the 'name' state.
- It should display a paragraph.
  - If 'name' is empty, the paragraph shows "Please enter your name.".
  - If 'name' is not empty, the paragraph shows "Hello, [name]!".
- Create a corresponding Jest test file 'SimpleGreeter.test.tsx'.
  - Test initial rendering.
  - Test that typing in the input updates the greeting message.

Step 3: LLM-Generated React Code + Test

The "Coding LLM" then produces:

// Generated React: src/components/SimpleGreeter.tsx
import React, { useState, ChangeEvent } from 'react';

const SimpleGreeter: React.FC = () => {
  const [name, setName] = useState<string>('');

  const handleNameChange = (event: ChangeEvent<HTMLInputElement>) => {
    setName(event.target.value);
  };

  const greeting = name ? `Hello, ${name}!` : 'Please enter your name.';

  return (
    <div>
      <input
        type="text"
        value={name}
        onChange={handleNameChange}
        placeholder="Enter your name"
      />
      <p>{greeting}</p>
    </div>
  );
};

export default SimpleGreeter;

// Generated Jest Test: src/components/SimpleGreeter.test.tsx
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import '@testing-library/jest-dom'; // provides toBeInTheDocument (often set up globally via setupFilesAfterEach config)
import SimpleGreeter from './SimpleGreeter';

describe('SimpleGreeter', () => {
  test('renders initial state correctly', () => {
    render(<SimpleGreeter />);
    expect(screen.getByPlaceholderText(/Enter your name/i)).toBeInTheDocument();
    expect(screen.getByText(/Please enter your name./i)).toBeInTheDocument();
  });

  test('updates greeting when name is entered', () => {
    render(<SimpleGreeter />);
    const inputElement = screen.getByPlaceholderText(/Enter your name/i);
    fireEvent.change(inputElement, { target: { value: 'Origo' } });
    expect(screen.getByText(/Hello, Origo!/i)).toBeInTheDocument();
  });

  test('clears greeting if name is cleared', () => {
    render(<SimpleGreeter />);
    const inputElement = screen.getByPlaceholderText(/Enter your name/i);
    fireEvent.change(inputElement, { target: { value: 'Origo' } });
    expect(screen.getByText(/Hello, Origo!/i)).toBeInTheDocument();
    fireEvent.change(inputElement, { target: { value: '' } });
    expect(screen.getByText(/Please enter your name./i)).toBeInTheDocument();
  });
});

This iterative, test-backed generation, scaled up, is what made the difference.

Why This Engineered Detour Was a Decisive Win #

This more sophisticated, multi-stage workflow wasn't just a fancy detour. It delivered tangible advantages:

  1. The Spec as Bedrock: An LLM-generated spec of the "as-is" system is invaluable. It's your source of truth.
  2. The Prompt Plan as Your GPS: Transforming the spec into an LLM-generated, step-by-step build plan (with tests and context baked in!) was the masterstroke. It massively de-risked the generation phase.
  3. Iterative, Test-Driven Generation: Small, verifiable steps, each with its own LLM-generated tests, meant quality was built-in, not bolted on. No more "big bang" generation followed by an eternity of debugging.
  4. Explicit Context Management: The plan dictating precisely what prior code was needed for the current step eliminated huge amounts of LLM "hallucination" and context drift.
  5. True Human-LLM Symbiosis: This wasn't "fire and forget." It was a structured collaboration:
    • Human: Defines overall goal, strategy, and validates critical outputs.
    • LLM 1 (Analyzer): Old Code -> Detailed Technical Specification.
    • Human: Validates/Refines Spec.
    • LLM 2 (Architect): Spec -> Detailed, Iterative Prompt Plan (including test strategy & context dependencies).
    • Human: Validates/Refines Plan.
    • LLM 3 (Coder): Executes Plan prompts iteratively -> Code + Tests.
    • Human: Reviews, integrates, and provides oversight at each step.

LLMs: Your New Power Toolset, Not Just a Fancy Hammer #

The takeaway from this journey? LLMs are evolving far beyond simple autocomplete or boilerplate generation. With the right strategy, they become integral components of a highly effective, sophisticated development workflow.

This migration saga, from a daunting Vue 2 modernization request to a multi-stage, LLM-orchestrated build, has fundamentally reshaped my view of these tools. The question is no longer just "Can an LLM do X?" It's "How can I architect a workflow with LLMs to achieve X reliably, efficiently, and with higher quality?"

What are your war stories with complex LLM workflows? Have you experimented with multi-LLM pipelines or used them for strategic planning phases? Drop your insights in the comments – we're all navigating this new terrain together.


Bottom line: LLMs didn't just port my app; they forced a smarter way to engineer the development process itself. And that, my friends, is where the real, sustainable leverage lies. No fluff, just a better way to build.
