The Vector Prompting Framework

A model for understanding prompts as transformation vectors—and a system that makes them adapt automatically based on how you use them.

The Core Insight

Every prompt can be understood as a transformation function—a directional command that moves meaning from one form to another. When you write a prompt like “Summarize this article in simple terms,” you're not just giving instructions; you're defining a trajectory that moves text from one state (complex) to another (simplified).

Think of it like this: every possible piece of text exists somewhere in a vast meaning space—an abstract landscape where proximity represents similarity in meaning. A formal business letter sits in one region; a casual text message sits somewhere else. A technical manual occupies different territory than a children's story. This isn't physical space with left and right—it's semantic space, where the “coordinates” are qualities like formality, verbosity, warmth, technicality, tone, and countless other dimensions.

When you write a prompt, you're plotting a course through this space. You're saying: “Take this input from wherever it is, and move it there—to the region defined by these characteristics.”

The key realization: there exists an ideal transformation—the perfect prompt that would produce exactly what you want on the first try. But we rarely hit that ideal immediately. We see what's missing only after the fact. That gap between intent and result is where refinement happens.


An Intuitive Analogy: The Golf Course

Imagine you're on a golf course, trying to sink your ball into the hole. You take your first shot—it lands on the green, close but not quite in. You don't pick up and start over from the tee; you putt from where it landed, adjusting your aim and power. If you miss again, you adjust from that new position.

Now imagine recording every shot: where you started, where the ball landed, and the direction you aimed. Over time, you could calculate the exact angle and force that would have gotten you to the hole from the original tee position on the first try. That's vector arithmetic in action—each correction shot reveals something about the path to the target. Eventually, the pattern of adjustments reveals the optimal trajectory.

Prompt refinement works the same way. Each prompt attempt is a shot through semantic space. Each correction—“make it warmer,” “make it shorter,” “use my tone”—is a putt from a new position toward the target. These aren't corrections along a single line; they're adjustments along multiple dimensions simultaneously. “Make it warmer” moves along the warmth dimension. “Make it shorter” moves along the verbosity axis. Each revision surfaces key directional information that wasn't encoded in the original prompt.

Over time, those corrections—referred to in this framework as deltas—can be combined to produce a prompt that hits the target immediately, like developing the perfect drive from the tee.


The Geometry of Refinement

When an initial prompt gets close but not perfect, you can think of the ideal transformation like this:

V_ideal = V_initial + Δ_formality + Δ_verbosity + Δ_tone + … + Δ_edit

Where:

  • V_initial is the initial prompt—the transformation applied to the input
  • the Δ_revision terms (Δ_formality, Δ_verbosity, Δ_tone, …) come from revision requests (“make it sound friendlier”)
  • Δ_edit comes from manual changes made before copying

Each delta represents a measured shift from where the model landed to where the user actually wanted to go. These aren't shifts in physical space—they're movements along the dimensions that define meaning: formality, verbosity, tone, structure, specificity, warmth, and countless others. The target exists at a specific coordinate in this semantic space, and each delta helps triangulate exactly where that coordinate is.
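The arithmetic above can be made concrete with a toy numeric sketch. The three named dimensions and every value below are illustrative assumptions, not anything Promethic actually computes; a real system would work with high-dimensional embeddings rather than three labeled axes.

```python
import numpy as np

# Illustrative semantic dimensions (assumed for this sketch only).
DIMS = ["formality", "verbosity", "tone"]

def apply_deltas(v_initial: np.ndarray, deltas: np.ndarray) -> np.ndarray:
    """V_ideal = V_initial + Δ_1 + Δ_2 + ... (sum of correction vectors)."""
    return v_initial + deltas.sum(axis=0)

# The first attempt lands too formal and too long.
v_initial = np.array([0.8, 0.9, 0.2])

# Each correction is a shift along one or more dimensions:
deltas = np.array([
    [-0.3,  0.0, 0.0],   # "make it less formal"  -> Δ_formality
    [ 0.0, -0.5, 0.0],   # "make it shorter"      -> Δ_verbosity
    [ 0.0,  0.0, 0.4],   # manual edit: warmer    -> Δ_edit on tone
])

v_ideal = apply_deltas(v_initial, deltas)
print(dict(zip(DIMS, np.round(v_ideal, 2))))
```

Each row contributes along one axis here for readability; in practice a single revision like “make it warmer but brief” shifts several dimensions at once.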


How Promethic Captures This

Promethic captures the entire refinement process—the movement from input to desired output—and turns it into structured, learnable signal.

  1. Input — The user's data or text.
  2. Prompt (V_initial) — The transformation function applied to that input.
  3. Output — The model's first attempt.
  4. Revisions (Δ_revision) — When you ask for a change (“make it less formal,” “add examples”), Promethic records that request and the resulting output as a delta.
  5. Edits (Δ_edit) — When you manually change the output before copying it, Promethic diffs the changes and lets you tag the reason (like “added personal signature” or “softened tone”). Only the final edit before copying is tracked.
  6. Desired Output — The version you copy, signaling success.

Why Only Track Revisions and Final Edits?

Promethic tracks revision requests and final edits, but not intermediate edits made between revisions. This design choice is crucial for maintaining clean signal.

The Sand Trap Problem: Think back to the golf analogy. You drive toward the hole but land in a sand trap. To escape, you don't aim at the hole—you aim sideways, just trying to get back onto the fairway. That sideways shot isn't a vector toward your goal; it's a setup move. Including it in your analysis of the “ideal path” would be misleading.

The same happens with intermediate edits. A user might delete a poor example (appearing to move away from including examples), then request a revision: “Add a better example.” The deletion wasn't meaningful direction—it was setup for the revision. Promethic records the meaningful waypoints and the final destination, not every exploratory detour along the way.
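The “meaningful waypoints only” rule can be expressed as a small filter over a session's event stream. This is an illustrative sketch of the policy described above, not Promethic's implementation:

```python
def meaningful_signals(events: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Keep every revision request, only the last edit before the copy,
    and the copy itself. Intermediate 'sand trap' edits are dropped."""
    kept, pending_edit = [], None
    for kind, payload in events:
        if kind == "revision":
            pending_edit = None             # edits before a revision were setup moves
            kept.append((kind, payload))
        elif kind == "edit":
            pending_edit = (kind, payload)  # provisional; a later edit supersedes it
        elif kind == "copy":
            if pending_edit is not None:
                kept.append(pending_edit)   # only the final pre-copy edit survives
            kept.append((kind, payload))
    return kept

events = [
    ("edit", "deleted a poor example"),    # setup move, dropped
    ("revision", "add a better example"),  # kept: Δ_revision
    ("edit", "tweaked wording"),           # superseded, dropped
    ("edit", "added signature"),           # kept: final Δ_edit
    ("copy", "final text"),                # kept: desired output
]
kept = meaningful_signals(events)
```

Note how the deletion of the poor example vanishes the moment the revision request arrives: it was the sideways shot out of the sand trap, not a vector toward the hole.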


Prompt Refinement in Action

Once a prompt has accumulated at least one delta, Promethic enables prompt refinement. This runs a reasoning model that:

  1. Receives the original prompt.
  2. Receives all recorded deltas, each paired with its input and the desired output (both revisions and edits).
  3. Analyzes the relationships between them, interpreting each as a directional movement through semantic space.
  4. Uses a vector-style reasoning framework to merge these directional shifts into an updated prompt that more accurately captures the intended transformation.
  5. Selects representative examples to include directly in the refined prompt, leveraging few-shot prompting to help the model learn both accuracy and style.

This process uses the concept of vector arithmetic as a reasoning framework—integrating multiple small shifts in meaning into a single, improved transformation.
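One way steps 1–5 could be assembled is as a meta-prompt handed to the reasoning model. The structure below is a hypothetical sketch of that assembly; the actual format Promethic uses is not specified here, and the record fields are assumed names.

```python
def build_refinement_prompt(original_prompt: str, records: list[dict]) -> str:
    """Assemble the context the reasoning model receives: the original
    prompt plus every recorded delta, framed as directional shifts."""
    lines = [
        "You are refining a prompt. Treat each correction below as a",
        "directional shift in semantic space, and merge the shifts into",
        "a single updated prompt.",
        "",
        "ORIGINAL PROMPT:",
        original_prompt,
        "",
        "RECORDED DELTAS:",
    ]
    for i, rec in enumerate(records, 1):
        lines.append(f"--- Session {i} ---")
        lines.append(f"Input: {rec['input']}")
        for request in rec.get("revisions", []):   # Δ_revision
            lines.append(f"Revision request: {request}")
        if rec.get("edit_tag"):                    # Δ_edit
            lines.append(f"Final edit tag: {rec['edit_tag']}")
        lines.append(f"Desired output: {rec['desired']}")
    lines += [
        "",
        "Include one or two of the sessions above verbatim as few-shot",
        "examples in the updated prompt.",
    ]
    return "\n".join(lines)

meta = build_refinement_prompt(
    "Write a professional email.",
    [{"input": "Q3 notes",
      "revisions": ["make it warmer"],
      "edit_tag": "added personal signature",
      "desired": "Hi team, hope you're doing well!\n- Alex"}],
)
```

Pairing each delta with its input and desired output (rather than listing corrections alone) is what lets the model read each session as a vector: a start point, a direction, and a destination.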


The Copy Signal: Why This Is Unique

What makes this approach different is that Promethic captures refinement data that every other AI interaction throws away—and does so with a critical signal that other systems can't access: the Copy signal.

When you use ChatGPT, Claude, or any LLM to draft content, you're generating incredibly valuable refinement data—but it's all lost:

  • User corrections are discarded — the sentences you tweaked, the tone adjustments you asked for, the examples you added—gone once you close the chat.
  • Revision requests vanish — even when you explicitly ask “make it warmer,” that instruction and its result disappear.
  • The completion signal is invisible — when you close the window, did you close it because you were pleased with the result? Or did you abandon your attempt? The system has no way to distinguish between the two.

Promethic solves this with a deceptively simple innovation: the Copy action is a ground-truth signal. When you copy the output, you're declaring: “This is it. This is the desired output.” That single action transforms everything that came before it into structured, learnable data.


Continual Learning at the Prompt Level

Large language models are trained once on massive datasets and then frozen. They can't currently learn from their own usage. A model that writes emails for thousands of users doesn't get better at writing emails—it stays exactly as it was on day one.

Promethic creates a feedback loop at the prompt level. While the underlying model stays frozen, the prompts continually improve based on real usage data. Each copy captures a successful transformation. Each revision reveals a gap. Each edit shows the final adjustment needed to bridge model output and human intent.

This is continual learning where it matters most—at the interface between human intent and model output.

The Personalization Imperative

Even if LLMs eventually solve continual learning at the model level, that won't eliminate the need for prompt-level refinement. Why? Because every user needs their own voice, style, and use cases.

A general-purpose model that's great for everyone is optimized for no one. Your voice is unique. Your use cases are specific. Your preferences evolve. Even a perfectly learning LLM would need to balance billions of users' preferences, diluting what makes outputs work for you specifically.

This is why prompt-level learning isn't just a workaround for frozen models—it's the correct level of abstraction for personalization. The base model provides broad capability. Your prompts capture your specific intent, style, and domain. Together, they create AI that works your way.


The Foundation for Agentic AI

The AI industry is shifting toward agents—systems that break complex tasks into multiple steps, with different models handling each step. But building agents isn't as simple as just “switching to agents.” You have to:

  1. Break down the overall task into steps — decompose your process into discrete phases.
  2. Create clear prompts for each step — each agent needs precise instructions.
  3. Dial in each prompt — every step requires refinement through real usage to work reliably.

Your agentic flow is only as strong as its weakest agent. If one step produces mediocre output, the entire chain fails.

This is where the vector prompting framework becomes foundational. Start with manual prompts for each step. Let them accumulate deltas. Refine them based on real usage. Once each prompt is dialed in, connect the steps into agentic workflows. The vector prompting framework isn't just for individual tasks; it's the foundation on which reliable, production-grade agents can be built.
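A chained flow of refined prompts can be sketched as a simple pipeline, where each step pairs a prompt with the model that runs it. The stand-in lambdas below substitute for real LLM calls; everything here is illustrative.

```python
from typing import Callable

# A step pairs a (refined) prompt with the model function that runs it.
Step = tuple[str, Callable[[str, str], str]]

def run_chain(steps: list[Step], text: str) -> str:
    """Feed each step's output into the next step's input. A weak
    prompt at any position degrades everything downstream."""
    for prompt, model in steps:
        text = model(prompt, text)
    return text

# Stand-in lambdas take the place of real LLM calls; in practice each
# prompt would first be dialed in through accumulated deltas.
steps: list[Step] = [
    ("Extract the key points.", lambda p, t: t.strip()),
    ("Rewrite in a warm, concise tone.", lambda p, t: t.capitalize()),
]
result = run_chain(steps, "  quarterly results look strong  ")
```

Because the steps compose sequentially, refining each prompt in isolation (as described above) is what makes the chain as a whole reliable.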


The Workflow Integration Advantage

Promethic doesn't require you to change how you work. You already iterate with AI: try an output, revise it, tweak it, use it. Promethic just makes that natural process observable and learnable.

The challenge of “incorporating AI into your work” becomes simple: identify the tasks you do repeatedly, break your work into those tasks, and bring Promethic into the loop. Your time and attention go toward achieving more, putting your unique skills and gifts to work instead of tying them up in repetition.

For teams, this transforms knowledge transfer. A team member who's refined a prompt for customer support emails can share it with the whole team. New hires inherit the accumulated wisdom of their predecessors. The knowledge doesn't live in someone's head—it lives in the prompts.

You don't need to stop and “engineer prompts” as a separate task. Refinement happens automatically, embedded in your workflow. Every revision and every edit becomes a recorded delta, each one improving the prompt the next time you use it.


Example: The Email Prompt Lifecycle

Imagine you have a prompt that helps you write professional emails. You use it ten times over a few weeks.

Each time, you might get a first draft, make a few quick edits, then ask for adjustments—“make it a bit warmer,” “add a friendly closing.” You might edit again after the revision, then finally copy the result, tagging it as “softened tone, added signature.” Promethic records the meaningful signals: each revision request, and the final edit with your tag.

After ten uses, you run a prompt refinement. The model reviews the entire history—the original prompt, all the revision requests, and the final outputs. It identifies the consistent directional changes (warmth, tone, personal signature) and produces a new version of the prompt that already includes those improvements.

Now, the next time you run it, you don't have to say “make it warmer” or add your name—it's built in. And as you keep using it, new refinements continue to make it better, naturally evolving alongside your needs.


The Philosophical Insight

The vector prompting framework recognizes a simple but profound truth: we often sense what we want before we can precisely say it. The space between “I'll know it when I see it” and “I can tell you how to make it” is the space of deltas. Each revision and edit captures a piece of that intuition.

By recording and learning from those natural interactions, Promethic turns ordinary use into ongoing refinement—evolving prompts, not by theoretical design, but by lived experience. It transforms prompt improvement from a chore into a side effect of simply doing your work—one delta at a time.