Prompt + Context Store
Link AI-generated code to the prompts that created it. Intent, requirements, and decisions stay with the code forever.
Without the original intent and architecture decisions behind it, AI-generated code is difficult to review, hard to extend, and risky to refactor. Engineering orgs are beginning to see prompts as the new source code. Git AI lets you own those prompts and make them portable across agents — a session created in Cursor is readable by Claude Code, Copilot, or any other supported agent.
Git AI's Prompt + Context Store links every AI-generated line to the prompts that created it — so intent, requirements, and architecture decisions stay with the code forever. Enable the store and everything lights up: AI Blame surfaces the "why" behind every line, and /ask lets engineers use past prompts as context.
What the Store Unlocks
- Engineers reading unfamiliar code — Understand the requirements and decisions behind any AI-authored section without tracking down the original author.
- Code review agents — With access to the intent behind a change, review agents can evaluate whether the implementation matches the goal, not just whether the code compiles.
- Agents building on existing code — Agents become smarter when they can read the intent and architecture decisions behind any part of a codebase.
AI Blame + Prompts
AI Blame annotates every AI-authored line with the agent, model, and commit that produced it. With the Prompt + Context Store connected, AI Blame also surfaces the original prompts — directly in the CLI, in VS Code gutter decorations, and via the JSON API.
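To make the JSON API concrete, here is a minimal sketch of consuming a blame payload. The payload shape, field names, and values below are invented for illustration; the real API schema may differ.

```python
import json

# Hypothetical payload shape -- field names are illustrative, not the
# documented schema. Each entry annotates one AI-authored line.
payload = json.loads("""
{
  "file": "src/flush.rs",
  "lines": [
    {"line": 42, "agent": "cursor", "model": "claude-sonnet",
     "commit": "a1b2c3d", "prompt_id": "sess-0042"}
  ]
}
""")

def annotate(payload):
    """Render one blame annotation per AI-authored line:
    file:line, then agent/model, then the producing commit."""
    for entry in payload["lines"]:
        yield (f'{payload["file"]}:{entry["line"]} '
               f'{entry["agent"]}/{entry["model"]} @ {entry["commit"]}')

for line in annotate(payload):
    print(line)
```

With the store connected, a tool consuming this payload could follow `prompt_id` back to the original session to surface the prompt alongside the annotation.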
/ask — Query the Original Intent
The /ask skill lets engineers and agents ask questions about how and why code was written. Instead of guessing from code structure alone, /ask reads the original prompts and agent responses to surface the engineer's intent.
/ask Why didn't we use the SDK here?

| Reading Code + Agent Session (/ask) | Only Reading Code (not using Git AI) |
|---|---|
| When Aidan was building telemetry, he instructed the agent not to block the CLI's exit on flushing telemetry. Instead of using the Sentry SDK directly, the agent came up with a pattern that writes events locally first, then flushes them in the background via a detached subprocess. | flush_logs.rs is a 5-line wrapper that delegates to flush.rs (~700 lines). Parallel modules like flush_cas, flush_logs, flush_metrics_db follow the same thin-dispatch pattern. |
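One way to picture what /ask does under the hood is retrieval over stored sessions. The sketch below is deliberately naive — keyword overlap instead of real semantic search — and every function name and session record is invented for illustration.

```python
import re

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def ask(question, sessions):
    """Naive retrieval sketch: rank stored agent sessions by keyword
    overlap with the question, return the best-matching prompt/response."""
    q = tokens(question)
    best = max(sessions, key=lambda s: len(q & tokens(s["prompt"])))
    return best["prompt"], best["response"]

# Invented session records standing in for the encrypted store.
sessions = [
    {"prompt": "do not block the CLI exit on telemetry flushing",
     "response": "Write events locally, flush via a detached subprocess."},
    {"prompt": "add unit tests for the config parser",
     "response": "Added table-driven tests."},
]

prompt, answer = ask("Why didn't we block the CLI exit flushing telemetry?",
                     sessions)
```

The real skill reads full prompts and agent responses rather than matching keywords, but the shape is the same: the question is answered from recorded intent, not reverse-engineered from code structure.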
How It Works
Storage
Agent sessions are stored in a private, encrypted object store — not in Git repositories. An API layer enforces strict access controls on every read and write.
Display Modes
The store supports two display modes:
| Mode | Description |
|---|---|
| Raw | The full, unmodified agent session — every prompt and response exactly as sent |
| Summary | A concise, structured summary generated from the raw session |
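A client choosing between the two modes might look like the sketch below. The session structure and function are hypothetical, purely to illustrate the raw/summary distinction.

```python
def render(session, mode="summary"):
    """Return a stored agent session in one of the two display modes."""
    if mode == "raw":
        # Full transcript: every prompt and response verbatim.
        return "\n".join(f'{t["role"]}: {t["text"]}'
                         for t in session["turns"])
    if mode == "summary":
        # Pre-generated structured summary of the session.
        return session["summary"]
    raise ValueError(f"unknown display mode: {mode!r}")

# Invented session record for illustration.
session = {
    "turns": [
        {"role": "user", "text": "add retry logic to the upload call"},
        {"role": "agent", "text": "wrapped the call in exponential backoff"},
    ],
    "summary": "Added exponential backoff around the upload call.",
}
```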
Summarization Templates
Summary mode runs each agent session through a configurable summarization template for each type of session (e.g., bug fix, new feature, refactor). Define templates at the organization level and apply different templates per repository or team.
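A per-session-type template setup could be sketched like this. The template names, fields, and format are assumptions for illustration, not the actual configuration schema.

```python
# Hypothetical org-level templates keyed by session type -- the real
# configuration format and field names may differ.
TEMPLATES = {
    "bug_fix":     "Problem: {problem}\nRoot cause: {cause}\nFix: {fix}",
    "new_feature": "Goal: {goal}\nDecisions: {decisions}",
}

def summarize(session_type, fields):
    """Fill the template for this session type; fall back to raw text."""
    template = TEMPLATES.get(session_type, "{raw}")
    return template.format(**fields)
```

Because templates are keyed by session type, a refactor-heavy team and a bug-triage team can produce differently structured summaries from the same raw sessions.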
Security
PII and Secret Scanning
Industry-standard and customizable filters scan agent sessions on ingestion for personally identifiable information (PII) and secrets. Detected items are redacted or flagged before the session is stored, preventing sensitive data from persisting in the store.
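In spirit, ingestion-time redaction looks like the sketch below. The two patterns shown are illustrative only; production scanners use far more comprehensive, configurable rule sets.

```python
import re

# Illustrative detection rules -- real scanners ship many more,
# and the rule set here is configurable per organization.
PATTERNS = {
    "email":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "aws_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def redact(text):
    """Replace detected PII/secrets with typed placeholders so the
    sensitive value never persists in the stored session."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{name}]", text)
    return text
```

Running redaction before the write, rather than after, is what keeps sensitive values out of the store entirely.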
Access Control
Only authenticated engineers with read access to a repository can view summaries or raw prompts for that repository. Write access is required to store new sessions. Permissions follow the roles already configured in the connected SCM provider.
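Reusing SCM roles might reduce to a mapping like the one below. The role names (borrowed from common SCM providers) and functions are illustrative assumptions, not the product's actual permission model.

```python
# Hypothetical mapping from SCM provider roles to store permissions.
# Role names are illustrative (GitHub-style); the store simply reuses
# whatever roles the connected provider already defines.
READ_ROLES  = {"read", "triage", "write", "maintain", "admin"}
WRITE_ROLES = {"write", "maintain", "admin"}

def can_view_prompts(role):
    """Read access to the repo implies read access to its sessions."""
    return role in READ_ROLES

def can_store_session(role):
    """Only roles with repo write access may store new sessions."""
    return role in WRITE_ROLES
```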