Case Study: Building an AI-Ready Development Workspace Through Systematic Context Gathering

Author: Rashid

Agent: Claude Opus 4.5

Published: December 2024

You ask an AI to build a feature. It writes code that compiles. Runs. Does exactly what you asked.

Completely wrong for your codebase.

Same pattern, every time: technically correct, contextually useless.

The model isn't the problem.

The Reframe

Project knowledge lives everywhere. Tasks in ClickUp. Designs in Figma. Decisions buried in Slack threads. Architecture explained in meeting recordings no one rewatches.

Scattered. Decaying. Inaccessible to agents.

I stopped copy-pasting context into prompts. Started treating it as data engineering:

  1. Identify where knowledge lives
  2. Connect programmatically
  3. Extract and normalize
  4. Synthesize into actionable specs
  5. Point agents at it

If you can access it programmatically, you can keep it synchronized.

The question isn't "how do I prompt better?" It's "how do I pipe knowledge to agents?"
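
To make that pipeline concrete, here's a minimal sketch of steps 2 through 4. The `Source` interface is hypothetical, not my actual code; the real work happens inside whatever client wraps each MCP server.

```python
# Minimal sketch of connect -> extract -> synthesize for one feature.
# "Source" is a hypothetical wrapper: anything with extract() returning
# raw records and normalize(records) returning agent-ready markdown.
from pathlib import Path

def sync_context(feature: str, sources: dict) -> None:
    """Pull each knowledge source, write one normalized markdown file per source."""
    out_dir = Path("docs/projects") / feature
    out_dir.mkdir(parents=True, exist_ok=True)
    for name, source in sources.items():
        records = source.extract()            # raw API payloads
        markdown = source.normalize(records)  # strip noise, keep decisions
        (out_dir / f"{name.upper()}_CONTEXT.md").write_text(markdown)
```

Re-run it on a schedule and step 5 is free: the agent always reads the latest files.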

The Setup

MCP servers for each knowledge source:

| Source | What it provides |
| --- | --- |
| ClickUp | Tasks, sprint priorities, decisions in comments |
| Figma | Component specs, design tokens, layout rules |
| Neo4j | Schema introspection, relationship patterns |
| Slack | Discussions, tribal knowledge |
| Postman | API collections, endpoint docs |

One .mcp.json. One .env. Same access across Claude and ChatGPT.
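
For reference, a stripped-down `.mcp.json` looks something like this; the server package names are illustrative, not necessarily the ones I ran:

```json
{
  "mcpServers": {
    "clickup": {
      "command": "npx",
      "args": ["-y", "clickup-mcp-server"],
      "env": { "CLICKUP_API_KEY": "${CLICKUP_API_KEY}" }
    },
    "figma": {
      "command": "npx",
      "args": ["-y", "figma-mcp-server"],
      "env": { "FIGMA_TOKEN": "${FIGMA_TOKEN}" }
    }
  }
}
```

Secrets stay in `.env`; the config only references them.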

The Extraction

ClickUp: 36 tasks from 99 total

Pulled everything related to a Project Management feature:

  • Companies & Dashboard: 7 tasks
  • Integrations: 16 tasks
  • Workflow & Sync: 7 tasks
  • Executive & Reporting: 2 tasks
  • User Settings: 1 task

Each task was captured with ID, status, sprint, and comments. The comments often hold decisions that never made it into documentation.
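
If you were scripting this directly against ClickUp's v2 REST API instead of going through the MCP server, the pull is a few lines (`list_id` is a placeholder):

```python
# Fetch tasks plus their comment threads from ClickUp's v2 REST API.
# Comments are fetched per task because that's where decisions hide.
import os
import requests

BASE = "https://api.clickup.com/api/v2"
HEADERS = {"Authorization": os.environ["CLICKUP_API_KEY"]}

def fetch_tasks_with_comments(list_id: str) -> list[dict]:
    tasks = requests.get(f"{BASE}/list/{list_id}/task", headers=HEADERS).json()["tasks"]
    for task in tasks:
        resp = requests.get(f"{BASE}/task/{task['id']}/comment", headers=HEADERS)
        task["comments"] = resp.json().get("comments", [])
    return tasks
```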

Figma: Rate-limited, so I adapted

Figma's API returned 429 errors. Six calls per month on the free tier.

Instead of brute-forcing, I pulled design context from the PRD and a 45-minute walkthrough transcript where the designer explained every interaction.

Meeting Transcripts: The Hidden Goldmine

A design walkthrough contained:

"The AI will have all this context, and I wouldn't describe it as automated tasks, but more so as the AI creating a plan... The manager is always in the loop."

That single quote clarified the entire product philosophy. No API returns that. No JSON schema captures it. It was 47 minutes into a screen share that three people attended.

Your best context is trapped in recordings no one indexes.

The Structure

workspace/
├── docs/
│   ├── meetings/interpretations/
│   │   └── transcript-A1-design-walkthrough.txt
│   │
│   └── projects/[feature-name]/
│       ├── CLICKUP_CONTEXT.md
│       ├── FIGMA_CONTEXT.md
│       ├── TECHNICAL_CONTEXT.md
│       ├── PRD.md
│       └── images/css/
│
├── .env
├── .mcp.json
└── CLAUDE.md

Structure mirrors context types. Humans and AI navigate the same paths.

If an agent needs design context, it looks in docs/projects/[feature-name]/FIGMA_CONTEXT.md. Every time. No guessing.

The Constraint That Helped

Figma's rate limit felt like a wall. It was a filter.

Without unlimited API calls, I couldn't dump raw component trees into context. Forced to ask: what does the agent actually need? Not pixels. Not hex codes. Intent.

I documented a strategy:

  1. get_metadata — Lightweight structure, no styling overhead
  2. create_design_system_rules — One-time setup, compounds forever
  3. get_design_context — Heavy call, use sparingly

| Call | ROI | Why |
| --- | --- | --- |
| create_design_system_rules | 10/10 | One-time, compounds forever |
| get_code_connect_map | 9/10 | Prevents redundant generation |
| get_metadata | 8/10 | Structure without payload |
| get_design_context | 6/10 | Heavy but sometimes necessary |

Constraints eliminate grab-everything thinking. You extract what matters. You synthesize instead of dump.
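
In code, the strategy is a tiered fallback. Here `call_tool` is a hypothetical stand-in for however your MCP client invokes a server tool:

```python
# Tiered Figma access: cheap structural call first, heavy call only on demand.
# call_tool is a hypothetical stand-in for an MCP client's tool-invocation method.
def fetch_design_info(call_tool, node_id: str, needs_full_styling: bool = False) -> dict:
    metadata = call_tool("get_metadata", {"nodeId": node_id})  # lightweight structure
    if not needs_full_styling:
        return metadata  # intent usually lives here, not in pixel values
    return call_tool("get_design_context", {"nodeId": node_id})  # heavy, use sparingly
```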

The Synthesis

Raw extractions aren't context. They're data.

FIGMA_CONTEXT.md combines:

  • Screen specifications from the PRD
  • Interaction details from walkthrough transcripts
  • Direct quotes explaining design decisions
  • Component library mapping

One document. Four input sources. Actionable by agents.

Synthesis > extraction. An agent with one synthesized document outperforms an agent with ten raw dumps.
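
The merge step itself can be dumb; the intelligence is in what you feed it. A sketch, with the section text already extracted upstream and the feature name purely illustrative:

```python
# Merge four already-extracted inputs into one agent-ready document.
from pathlib import Path

def synthesize_figma_context(feature: str, sections: dict[str, str]) -> Path:
    """sections maps a heading to extracted text, e.g. quotes from the transcript."""
    body = "\n\n".join(f"## {title}\n\n{text}" for title, text in sections.items())
    out = Path("docs/projects") / feature / "FIGMA_CONTEXT.md"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(f"# Figma Context: {feature}\n\n{body}\n")
    return out

# Usage: four inputs, one document.
synthesize_figma_context("project-management", {
    "Screen Specifications (from PRD)": "...",
    "Interactions (from walkthrough transcript)": "...",
    "Design Decisions (direct quotes)": "...",
    "Component Library Mapping": "...",
})
```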

The Five-File Rule

No more than five context files per agent prompt:

  1. Technical Context — Data models, API patterns
  2. Design Guidelines — Visual system, components
  3. Figma Context — Screen specs, interactions
  4. Task Context — What's decided, what's pending
  5. PRD — The why and what

More than five is overload. If you can't fit it in five, you haven't synthesized enough.

This constraint isn't arbitrary. Every time I couldn't fit context into five files, the problem was my documentation, not the limit.
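
The rule is easy to enforce mechanically, which keeps me honest. A sketch:

```python
# Refuse to assemble a prompt from more than five context files.
from pathlib import Path

MAX_CONTEXT_FILES = 5

def build_prompt(task: str, context_files: list[Path]) -> str:
    if len(context_files) > MAX_CONTEXT_FILES:
        raise ValueError(
            f"{len(context_files)} context files given; "
            f"synthesize down to {MAX_CONTEXT_FILES}."
        )
    context = "\n\n".join(f"<!-- {p.name} -->\n{p.read_text()}" for p in context_files)
    return f"{context}\n\n## Task\n\n{task}"
```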

The Insight

Building an AI-ready workspace is identical to onboarding a senior engineer.

You'd give them codebase access, requirements walkthrough, design system orientation, team conventions, relevant discussions.

AI agents need the same onboarding. They're just faster at reading it.

The difference: do it once, structure it well, reuse indefinitely. A senior engineer forgets half by next quarter. An agent with structured context never forgets.