Podcast
Why MCP isn't enough: Enhancing agent capabilities with a context engine
Brandon Waselnuk · January 30, 2026

I sat down with Peter, one of our founding engineers, to talk about MCP and why it hasn't delivered on its promise. We covered what happens when you ask a code agent about internal systems it's never heard of, why senior engineers get the most out of these tools, and the customer who asked why they couldn't just dump their entire corpus into a million-token context window.

Here's an overview of our conversation.

The experienced engineer problem

Peter has a way of describing what's wrong with code agents today.

"You have a very experienced engineer that's brand new to an organization. They have all this wealth of software engineering experience, but none of the context."

To get a task done, they search around. They might find things that look important and correct. They do the task. But they've probably missed something relevant.

That's how code agents work. They don't have a grounded view of an organization's context. They don't know where to look. You give them tools, and they've been trained to use those tools, but they might not know there's a Slack conversation somewhere that discussed this exact issue six months ago.

Or you have a microservice architecture with a dozen repos. The agent only sees one. Reasoning across the others requires access, awareness that they exist, and knowing when to look.

The Source Mark Engine test

Peter gave me a concrete example. Unblocked has a system called the Source Mark Engine. It tracks code changes over time and moves conversations from pull requests into the IDE gutter as files evolve. It was built before LLMs existed.

"Source mark" isn't a term the models know. So Peter tried something this morning: he asked a code agent connected to the Unblocked repo, "how do we track code changes through time?"

"It just won't get it. It'll search for keywords. It might find some related things. But it just couldn't do it."

It searched around and never found the answer.

Ask Unblocked the same question, and it gets the source code plus the discussions that happened around it. That's the difference between searching and actually understanding.

Senior engineers get the most mileage

This led to an observation that stuck with me.

"The people that get the most mileage out of these tools today are the senior engineers that already have all of that organizational context. So you're missing out on a huge cohort of engineers to unlock value for."

The engineers who need the most help are the ones these tools help the least. They don't know where to look. They don't know what to ask for. They spend hours manually curating context to give to the agent.

Peter called it "a labor of love to all the engineers I've seen manually curate context to give to one of these things."

I've been there. You carefully craft prompts and send the agent down a path you already know it needs to take. Which defeats part of the point.

"I want code that makes me look like a 10x engineer"

Peter remembered typing something like this to an agent once: "I want totally idiomatic code that makes me look like a 10x engineer."

"And yeah, it's not gonna do that."

This is where context engines get interesting. When I built a Zapier integration using Unblocked for context, the output was fewer lines of code than a naive approach. Less token usage too. The agent could get the data upfront instead of thrashing around looking for it.

Claude Code in GitHub Actions

Peter described a pattern he's seeing with customers: Claude Code running in GitHub Actions.

"It's a very powerful idiom. The REPL loop. You can suggest a change, validate whether it works by running the build again, more or less instantaneously."

But it still suffers from the same problems. No organizational context. Can't see dependencies. Can't reason about Slack conversations.

So customers wire it up to Unblocked's MCP. They instruct the agent to call Unblocked first, gather context, get on the rails.

"Rather than Claude Code having to rip around to figure out what's going on, it can call Unblocked. Get the PR details right away. Understand the conversations that have taken place. Shortcut a lot of that wayfinding activity."

The result: faster builds, better correctness, fewer tokens burned.
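The shape of that pattern can be sketched in a few lines. Everything here is invented for illustration: fetch_context() stands in for a call to Unblocked's MCP server, and none of the names reflect its real API.

```python
# Hypothetical sketch: fetch_context() stands in for an Unblocked MCP
# call; these names are illustrative, not the real API.

def fetch_context(task: str) -> str:
    """Pretend MCP call returning PR details and related discussions."""
    return f"[PR details and discussions relevant to: {task}]"

def build_prompt(task: str) -> str:
    # Gather organizational context up front, so the agent starts on
    # the rails instead of ripping around the repo to find it.
    context = fetch_context(task)
    return (
        "Read the context below before searching the codebase.\n"
        f"Context: {context}\n"
        f"Task: {task}"
    )

print(build_prompt("make the failing integration test pass"))
```

The point of the ordering: the context call happens before the agent's first search, not after it gets stuck.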

Daisy chaining context

One thing I hadn't thought about: you can instruct the agent to keep calling back for more context as it works.

"You can instruct it to say, when you encounter this next scenario, call back to get more context. You can basically daisy chain these context calls together."

Keep the agent tight, on the rails, fully aware of your organization's context as it goes.
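As a toy illustration of that daisy chain, assuming a canned context store in place of a real context engine (the topics and responses are invented):

```python
# Toy daisy chain: each context call can name the next scenario to
# look up. get_context() is a stand-in for a real MCP context call.

def get_context(topic: str):
    store = {
        "pr details": ("PR changes the auth flow", "auth service"),
        "auth service": ("auth service owns token rotation", None),
    }
    return store[topic]  # (context text, follow-up topic or None)

def gather(initial_topic: str) -> list[str]:
    context, topic = [], initial_topic
    while topic is not None:
        text, topic = get_context(topic)  # call back for more as needed
        context.append(text)
    return context

print(gather("pr details"))
# → ['PR changes the auth flow', 'auth service owns token rotation']
```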

"Why don't I just whack my entire corpus into a million context window?"

I asked Peter about the build-vs-buy conversations he has with customers. Some haven't tried building a context engine yet. They ask basic questions, then realize how hard it is.

Others have tried and failed. They come with what Peter called "sophisticated questions that are basically tests for competence."

But there's a third type: the customer who thinks they don't need a context engine at all.

"Early on we had customers go, 'Well, why would I need a context engine? Why don't I just take my entire organizational corpus and whack it into a million context window?'"

Peter's response: "For one thing, no, you can't. Your corpus is way bigger than that."

Even if you extrapolate to infinite context windows, there are what he called "Newtonian constraints." Network pipeline. Processing power. The way models work today.

And even if you could fit everything, what would the model pay attention to? Would it get distracted by irrelevant context? Would it find incorrect information and run with it?

"All the things you need. None of the things you don't. If you have all that stuff in there, you're basically leaving it up to the model to make decisions about what's important. And it doesn't have the human judgment or the procedural tools to do that effectively."
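Even the "just fit it all" arithmetic falls apart quickly. A back-of-the-envelope sketch, assuming roughly 4 bytes of English text per token and purely illustrative corpus sizes:

```python
# Back-of-the-envelope only: BYTES_PER_TOKEN and the corpus sizes
# below are rough assumptions, not measurements.

BYTES_PER_TOKEN = 4                          # ~4 bytes of English per token
window_bytes = 1_000_000 * BYTES_PER_TOKEN   # a million-token window ≈ 4 MB

corpora = {
    "one large repo": 200 * 1024**2,                 # ~200 MB
    "repos + Slack + docs + tickets": 5 * 1024**3,   # ~5 GB
}

for name, size in corpora.items():
    print(f"{name}: ~{size / window_bytes:.0f}x the window")
```

Under these assumptions even a single large repo is tens of windows, before you add any conversations or documentation.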

MCP's bugbears

I asked Peter what he'd like to see change about MCP.

First: authentication. There's no agreed-upon standard, so everyone's building their own. Entire companies have sprung up just to handle authentication for remote MCP servers.

Second: context bloat. MCP servers have to describe what they do, and that description eats up context. Some providers aren't good citizens.

"If you're not a good citizen as an MCP server provider, you can end up chewing up way more than your fair share."

There's no enforcement mechanism. The code agent providers could put rules in place, but the problem is more basic than that.
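To make the bloat concrete, here's a toy accounting of description overhead. The tool descriptions and the words-to-tokens rule of thumb (~0.75 words per token) are assumptions for illustration, not numbers from the MCP spec:

```python
# Toy sketch: every server's tool descriptions land in the shared
# context window before the agent does any real work.

def approx_tokens(text: str) -> int:
    return round(len(text.split()) / 0.75)  # rough rule of thumb

# Invented descriptions: one terse server, one that isn't a good citizen
descriptions = {
    "good_citizen.search": "Search the codebase for a keyword.",
    "bad_citizen.search": "This tool searches. " * 200,
}

overhead = sum(approx_tokens(d) for d in descriptions.values())
print(f"~{overhead} tokens spent before the first user message")
```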

Security in practice

On the security side, Unblocked uses OAuth authentication. But they recently added something interesting: API tokens bound to what they call data source presets.

"That allows you to create an API token that is focused or filtered on very specific pieces of data."

This is especially useful for automation. You might not want the automation pinned to a particular identity, but you need it to have access to specific data and nothing else.
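A minimal sketch of the idea, with the caveat that the names and structure are invented here; Unblocked's actual token API isn't described in this post:

```python
# Hypothetical scoped token: access follows the token's data-source
# preset rather than a person's identity.

from dataclasses import dataclass

@dataclass(frozen=True)
class PresetToken:
    name: str
    sources: frozenset  # the only data this token may read

    def can_read(self, source: str) -> bool:
        return source in self.sources

# An automation token filtered to specific data, nothing else
ci_token = PresetToken("ci-bot", frozenset({"repo:payments", "jira:PAY"}))

print(ci_token.can_read("repo:payments"))   # True
print(ci_token.can_read("slack:#general"))  # False
```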

Peter also mentioned they just shipped personal access tokens for MCP, for agents that don't support OAuth. And to those agents he had this to say:

"Shame on you. We're gonna find you."

Listen to the full conversation

This post covers some of the highlights, but the full conversation goes deeper into how customers are wiring up MCP in their workflows, the build-vs-buy tradeoffs, and where Peter thinks the protocol needs to go.

Give it a listen.

Unblocked is available now. Read our docs or reach out to see it in action on your codebase.
