Perspectives
Finally, everyone is talking about context
Dennis Pilarinos · February 10, 2026

If you pay attention to the developer tools ecosystem, you’ve probably noticed more messaging promising “context-aware” everything.

Many developer tool vendors now position “context” as the key to higher-quality code and fewer back-and-forth loops with agents. For many of these tool providers, “context-aware” means the tool can see more than a single file, with users manually supplying extra data through MCP, APIs, CLIs, projects, or rules.

But that isn’t context.


Context, by definition, implies understanding.

Expanding visibility or wiring in more tools only provides access. Connect an agent to Slack, Jira, GitHub, docs, or observability data, and it can query those systems and return something that looks roughly correct. Then it stops. Discernment is left to the agent, and agents are not good at that.

Increasing access to information 

When ChatGPT and GitHub Copilot first appeared, they worked in a vacuum. They generated syntactically correct code that was often wrong in intent. The industry responded by expanding access: larger context windows, MCP, project knowledge, repo search, and cross-codebase awareness. Claude added Skills, Copilot added organizational knowledge, and Cursor made the codebase queryable. This helped, but it did not solve the problem developers actually have.

Information ≠ context

Additional information has made code generation feel smoother, but not meaningfully better. Painfully, as you wire up countless MCP servers to provide more information, the problem worsens.

Agents struggle with a problem called “satisfaction of search”. They are optimized to produce an answer, not to fully understand the problem. Once they retrieve something that looks plausible, they stop. Their search behavior is shallow and confidence-driven, shaped by training that rewards fluent completion rather than exhaustive exploration. Tool use reinforces this pattern: each query is treated as a cost, so the agent minimizes exploration and settles early.

Developers behave differently. They keep searching until the answer aligns with intent, constraints, and prior decisions. Agents lack that sense of decision-grade completeness. They cannot tell when context is missing, when a result is merely convenient, or when stopping early creates risk. 

As a result, adding more data sources improves access to information, but it does not fix judgment.
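To make that difference concrete, here is a minimal TypeScript sketch of the two stopping behaviors, using stubbed sources rather than real integrations. Everything here (the Snippet shape, the confidence threshold, the constraint list) is an assumption made for illustration, not a description of how any particular agent is built.

```typescript
// Two retrieval loops over the same stubbed sources. "plausibleFirst" stops at
// the first result that clears a confidence bar; "coverageDriven" keeps querying
// until every stated constraint of the task has been addressed.

interface Snippet {
  source: string;
  text: string;
  confidence: number;   // how plausible the result looks on its own
  covers: string[];     // which constraints of the task this snippet addresses
}

// Stub sources standing in for docs, Slack, Jira, etc.
const sources: Array<() => Snippet[]> = [
  () => [{ source: "docs", text: "Use the v1 retry policy.", confidence: 0.9, covers: ["retry"] }],
  () => [{ source: "slack", text: "v1 retries were replaced by the queue in March.", confidence: 0.7, covers: ["retry", "queue"] }],
  () => [{ source: "jira", text: "Queue rollout is blocked on the auth migration.", confidence: 0.6, covers: ["auth"] }],
];

// Agent-style: settle on the first plausible answer and stop.
function plausibleFirst(threshold = 0.8): Snippet[] {
  for (const query of sources) {
    const hits = query().filter((s) => s.confidence >= threshold);
    if (hits.length > 0) return hits; // remaining sources are never consulted
  }
  return [];
}

// Developer-style: keep searching until the task's constraints are covered.
function coverageDriven(constraints: string[]): Snippet[] {
  const collected: Snippet[] = [];
  const remaining = new Set(constraints);
  for (const query of sources) {
    if (remaining.size === 0) break;
    for (const snippet of query()) {
      collected.push(snippet);
      snippet.covers.forEach((c) => remaining.delete(c));
    }
  }
  return collected;
}

console.log(plausibleFirst());                           // only the stale docs answer
console.log(coverageDriven(["retry", "queue", "auth"])); // docs + Slack correction + Jira blocker
```

The first loop never sees the Slack correction or the Jira blocker because the docs answer already cleared its confidence bar; the second keeps going until the task’s constraints are accounted for.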

Innovative engineering teams see this gap clearly and are trying to close it. Hedgineer’s team built a company-wide knowledge layer using Claude Skills: impressive, but costly for a small team. LinkedIn built contextual agent playbooks that give AI deep organizational understanding. It’s inspiring work, and the team is now focused on building more dynamism into the tool.

These teams are treating context as infrastructure. They’re building systems that continuously learn relationships between code, conversations, decisions, and documentation. 

The missing layer in the AI development stack

The gap between information access and actionable context points to a new infrastructure layer for AI development: a context engine.

Rather than acting as a thin retrieval or reasoning layer, a context engine continuously synthesizes organizational knowledge across disparate sources into unified, queryable understanding. 

Its job is not to fetch more data, but to determine which context is decision-relevant, for this developer, at this moment.

These are the essential pieces:

User relevance and permissions

Context must be scoped to the right developer and constrained by what they are allowed to see. Individual MCP servers handle authentication in isolation and have little notion of personalization. They may know who a user is, but not what that person is working on, who they collaborate with, or which decisions are relevant to their role. As a result, agents require manual scoping and still surface information that is either irrelevant or incomplete.

A context engine maintains unified access control across all data sources while modeling user relevance. It respects source-system permissions, tracks short-term task context and long-term preferences, and focuses retrieval on what matters for this developer in this moment. The result is context that is both safe and useful, without requiring constant re-specification.
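As a hypothetical sketch of what permission-aware, user-scoped retrieval could look like in TypeScript: the ContextItem and UserProfile shapes and the crude task-over-preference weighting are assumptions for the example, not Unblocked’s implementation.

```typescript
// Permission-scoped, user-relevant retrieval over an in-memory set of items.
// Shapes and weights are illustrative assumptions.

interface ContextItem {
  id: string;
  text: string;
  aclGroups: string[];   // groups allowed to see this item in the source system
  topics: string[];      // what the item is about
  updatedAt: Date;
}

interface UserProfile {
  groups: string[];      // mirrors source-system permissions
  activeTask: string[];  // short-term: topics of the branch / ticket in progress
  preferences: string[]; // long-term: areas the developer habitually works in
}

function retrieveForUser(items: ContextItem[], user: UserProfile, limit = 5): ContextItem[] {
  return items
    // 1. Enforce source permissions before any ranking happens.
    .filter((item) => item.aclGroups.some((g) => user.groups.includes(g)))
    // 2. Rank by relevance to the current task, then by long-term preferences.
    .map((item) => {
      const taskHits = item.topics.filter((t) => user.activeTask.includes(t)).length;
      const prefHits = item.topics.filter((t) => user.preferences.includes(t)).length;
      return { item, score: taskHits * 2 + prefHits };
    })
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map(({ item }) => item);
}
```

The ordering matters: the permission filter runs before any ranking, so relevance never surfaces something the developer is not allowed to see.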

Relationship-aware knowledge retrieval

Organizational knowledge is scattered and interdependent. Code links to Jira tickets, which link to PRs, which reference Slack discussions and documentation. Agents connected to individual MCP servers search these systems sequentially. They query one source, get a plausible answer, and stop, with no understanding that critical context may live elsewhere.

A context engine queries all relevant sources in parallel, explores relationships between artifacts, and assembles a connected view of the problem space. It traverses these links in real time, building understanding across the entire knowledge graph rather than returning isolated fragments. This shifts retrieval from sequential hunting to comprehensive understanding.
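A rough sketch of that pattern in TypeScript follows. The SourceAdapter and Artifact shapes are invented for illustration; the structure is the point: a parallel first pass across every source, then a bounded walk over the links between the artifacts that come back.

```typescript
// Fan out to every source at once, then follow the links between the returned
// artifacts (PR -> ticket -> Slack thread -> doc) instead of stopping at the
// first hit. Shapes and names here are illustrative.

interface Artifact {
  id: string;          // e.g. "github:pr/412", "jira:PAY-88"
  summary: string;
  linksTo: string[];   // ids of related artifacts referenced by this one
}

type SourceAdapter = (query: string) => Promise<Artifact[]>;

async function assembleContext(
  query: string,
  adapters: SourceAdapter[],
  lookup: (id: string) => Promise<Artifact | undefined>,
  maxDepth = 2,
): Promise<Artifact[]> {
  // 1. Query all sources in parallel rather than sequentially.
  const firstPass = (await Promise.all(adapters.map((a) => a(query)))).flat();

  // 2. Breadth-first traversal of the relationships between artifacts.
  const seen = new Map<string, Artifact>();
  firstPass.forEach((a) => seen.set(a.id, a));
  let frontier = firstPass;
  for (let depth = 0; depth < maxDepth && frontier.length > 0; depth++) {
    const nextIds = [...new Set(frontier.flatMap((a) => a.linksTo))].filter((id) => !seen.has(id));
    const next = (await Promise.all(nextIds.map(lookup))).filter(
      (a): a is Artifact => a !== undefined,
    );
    next.forEach((a) => seen.set(a.id, a));
    frontier = next;
  }
  return [...seen.values()];
}
```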

Conflict resolution and prioritization

When multiple sources are stitched together, contradictions surface. Documentation may describe an approach that a Slack thread quietly replaced months ago. When given access, agents surface both and leave reconciliation to the developer. Access without judgment only amplifies confusion.

A context engine resolves these conflicts by modeling recency, authority, and organizational patterns. It distinguishes between current practice and stale guidance, prioritizes decision-relevant signals, and surfaces the context most likely to be correct for the decision at hand. This is what turns raw information into actionable context.
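A minimal sketch of that kind of prioritization, with made-up authority weights and a 90-day recency half-life standing in for the richer signals a real system would model:

```typescript
// When two snippets answer the same question, prefer the one that is both
// recent and from an authoritative source. Weights and half-life are assumptions.

interface Candidate {
  source: "docs" | "slack" | "jira" | "code";
  text: string;
  updatedAt: Date;
}

// Rough notion of how much each source is trusted to reflect current practice.
const authority: Record<Candidate["source"], number> = {
  code: 1.0,   // what actually ships
  docs: 0.8,
  jira: 0.6,
  slack: 0.5,  // informal, but often where decisions quietly change
};

function rank(candidates: Candidate[], now = new Date()): Candidate[] {
  const halfLifeDays = 90;
  const score = (c: Candidate): number => {
    const ageDays = (now.getTime() - c.updatedAt.getTime()) / 86_400_000;
    const recency = Math.pow(0.5, ageDays / halfLifeDays); // exponential decay
    return authority[c.source] * recency;
  };
  return [...candidates].sort((a, b) => score(b) - score(a));
}

// A stale doc loses to a fresher Slack decision despite the doc's higher authority.
const ranked = rank([
  { source: "docs", text: "Retries use the v1 policy.", updatedAt: new Date("2024-01-10") },
  { source: "slack", text: "We moved retries to the queue.", updatedAt: new Date("2025-09-02") },
]);
console.log(ranked[0].text);
```

In practice the signals would go well beyond two numbers (explicit supersedes links, who made the call, whether the change actually shipped), but the shape of the decision is the same: recency and authority become an ordering instead of a reconciliation job left to the developer.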

So what does this mean in practice?

The industry has largely solved access. Agents can see more systems, more files, more history. What remains unsolved is understanding. Without an infrastructure layer that determines what context is relevant, authoritative, and safe for a specific developer in a specific moment, tools optimize for speed and plausibility, not correctness.

That gap is the real context problem. It is not about connecting more systems. It is about turning scattered organizational knowledge into decision-grade context. Until that layer exists, “context-aware” will continue to mean faster answers that still miss the point.
