Podcast
What happens when you put three AI tool builders in a room together
Brandon Waselnuk · January 30, 2026

I sat down with Peter, David, and Richie for a fireside chat about how Unblocked is evolving, what developers actually want from AI tools, and why privacy comes up in every conversation.

"All the things you need, none of what you don't"

When I asked Peter to pitch Unblocked in a sentence, he landed on something that stuck with me: "All the things you need, none of what you don't."

It sounds simple, but there's a lot packed into that statement. The AI tooling space has exploded with features. Every product is racing to add more capabilities, more integrations, more toggles. Peter's pitch is almost contrarian: what if the value isn't in having everything, but in having exactly what you need, precisely when you need it?

As David put it, the goal is to make the AI feel like "a really great pair programmer who happens to know everything about your company." Not a Swiss Army knife with 47 blades you'll never use. A colleague who already has context.

The echo chamber experiment

One of the more surprising moments came when David described what he called "a personal experiment in humility." He deliberately tried to get Unblocked to tell him he was right about something technical when he wasn't.

"I wanted to see if I could create an echo chamber," he explained. "Like, really lean into confirmation bias and see if the tool would just agree with me."

It didn't work. The context engine kept pulling in information that contradicted his assumptions. "It was actually kind of annoying," David admitted with a laugh. "I wanted it to validate my bad take, and it just wouldn't."

This is the kind of thing that's hard to appreciate until you've experienced it. Most AI tools will happily agree with whatever framing you give them. A tool that pushes back, gently but persistently, is surprisingly rare.

The private default hockey stick

Richie shared some usage data that caught everyone's attention. When they introduced a "private by default" option for AI interactions, adoption followed a hockey stick curve.

"We weren't sure if anyone would care," Richie said. "But the moment we gave people the option to keep their queries private, usage exploded."

The insight here is subtle but important. Developers weren't avoiding AI tools because the tools were bad. They were avoiding them because using the tools meant exposing their work, their questions, their uncertainties to an audience they couldn't control. Give people privacy, and suddenly they're willing to ask the "dumb" questions that actually accelerate learning.

Peter connected this to a broader pattern he's seen: "People will tolerate a lot of friction if they feel safe. Remove the safety, and no amount of convenience matters."

The Source Mark Engine test

Peter brought up a test he'd run before joining Unblocked. He'd been evaluating AI coding tools by asking them questions about internal codebases he knew well.

"I'd ask something where I already knew the answer," he explained. "Not to test whether the AI was smart, but to see what it would do with incomplete information."

Most tools failed predictably: they'd hallucinate confidently, citing patterns that didn't exist in the codebase or making up function names. Unblocked's context engine was the first tool that actually admitted uncertainty when the relevant code wasn't in its context window.

"That's when I knew something different was happening," Peter said. "An AI that knows what it doesn't know is way more useful than one that pretends to know everything."

The model selector rant

The conversation got heated when someone brought up model selectors: the dropdown menus in AI tools that let you pick GPT-4 versus Sonnet versus whatever else.

David was emphatic: "If I have to think about which model to use, you've already failed me."

The argument is that model selection is a technical implementation detail that's been exposed to users who typically have no basis for making the decision. Most people have no idea what the practical differences are. They pick randomly, or they pick based on vibes, or they just leave it on the default.

"It's like asking someone which database query optimizer they want," Peter added. "Just handle it. Route my question to whatever's going to give me the best answer. I don't want homework, I want help."

Richie pushed back slightly, noting that some power users genuinely want the control. But even he admitted that exposing model selection as a primary interface element is probably wrong. "Maybe it should be in settings somewhere, for the people who care. But front and center? That's us admitting we don't know what we're doing."
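
What "just handle it" might look like in practice is a small routing layer: classify the request, pick a model, never show the user a dropdown. This is a hedged sketch of the design choice David and Peter are arguing for, not Unblocked's implementation; the model names and the keyword heuristic are invented for illustration.

```python
def classify(prompt: str) -> str:
    """Crude request classifier; a real router would use something smarter."""
    code_markers = ("def ", "class ", "stack trace", "```", "error:")
    lowered = prompt.lower()
    if any(marker in lowered for marker in code_markers):
        return "code"
    if len(prompt.split()) > 200:
        return "long_context"
    return "general"

ROUTES = {
    "code": "model-strong-coder",        # hypothetical model IDs
    "long_context": "model-big-window",
    "general": "model-fast-default",
}

def route(prompt: str) -> str:
    """Return the model to use; a buried 'advanced' setting could override this."""
    return ROUTES[classify(prompt)]

print(route("Why does `def parse()` throw a KeyError?"))  # model-strong-coder
```

Richie's compromise fits this shape neatly: the override for power users lives in settings, while `route` is what everyone else gets by default.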

The 20-year transformation

One of the quieter moments came when Peter talked about watching a senior engineer with 20 years of experience use Unblocked for the first time.

"You could see this transformation happen," Peter said. "He started skeptical, the way you'd expect. And then he asked a question about a system he'd built years ago, something where the documentation was scattered across Confluence and Slack and old threads."

The tool pulled together context from all those sources and gave him a coherent answer. "He just sat there for a second. Then he said, 'That would have taken me two hours to piece together.'"

That's the promise, really. Not replacing the 20 years of experience, but making that experience more accessible and easier to leverage. The senior engineer still has to evaluate whether the answer is right and what to do next. But they don't have to spend two hours gathering the raw materials.

What comes next

The conversation wrapped with some speculation about where AI-assisted development is heading. All three agreed that the tools are going to get more opinionated, more specialized, and more deeply integrated into existing workflows.

"The generic chatbot era is ending," David predicted. "People want tools that actually understand their specific context, their specific codebase, their specific team. That's where the real value is."

Peter's take was characteristically blunt: "In two years, using an AI tool without context is going to feel like using Google without being logged in. Technically functional, but you'll wonder how you ever put up with it."

Listen to the full conversation

This post covers some of the highlights, but the full conversation goes deeper into the serialization constraints of MCP, why infinite context windows don't solve the problem, and the persona differences between individual contributors and managers.

Give it a listen.

Want to see if context-aware AI changes how you work? Try Unblocked free.

Read our docs or reach out to see it in action on your codebase.
