
Podcast
Ceding control: The future of UX Design in the age of agents
Brandon Waselnuk · January 30, 2026

I sat down with Matt, one of our front-end engineers, to talk about what happens when AI asks developers to give up control. We covered the Prettier precedent, why Matt roasts his own coffee beans, what's wrong with model selectors, and why companies that deploy Cursor across their org often don't see the gains they expected.

Here's an overview of our conversation.

The control thesis

Matt has a theory about why some engineers struggle with AI tools. It's about control.

"Historically, software engineers have had a lot of control. Sometimes control over almost every part of what they're working on."

He walked through the stack of decisions engineers are used to making: what language, what database, what library, what patterns the team uses, what algorithms are approved. Then further down: what color is your IDE theme, what shell do you use. Engineers have spent years learning to make these decisions. They like thinking through problems. They like exercising judgment.

AI is asking them to give some of that up.

The Prettier precedent

Matt pointed to a precedent: Prettier.

"Before Prettier, teams would argue endlessly about code formatting. Indenting, a space after every semicolon, any of that. That took up a lot of mental energy. People would argue. People would leave PR comments constantly reminding each other, this doesn't match our format."

Then Prettier came along and made a deal: cede control to us, and you won't have to think about this anymore. The part of your brain that was taken up with formatting debates can be used for better things.

"In retrospect, maybe people were scared about that a little bit. They didn't wanna give that kind of control up. But after they tried it out, they say, yeah, I can't imagine having that kind of argument anymore."

AI is asking for a similar trade. The exact code you write, the algorithm you choose, how your classes interact. As a first principle, you shouldn't need to worry about that as much. The bot gives you a first cut.

Some people are enthusiastic. Others, especially those who've been in the profession a long time, have mixed feelings.

Matt roasts his own coffee

We bonded over coffee. Matt takes it seriously. He roasts his own beans, grinds them exactly how he likes.

It's a good metaphor for the control question. Some people just want caffeine. They'll go to McDonald's and get a cup of coffee. That's fine. Others want to control every variable.

"Every once in a while it's like, I do want to write every character of syntax for this thing."

Matt works on front-end, where precision matters. And in his experience, LLMs tend to miss the mark on detail-oriented UI work. They'll get the code logically correct, but when you open it in a browser, it doesn't look right.

"I'm on the hook for anything I commit"

This is where Matt draws the line on ceding control.

"When I use these AI tools, I'm on the hook for anything I commit. Anything I push to git. I'm on the hook for fixing it. I'm on the hook for making sure it's correct. I'm on the hook for future bugs."

Even if a bot generates the code, Matt needs to understand it. Bugs will inevitably come, and if he doesn't understand the code, that's a problem.

I shared my own disaster story. I was working on a personal website and told an agent to handle different screen sizes. So it made a list of every single screen size breakpoint.

"I don't think this is the elegant approach."

High level to narrow to stepping in

Matt described his workflow with AI tools. He starts high: ask the bot to make a plan, list out options, generate some skeleton code. Then he refines each part.

"Usually if I start by just asking a bot to implement this feature for me, even if I give it a lot of detail, it's gonna fall over pretty quickly."

He works down through the layers until he hits a point where he needs to step in and write code himself. Then he uses bots to fill gaps, like unit tests. But even then, the tests often miss logical constraints.

"I'll still have to go through and say, actually this one, you're missing this behavior that we need to verify for the unit test to actually work correctly."

The model selector rant

I asked Matt why tools still expose model selection to users. He had opinions.

"The motivation from a user point of view is they've tried to do some task. They've instructed the bot to do something. They feel the bot failed. So now they're gonna change the model and see if that gets any better."

It's spitballing. You don't know what will work, so you try a bunch of things.

Matt thinks this won't last. At Unblocked, there's no model selector. For each workflow, they've done the research to figure out what model works best. Asking every user to do that research themselves doesn't make sense.

"If VS Code had a dropdown asking what database should VS Code use, you'd think, well, that's a bizarre question. Why should I care about the database? You built the thing. Why don't you pick the database?"

And the problem compounds. Every time a new model drops, which is every few days, does everyone at the company stop working to research whether it's better than what they had before?

"I'm willing to cede this control. I don't want to think about those things at all."

Chat boxes are a starting point

I asked how Matt thinks about UX in AI tools. Right now, it's mostly chat boxes.

"I think it was a great way to start. Here's a box where you can put in anything. You can ask anything. You can define any task."

But he thinks it's a starting point, not the end state. More targeted workflows will emerge.

He pointed to Cursor's new UI editor feature. It opens a browser preview of your product, lets you drag and drop UI elements, and the bot suggests a work plan for how to implement those changes. That's a more tuned workflow for a specific task.

Another example: git worktrees. Tools like Conductor can spin up six versions of your code on six git worktrees from a single prompt, then show you the prototypes and ask which one you like.

"Nobody knew about git worktrees really before this whole thing. And now every AI tool is using it."

The adoption problem

Matt sees a pattern with companies that deploy AI coding tools.

"A lot of them, their teams deploy some agentic coding tool across the engineering org. They're like, you're all in Cursor. You're all in Claude Code. Here's a license, have fun. But then they don't see the gains of engineering output that they read other companies are getting."

The missing piece is context. Without it, the tools don't perform well, which erodes trust, which kills adoption.

Matt has seen this flip when teams introduce Unblocked alongside their existing tools. Suddenly there's more adoption, and the cycle reinforces itself.

Code review and trust

The same dynamic plays out with code review tools.

"A lot of the code review tools that are getting traction, you put up a PR and they don't give you a helpful code review. They nitpick everything. It's like having another engineer over your shoulder just tapping you and saying, you didn't use the approved pattern for this."

That erodes trust. It feels bad as a professional. You're being flagged for lower-order problems you don't need to think about.

What builds trust is feedback that's actually useful. Matt pointed to Unblocked's approach: make the feedback high-impact, not nitpicky. That helps engineers say, oh yeah, a bot can be helpful, because it helped me solve a particular problem.

Listen to the full conversation

This post covers some of the highlights, but the full conversation goes deeper into Matt's workflow, the Prettier analogy, and why he thinks we'll stop asking users to pick models.

Give it a listen.

Unblocked is available now. Read our docs or reach out to see it in action on your codebase.
