Why Anthropic Built a "Humble" Product
In December 2024, Anthropic shipped a small feature to Claude Code called AskUserQuestion. When the AI encounters ambiguity, it stops, presents structured multiple-choice questions, waits for your answer, then proceeds.
That's it. That's the feature.
But there's something interesting here. For three years, the AI product paradigm has been: craft the perfect prompt to get what you want. AskUserQuestion inverts this. Now the model prompts you.
The Product Decision Underneath
Anthropic could have added a line to the system prompt: "When uncertain, ask clarifying questions." It would have worked fine. Instead, they built a structured tool with defined schemas and multiple-choice options. First-class infrastructure for a simple behavior.
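To make the distinction concrete, here is a minimal sketch of what a structured clarifying-question tool could look like, in the general shape of an Anthropic-style tool definition. The names, fields, and rendering are illustrative assumptions, not Anthropic's actual AskUserQuestion schema.

```python
# Hypothetical sketch of a structured clarifying-question tool.
# Field names and structure are assumptions for illustration,
# not the real AskUserQuestion implementation.

ASK_USER_QUESTION_TOOL = {
    "name": "ask_user_question",
    "description": "Pause and ask the user structured multiple-choice "
                   "questions when the task is ambiguous.",
    "input_schema": {
        "type": "object",
        "properties": {
            "questions": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "question": {"type": "string"},
                        "options": {
                            "type": "array",
                            "items": {"type": "string"},
                            "minItems": 2,
                        },
                    },
                    "required": ["question", "options"],
                },
            },
        },
        "required": ["questions"],
    },
}

def format_question(q: dict) -> str:
    """Render one structured question as lettered multiple choice."""
    lines = [q["question"]]
    for letter, option in zip("ABCDEFG", q["options"]):
        lines.append(f"  {letter}) {option}")
    return "\n".join(lines)

example = {
    "question": "How should API errors be handled?",
    "options": ["Immediate failure", "Retry with backoff", "Custom handler"],
}
print(format_question(example))
```

The point of the schema over a prompt instruction: the model's questions become validated, machine-readable data the product can render, log, and act on, rather than free text buried in a conversation.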
Why bother? Because they understood that asking questions isn't a fallback for confusion. It's a core capability worth investing in.
Here's the harder decision most teams would get wrong: a confident AI that guesses feels smarter than one that asks questions. The temptation to hide uncertainty is real. Anthropic optimized for outcome quality over perceived intelligence. Better to look uncertain and deliver correctly than to look confident and build the wrong thing.
Ambiguity as Collaboration
Most products treat ambiguity as a bug. They add defaults, make assumptions, constrain inputs. The goal is to eliminate interpretation.
AskUserQuestion treats it differently. Multiple valid approaches aren't a failure state. They're an invitation to collaborate.
The questions themselves become artifacts. "How should API errors be handled? A) Immediate failure, B) Retry with backoff, C) Custom handler." That's not just a prompt. That's the beginning of a spec. The AI surfaces decisions you might have missed.
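Because the questions are structured data, the answered pairs can be collected mechanically into exactly that kind of spec. A small hypothetical sketch (all names here are illustrative assumptions):

```python
# Hypothetical sketch: turning answered structured questions into a
# lightweight decision record, the "beginning of a spec" described
# above. Names and formats are assumptions for illustration.

def decisions_to_spec(answered: list[dict]) -> str:
    """Render (question, chosen option) pairs as a plain decision log."""
    lines = ["Decisions:"]
    for item in answered:
        lines.append(f"- {item['question']} -> {item['choice']}")
    return "\n".join(lines)

answered = [
    {"question": "How should API errors be handled?",
     "choice": "Retry with backoff"},
    {"question": "Which auth scheme should we use?",
     "choice": "OAuth"},
]
print(decisions_to_spec(answered))
```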
When you use this with Plan Mode, Claude asks questions before writing code. You answer a dozen structured questions about authentication, error handling, data formats. By the time implementation starts, most decisions are already made.
Compare this to the typical flow: write a ticket, developer interprets it, builds something, you review it, realize the interpretation was wrong, rebuild. That loop repeats three or four times before anything ships.
What This Means for PMs
The lesson isn't "build an AskUserQuestion tool." It's the principles underneath.
Make uncertainty surfacing a feature. Your discovery process should expose what you don't know, not hide it. The questions you can't answer are more valuable than the ones you can.
Structured questions beat open-ended ones. "What do you think about authentication?" generates vague responses. "Should we use OAuth, JWT, or sessions? Here are the trade-offs" generates decisions. Framing the options is where clarity comes from.
Optimize for outcomes over optics. It's uncomfortable to say "I don't know" in front of stakeholders. It's more uncomfortable to ship the wrong thing because you guessed instead of asked.
The AI that asks questions is more useful than the one that guesses confidently. The PM who asks questions builds better products than the one who assumes they have the answer.
Both require humility that doesn't come naturally. Both pay off enormously when you commit to them.