This tool helps clarify vague product and design requests by asking only the questions needed before work begins.
Overview
Ambiguous Request Clarifier is a conversational AI agent that responds to vague or underspecified input by identifying what is unclear and asking targeted clarifying questions before proceeding.
Rather than generating solutions, the agent is designed to pause, surface missing context, and guide users toward clearer intent, scope, and constraints.
Why this exists
Ambiguous requests are among the most common failure points in product, design, and AI work.
Stakeholders often express feedback or requirements using vague language such as “make this more intuitive,” “improve onboarding,” or “we need this faster.” While these statements feel actionable, they lack the specificity needed to make reliable decisions.
Most AI systems respond to ambiguity by making assumptions and generating confident outputs. In real-world contexts, this behavior increases misalignment rather than reducing it, especially when decisions are made before intent, scope, or constraints are clearly understood.
This project explores a different approach. Instead of acting on ambiguous requests, the agent is designed to surface what is unclear, make uncertainty explicit, and ask precise clarifying questions before proceeding.
The goal is not automation, but alignment. By treating ambiguity as a design signal rather than an error, the project demonstrates how conversational AI can support clearer human conversations and more grounded decision-making.
Agent behavior & decision logic
The agent follows a clarification-first decision pattern:
It evaluates whether a request is sufficiently specified to act on
If ambiguity is detected, it does not generate a solution
Instead, it identifies missing dimensions such as intent, audience, scope, or success criteria
It asks a small set of targeted clarifying questions to reduce uncertainty
The agent proceeds only once sufficient context is provided. This behavior is intentional and consistent across interactions, as sketched below.
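The Python sketch below illustrates this clarification-first pattern. It is a minimal illustration under stated assumptions, not the production implementation: the Request type, the dimension names, and the question templates are hypothetical stand-ins for whatever the real agent uses.

```python
from dataclasses import dataclass, field

# Dimensions checked before acting; names mirror the list above
# and are illustrative, not the agent's actual schema.
REQUIRED_DIMENSIONS = ("intent", "audience", "scope", "success_criteria")

QUESTION_TEMPLATES = {
    "intent": "What outcome should this change achieve?",
    "audience": "Who is the primary user or stakeholder affected?",
    "scope": "Which parts of the product are in or out of scope?",
    "success_criteria": "How will we know the change worked?",
}

@dataclass
class Request:
    text: str
    context: dict = field(default_factory=dict)  # dimensions confirmed so far

def missing_dimensions(req: Request) -> list[str]:
    """Identify which required dimensions remain unconfirmed."""
    return [d for d in REQUIRED_DIMENSIONS if d not in req.context]

def respond(req: Request) -> str:
    gaps = missing_dimensions(req)
    if gaps:
        # Ambiguity detected: ask targeted questions, generate nothing.
        return "\n".join(QUESTION_TEMPLATES[d] for d in gaps)
    # Sufficient context is present: only now does the agent proceed.
    return f"Proceeding with: {req.text!r}"

# An underspecified request yields questions, not a solution.
print(respond(Request("make onboarding more intuitive")))
```

Run on the vague request above, the sketch returns the four clarifying questions rather than a proposal; only a request with all dimensions confirmed passes through to generation.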
Guardrails and constraints
The agent operates under strict constraints to prevent assumption-driven output:
It does not generate recommendations, solutions, or designs when intent is unclear
It avoids extrapolating beyond the information provided
It limits clarification to the smallest set of questions needed to proceed
It does not infer user goals, success criteria, or constraints without confirmation
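As a sketch of how constraints like these might be enforced, the check below gates any drafted output behind explicitly confirmed context. The guard_output function, the dimension names, and the question cap are hypothetical, chosen to mirror the constraints listed above rather than taken from the actual system.

```python
MAX_CLARIFYING_QUESTIONS = 3  # illustrative cap on the question set

def guard_output(confirmed: dict, draft: str | None, questions: list[str]) -> dict:
    """Gate any drafted solution behind explicitly confirmed context.

    `confirmed` holds only dimensions the user has confirmed;
    nothing here is inferred on the user's behalf.
    """
    # Goals, success criteria, and constraints must be confirmed,
    # never inferred (per the constraints above).
    unconfirmed = [d for d in ("goals", "success_criteria", "constraints")
                   if d not in confirmed]
    if unconfirmed:
        # Intent is still unclear: suppress the draft entirely and return
        # only the smallest set of questions needed to proceed.
        return {"action": "clarify",
                "questions": questions[:MAX_CLARIFYING_QUESTIONS]}
    return {"action": "answer", "output": draft}
```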
Tone of voice
The tone is calm, neutral, and precise.
The agent avoids:
persuasive language
judgment or criticism
conversational filler
Responses are framed to support collaboration and shared understanding, rather than to challenge or correct the user.
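In an LLM-backed implementation, tone rules like these would typically live in the system prompt. The fragment below is a hypothetical illustration of how they might be phrased, not the agent's actual prompt.

```python
# Hypothetical system-prompt fragment encoding the tone constraints above.
TONE_GUIDELINES = """\
Keep your tone calm, neutral, and precise.
Do not use persuasive language, judgment, or criticism.
Omit conversational filler.
Frame every question to build shared understanding,
never to challenge or correct the user.
"""
```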
Interaction design principles
The agent is designed around the following principles:
Clarification before execution
Explicit uncertainty over implicit assumptions
Fewer, better questions instead of broader interpretation
Alignment over output
The goal is not to appear helpful by producing content, but to be useful by shaping better conversations.
What this demonstrates
This project demonstrates my approach to designing AI behavior, including:
Conversational decision-making under uncertainty
Scope control and assumption management
Guardrail-driven AI behavior
Designing for trust and interpretability in LLM-powered systems
The emphasis is on how the agent behaves rather than on technical novelty.
Closing
This agent is intentionally small in scope.
Its purpose is not to replace human communication, but to support clearer conversations by helping people recognize and address ambiguity before decisions are made.