AI Interaction Pattern 06
Hallucination Detection & Uncertainty Highlighting
Visually indicate when an AI response may need verification by highlighting areas of uncertainty and flagging conflicting sources to promote critical thinking.
The User Problem This Pattern Solves
The greatest danger of modern AI is its "confident fallibility"—the tendency to generate plausible-sounding but completely incorrect information ("hallucinations"). When users in high-stakes fields blindly trust this output, it can lead to poor decisions, compliance failures, and significant business risk. Users need a way to differentiate between a confident fact and a confident guess.
The Design Solution & UI Mockup
This pattern introduces a "trust interface layer" that visually communicates the AI's confidence level. Instead of a simple block of text, the UI analyzes the response and applies distinct styling to different parts. Phrases with lower confidence or those derived from a single, uncorroborated source are subtly highlighted. Information that directly conflicts with another known source is flagged more urgently. This transforms the AI from an opaque oracle into a transparent assistant whose reasoning can be evaluated.
Safety Incident Report Summary:
The incident on March 5th involved a failure of the primary hydraulic pump. The standard operating procedure in this case is to engage the auxiliary system. Maintenance logs indicate the pump was last serviced in January of this year [Verification Needed]. The primary project lead at the time was John Doe, although one daily report suggests Jane Smith was the acting lead that week [Conflicting Sources].
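As a rough sketch of how a response like the one above could be rendered, the TypeScript below walks a list of annotated spans and wraps flagged ones in styled elements. The span shape, flag names, and CSS classes (AnnotatedSpan, highlight-uncertain, highlight-conflict) are assumptions invented for this illustration, not an existing API.

```typescript
// Illustrative sketch only: the types, flag names, and CSS classes here
// are assumptions for this example, not a real library API.

type UncertaintyFlag = "verified" | "verification-needed" | "conflicting-sources";

interface AnnotatedSpan {
  text: string;
  flag: UncertaintyFlag;
  rationale?: string; // e.g. "Sourced from a single, unverified document"
}

// Map each flag to a distinct visual treatment; "verified" text gets no styling.
const FLAG_CLASS: Record<UncertaintyFlag, string> = {
  "verified": "",
  "verification-needed": "highlight-uncertain", // subtle highlight
  "conflicting-sources": "highlight-conflict",  // more urgent flag
};

function renderResponse(spans: AnnotatedSpan[]): string {
  return spans
    .map((span) => {
      const cls = FLAG_CLASS[span.flag];
      if (!cls) return span.text; // confident text renders as plain prose
      // The title attribute stands in for the explanatory tooltip.
      return `<mark class="${cls}" title="${span.rationale ?? ""}">${span.text}</mark>`;
    })
    .join("");
}

// Example: the maintenance-log sentence from the mockup above.
renderResponse([
  { text: "Maintenance logs indicate the pump was last serviced in ", flag: "verified" },
  {
    text: "January of this year",
    flag: "verification-needed",
    rationale: "Sourced from a single, unverified document",
  },
  { text: ".", flag: "verified" },
]);
```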
Key Benefits & Impact
Promotes Critical Thinking
Actively invites the user to scrutinize the AI's output instead of passively accepting it.
Mitigates Critical Risks
Prevents flawed decisions by clearly flagging potentially inaccurate or hallucinated information.
Builds Authentic Trust
An AI that admits its own uncertainty is paradoxically more trustworthy than one that states every answer, right or wrong, with equal confidence.
Design Considerations
The key to this pattern is calibration. The thresholds for what constitutes "uncertainty" must be tunable to avoid overwhelming the user with too many highlights. The interaction model should allow users to easily investigate a highlight; for example, a hover or click could reveal a tooltip explaining *why* the information is uncertain (e.g., "Sourced from a single, unverified document") or show the conflicting sources side-by-side.
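A minimal sketch of what that tunable calibration might look like, assuming the pipeline exposes a per-span confidence score and a source count; the field names and threshold values are illustrative assumptions.

```typescript
// Illustrative sketch: assumes each span arrives with a confidence score
// in [0, 1] and a source count; the numbers below are placeholder values.

interface CalibrationConfig {
  uncertainBelow: number; // flag spans whose confidence falls under this
  maxHighlights: number;  // cap highlights so the user isn't overwhelmed
}

interface ScoredSpan {
  text: string;
  confidence: number;
  sourceCount: number; // corroborating sources found for this claim
}

function selectHighlights(spans: ScoredSpan[], cfg: CalibrationConfig): ScoredSpan[] {
  return spans
    .filter((s) => s.confidence < cfg.uncertainBelow || s.sourceCount <= 1)
    .sort((a, b) => a.confidence - b.confidence) // surface the shakiest claims first
    .slice(0, cfg.maxHighlights);                // enforce the highlight budget
}

// A stricter or looser profile is just a different config object.
const defaultProfile: CalibrationConfig = { uncertainBelow: 0.6, maxHighlights: 5 };
```

Keeping the thresholds in a config object rather than hard-coding them makes per-domain tuning straightforward: a compliance workflow might warrant a stricter profile than a brainstorming tool.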