Everyone is "adding AI." Almost no one is building AI features that survive contact with real users.
This isn’t my philosophy. This is my playbook for what’s worked for me and for others I trust. 12 patterns, each with when to use it, what the UX actually looks like, how to scope behavior and guardrails, and a small exercise you can run this week.
Use it to refactor existing features or design new ones.
1. Upgrade "Ask Me Anything" to "Three Framed Jobs"
The anti-pattern: One empty box and a label: "Ask me anything." Users freeze or type "help," then churn.
The pattern: Always frame your AI entry point around 3 concrete jobs. Onboarding screen or first panel shows:
- "Plan..." – a meeting, a sprint, a campaign
- "Diagnose..." – why a metric changed, what's blocking progress
- "Create..." – a draft, a brief, a summary
Each job opens a guided input with 2–3 fields: Goal, Scope/context (data sources, time range), Output format.
Guardrails: For each job, hard-code which tools/data the AI is allowed to use. Save templates so users can re-run the same job with small tweaks.
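As a sketch of how a framed job might be modeled (Python, with hypothetical names; your fields and tool lists will differ), each job carries its guided-form fields, a prompt template, and a hard-coded tool allow-list:

```python
from dataclasses import dataclass

@dataclass
class FramedJob:
    """One concrete job behind an entry-point button (illustrative model)."""
    name: str               # "Plan", "Diagnose", "Create"
    prompt_template: str    # filled in by the guided form
    allowed_tools: list     # hard-coded tools/data this job may touch
    fields: tuple = ("goal", "scope", "output_format")

    def build_prompt(self, **inputs):
        # The guided form guarantees these fields; enforce it anyway.
        missing = [f for f in self.fields if f not in inputs]
        if missing:
            raise ValueError(f"guided form incomplete: {missing}")
        return self.prompt_template.format(**inputs)

diagnose = FramedJob(
    name="Diagnose",
    prompt_template="Diagnose {goal} using {scope}. Respond as {output_format}.",
    allowed_tools=["metrics_db", "ticket_search"],
)
```

Saved templates then become saved `inputs` dicts the user can re-run with small tweaks.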
Try this week: Take one "ask anything" surface. Replace it with 3 job buttons and guided forms. Measure prompt success (completions without retry) before vs after.
2. Swap "Chat Only" for Three Copilot Modes
Most products default to "chat with the AI." That's one mode. You need three.
The pattern: Offer three copilot modes in one product:
- Collaborative mode (chat) – Full-screen or large panel. Best for exploration, brainstorming, and mixed goals.
- Embedded mode (sidecar / inline) – Lives next to the core workflow (code, doc, ticket, canvas). Suggests actions and drafts in context.
- Asynchronous mode (background) – Runs when the user is away (nightly triage, weekly summaries). Reports back via digest or notifications.
Guardrails: For each mode, write a one-line Behavior Brief: "In collaborative mode, I never act on external systems." "In asynchronous mode, I only auto-apply low-risk actions under rules X/Y/Z."
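A minimal sketch of enforcing those per-mode briefs in code (Python; the rule names and risk labels are illustrative, not a prescribed schema):

```python
from enum import Enum

class CopilotMode(Enum):
    COLLABORATIVE = "collaborative"   # chat
    EMBEDDED = "embedded"             # sidecar / inline
    ASYNCHRONOUS = "asynchronous"     # background

# One-line Behavior Brief per mode, mirrored as enforceable rules.
MODE_RULES = {
    CopilotMode.COLLABORATIVE: {"brief": "I never act on external systems.", "may_auto_apply": False},
    CopilotMode.EMBEDDED: {"brief": "I suggest in context; I apply only on click.", "may_auto_apply": False},
    CopilotMode.ASYNCHRONOUS: {"brief": "I only auto-apply low-risk actions.", "may_auto_apply": True},
}

def can_auto_apply(mode, action_risk):
    """Gate every automatic action through the mode's brief."""
    return MODE_RULES[mode]["may_auto_apply"] and action_risk == "low"
```

The point is that the brief is not just copy on a settings page; it is the same rule the runtime checks.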
3. Design the "Thinking" State Like a Real Step, Not a Spinner
AI latency will always exist. Treat it as a state with content, not a blank wait.
The pattern: When AI is working, show what it's doing ("Analyzing last 10 tickets tagged 'P1'..."), what it will output ("Drafting 3 reply options..."), and what the user can do ("Cancel," "Adjust filters").
Guardrails: Always include Cancel if the underlying action is interruptible. Use neutral microcopy—no "✨ cooking something magical ✨" when stakes are high.
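The three pieces of content can be carried by one state object instead of a bare spinner flag (a Python sketch; names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class ThinkingState:
    doing: str           # "Analyzing last 10 tickets tagged 'P1'..."
    will_output: str     # "Drafting 3 reply options..."
    interruptible: bool  # controls whether Cancel is offered

    def actions(self):
        acts = ["Adjust filters"]
        if self.interruptible:
            acts.insert(0, "Cancel")
        return acts
```

The UI renders `doing`, `will_output`, and `actions()` directly, so Cancel can never silently disappear from an interruptible operation.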
4. Make Uncertainty Explicit with a Single Banner Pattern
Most AI products either over-promise ("Here's your answer!") or bury uncertainty in tiny disclaimers.
The pattern: Create a reusable uncertainty banner component that appears on any AI output when data is incomplete, confidence falls below a threshold, or a guardrail triggers.
Example: "This summary is based on limited data (1 of 5 sources available). Please verify key figures before sharing."
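The banner's trigger logic can live in one reusable function, so every surface shows the same copy for the same conditions (a Python sketch; the threshold and wording are assumptions to adapt):

```python
def uncertainty_banner(sources_available, sources_total, confidence,
                       guardrail_triggered, threshold=0.7):
    """Return banner copy when any trigger fires, else None."""
    reasons = []
    if sources_available < sources_total:
        reasons.append(f"is based on limited data "
                       f"({sources_available} of {sources_total} sources available)")
    if confidence < threshold:
        reasons.append(f"has low confidence ({confidence:.0%})")
    if guardrail_triggered:
        reasons.append("was limited by a safety rule")
    if not reasons:
        return None
    return f"This output {'; '.join(reasons)}. Please verify key figures before sharing."
```

One component, one function, no per-team wording drift.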
5. Turn Prompts into Composable "Prompt Blocks"
Your users shouldn't have to remember prompts. Your product should.
The pattern: Represent prompts as Prompt Blocks in your system with Name, Inputs, Output format, and Risk level. Expose them as buttons in context, saved "recipes" in chat, and reusable templates in a library.
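A Prompt Block is small enough to sketch directly (Python; the example block and its fields are hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptBlock:
    name: str
    inputs: tuple        # named slots the user or context must fill
    output_format: str
    risk_level: str      # "low" | "medium" | "high"
    template: str

    def render(self, **values):
        missing = [i for i in self.inputs if i not in values]
        if missing:
            raise ValueError(f"missing inputs: {missing}")
        return self.template.format(**values)

summarize = PromptBlock(
    name="Summarize thread",
    inputs=("thread", "audience"),
    output_format="3 bullets",
    risk_level="low",
    template="Summarize {thread} for {audience} in 3 bullets.",
)
```

Buttons, chat recipes, and library templates all call the same `render`, so the prompt lives in one place.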
6. Demand a Behavior Brief for Every Agent
Before anyone ships an "agent," make them answer six questions in writing.
Behavior Brief (1 page): Name & purpose, Inputs, Capabilities (3–7 verbs), Constraints (hard "no's"), Decision rules (when it acts vs asks vs escalates), and Logs & controls.
Guardrails: No Behavior Brief, no production deployment. The UI must reflect the brief: "What I can see," "What I can do," "What I won't do."
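"No brief, no deployment" is easy to enforce mechanically, for example as a CI check over the brief document (a Python sketch; the field names mirror the six questions above):

```python
REQUIRED_BRIEF_FIELDS = (
    "name_and_purpose", "inputs", "capabilities",
    "constraints", "decision_rules", "logs_and_controls",
)

def can_deploy(brief):
    """Gate production deployment on a complete Behavior Brief."""
    missing = [f for f in REQUIRED_BRIEF_FIELDS if not brief.get(f)]
    if missing:
        return False, f"brief incomplete: {missing}"
    if not 3 <= len(brief["capabilities"]) <= 7:
        return False, "capabilities should be 3-7 verbs"
    return True, "ok"
```

The same structured brief can then feed the UI's "What I can see / do / won't do" panels, so copy and code never diverge.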
7. Use "Event Triggers → Suggestions → Confirmations" for Copilots
Proactive AI doesn't mean random interruptions. It means event-driven suggestions with explicit confirmation.
The pattern: For each area, define a Trigger (metric spike, calendar event, status change), a Suggestion (one targeted action, phrased plainly), and a Confirmation UI (a compact card with "Apply / Edit / Dismiss").
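The trigger → suggestion → confirmation flow can be sketched as a small state machine (Python; event names and copy are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    trigger: str              # e.g. "metric_spike"
    action: str               # one targeted action, phrased plainly
    status: str = "pending"   # pending -> applied | edited | dismissed

    def confirm(self, choice):
        """The compact card's three buttons map to one transition each."""
        if self.status != "pending":
            raise RuntimeError("suggestion already resolved")
        if choice not in ("apply", "edit", "dismiss"):
            raise ValueError(choice)
        self.status = {"apply": "applied", "edit": "edited",
                       "dismiss": "dismissed"}[choice]
        return self.status

def on_event(event, rules):
    """One event maps to at most one suggestion card, never a barrage."""
    action = rules.get(event)
    return Suggestion(trigger=event, action=action) if action else None
```

Nothing is applied until `confirm("apply")` runs, which keeps the copilot proactive but never presumptuous.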
8. Design One "Kill Switch" Pattern and Use It Everywhere
If users have to hunt for how to stop or undo AI, you will lose trust.
The pattern: Create a standard "Kill Switch" affordance: "Stop / Cancel" on every long-running AI operation, "Undo," "Revert," or "Restore previous version" on outputs, and "Turn off auto-actions for [area]" in settings.
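One way to back all three affordances with a single object (a Python sketch; the undo mechanism here is a simple revert-callback stack, one of several viable designs):

```python
class KillSwitch:
    """Cancel flag, undo stack, and per-area auto-action toggles."""
    def __init__(self):
        self.cancelled = False
        self._undo = []                # (description, revert_fn) pairs
        self.auto_actions_off = set()  # areas with auto-actions disabled

    def cancel(self):
        self.cancelled = True          # long-running ops poll this flag

    def record(self, description, revert_fn):
        self._undo.append((description, revert_fn))

    def undo_last(self):
        description, revert = self._undo.pop()
        revert()
        return description

# Every AI write records how to revert itself before it lands.
doc = {"text": "original"}
ks = KillSwitch()
previous = doc["text"]
doc["text"] = "AI rewrite"
ks.record("AI rewrite of doc", lambda: doc.update(text=previous))
```

Because every surface shares the same object, "Stop," "Undo," and "Turn off auto-actions" behave identically everywhere.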
9. Make "AI Logs" a First-Class Screen, Not a Debugging Tool
Autonomous behavior without history is impossible to trust or debug. Give users, and yourself, a flight recorder.
The pattern: Add an "AI Activity" view that shows what was done, when, why, with which inputs, and outcome/status. Each entry links back to the object affected, the agent responsible, and controls.
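The entry schema falls straight out of the questions the screen must answer (a Python sketch; field names are illustrative):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ActivityEntry:
    agent: str        # the agent responsible
    action: str       # what was done
    reason: str       # why
    inputs: dict      # with which inputs
    object_id: str    # links back to the object affected
    status: str       # outcome/status
    at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class ActivityLog:
    def __init__(self):
        self.entries = []

    def record(self, entry):
        self.entries.append(entry)

    def for_object(self, object_id):
        """Powers the 'AI activity on this object' view."""
        return [e for e in self.entries if e.object_id == object_id]
```

Querying by object is what turns the log from a debugging dump into a user-facing screen.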
10. Prototype AI UX Before the Model Works
Teams over-index on model architecture and under-index on interaction design.
The pattern: Use a Wizard of Oz or "fake until wired" approach. Validate: Do users understand what the AI is for? Do they know what it can't do? Do they feel in control and able to recover from bad answers?
11. Instrument the Right Metrics for AI Features
Most teams ship AI features without defining what success looks like beyond click-through.
The pattern: For each AI surface, track Adoption (% of eligible users who try it, repeat usage), Effectiveness (task completion time, edit rate on drafts), and Trust/control (Kill Switch usage, undo rate, "Use without AI" switches toggled).
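Rolling raw events into those three groups takes little more than counters (a Python sketch; the event names are assumptions, standing in for your analytics schema):

```python
class AIMetrics:
    """Rolls raw product events into adoption, effectiveness, and trust."""
    def __init__(self):
        self.events = []  # (user, event_name) pairs

    def track(self, user, event):
        self.events.append((user, event))

    def adoption(self, eligible_users):
        tried = {u for u, e in self.events if e == "ai_used"}
        return len(tried & set(eligible_users)) / len(eligible_users)

    def edit_rate(self):
        drafts = sum(1 for _, e in self.events if e == "draft_shown")
        edits = sum(1 for _, e in self.events if e == "draft_edited")
        return edits / drafts if drafts else 0.0

    def undo_rate(self):
        actions = sum(1 for _, e in self.events if e == "ai_action")
        undos = sum(1 for _, e in self.events if e == "undo")
        return undos / actions if actions else 0.0
```

A rising undo rate is often the earliest trust signal you will get, long before churn shows up.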
12. Train Your Team, Not Just Your Models
You can't build any of this if your builders still think "AI feature = chatbox + pretty gradient."
The pattern: Give engineers and designers a shared playbook covering Roles (advisor, copilot, automation, agent), Patterns (collaborative, embedded, asynchronous), Artifacts (Context/Prompt Blocks, Behavior Briefs, AI components, AI Activity logs), and Research scripts.
Bring It All Together: A 30-Day Action Plan
Week 1 – Audit & Briefs: List all your current or planned AI features. For each: classify role, write a 1-page Behavior Brief, identify missing Kill Switches and uncertainty banners.
Week 2 – Patterns & Components: Replace at least one "ask anything" input with three framed jobs. Design your standard thinking state, uncertainty banner, and Kill Switch as reusable components.
Week 3 – Prototype & Test: Build one Wizard-of-Oz prototype for a new or risky AI feature. Run 5 user sessions focusing on trust, control, and failure.
Week 4 – Instrument & Educate: Instrument adoption, effectiveness, and trust metrics for one feature. Share a short internal deck with your team.
Do this once and you will feel the difference: AI features stop being scary demos and start being reliable teammates in your product.
Let's talk about your product, team, or idea.
Whether you're a company looking for design consultation, a team wanting to improve craft, or just want to collaborate—I'm interested.
Get in Touch