The biggest problem with AI right now isn't that it's not smart enough. It's that it feels sneaky.
You're typing in a doc, and suddenly text appears that you didn't write. You're in a support chat, and you can't tell if you're talking to a human or a bot. You ask for a summary, and you don't know if it read 5 emails or 500.
This is a failure of Contract Design.
The Contract is the agreement between the user and the system about three things: what is happening, who is in charge, and how to stop it.
When the contract is clear, AI feels like a helpful assistant. When the contract is murky, AI feels like spyware.
The 3 Laws of the AI Contract
Law 1: Visibility of Labor
Magic is bad UX. If an AI does work instantly and invisibly, users distrust it. They assume it hallucinated.
The Problem: User clicks "Summarize." Instantly, a summary appears. User thinks: "Did it actually read everything? Or did it just make something up?"
The Fix: Show the work. "Reading 12 documents..." → "Analyzing sentiment..." → "Drafting response..."
This isn't just a progress indicator. It's proof that the AI actually looked at the data. It builds trust through transparency.
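One way to build this is to make the AI task a stream of progress events rather than a single opaque call, so the UI can render each stage as it happens. A minimal sketch in TypeScript; the `Progress` type, `summarize` function, and stage labels are all illustrative names, not a real API:

```typescript
// A summarize pipeline that emits a visible "stage" event for each step
// of its labor before producing the final result.
type Progress =
  | { kind: "stage"; label: string }
  | { kind: "done"; summary: string };

async function* summarize(docs: string[]): AsyncGenerator<Progress> {
  yield { kind: "stage", label: `Reading ${docs.length} documents...` };
  const text = docs.join("\n");

  yield { kind: "stage", label: "Analyzing sentiment..." };
  // (real analysis would happen here)

  yield { kind: "stage", label: "Drafting response..." };
  yield { kind: "done", summary: text.slice(0, 80) };
}

// The UI consumes the stream and shows each stage as it arrives,
// so the user sees the work, not just the answer.
async function run(docs: string[]): Promise<string[]> {
  const log: string[] = [];
  for await (const event of summarize(docs)) {
    if (event.kind === "stage") log.push(event.label);
    else return [...log, event.summary];
  }
  return log;
}
```

The point of the shape, not the contents: because progress is part of the return type, the interface cannot skip straight to "done."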
Law 2: The Eject Button
Users are terrified of losing control. If they feel like the AI is a runaway train, they will disengage entirely.
The Stop Button: Always allow the user to interrupt generation. Not buried in a menu. Right there, obvious, labeled "Stop."
The Undo: Every AI action must be reversible. If an agent categorizes 1,000 emails, there must be a "Revert" button.
The Override: Users should be able to say "ignore the AI suggestion and do it my way" without friction.
Law 3: Provenance (Show Your Work)
"Because I said so" is not an acceptable answer from an AI.
Citations: If the AI makes a claim, it must link to the source. "Based on the Q3 report (link)" is trustworthy. "Based on our data" is not.
Confidence Signals: If the AI isn't sure, the UI should reflect that visually. Highlight uncertain text in yellow. Add a qualifier: "I found limited information on this topic."
The Reasoning Log: For complex decisions, show the steps. "I selected 'Urgent' because the customer used the word 'emergency' and has been waiting 48 hours."
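All three provenance rules fall out naturally if the answer object carries its evidence instead of just its text. A sketch; `Claim`, `renderClaim`, and the 0.5 threshold are illustrative choices, not a real library:

```typescript
// An answer that carries its provenance: source citation plus a
// confidence score the UI can surface.
interface Claim {
  text: string;
  source?: string;      // citation link; absent means unverified
  confidence: number;   // 0..1, e.g. from the model or retrieval score
}

// Rendering rule: cite when we can, hedge visibly when we can't.
function renderClaim(c: Claim): string {
  const cited = c.source ? `${c.text} (${c.source})` : c.text;
  return c.confidence < 0.5
    ? `[low confidence] ${cited}`
    : cited;
}
```

The same structure feeds the reasoning log: each step the agent takes becomes a `Claim` with its trigger as the source, so "Because I said so" is never a representable answer.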
The Takeaway
Trust is not a vibe. Trust is a mechanic. You design trust by designing the Contract.
If the user doesn't know what the machine is doing, the machine is malware—even if it's helpful malware.
Let's talk about your product, team, or idea.
Whether you're a company looking for design consultation, a team wanting to improve its craft, or someone who just wants to collaborate, I'm interested.
Get in Touch