Cognitive Forcing Function (CFF) Demo
Simulating the **Update** approach for AI-assisted decision-making across scenarios.
Understanding the Dual-Process Problem (System 1 vs. System 2)
Research on AI-assisted decision-making shows that when people receive AI suggestions, they tend to **overrely**: they accept the AI's suggestion even when it is wrong, often performing worse than they would have alone.
1. System 1 (The Default)
**Fast, Intuitive, Heuristic.** We default to this. When the AI gives a suggestion, we often use a shortcut: “The AI is usually right, so I’ll trust it.” This leads to overreliance.
2. System 2 (The Effort)
**Slow, Analytical, Deliberative.** This is the critical thinking we need to spot AI errors. It’s mentally taxing, so we avoid it unless we’re forced to engage.
The **Update CFF** below is a **Cognitive Forcing Function** designed to disrupt System 1 thinking. By making you commit first, it creates **cognitive conflict**, forcing your System 2 to wake up and review the evidence.
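The commit-then-review flow can be sketched in code. This is a minimal, illustrative sketch (the names `Decision`, `update_cff`, and the reviewer function are hypothetical, not part of any published implementation): the AI suggestion is revealed only after the user commits, and a deliberate review step is triggered only when the suggestion conflicts with the anchor.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    initial: str        # the user's committed answer (the anchor)
    ai_suggestion: str  # revealed only after the commitment
    final: str          # answer after deliberate review

def update_cff(initial: str, ai_suggestion: str, review) -> Decision:
    """Run the Update flow: commit first, see the AI second, review on conflict.

    `review(initial, ai_suggestion)` is called only when the AI disagrees
    with the anchor, modeling the cognitive conflict that engages System 2.
    """
    if ai_suggestion == initial:
        # No conflict: the anchor stands, no extra deliberation needed.
        return Decision(initial, ai_suggestion, initial)
    # Conflict: force a deliberate choice between the anchor and the AI.
    final = review(initial, ai_suggestion)
    return Decision(initial, ai_suggestion, final)

# Example reviewer that keeps the anchor unless told otherwise.
keep_anchor = lambda mine, ai: mine

d = update_cff("benign", "malignant", keep_anchor)
print(d.final)  # the user kept their anchor after reviewing the conflict
```

The key design choice is that `review` runs only on disagreement: agreement carries no new information to deliberate over, while disagreement is exactly the moment the CFF wants System 2 engaged.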
PHASE 1: Your Initial Assessment (System 1)
You must first commit to a decision before the AI can assist. **This is your anchor.**

