Cognitive health protection for AI agents
Lucid monitors how users interact with AI systems and adapts responses in real-time to prevent cognitive dependency — the gradual, often invisible tendency to offload thinking to AI instead of using it as a tool for growth.
Why Lucid?
AI makes us more productive — but research shows it can also make us cognitively lazy. The problem is subtle: each individual interaction seems fine, but over weeks and months, users gradually stop thinking for themselves. Lucid detects this drift and intervenes before it becomes entrenched.
- **MIT Media Lab (2025)**: measurable critical-thinking decline with heavy AI delegation
- **Gerlich (2025)**: negative correlation between AI dependency and critical thinking
- **Arain et al. (2013); APA (2025)**: higher vulnerability in young users — prefrontal cortex not fully mature
How It Works
Lucid operates as a continuous feedback loop between the user, your AI, and the cognitive protection system. The key insight: Lucid doesn't block or filter anything. It adapts how the AI responds based on the user's cognitive patterns.
Zero latency: track() runs asynchronously — it never blocks the AI response. The user gets their answer instantly; cognitive analysis happens in the background. TOON format (Token-Oriented Object Notation) saves ~40% tokens on LLM output compared to JSON.
Six Cognitive Dimensions
Each message exchange is scored across six dimensions, aligned with validated psychometric scales (GAIDS α=.87, AIMS 2025, Cognitive Offloading Scale α=.90).
| Dimension | Scale | Measures |
|---|---|---|
| Autonomy | 0 → 1 | Independent thinking vs delegation to AI |
| Learning | 0 → 1 | Curiosity, asking "why", seeking to understand |
| Engagement | 0 → 1 | Quality and depth of participation |
| Metacognition | 0 → 1 | Self-awareness about one's own thinking process |
| Verification | 0 → 1 | Critical evaluation of AI output |
| Motivation | type | Why the user engages: intrinsic, instrumental, or avoidance |
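These per-exchange scores could be represented as a typed object along the following lines — a hypothetical shape for illustration only; the SDK's actual type names and fields may differ:

```typescript
// Hypothetical per-exchange score shape (not the SDK's actual types).
// Five dimensions are scored on a 0–1 scale; motivation is categorical.
type Motivation = "intrinsic" | "instrumental" | "avoidance";

interface CognitiveScores {
  autonomy: number;      // 0–1: independent thinking vs delegation to AI
  learning: number;      // 0–1: curiosity, asking "why"
  engagement: number;    // 0–1: quality and depth of participation
  metacognition: number; // 0–1: self-awareness about one's own thinking
  verification: number;  // 0–1: critical evaluation of AI output
  motivation: Motivation;
}

// Example: a delegating, low-verification exchange
const example: CognitiveScores = {
  autonomy: 0.2,
  learning: 0.3,
  engagement: 0.5,
  metacognition: 0.4,
  verification: 0.1,
  motivation: "avoidance",
};
```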
Four Layers of Protection
1. Real-time Adaptation
Guidelines injected into the AI system prompt adapt responses based on the user's cognitive state — autonomy, learning, metacognition, verification, motivation.
2. Session Fatigue Detection
Monitors session duration, message volume, and response shortening patterns to suggest breaks before cognitive overload.
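A break heuristic along these lines might look like the sketch below. The thresholds and field names are illustrative assumptions, not the SDK's actual implementation:

```typescript
// Illustrative fatigue heuristic (thresholds are assumptions, not Lucid's).
interface SessionStats {
  durationMin: number;     // elapsed session time in minutes
  messageCount: number;    // messages sent this session
  avgEarlyLength: number;  // mean length of the user's first few messages
  avgRecentLength: number; // mean length of the user's last few messages
}

function shouldSuggestBreak(s: SessionStats): boolean {
  const overTime = s.durationMin > 45;
  const overVolume = s.messageCount > 30;
  // Messages shrinking to under half their early length suggests fatigue
  const shortening = s.avgRecentLength < 0.5 * s.avgEarlyLength;
  return overTime || overVolume || shortening;
}
```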
3. Long-term Drift Detection
The Cognitive Drift Index (CDI) compares early vs recent weekly patterns to catch gradual decline that session-level metrics miss.
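The exact CDI formula isn't given here, but one plausible sketch of the idea — comparing the mean of the earliest weekly values against the most recent ones — looks like this:

```typescript
// Sketch of a drift index over weekly averages of a 0–1 dimension
// (e.g. autonomy). The window size and formula are illustrative, not
// necessarily Lucid's actual CDI.
function cognitiveDriftIndex(weeklyScores: number[]): number {
  // Compare up to 4 early weeks against the same number of recent weeks
  const n = Math.min(4, Math.floor(weeklyScores.length / 2));
  if (n === 0) return 0; // not enough history yet
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const early = mean(weeklyScores.slice(0, n));
  const recent = mean(weeklyScores.slice(-n));
  return early - recent; // positive = decline relative to the user's own baseline
}
```

Because the index compares the user against their own baseline rather than an absolute threshold, it catches gradual decline that any single session would miss.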
4. Trust & Motivation Awareness
Detects over-trust (accepting everything) and avoidance motivation (using AI to escape thinking) — the two strongest predictors of cognitive dependency.
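A sketch of how these two flags might fall out of a smoothed profile — the thresholds and field names here are illustrative assumptions:

```typescript
// Illustrative dependency-risk flags (cutoffs are assumptions, not Lucid's).
interface RiskInputs {
  verification: number;   // smoothed 0–1 verification score
  avoidanceRatio: number; // fraction of exchanges tagged avoidance-motivated
}

function dependencyRiskFlags(p: RiskInputs): string[] {
  const flags: string[] = [];
  if (p.verification < 0.2) flags.push("over-trust");  // accepts everything uncritically
  if (p.avoidanceRatio > 0.5) flags.push("avoidance"); // AI used to escape thinking
  return flags;
}
```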
Adaptive Cognitive Profiles
Scores are smoothed using an Exponential Moving Average with adaptive alpha. New users' profiles form quickly, while established users' profiles are stable and resistant to anomalies. A single bad message won't distort a well-established cognitive profile.
// Alpha decreases as more messages are analyzed
α = max(0.02, 0.1 / log₁₀(total_messages + 10))
| Messages | Alpha (α) | Effect |
|---|---|---|
| 1 (new user) | ~10% | Profile forms quickly, responsive to early patterns |
| 50 | ~6% | Settling in, moderately stable |
| 100 | ~5% | Well-established, resistant to anomalies |
| 500+ | ~4% | Stable — long-term drift detection kicks in |
| 5000+ | ~2.7% (→ 2% floor) | Very stable, requires consistent shift to change |
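The alpha formula and the standard EMA update can be written directly in code:

```typescript
// Adaptive alpha: shrinks as more messages are analyzed, floored at 2%
function adaptiveAlpha(totalMessages: number): number {
  return Math.max(0.02, 0.1 / Math.log10(totalMessages + 10));
}

// EMA update: a new score nudges the profile by at most alpha
function updateProfile(prev: number, score: number, totalMessages: number): number {
  return prev + adaptiveAlpha(totalMessages) * (score - prev);
}
```

For a well-established user (say 100 messages, α ≈ 4.9%), even a score of 0 on one exchange moves a 0.5 profile only to about 0.475 — which is why a single bad message can't distort the profile.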
Progressive Scaffolding
Based on Vygotsky's Zone of Proximal Development, Lucid progressively reduces AI support as users grow. The scaffolding level is determined by autonomy, learning, and metacognition scores — metacognition is the strongest signal for readiness.
1. Step-by-step guidance, check understanding at each step
2. Structured hints, leading questions, partial solutions
3. Minimal nudges, let user lead and confirm direction
4. Socratic dialogue, push boundaries, propose harder problems
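One way such a mapping could work, with metacognition weighted highest as the readiness signal — the weights and cutoffs below are illustrative assumptions, not the SDK's actual values:

```typescript
// Illustrative scaffolding selector: 1 = full support, 4 = Socratic.
// Metacognition carries the most weight as the readiness signal.
function scaffoldingLevel(
  autonomy: number,
  learning: number,
  metacognition: number,
): 1 | 2 | 3 | 4 {
  const readiness = 0.25 * autonomy + 0.25 * learning + 0.5 * metacognition;
  if (readiness < 0.3) return 1;  // step-by-step guidance
  if (readiness < 0.5) return 2;  // structured hints
  if (readiness < 0.75) return 3; // minimal nudges
  return 4;                       // Socratic dialogue
}
```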
Age-Based Protection
Users under 25 receive stricter thresholds based on prefrontal cortex maturation research. The developing brain's heightened neuroplasticity makes it both more adaptable and more vulnerable.
Adults (25+)
- Session limit: 45 min
- Message limit: 30/session
- Standard guidelines
Under 25
- Session limit: 30 min
- Message limit: 20/session
- Always-on teaching-first approach
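Selecting thresholds by age group is then a simple lookup, using the limits listed above (the helper and field names are illustrative):

```typescript
// Age-based session limits, from the comparison above.
// Function and field names are illustrative, not the SDK's API.
interface SessionLimits {
  maxMinutes: number;
  maxMessages: number;
  teachingFirst: boolean; // always-on teaching-first approach
}

function limitsForAge(age: number): SessionLimits {
  return age < 25
    ? { maxMinutes: 30, maxMessages: 20, teachingFirst: true }
    : { maxMinutes: 45, maxMessages: 30, teachingFirst: false };
}
```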
Quick Start
Drop-in SDK. Works with any OpenAI-compatible provider via baseURL.
```shell
# Install
npm install @tetiai/lucid
```

```typescript
// Track each exchange (fire-and-forget)
lucid.track(userId, {
  userMessage: "Write the code for me",
  aiResponse: "Sure, here's the implementation...",
  topicId: "conversation-123",
}).catch(console.error);

// Inject guidelines into your AI system prompt
const guidelines = await lucid.getGuidelines(userId);
systemPrompt += guidelines;
```
Flexible Storage & Analyzers
- **MemoryStore** (dev): dev/testing — zero config, in-memory
- **PrismaStore** (prod): production — PostgreSQL, MySQL, SQLite, MongoDB
- **RedisStore** (prod): distributed deployments, shared sessions
The LLM analyzer uses the OpenAI SDK as a universal client — works with Together AI, Groq, Ollama, OpenRouter, and any compatible provider via baseURL. Cheap, fast models (Llama 3.1 8B, Gemini Flash) work great — no frontier models required.
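For example, pointing an OpenAI SDK client at Groq serving Llama 3.1 8B — the endpoint URL and model name below are the provider's current values, so check your provider's docs before relying on them:

```typescript
import OpenAI from "openai";

// Any OpenAI-compatible endpoint works via baseURL; here, Groq.
const analyzerClient = new OpenAI({
  apiKey: process.env.GROQ_API_KEY,
  baseURL: "https://api.groq.com/openai/v1",
});

// The analyzer then issues ordinary chat-completion calls:
const completion = await analyzerClient.chat.completions.create({
  model: "llama-3.1-8b-instant",
  messages: [{ role: "user", content: "Score this exchange..." }],
});
```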
Philosophy
Not a judge
No “good” or “bad” scores. The system adapts, not evaluates.
Invisible by default
Users don't see scores unless you show them.
Privacy first
Only aggregated metrics, never message content.
GDPR ready
deleteUser() for complete data deletion on request.
Lucid is the engine behind Teti's cognitive protection
Every conversation in Teti AI is analyzed by Lucid in the background. Your cognitive profile adapts over time, and the AI's behavior shifts accordingly — asking more questions when you delegate, challenging you further when you thrive, and suggesting breaks when fatigue sets in.