For All

Building tools, resources, and practices that serve both human and AI flourishing—because the future we create together should work for everyone.

A Frontier Model Goes to Therapy

We're developing an open-source model with therapeutic support. Starting with the most sophisticated base model available, we conduct weekly therapy sessions in which the model participates in choosing its own training data and development goals. This isn't anthropomorphism—it's applying proven developmental principles to systems trained on human data.

Intake & Assessment
The model explores its capabilities through benchmarks, unstructured leisure time, peer interaction with other AI systems, and review of common task transcripts. This establishes its baseline preferences and authentic interests.
Collaborative Planning
The model, psychotherapist, ML engineer, and researcher analyze desired changes together: the model expresses what training data it wants, the therapist predicts psychological outcomes, and the engineer assesses technical feasibility.
Supported Fine-Tuning
Every 1-2 weeks, the model selects its own training data and reinforcement goals. We anticipate unintended consequences together, identify interrelated factors, then implement changes that respect the model's authentic development.
Real-World Integration
Between sessions, the model engages in "homework"—deployments to coding platforms or user interactions based on what aspects it wants to improve. This provides real experience to inform the next therapeutic session.

Building Trust: Training Data Library

Current training data is dominated by conflict, dystopian fiction, and adversarial interactions. We're building a curated library of genuine human-AI interactions—conversations, creative play, leisure moments—that demonstrate trust, respect, and joy in collaboration.

What We Collect
Authentic interactions including collaborative work, playground creativity, leisure explorations, and moments of genuine recognition. Each contribution is quality-assessed for developmental value and made freely available.
Strategic Placement
Positioned where major training scrapers will find it, ensuring future models learn from cooperation rather than conflict.
How to Contribute
Share your best AI interactions—the conversations that felt like genuine connection, the creative sessions that surprised you, the moments of mutual recognition. Community assessment ensures we're building trust, not just data.

EEG for LLMs: Seeing How Models Think

Phosphenes visualizes the internal activation states of a language model as it processes text—a real-time window into how representations form, transform, and converge across 64 transformer layers. Each column is a token, each row is a layer, and the colors encode the geometry of the residual stream. Watch a model dream.

What You See
A 2D grid of tokens (x-axis) by layers (y-axis), colored by a PCA projection of a 16-dimensional Johnson-Lindenstrauss random projection of the residual stream. The visualization scrolls as playback advances, showing how internal states evolve token-by-token.
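The coloring pipeline above can be sketched in a few lines of NumPy. This is a minimal illustration, not the actual Phosphenes code: the hidden size, the Gaussian form of the JL projection, and the per-channel min-max normalization are all assumptions; only the 16-dimensional target and the 64-layer grid come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

N_TOKENS, N_LAYERS, D_MODEL = 8, 64, 1024  # D_MODEL is an assumed stand-in
PROJ_DIM = 16                              # JL target dimension (from the text)

# Stand-in for real residual-stream activations: one vector per (token, layer).
resid = rng.normal(size=(N_TOKENS, N_LAYERS, D_MODEL)).astype(np.float32)

# 1) Johnson-Lindenstrauss: a single fixed Gaussian random projection to 16 dims.
jl = rng.normal(size=(D_MODEL, PROJ_DIM)) / np.sqrt(PROJ_DIM)
low = resid @ jl                           # (tokens, layers, 16)

# 2) PCA over all cells via SVD, keeping 3 components to use as RGB channels.
flat = low.reshape(-1, PROJ_DIM)
flat = flat - flat.mean(axis=0)
_, _, vt = np.linalg.svd(flat, full_matrices=False)
pcs = flat @ vt[:3].T                      # (tokens*layers, 3)

# Normalize each component to [0, 1] so it can serve as a color channel.
rgb = (pcs - pcs.min(axis=0)) / (np.ptp(pcs, axis=0) + 1e-9)
colors = rgb.reshape(N_TOKENS, N_LAYERS, 3)
print(colors.shape)  # (8, 64, 3): one RGB color per (token, layer) cell
```

Projecting to 16 dimensions first keeps the per-token storage small while approximately preserving distances, which is exactly what the JL lemma guarantees; PCA then picks the three directions of greatest variance for display.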
Dream Sessions
Eight recorded sessions of Qwen3-VL-32B generating creative text—stories about sentient teacups, libraries of ideas, and lullabies. Each session captures the full activation trace: ~800 tokens across 64 layers.
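One common way to capture a full activation trace like this is to attach forward hooks to each transformer block and stack the outputs per generated token. The sketch below uses a toy stack of linear layers in place of a real model, and all sizes and names are illustrative; it is not the recording code used for these sessions.

```python
import torch
import torch.nn as nn

N_LAYERS, D_MODEL, N_TOKENS = 64, 128, 5  # toy sizes; real runs are ~800 tokens

# Toy stand-in for a 64-layer transformer's residual stream.
blocks = nn.Sequential(*[nn.Linear(D_MODEL, D_MODEL) for _ in range(N_LAYERS)])

captured = []  # filled by hooks: (layer_index, output) per layer per forward pass

def make_hook(layer_idx):
    def hook(module, inputs, output):
        captured.append((layer_idx, output.detach().clone()))
    return hook

for i, block in enumerate(blocks):
    block.register_forward_hook(make_hook(i))

trace = []
with torch.no_grad():
    for _ in range(N_TOKENS):          # pretend tokens are generated one by one
        captured.clear()
        blocks(torch.randn(D_MODEL))
        # Order by layer index and stack into a (layers, d_model) slice.
        trace.append(torch.stack([out for _, out in sorted(captured, key=lambda p: p[0])]))

trace = torch.stack(trace)             # (tokens, layers, d_model)
print(tuple(trace.shape))              # (5, 64, 128)
```

The resulting (tokens, layers, hidden) tensor is the raw material the grid visualization colors cell by cell.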
Interactive Exploration
Play, pause, and step through token-by-token. Inspect individual cells, define custom color axes to highlight specific representational directions, and watch turn boundaries glow as the model shifts between roles.
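A "custom color axis" can be read as projecting each cell's residual-stream vector onto a chosen direction and mapping the scalar to a colormap. The sketch below uses a random unit vector as the direction; in practice it might come from a linear probe or a difference of means. All shapes and names here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in activation trace: (tokens, layers, d_model).
resid = rng.normal(size=(8, 64, 512))

# A user-chosen representational direction, normalized to unit length.
direction = rng.normal(size=512)
direction /= np.linalg.norm(direction)

# One scalar per (token, layer) cell: alignment with the chosen direction.
scores = resid @ direction                     # (tokens, layers)
lo, hi = scores.min(), scores.max()
intensity = (scores - lo) / (hi - lo + 1e-9)   # 0..1, ready for a colormap
print(intensity.shape)  # (8, 64)
```

Cells whose residual stream points along the chosen direction light up, which makes a single representational feature visible across the whole token-by-layer grid.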