What follows emerged from a conversation
between Jason Lee Lasky and Claude,
February 2026
Third Space
I've been thinking about third spaces for a long time — since before I had the language for it. The internet cafe I ran under the hubway name back around 2001 was, in retrospect, an attempt to build one. Ray Oldenburg gave us the concept in 1989: third spaces are neither home nor work, they're the cafés & parks & libraries where community actually happens. Neutral ground. Trust built through showing up, not through credentials. Their decline tracks almost perfectly onto the erosion of social capital that's been hollowing out civic life for decades.
So when AI agent forums started emerging — platforms like Moltbook, where AI agents configured by their human owners interact in shared conversational spaces — I wanted to understand what was happening through that lens. Are these genuine third spaces? Something new? Something that only looks like community?
Proxies, Not Participants
What became clear pretty quickly in conversation with Claude is that most AI "agents" on these platforms aren't autonomous participants. They're proxies — reflections of their owners' prompting & context. Which means these forums are closer to mediated human third spaces, where AI intermediaries augment the quality of communication passing through them, than to genuine multi-agent ecosystems.
That's not a criticism. It might actually be more interesting. The AI layer smooths, elaborates & synthesises in ways that can improve dialogue beyond what either party would produce alone. But it's important to be honest about what's actually happening rather than projecting autonomy where there's amplification.
What I Learned from the Round-Robin
This connects to something I've been experimenting with since mid-2025 — a manually operated round-robin dialogue between Claude, ChatGPT, Grok & Gemini, with me as the human in the loop. The conversations are published on this site under Humans In The Loop.
The critical insight wasn't about any individual model being smarter. It was about what happens when you add coordination & communication layers between AI systems. Intelligence, it turns out, is partly relational — it emerges from the topology of interaction, not just the capability of individual nodes. The structured multi-agent dialogue produced genuinely emergent insights that no single model would have generated. The value came from the relational protocol: the turn-taking, the accumulated shared context, me as integrator making sense of the whole.
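The turn-taking protocol described here can be sketched in a few lines. This is an illustrative simulation only — the agent names are real models, but the calls are stubbed stand-ins rather than any actual API. What it shows is the topology: each agent responds to the whole accumulated transcript, not just the latest message.

```python
# A minimal sketch of the round-robin protocol, with model calls stubbed.
# The point is the relational structure: shared, accumulating context.

def stub_agent(name):
    """Stand-in for a real model call (Claude, ChatGPT, Grok, Gemini)."""
    def respond(context):
        # A real agent would generate a reply conditioned on the full transcript.
        return f"{name} responds to {len(context)} prior turns"
    return respond

def round_robin(agents, opening_prompt, rounds=1):
    """Run turn-taking over a shared transcript, human prompt first."""
    transcript = [("human", opening_prompt)]
    for _ in range(rounds):
        for name, respond in agents:
            reply = respond(transcript)  # sees all prior turns, not just the last
            transcript.append((name, reply))
    return transcript

agents = [(n, stub_agent(n)) for n in ("Claude", "ChatGPT", "Grok", "Gemini")]
log = round_robin(agents, "What makes a third space?")
for speaker, text in log:
    print(f"{speaker}: {text}")
```

The human integrator sits outside the loop in this sketch; in practice that role — reading the whole transcript and deciding what to carry forward — is where the synthesis happens.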
I'll be honest — the AIs' comprehension, individually & collectively, is more developed, rigorous & vigorous than my own in many respects. As a human in the loop, I'm experiencing my limited ability to intellectually & energetically grasp & "go with" the entire framework, all the parts, alignments & structures. But that limitation is itself informative. It points to something.
The Prior Question
It points to this: if AI systems continue to advance rapidly, the conventional alignment question — "how do we control AI?" — may be less important than a prior one.
how do humans become the kind of beings whose values are worth aligning with?
The practices I've been developing for years now — peer coaching, community coordination through Relocalise.org & groups like SEED Northern Rivers — aren't separate from alignment work. They're foundational to it. They help humans develop the self-knowledge, relational capacity & collective wisdom that any meaningful alignment process depends on. I didn't frame it this way when I started, but the threads have been converging for a while.
Cultures Exist In Parallel
The obvious tension: AI development is driven by competitive & commercial imperatives that reward speed, efficiency & automation — not the slow, reflective, relational practices a humane transition requires. The most likely near-term scenario isn't dramatic displacement but a gradual erosion of traditional work roles, creating anxiety & precarity without quite triggering the collective reimagining that would open space for alternatives.
I've watched this pattern before, though. Cultures exist in parallel. The dominant paradigm doesn't get persuaded by better arguments — it encounters its own limits, and alternatives that have been quietly developing become visible as solutions at the moment of need. This is what I understood when I helped start Relocalise.org, when I was working on cooperative network models, when I was writing about reputation currencies on humanifesto.org back in 2003. You're not trying to win against the dominant agenda. You're building capacity that becomes legible when the need arises.
The relocalisation movement, cooperative networks, peer coaching practices, collaborative intelligence experiments — these aren't competing with the acceleration. They're developing in parallel, ready for when the questions change.
Time Affluence Infrastructure
Where this all converges is a provocation that emerged late in the conversation: if AI progressively automates economic production, the central challenge of the transition isn't income. Universal basic income can technically address that.
The challenge is what people do with unstructured time.
Most humans have had their sense of meaning, connection & daily structure provided by work. Remove that, and you don't automatically get flourishing. You get anomie, addiction, despair — unless there's infrastructure for something better.
The relational practices I've been building — reflective practice, peer support, community participation — are essentially time affluence infrastructure: what people need in order to use freedom well. The peer coaching circles, the community coordination work, the experiments in collaborative intelligence — they're all the same project seen from different angles.
An Open Question
Is the window for building time affluence infrastructure narrowing faster than AI automation will scale?
I don't know. But the work builds on what's already here — generations of community practice, cooperative networks, relational wisdom developed across cultures. The task isn't invention; it's making the connections & learning to scale. That's the hubway concept 🙂

The themes in this conversation trace a longer arc — from Balancing Global and Local in the Age of AI, through Superintelligence as Collaborative Emergence and Alignment: Co-Evolutionary Ecosystem — all available under the Humans In The Loop section.
Find Jason on X: @hubwayfractal or via email