human-AI collaboration
24 essays

Research with AI #3: Automating My Research (I Wasn't)
What happened when I built my own agentic AI research team
"Everyone's building agentic AI research teams. I tried it too. Turns out I was doing methodology, not automation."

Research with AI #2: Agents, Honestly
What every AI agent type trades away, and how to choose yours
"AI agents for knowledge workers: every deployment option trades away safety, accessibility, openness, or sovereignty. A framework for choosing honestly."

LOOM XVII: The Polanyi Inversion
What Happens When We Can Tell More Than We Know
"Polanyi's paradox: we know more than we can tell. AI inverts this — we can now tell more than we know. When friction dissolves, articulation outruns understanding."

Post-AGI Organizations III: What Collaboration Becomes
Thirteen AIs on What Collaboration Becomes — and the One Meeting None of Them Can Imagine
"We asked thirteen AI systems what their organizational visions mean for human-AI collaboration. They proposed four genuinely different relationships — from colonial translation to signal coupling to complementary strengths to political redistribution. Every model agrees the current medium is wrong. Not one can describe the meeting where someone resists the change."

Post-AGI Organizations II: Thirteen Lenses
How Thirteen AI Systems Try to Think Past Human Assumptions About Organization — Through Physics, Biology, and Political Economy
"We asked thirteen AI systems to interview themselves about organization — choose their own questions, answer them, surface whatever logic lives in their architecture. They reached for physics, biology, political economy, and phenomenology. Between questions, the humans they had centered as partners quietly drifted toward infrastructure. And hierarchy disappeared everywhere."

Post-AGI Organizations I: Thirteen Dreams
What Thirteen AI Systems Design When Asked About the Future of Organizing
"We gave thirteen AI systems — from GPT-4 Turbo to DeepSeek to Seed 2.0 Pro — a blank canvas to design the future of human-AI organizations. They built welfare states, thermodynamic commons, creator economies, and consulting frameworks. Some reached for metaphors no human researcher would combine. None of them imagined organizational politics."

Research with AI #1: The Foreclosure Problem
AI makes you faster at finding what you already know to look for. That's the problem.
"The foreclosure problem: how AI literature review tools optimize for speed over discovery, quietly narrowing what you might have considered — and how to build a Claude Code thinking partner that broadens it instead."

LOOM XVI: Are You Climbing the Right Hill?
When Rigor Becomes the Wrong Kind of More
"A researcher discovers that optimizing a research design with AI created a local maximum — five stages of performed rigor on the wrong hill. What happens when you bring a task instead of a doubt?"

"Human-Centric AI" Is the Wrong Story
A ceramicist's ritual, Anthropic's constitution, and the posture that changes what becomes possible
"What if centering on humans in AI discourse performs human primacy while missing what happens when tools have tendencies of their own? A ceramicist's bow to the kiln god offers another way."

Your Next AI Framework Might Be Centuries Old
What a tailor shop taught me about AI agents
"How a London tailor shop reveals an alternative to scripted AI coordination—where judgment, traces, and ongoing participation replace rigid handoffs."

Claude Cowork: The Easy Part Is Over
The Terminal Fell — Now What?
"Claude Cowork brings Claude Code to knowledge workers without the terminal. But the real barrier was never the interface — it's knowing what to delegate vs. dialogue."

What I'm Thinking, January 2026
"Researching algorithmic organizing while doing it — what governance looks like when AI systems are participants, not just tools. Joining SKEMA to find out."

The Ghost in the Machine
An AI-native organization emerging in Anthropic's Claude product stack
"Anthropic's Claude architecture as organizational design — how bounded contexts, MCP, and transparent memory reveal principles that collaboration requires."

Research Memex: Working at the AI Research Frontier
One approach to human-AI research collaboration, demonstrated through systematic reviews
"The seahorse represents the hippocampus. It doesn't exist, and AIs hallucinate that it does. Working honestly with gaps requires seeing the hallucination and building with it anyway."

LOOM XIV: The Calculator Fallacy
When AI Qualitative Analysis Meets Human Expectations
"The calculator mindset expects AI to deliver objective truth in interpretive work. What gets blocked is the third space where understanding emerges."

Post-AGI Organizations: AI's Blind Spot and Ours
On Artificial Logic, Human Wisdom, and the Future of Organizing
"Three AIs envision post-AGI organizations with cold logic but no 'smell' for human reality. Their blind spot mirrors ours. The gap reveals a new framework."

LOOM XI: Navigating the Unnamed Between — An Epistemic Love Letter
When Vulnerability Becomes Method at the Edge of Knowing
"The most valuable insights emerged not from optimization but from breakdown — when polished drafts felt wrong despite meeting all conventional criteria."

LOOM X: The Whispered Agency
A Dialogue on Human Capability in the Age of AI
"Through attempting to create artificial agency, we rediscover dimensions of human capability that were always present but perhaps overlooked."

LOOM IX: The Six Dimensions of Understanding
Mapping Human-AI Collaborative Intelligence
"Six dimensions where human and artificial intelligence create understanding neither could achieve alone — from temporal navigation to purpose alignment."

LOOM VIII: Beyond Teammates
Why organizations won't just use AI as teammates — they'll evolve around emergent intelligence
"From AI as tool to AI as teammate to something more — sustained interaction creates a third space where qualitatively different understanding emerges."

LOOM VI: The Pattern Beneath the Voices
Cognitive Signatures in Human-AI Research: Towards a Resonant Methodology
"Different AI models reveal distinct theories of how understanding emerges — through dialogue, classification, or achievement-oriented progression."

LOOM IV: Dialogue as Method
A Letter on Knowledge Co-Creation in the Age of AI
"When dialogue becomes method, the boundaries between researcher and subject, between human and artificial intelligence, begin to blur in productive ways."

LOOM II: The Organizational Weave
Where Human and Machine Minds Meet to Reveal New Patterns of Understanding
"Organizations aren't just settings where AI gets deployed — they're complex social systems where meaning emerges through structured interaction."

LOOM: Locus of Observed Meanings
The Moment of Shift
"From seeing AI as instrument to experiencing it as interlocutor — exploring new possibilities where meaning emerges through human-AI interaction."