Thesis

When algorithmic participants enter organizational life, the boundaries we use to think start shifting.

tool / agent
understanding / trust
human / AI
hierarchy / network

Organizational theory assumes humans are the only actors. But algorithms are increasingly part of decision-making, and our frameworks weren't built for that.

Those frameworks assume stable actors in defined roles. When tools start behaving more like participants, the theories stop helping.

The observations below share a common source: each points to a boundary that no longer holds the way it once did.

The line between tool and agent is getting blurry
Algorithms used to augment decisions. Now they shape how organizations work. And organizations generate the data that trains the next version. The tools we build are building us back.
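
To make that loop concrete, here is a minimal sketch, assuming a toy two-option world: the organization mostly defers to the model's ranking, the choices it logs become the next model's training data, and an arbitrary early tie-break gets locked in. Every name and number is hypothetical.

```python
import random

# Toy feedback loop: a model scores two options, the organization mostly
# follows the model's ranking, and the next model is fit only to the
# logged choices. Illustrative numbers; the point is the circularity.

model = {"A": 0.5, "B": 0.5}  # the model's current preference for each option

for generation in range(5):
    log = []
    for _ in range(1000):
        # The organization usually defers to the model, sometimes explores.
        if random.random() < 0.8:
            pick = max(model, key=model.get)
        else:
            pick = random.choice(["A", "B"])
        log.append(pick)
    # The "next version" is trained on data the last version helped produce.
    model = {option: log.count(option) / len(log) for option in model}
    print(f"generation {generation}: {model}")
```

Run it and the initial tie resolves arbitrarily toward "A", then hardens across generations: the model shapes the data that shapes the model.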
Understanding is giving way to trust
Scientists accept AI outputs because they've worked before, not because anyone can explain why. We can articulate things we can't actually understand.
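
One way to render "trust without understanding" precise is to score a system purely on its track record, never on its internals. A minimal sketch, assuming a Beta-Bernoulli model of past successes; the function and numbers are hypothetical.

```python
# Trust as a running track record, not an explanation: a Beta-Bernoulli
# update over past outcomes. The system stays a black box; only results count.

def updated_trust(successes: int, failures: int, prior=(1, 1)) -> float:
    """Posterior mean of P(next output is correct) under a Beta prior."""
    a, b = prior
    return (a + successes) / (a + successes + b + failures)

# After 95 validated outputs and 5 failures, trust is high (~0.94),
# even though no one can say why any single output was right.
print(updated_trust(successes=95, failures=5))
```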
The interesting dynamics emerge from interaction
From the back-and-forth between humans and AI, where neither operates alone.
Old coordination patterns keep showing up
London tailors coordinate through shared understanding, not formal structure. Their patterns may matter more as algorithmic participants make hierarchy less stable.
Decentralization and hierarchy have a complex relationship
Effective decentralization often means making hierarchy transparent and bounded.

How AI and organizations form feedback loops that create emergent forms of agency.

DAOs as sites where I develop and test theory: code as constitution, tokens as coordination.
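
A minimal sketch of what "code as constitution, tokens as coordination" can mean, assuming a hypothetical token-weighted ballot: the tally function is the governance rule, and token holdings are the coordination weights. Real DAOs implement this in on-chain contracts; nothing here refers to any specific one.

```python
# "Code as constitution": the tally function below is the governance rule.
# "Tokens as coordination": influence is proportional to holdings.
# Purely illustrative balances and ballots.

holdings = {"alice": 600, "bob": 300, "carol": 100}   # hypothetical token balances
votes = {"alice": "yes", "bob": "no", "carol": "no"}  # one address, one ballot

def tally(holdings: dict, votes: dict) -> str:
    weight = {"yes": 0, "no": 0}
    for voter, ballot in votes.items():
        weight[ballot] += holdings.get(voter, 0)
    return max(weight, key=weight.get)

print(tally(holdings, votes))  # "yes": 600 tokens outvote 400, whatever the headcount
```

The constitutional property is that changing governance means changing the tally function, not persuading a chair.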

Coordination through craft and shared understanding: the London tailoring study.

Governance when agency emerges from dynamics, rather than residing in any single actor.

Qualitative research combined with computational methods.
Holding the synthesis myself rather than outsourcing it.
Most writing emerges from actual dialogue with AI systems.
Building tools partly to test whether the ideas work.