
The Real Argument in Sequoia’s “From Hierarchy to Intelligence”: It’s Not About AI. It’s About What Counts as Performance.

April 2026 · 6 min read
AI · Future of Work · Leadership

Sequoia’s “From Hierarchy to Intelligence,” co-authored by Jack Dorsey and Roelof Botha, is being discussed as if it were mostly a technology argument. Read that way, it sounds like one more installment in a familiar genre: AI is coming, org charts are changing, management will be transformed. All true in the shallow sense, but not yet useful. The deeper point—and the one worth taking seriously—is not technological at all. It is operational. This is a proposal to change how performance is defined, measured, and produced inside companies. And on that level, the piece is one of the most substantive things a major venture firm has published about organizational design in years.

Sequoia opens with a military history of hierarchy that is more than decorative. The Roman contubernium, the Prussian General Staff, McCallum’s railroad org chart, Taylor’s scientific management: these are presented not as management trivia but as a lineage of information-routing technologies. Each one solved the same problem—coordinating people across distance and complexity—by narrowing span of control and adding layers. The cost was always latency. It was tolerable because information itself moved slowly. But once markets accelerated and software collapsed response cycles, the same structure that once created order started producing drag. Decision-making became a relay race. Every handoff added friction. Every layer turned time into risk.

What makes the current moment different, in Sequoia’s framing, is not that intelligence suddenly appeared but that coordination is increasingly machine-addressable. Work artifacts are digital, context is capturable, and capabilities can be modularized. The practical implication is real: the distance between signal and action no longer has to be governed primarily by managerial routing.

This is where the piece gets specific. Block’s proposal is not “AI as a feature.” It is an architectural redesign built on four layers: composable capabilities (the atomic financial primitives like payments, lending, and card issuance), a world model (split between a company model that replaces managerial context-routing and a customer model built from transaction data), an intelligence layer (that composes capabilities into solutions for specific customers at specific moments), and interfaces (the delivery surfaces like Square, Cash App, and Afterpay). In this design, the traditional product roadmap—where a product manager hypothesizes about what to build next—is replaced by failure signals from the intelligence layer. When the system tries to compose a solution and can’t because a capability doesn’t exist, that gap becomes the backlog.
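The failure-signal mechanism is concrete enough to sketch in code. This is a minimal, hypothetical illustration of the idea, not Block’s implementation: an intelligence layer holds a registry of capabilities, and when a composition attempt fails because a capability is missing, the gap itself is written to the backlog. All names here (`IntelligenceLayer`, `compose`, the `"payments"` and `"lending"` capabilities) are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class IntelligenceLayer:
    """Hypothetical sketch: a capability registry plus a backlog
    that accumulates the gaps that composition attempts reveal."""
    capabilities: dict = field(default_factory=dict)  # name -> callable
    backlog: list = field(default_factory=list)       # missing capability names

    def register(self, name, fn):
        self.capabilities[name] = fn

    def compose(self, required, context):
        """Try to compose a solution from the required capabilities.
        Anything the system lacks becomes a backlog entry instead of
        a product manager's hypothesis."""
        missing = [name for name in required if name not in self.capabilities]
        if missing:
            self.backlog.extend(m for m in missing if m not in self.backlog)
            return None  # composition failed; the gap is now the roadmap
        return [self.capabilities[name](context) for name in required]

layer = IntelligenceLayer()
layer.register("payments", lambda ctx: f"charged {ctx['amount']}")

# A merchant moment that needs payments plus lending:
result = layer.compose(["payments", "lending"], {"amount": 100})
assert result is None
assert layer.backlog == ["lending"]  # the failure signal is the backlog
```

The design choice worth noticing is that the backlog is populated by the system’s own failures at the moment of customer need, not by roadmap planning in advance of it.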

The org structure follows from the architecture, and it is genuinely different from what most companies are doing. Block proposes normalizing to three roles: individual contributors who build and operate the system’s layers, Directly Responsible Individuals (DRIs) who own cross-cutting problems or customer outcomes for defined periods, and player-coaches who combine building with people development. There is no permanent middle management layer. The world model handles the alignment that managers used to provide. The DRI structure handles priority. The player-coach handles craft and growth.

It is a clean design. It is also the point where the argument deserves real pressure-testing, because organizational architecture is not the same as organizational reality.

I have a personal reference point for this kind of transition. When Genome, my agency, went fully remote in 2016, the immediate story was location. The enduring story was performance. The old proxies stopped working almost overnight. Presence, desk time, visible busyness—none of it reliably mapped to value. What replaced it was less glamorous but far more honest: output quality, delivery consistency, and communication clarity across distributed teams. People who were strong at visible activity but weak at accountable execution struggled. People who could produce, document, and align without theatrical supervision accelerated. Remote work did not simply change where work happened. It changed what counted as work.

That experience made me pay close attention to one specific claim in the Sequoia piece: that the world model gives every person at the edge “the context they need to act without waiting for information to travel up and down a chain of command.” I believe that’s directionally true. I also know that context delivery is not the same as context comprehension. When we went remote, we had all the information tools we needed within about six months. What took years was building the judgment culture—the shared understanding of what context actually mattered, how to act on incomplete information, and when to escalate versus decide. The tooling was the easy part. The operating discipline was the hard part.

The same challenge applies to Block’s DRI model. On paper, a DRI who owns merchant churn in a specific segment for 90 days, with full authority to pull resources from the world model team, the lending capability team, and the interface team, sounds like a powerful coordination mechanism. In practice, “full authority to pull resources” from teams you don’t manage is one of the hardest things to make work in any organization. It requires not just a system that provides context but a culture that treats cross-functional authority as legitimate, a leadership layer that resolves resource conflicts quickly, and an accountability framework that distinguishes between a DRI who failed because the problem was hard and one who failed because they couldn’t actually command the resources they were promised. Block may solve this. But the architecture alone doesn’t guarantee it.

This is the broader pattern I see across the AI-transformation landscape. Many companies are currently applying AI as a productivity veneer over unchanged operating models. They draft faster, summarize faster, and occasionally ship faster in isolated pockets. But they remain structurally slow because the underlying decision architecture is untouched. The meeting load survives. The escalation ladder survives. The approval culture survives. Efficiency improves while velocity stagnates. This is why so many AI transformations feel impressive in demos and disappointing in outcomes.

Sequoia’s thesis is compelling precisely because it points to architecture rather than tooling. But architecture without governance is where the real danger lives. A flawed decision inside a slow organization is a contained problem. A flawed decision inside an intelligence-mediated organization is a multiplication event. The intelligence layer doesn’t just compose solutions; it composes solutions at speed and scale. If the model misprices risk for a lending capability, that error doesn’t surface in one merchant interaction. It surfaces in thousands simultaneously. If the customer world model develops a systematic bias in how it interprets transaction patterns, that bias doesn’t distort one recommendation. It distorts the entire intelligence layer’s output.

This is why governance in this era is not a compliance footnote. It is an operating requirement at the same level as the architecture itself. An intelligent enterprise needs to be able to answer three questions continuously: Why did this decision happen? Can we intervene quickly when context changes? And who owns the outcome when things degrade? If a company cannot answer all three, it has not built an intelligent organization. It has built an automated ambiguity machine—one that moves faster than anyone’s ability to correct it.
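Those three questions translate naturally into a data structure. The sketch below is an illustrative assumption, not anything Block or Sequoia describes: every automated decision is logged with its rationale (why it happened), an accountable owner (who owns the outcome), and a halt mechanism (can we intervene). The names `DecisionRecord`, `DecisionLedger`, and the example capability strings are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """One governance-grade log entry: a decision carries its
    rationale, its accountable owner, and its intervention window."""
    action: str
    rationale: dict        # why: inputs, model version, triggering signal
    owner: str             # who: the DRI accountable for the outcome
    reversible_until: str  # can we intervene: rollback deadline

class DecisionLedger:
    def __init__(self):
        self.records = []
        self.halted = set()  # capabilities paused by a kill switch

    def record(self, rec):
        # Intervention check: a halted capability cannot keep deciding.
        if rec.rationale.get("capability") in self.halted:
            raise RuntimeError("capability halted; decision blocked")
        self.records.append(rec)
        return rec

    def halt(self, capability):
        """Pause a capability when context changes, before an error
        multiplies across thousands of interactions."""
        self.halted.add(capability)

ledger = DecisionLedger()
ledger.record(DecisionRecord(
    action="extend $5k credit line",
    rationale={"capability": "lending", "model": "risk-v12",
               "signal": "cash-flow trend"},
    owner="dri-merchant-churn",
    reversible_until="2026-05-01T00:00:00Z",
))
ledger.halt("lending")  # suspected mispricing: stop before it scales
```

The point of the sketch is the coupling: the same ledger that answers “why” also enforces “can we intervene,” so accountability scales with decision volume rather than lagging behind it.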

Block, to their credit, is at least building in public and acknowledging the difficulty. Dorsey and Botha write that “parts of it will likely break before they work.” That is the kind of honesty most AI-transformation rhetoric lacks. But the real test is not whether the architecture is elegant. The test is whether the accountability systems scale at the same rate as the decision-making systems. In every organizational transition I’ve been part of—from office to remote, from agency to acquisition, from hierarchy to flat—the structural redesign was the announced story. The accountability redesign was the actual story.

So here is where I think the argument lands. Sequoia is right about direction. We are moving from hierarchy as the dominant coordination substrate toward intelligence as a new one. The competitive advantage will not come from adopting models first. It will come from redesigning the company’s performance system—what gets measured, how decisions move, where human judgment remains decisive, and how communication sustains trust under speed.

We already lived an early version of this shift in the move from office presence to remote outcomes. The same logic now applies at organizational scale. The future does not belong to companies that look AI-native. It belongs to companies that can convert context into accountable action faster than everyone else—and prove it in ways people can trust.

That’s not a technology story. That’s management, redesigned from the operating layer up.