longcut.ink · Issue 001 · The Architect

Should one person be trusted with the most powerful technology in human history?

Answer now. Before the evidence. We’ll ask again at the end.

Yes — 23%
No — 77%
01
Cold open · San Francisco · November 17, 2023

The Architect

He said he was building AI to save humanity. He was also building an empire. These are not incompatible — unless you believe the things he said.

The Ilya Memos — first item on the list
Lying.
We'll return to what was in them. Keep reading.
I
Act I
The Boy Who Took Computers Apart

He was eight years old when he got his first computer — a Macintosh LC II — and immediately took it apart. Not to break it. To understand it. His mother, Connie, a dermatologist, watched him reassemble the machine and plug it back in. It worked. She filed the image away.

Sam Altman grew up in St. Louis, the eldest of four children, in a household where intellectual ambition was not just permitted but expected. He was quiet, precocious, and socially difficult in the way that very smart children often are — a few grades ahead, a few social registers off. He came out as gay at sixteen, which in suburban Missouri in the late nineties required a particular kind of courage, or a particular kind of indifference to other people's opinions. Possibly both.

He enrolled at Stanford to study computer science, lasted two years, and dropped out to found Loopt, a location-sharing startup that was, by most accounts, ahead of its time. He was nineteen. The company raised $30 million, built real technology, and then watched as Facebook and Google built the same thing with a hundred times the resources and distribution. Loopt was sold to Green Dot Corporation in 2012 for $43.4 million. It was not a failure. It was not a success. It was a lesson.

He is one of the most gifted persuaders I have ever encountered. He makes you feel like you are the only person in the room, and that the thing you are both working toward is the most important thing in the world.

— Ron Conway, early investor

What Loopt taught him, Altman would later say, was that being right too early is indistinguishable from being wrong. The lesson was not about the technology. It was about timing, capital, and narrative. You could have the correct idea and still lose, if you couldn't make others believe the moment had arrived.

He joined Y Combinator as a part-time partner in 2011, became president in 2014 at twenty-eight, and over the next five years transformed the organization from a seed accelerator into something closer to a power center for Silicon Valley. He backed Airbnb, Dropbox, Stripe, Reddit. He understood, intuitively, that the real product of a venture fund is not the companies it backs but the network it builds — and that the network's value compounds faster than any single investment.

The promise · 2014
'Y Combinator's goal is to get you to a point where you can raise money on better terms. We're here to make you successful, not to extract value from you.'
Sam Altman · YC Partner announcement · 2014
The reality · 2016
Altman restructured YC's investment terms, keeping the standard 7% stake but adding pro-rata rights in all future rounds — a change that significantly increased YC's long-term upside in its best companies.
Term sheet analysis · Business Insider · 2016

He was not the first person to understand that the AI moment was arriving. But he was among the first to understand that the person who framed the moment — who named the danger and named themselves as the solution — would have extraordinary power over what happened next.

The mental model — take this with you
The First Mover Frame
Whoever defines the terms of a new technological era controls the moral language that follows. The person who first says 'this is dangerous, and here is how we must handle it' sets the boundary conditions for all subsequent debate — including debates about their own conduct.
Watch for the moment when a powerful person names a threat. Ask who benefits most from the framing.
II
Act II
The Machine He Built

OpenAI was founded in December 2015 with a peculiar promise: it would build artificial general intelligence for the benefit of humanity, and it would not be owned by anyone. It was a nonprofit. Its founding letter read like a manifesto. Elon Musk, Greg Brockman, Ilya Sutskever, and others signed it. Sam Altman signed it. The machine, they declared, would belong to the world.

By 2019, Altman had engineered a fundamental transformation of that structure. OpenAI became a "capped profit" entity — investors could receive returns, but only up to one hundred times their investment. The cap sounded responsible. What it obscured was that one hundred times a large investment is a very large number. Microsoft invested $1 billion. One hundred times that is $100 billion.

Altman, notably, took no equity in OpenAI. He said this was because he didn't want a conflict of interest. He said he was there to serve the mission. What he built instead was something more durable than equity: he built indispensability. He became the face, the voice, the negotiator, the fundraiser, the visionary. The man who goes to Davos. The man who testifies before Congress. The man who calls the heads of state.

Five voices · One board · November 17–22, 2023

The OpenAI board fired Sam Altman on November 17, 2023, citing his pattern of being "not consistently candid" with them. They had not told the employees. They had not told Microsoft. They had not prepared for the reaction.

Within forty-eight hours, 738 of OpenAI's approximately 770 employees had signed a letter threatening to resign if Altman was not reinstated. The letter was not merely a show of support. It was a demonstration of leverage — and of who, structurally, held it.

Lying. We said we'd return to it.

III
Act III
The Night Everything Changed
Second person · The Ambien night

It is 11:47 PM on a Friday. You are Ilya Sutskever, and you have just voted to fire your CEO.

You have worked with Sam Altman for eight years. You have watched him raise billions of dollars, negotiate with governments, and describe the existential stakes of your work with a clarity that made you feel, each time, that the urgency was real. You have also watched him operate in ways that made you uncertain about what was real and what was performance.

Now your phone is vibrating with messages from colleagues who are confused, angry, frightened. Greg Brockman has resigned. Microsoft's general counsel is on the line. The board chair is preparing a statement. And somewhere — you are not sure where — Sam Altman is reading the news of his own termination, which he learned about in a Google Meet call that lasted seventeen minutes.

You will sign a letter supporting his return in thirty-six hours. You will tell yourself it is because you underestimated the consequences. What you will not say — what you will perhaps not let yourself think — is that the machine you built together is now more powerful than either of you. And that Sam Altman understood this before anyone.

The five days between Altman's firing and his reinstatement are the most documented and least understood episode in Silicon Valley history. We know the sequence of events. We do not know what was said in the private calls, what was promised, what was threatened. We know that Microsoft's Satya Nadella announced that Altman would lead a new Microsoft AI division — an announcement that appears to have been partly strategic, partly genuine, and enormously effective as leverage.

We know that Altman returned. We know the board members who fired him resigned. We know that the new board — reconstituted with figures more sympathetic to the company's commercial direction — includes no one who voted for his removal.

Lying. The Ilya Memos listed it first. This is what was in them.

Primary source — internal document · Fall 2023
Document
From: I. Sutskever
To: [Board — disappearing message]
Re: Sam exhibits a consistent pattern of behavior across multiple domains
IV
Act IV
The Reckoning
Sources: New Yorker · TIME · The New York Times · Senate testimony · OpenAI founding documents
Should one person be trusted with the most powerful technology in human history?
You’ve read the evidence. Vote again.
Yes — 23%
No — 77%