Inside The New Yorker's investigation: Why do OpenAI insiders consider Altman untrustworthy?
Original Author: Xiaobing, Shenchao TechFlow
In the autumn of 2023, OpenAI's chief scientist Ilya Sutskever sat at his computer, finishing a 70-page document.
The document was compiled from Slack messages, HR communications, and internal meeting minutes, all to answer a single question: can Sam Altman, the man in charge of what may be the most dangerous technology in human history, be trusted?
Sutskever's answer appears on the first line of the first page, under the heading "Sam exhibits a consistent pattern of behavior..."
The first point: Lying.
Now, two and a half years later, investigative journalists Ronan Farrow and Andrew Marantz have published a sweeping report in The New Yorker. They interviewed more than 100 people, obtained previously undisclosed internal memos, and reviewed over 200 pages of private notes that Anthropic founder Dario Amodei kept during his time at OpenAI. The story these documents piece together is far uglier than the boardroom drama of 2023: how OpenAI turned from a nonprofit created to keep humanity safe into a commercial machine, with nearly every safety barrier dismantled by the same person.
Amodei's conclusion in his notes is even blunter: "The problem with OpenAI is Sam himself."
The "Original Sin" of OpenAI
To understand the weight of this report, one must first understand how unusual a company OpenAI is.
In 2015, Altman and a group of Silicon Valley elites did something almost unprecedented in business history: they set up a nonprofit to develop what may be the most powerful technology humanity has ever built. The board's mandate was explicit: safety takes precedence over the company's success, even over its survival. In plain terms, if OpenAI's AI one day becomes dangerous, the board is obligated to shut the company down itself.
The entire structure is based on one assumption: the person in charge of AGI must be an extremely honest person.
What if that assumption is wrong?
The core bombshell of the report is that 70-page document. Sutskever is no office politician; he is one of the world's top AI scientists. But by 2023 he had become convinced of one thing: Altman was repeatedly lying to executives and the board.
One specific example: in December 2022, Altman assured the board in a meeting that several features of the upcoming GPT-4 had passed safety review. Board member Helen Toner asked to see the approval documents and discovered that two of the most controversial features (user-customized fine-tuning and a personal-assistant deployment) had never been approved by the safety panel at all.
Something even more outrageous happened in India. An employee reported a violation to another board member: Microsoft had released an early version of ChatGPT in India without completing the required safety reviews.
Sutskever recorded another incident in the memo: Altman had told then-CTO Mira Murati that the safety approval process didn't matter much because the company's general counsel had already signed off. When Murati checked with the general counsel, he replied, "I don't know where Sam got that impression."
Amodei's 200 Pages of Private Notes
If Sutskever's document reads like a prosecutor's indictment, Amodei's 200-plus pages of notes read like the diary of a witness at the scene of the crime.
During his years as OpenAI's head of safety, Amodei watched the company give ground, step by step, under commercial pressure. He recorded a key detail from the 2019 Microsoft investment: he had pushed a "merge and assist" clause into OpenAI's charter, meaning that if another company found a safer path to AGI, OpenAI should stop competing and help that company instead. It was the safety guarantee he valued most in the entire deal.
As the deal neared signing, Amodei discovered something: Microsoft had been given veto power over that clause. What does that mean? Even if a competitor one day found a better path, Microsoft could block OpenAI's obligation to assist with a single word. The clause remained on paper, but from the day of signing it was worthless.
Amodei later left OpenAI and founded Anthropic. The rivalry between the two companies is, at bottom, a disagreement over how AI should be developed.
The Disappearing 20% Compute Commitment
One detail in the report, concerning OpenAI's "superalignment" team, is chilling.
In mid-2023, Altman emailed a PhD student researching "deceptive alignment" (AI that behaves well during testing but does as it pleases after deployment), saying he was deeply concerned about the problem and considering a $1 billion global research prize. Encouraged, the student took a leave of absence and joined OpenAI.
Then Altman changed his mind: no external prize; instead, an internal "superalignment" team. The company announced it would dedicate "20% of the compute we've secured to date" to the team, potentially worth more than $1 billion. The announcement was deadly serious, warning that unsolved alignment could lead to "the disempowerment of humanity or even human extinction."
Jan Leike, who was appointed to lead this team, later told reporters that the commitment itself was a very effective "talent retention tool."
And the reality? Four people who worked on or closely with the team said the compute it actually received was only 1% to 2% of the company's total, and on the oldest hardware at that. The team was eventually disbanded, its mission unfulfilled.
When the reporters asked to interview the OpenAI staff responsible for "existential safety" research, the company's PR response was almost comic: "That's not a... real thing."
Altman himself was unruffled. He told the reporters that his "intuition doesn't align well with many traditional AI safety things," and that OpenAI would still pursue "safety projects, or at least projects related to safety."
The Marginalized CFO and the Upcoming IPO
The New Yorker report was only half of that day's bad news. The same day, The Information broke another major story: a serious rift between OpenAI CFO Sarah Friar and Altman.
Friar had privately told colleagues she felt OpenAI was not ready to go public this year, for two reasons: the procedural and organizational work still to be done was too great, and the financial risk of the $600 billion in compute spending Altman has promised over five years was too high. She was not even sure OpenAI's revenue growth could support those commitments.
But Altman wanted to push for an IPO in the fourth quarter of this year.
More outrageous still, Friar no longer reported directly to Altman. Since August 2025 she has reported to Fidji Simo, CEO of OpenAI's applications business. And Simo had just gone on medical leave. Consider the picture: a company racing toward an IPO, a fundamental disagreement between CEO and CFO, a CFO who does not report to the CEO, and the CFO's boss on leave.
Even executives inside Microsoft had had enough, saying Altman "distorts facts, goes back on his word, and constantly overturns agreements." One Microsoft executive went further: "I think there's a reasonable chance he will ultimately be remembered as a con artist on the level of Bernie Madoff or SBF."
The "Two-Faced" Portrait of Altman
A former OpenAI board member described two traits of Altman's to the reporters. It may be the harshest character sketch in the entire report.
Altman, the board member said, possesses an extremely rare combination: an intense desire, in every face-to-face interaction, to please and be liked, paired with an almost sociopathic indifference to the consequences of deceiving people.
Having both traits in one person is extremely rare. But for a salesperson, it is the perfect gift.
One analogy in the report captures it well: Steve Jobs was famous for his "reality distortion field," his ability to make the world believe in his vision. But even Jobs never told customers, "If you don't buy my MP3 player, the people you love will die."
Altman has said similar things about AI.
Why a CEO's Character Problem Is Everyone's Risk
If Altman were merely the CEO of an ordinary tech company, these accusations would amount to juicy business gossip at most. But OpenAI is not ordinary.
By its own account, it is developing what may be the most powerful technology in human history, one that could reshape the global economy and labor market (OpenAI itself just released a policy white paper on AI-driven unemployment) and could also be used to create large-scale biological weapons or launch cyberattacks.
All the safety barriers have been rendered meaningless. The founding nonprofit mission has given way to the rush toward an IPO. The former chief scientist and the former head of safety both deem the CEO "untrustworthy." Partners compare him to SBF. Against that backdrop, what right does this CEO have to decide unilaterally when to release AI models that could change the fate of humanity?
Gary Marcus (NYU professor emeritus and longtime AI-safety advocate) wrote one line after reading the report: if a future OpenAI model could create large-scale biological weapons or launch catastrophic cyberattacks, would you really feel comfortable letting Altman alone decide whether to release it?
OpenAI's response to The New Yorker was terse: "Most of this article is a rehash of previously reported events, relying on anonymous claims and selective anecdotes from sources with obvious personal agendas."
Very much Altman's style of response: address no specific accusation, deny none of the memos' authenticity, question only the motives.
A Money Tree Growing on the Corpse of a Nonprofit
The ten-year story of OpenAI can be outlined as follows:
A group of idealists worried about AI risk created a mission-driven nonprofit. The organization made extraordinary technical breakthroughs. The breakthroughs attracted massive capital. The capital demanded returns. The mission began to give way. The safety team was disbanded. Dissenters were purged. The nonprofit structure was converted into a for-profit one. The board once empowered to shut the company down is now stacked with the CEO's allies. And the company that once promised 20% of its compute to safeguard humanity now has PR staff saying, "That's not a real thing."
Over a hundred firsthand witnesses attach the same label to the protagonist of this story: "unconstrained by the truth."
He is preparing to take this company public, with a valuation exceeding $850 billion.
This article synthesizes public reporting from The New Yorker, Semafor, Tech Brew, Gizmodo, Business Insider, The Information, and other outlets.