I first met Sam Altman in the summer of 2019, days after Microsoft agreed to invest $1 billion in his 3-year-old startup, OpenAI. At his suggestion, we had dinner at a small, decidedly modern restaurant not far from his home in San Francisco.
Halfway through the meal, he held up his iPhone so I could see the contract he had spent the past several months negotiating with one of the world’s largest tech companies. It said Microsoft’s billion-dollar investment would help OpenAI build what was called artificial general intelligence, or AGI, a machine that could do anything the human brain could do.
Later, as Altman sipped a sweet wine in lieu of dessert, he compared his company to the Manhattan Project. As if he were chatting about tomorrow’s weather forecast, he said the U.S. effort to build an atomic bomb during World War II had been a “project on the scale of OpenAI — the level of ambition we aspire to.”
He believed AGI would bring the world prosperity and wealth like no one had ever seen. He also worried that the technologies his company was building could cause serious harm — spreading disinformation, undercutting the job market. Or even destroying the world as we know it.
“I try to be upfront,” he said. “Am I doing something good? Or really bad?”
In 2019, this sounded like science fiction.
In 2023, people are beginning to wonder if Altman was more prescient than they realized.
Now that OpenAI has released an online chatbot called ChatGPT, anyone with an internet connection is a click away from technology that will answer burning questions about organic chemistry, write a 2,000-word term paper on Marcel Proust and his madeleine, or even generate a computer program that drops digital snowflakes across a laptop screen — all with a skill that seems human.
As people realize that this technology is also a way of spreading falsehoods or even persuading people to do things they should not do, some critics are accusing Altman of reckless behavior.
This past week, more than a thousand AI experts and tech leaders called on OpenAI and other companies to pause their work on systems such as ChatGPT, saying they present “profound risks to society and humanity.”
And yet, when people act as if Altman has nearly realized his long-held vision, he pushes back.
“The hype over these systems — even if everything we hope for is right long term — is totally out of control for the short term,” he told me on a recent afternoon. There is time, he said, to better understand how these systems will ultimately change the world.
Many industry leaders, AI researchers and pundits see ChatGPT as a fundamental technological shift, as significant as the creation of the web browser or the iPhone. But few can agree on the future of this technology.
Some believe it will deliver a utopia where everyone has all the time and money ever needed. Others believe it could destroy humanity. Still others spend much of their time arguing that the technology is never as powerful as everyone says it is, insisting that neither nirvana nor doomsday is as close as it might seem.
Altman, a slim, boyish-looking, 37-year-old entrepreneur and investor from the suburbs of St. Louis, sits calmly in the middle of it all. As CEO of OpenAI, he somehow embodies each of these seemingly contradictory views, hoping to balance the myriad possibilities as he moves this strange, powerful, flawed technology into the future.
That means he is often criticized from all directions. But those closest to him believe this is as it should be. “If you’re equally upsetting both extreme sides, then you’re doing something right,” said OpenAI’s president, Greg Brockman.
To spend time with Altman is to understand that Silicon Valley will push this technology forward even though it is not quite sure what the implications will be. At one point during our dinner in 2019, he paraphrased Robert Oppenheimer, leader of the Manhattan Project, who believed the atomic bomb was an inevitability of scientific progress. “Technology happens because it is possible,” he said. (Altman pointed out that, as fate would have it, he and Oppenheimer share a birthday.)
He believes that AI will happen one way or another, that it will do wonderful things that even he can’t yet imagine and that we can find ways of tempering the harm it may cause.
It’s an attitude that mirrors Altman’s own trajectory. His life has been a fairly steady climb toward greater prosperity and wealth, driven by an effective set of personal skills — not to mention some luck. It makes sense that he believes that the good thing will happen rather than the bad.
But if he’s wrong, there’s an escape hatch: In its contracts with investors such as Microsoft, OpenAI’s board reserves the right to shut the technology down at any time.
The Vegetarian Cattle Farmer
The warning, sent with the driving directions, was “Watch out for cows.”
Altman’s weekend home is a ranch in Napa, California, where farmhands grow wine grapes and raise cattle.
During the week, Altman and his partner, Oliver Mulherin, an Australian software engineer, share a house on Russian Hill in the heart of San Francisco. But as Friday arrives, they move to the ranch, a quiet spot among the rocky, grass-covered hills. Their 25-year-old house has been remodeled to look both folksy and contemporary. The Cor-Ten steel that covers the outside walls is rusted to perfection.
As you approach the property, the cows roam across both the green fields and gravel roads.
Altman is a man who lives with contradictions, even at his getaway home: a vegetarian who raises beef cattle. He says his partner likes them.
On a recent afternoon walk at the ranch, we stopped to rest at the edge of a small lake. Looking out over the water, we discussed, once again, the future of AI.
His message had not changed much since 2019. But his words were even bolder.
He said his company was building technology that would “solve some of our most pressing problems, really increase the standard of life and also figure out much better uses for human will and creativity.”
He was not exactly sure what problems it will solve, but he argued that ChatGPT showed the first signs of what is possible. Then, with his next breath, he worried that the same technology could cause serious harm if it wound up in the hands of some authoritarian government.
Altman tends to describe the future as if it were already here. And he does so with an optimism that seems misplaced in today’s world. At the same time, he has a way of quickly nodding to the other side of the argument.
Kelly Sims, a partner with venture capital firm Thrive Capital who worked with Altman as a board adviser to OpenAI, said it was like he was constantly arguing with himself.
“In a single conversation,” she said, “he is both sides of the debate club.”
He is very much a product of the Silicon Valley that grew so swiftly and so gleefully in the mid-2010s. As president of Y Combinator, a Silicon Valley startup accelerator and seed investor, from 2014 to 2019, he advised an endless stream of new companies — and was shrewd enough to personally invest in several that became household names, including Airbnb, Reddit and Stripe. He takes pride in recognizing when a technology is about to reach exponential growth — and then riding that curve into the future.
But he is also the product of a strange, sprawling online community that began to worry, around the same time Altman came to Silicon Valley, that AI would one day destroy the world. Called rationalists or effective altruists, members of this movement were instrumental in the creation of OpenAI.
The question is whether the two sides of Altman are ultimately compatible: Does it make sense to ride that curve if it could end in disaster? Altman is certainly determined to see how it all plays out.
He is not necessarily motivated by money. Like many personal fortunes in Silicon Valley that are tied up in a wide variety of public and private companies, Altman’s wealth is not well documented. But as we strolled across his ranch, he told me, for the first time, that he holds no stake in OpenAI. The only money he stands to make from the company is a yearly salary of about $65,000 — “whatever the minimum for health insurance is,” he said — and a tiny slice of an old investment in the company by Y Combinator.
His longtime mentor, Paul Graham, founder of Y Combinator, explained Altman’s motivation like this: “Why is he working on something that won’t make him richer? One answer is that lots of people do that once they have enough money, which Sam probably does. The other is that he likes power.”
‘What Bill Gates Must Have Been Like’
In the late 1990s, the John Burroughs School, a private prep school named for the 19th-century American naturalist and philosopher, invited an independent consultant to observe and critique daily life on its campus in the suburbs of St. Louis.
The consultant’s review included one significant criticism: The student body was rife with homophobia.
In the early 2000s, Altman, a 17-year-old student at John Burroughs, set out to change the school’s culture, individually persuading teachers to post “Safe Space” signs on their classroom doors as a statement in support of gay students such as him. He came out during his senior year and said the St. Louis of his teenage years was not an easy place to be gay.
Georgeann Kepchar, who taught the school’s Advanced Placement computer science course, saw Altman as one of her most talented computer science students — and one with a rare knack for pushing people in new directions.
“He had creativity and vision, combined with the ambition and force of personality to convince others to work with him on putting his ideas into action,” she said. Altman also told me that he had asked one particularly homophobic teacher to post a “Safe Space” sign just to troll the guy.
Graham, who worked alongside Altman for a decade, saw the same persuasiveness in the man from St. Louis.
“He has a natural ability to talk people into things,” Graham said. “If it isn’t inborn, it was at least fully developed before he was 20. I first met Sam when he was 19, and I remember thinking at the time: ‘So this is what Bill Gates must have been like.’”
The two got to know each other in 2005 when Altman applied for a spot in Y Combinator’s first class of startups. He won a spot — which included $10,000 in seed funding — and after his sophomore year at Stanford University, he dropped out to build his new company, Loopt, a social media startup that let people share their location with friends and family.
He now says that during his short stay at Stanford, he learned more from the many nights he spent playing poker than he did from most of his other college activities. After his freshman year, he worked in the AI and robotics lab overseen by professor Andrew Ng, who would go on to found the flagship AI lab at Google. But poker taught Altman how to read people and evaluate risk.
It showed him “how to notice patterns in people over time, how to make decisions with very imperfect information, how to decide when it was worth pain, in a sense, to get more information,” he told me while strolling across his ranch in Napa. “It’s a great game.”
After selling Loopt for a modest return, he joined Y Combinator as a part-time partner. Three years later, Graham stepped down as president of the firm and, to the surprise of many across Silicon Valley, tapped Altman, then 28, as his successor.
Altman is not a coder or an engineer or an AI researcher. He is the person who sets the agenda, puts the teams together and strikes the deals. As the president of Y Combinator, he expanded the firm with near abandon, starting a new investment fund and a new research lab and stretching the number of companies advised by the firm into the hundreds each year.
He also began working on several projects outside the investment firm, including OpenAI, which he founded as a nonprofit in 2015 alongside a group that included Elon Musk. By Altman’s own admission, Y Combinator grew increasingly concerned he was spreading himself too thin.
He resolved to refocus his attention on a project that would, as he put it, have a real impact on the world. He considered politics, but settled on AI.
Altman believed, according to his younger brother Max, that he was one of the few people who could meaningfully change the world through AI research, as opposed to the many people who could do so through politics.
In 2019, just as OpenAI’s research was taking off, Altman grabbed the reins, stepping down as president of Y Combinator to concentrate on a company with fewer than 100 employees that was unsure how it would pay its bills.
Within a year, he had restructured the nonprofit, attaching a for-profit arm. That way, he could pursue the money it would need to build a machine that could do anything the human brain could do.
Raising ‘10 Bills’
In the mid-2010s, Altman shared a three-bedroom, three-bath San Francisco apartment with his boyfriend at the time, his two brothers and their girlfriends. The brothers went their separate ways in 2016 but remained on a group chat, where they spent a lot of time giving one another guff, as only siblings can, his brother Max remembers. Then, one day, Altman sent a text saying he planned to raise $1 billion for his company’s research.
Within a year, he had done so. After running into Satya Nadella, Microsoft’s CEO, at an annual gathering of tech leaders in Sun Valley, Idaho — often called “summer camp for billionaires” — he personally negotiated a deal with Nadella and Microsoft’s chief technology officer, Kevin Scott.
A few years later, Altman texted his brothers again, saying he planned to raise an additional $10 billion — or, as he put it, “10 bills.” By this past January, he had done this, too, signing another contract with Microsoft.
Brockman, OpenAI’s president, said Altman’s talent lies in understanding what people want. “He really tries to find the thing that matters most to a person — and then figure out how to give it to them,” Brockman told me. “That is the algorithm he uses over and over.”
The agreement has put OpenAI and Microsoft at the center of a movement that is poised to remake everything from search engines to email applications to online tutors. And all this is happening at a pace that surprises even those who have been tracking this technology for decades.
Amid the frenzy, Altman is his usual calm self — although he does say he uses ChatGPT to help him quickly summarize the avalanche of emails and documents coming his way.
Scott believes that Altman will ultimately be discussed in the same breath as Gates, Steve Jobs and Mark Zuckerberg.
“These are people who have left an indelible mark on the fabric of the tech industry and maybe the fabric of the world,” he said. “I think Sam is going to be one of those people.”
The trouble is, unlike the days when Apple, Microsoft and Meta were getting started, people are well aware of how technology can transform the world — and how dangerous it can be.
The Man in the Middle
In March, Altman tweeted out a selfie, bathed by a pale-orange flash, that showed him smiling between a blond woman giving a peace sign and a bearded guy wearing a fedora.
The woman was Canadian singer Grimes, Musk’s former partner, and the hat guy was Eliezer Yudkowsky, a self-described AI researcher who believes, perhaps more than anyone, that AI could one day destroy humanity.
The selfie — snapped by Altman at a party his company was hosting — shows how close he is to this way of thinking. But he has his own views on the dangers of AI.
Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, another lab intent on building artificial general intelligence.
He also helped spawn the vast online community of rationalists and effective altruists who are convinced that AI is an existential risk. This surprisingly influential group is represented by researchers inside many of the top AI labs, including OpenAI. They don’t see this as hypocrisy: Many of them believe that because they understand the dangers more clearly than anyone else, they are in the best position to build this technology.
Altman believes that effective altruists have played an important role in the rise of AI, alerting the industry to the dangers. He also believes they exaggerate these dangers.
As OpenAI developed ChatGPT, many others, including Google and Meta, were building similar technology. But it was Altman and OpenAI that chose to share the technology with the world.
Many in the field have criticized the decision, arguing that this set off a race to release technology that gets things wrong, makes things up and could soon be used to rapidly spread disinformation. On Friday, the Italian government temporarily banned ChatGPT in the country, citing privacy concerns and worries over minors being exposed to explicit material.
Altman argues that rather than developing and testing the technology entirely behind closed doors before releasing it in full, it is safer to gradually share it so everyone can better understand risks and how to handle them.
He told me that it would be a “very slow takeoff.”
When I asked Altman if a machine that could do anything the human brain could do would eventually drive the price of human labor to zero, he demurred. He said he could not imagine a world where human intelligence was useless.
If he’s wrong, he thinks he can make it up to humanity.
He rebuilt OpenAI as what he called a capped-profit company. This allowed him to pursue billions of dollars in financing by promising a profit to investors such as Microsoft. But these profits are capped, and any additional revenue will be pumped back into the OpenAI nonprofit that was founded back in 2015.
His grand idea is that OpenAI will capture much of the world’s wealth through the creation of AGI and then redistribute this wealth to the people. In Napa, as we sat chatting beside the lake at the heart of his ranch, he tossed out several figures — $100 billion, $1 trillion, $100 trillion.
If AGI does create all that wealth, he is not sure how the company will redistribute it. Money could mean something very different in this new world.
But as he once told me: “I feel like the AGI can help with that.”
c.2023 The New York Times Company