The Fixer’s Dilemma: Chris Lehane and OpenAI’s Impossible Mission


Chris Lehane is one of the best in the business at making bad news disappear. Al Gore’s press secretary during the Clinton years, Airbnb’s chief crisis manager through every regulatory nightmare from here to Brussels – Lehane knows how to spin. Now he’s two years into what might be his most impossible gig yet: as OpenAI’s VP of global policy, his job is to convince the world that OpenAI genuinely gives a damn about democratizing artificial intelligence while the company increasingly behaves like, well, every other tech giant that’s ever claimed to be different.

I had 20 minutes with him on stage at the Elevate conference in Toronto earlier this week – 20 minutes to get past the talking points and into the real contradictions eating away at OpenAI’s carefully constructed image. It wasn’t easy or entirely successful. Lehane is genuinely good at his job. He’s likable. He sounds reasonable. He admits uncertainty. He even talks about waking up at 3 a.m. worried about whether any of this will actually benefit humanity.

But good intentions don’t mean much when your company is subpoenaing critics, draining economically depressed towns of water and electricity, and bringing dead celebrities back to life to assert your market dominance.

The company’s Sora problem is really at the root of everything else. The video generation tool launched last week with copyrighted material apparently baked right into it. It was a bold move for a company already getting sued by the New York Times, the Toronto Star, and half the publishing industry. From a business and marketing standpoint, it was also brilliant. The invite-only app soared to the top of the App Store as people created digital versions of themselves; OpenAI CEO Sam Altman; characters like Pikachu, Mario, and Cartman of “South Park”; and dead celebrities like Tupac Shakur.

Asked what drove OpenAI’s decision to launch this newest version of Sora with these characters, Lehane gave me the standard pitch: Sora is a “general purpose technology” like electricity or the printing press, democratizing creativity for people without talent or resources. Even he – a self-described creative zero – can make videos now, he said on stage.

What he danced around is that OpenAI initially “let” rights holders opt out of having their work used to train Sora, which is not how copyright use typically works. Then, after OpenAI noticed that people really liked using copyrighted images (of course they did), it “evolved” toward an opt-in model. That’s not really iterating. That’s testing how much you can get away with. (And by the way, though the Motion Picture Association made some noise last week about legal threats, OpenAI appears to have gotten away with quite a lot.)

Naturally, the situation brings to mind the aggravation of publishers who accuse OpenAI of training on their work without sharing the financial spoils. When I pressed Lehane about publishers getting cut out of the economics, he invoked fair use, that American legal doctrine that’s supposed to balance creator rights against public access to knowledge. He called it the secret weapon of U.S. tech dominance.


Maybe. But I’d recently interviewed Al Gore – Lehane’s old boss – and realized anyone could simply ask ChatGPT about it instead of reading my piece on TechCrunch. “It’s ‘iterative,’” I said, “but it’s also a replacement.”

For the first time, Lehane dropped his spiel. “We’re all going to need to figure this out,” he said. “It’s really glib and easy to be here on stage and say we need to figure out new economic revenue models. But I think we will.” (We’re making it up as we go, in short.)

Then there’s the infrastructure question nobody wants to answer honestly. OpenAI is already operating a data center campus in Abilene, Texas, and recently broke ground on a massive data center in Lordstown, Ohio, in partnership with Oracle and SoftBank. Lehane has likened access to AI to the advent of electricity – saying those who accessed it last are still playing catch-up – yet OpenAI’s Stargate project is apparently targeting some of those same economically challenged places as spots to set up facilities with their massive appetites for water and electricity.

Asked during our sit-down whether these communities will benefit or simply foot the bill, Lehane went to gigawatts and geopolitics. OpenAI needs about a gigawatt of energy per week, he noted. China brought on 450 gigawatts last year plus 33 nuclear facilities. If democracies want democratic AI, they have to compete. “The optimist in me says this will modernize our energy systems,” he’d said, painting a picture of a re-industrialized America with transformed power grids.

It was inspiring. But it was not an answer about whether people in Lordstown and Abilene are going to watch their utility bills spike while OpenAI generates videos of John F. Kennedy and The Notorious B.I.G. (Video generation is the most energy-intensive AI out there.)

Which brought me to my most uncomfortable example. Zelda Williams spent the day before our interview begging strangers on Instagram to stop sending her AI-generated videos of her late father, Robin Williams. “You’re not making art,” she wrote. “You’re making disgusting, over-processed hotdogs out of the lives of human beings.”

When I asked how the company reconciles this kind of intimate harm with its mission, Lehane answered by talking about processes, including responsible design, testing frameworks, and government partnerships. “There is no playbook for this stuff, right?”

Lehane showed vulnerability in some moments, saying that he wakes up at 3 a.m. every night, worried about democratization, geopolitics, and infrastructure. “There’s tremendous responsibilities that come with this.”

Whether or not those moments were designed for the audience, I believe him. Indeed, I left Toronto thinking I’d watched a master class in political messaging – Lehane threading an impossible needle while dodging questions about company decisions that, for all I know, he doesn’t even agree with. Then Friday happened.

Nathan Calvin, a lawyer who works on AI policy at a nonprofit advocacy organization, Encode AI, revealed that at the same time I was talking with Lehane in Toronto, OpenAI had sent a sheriff’s deputy to his house in Washington, D.C., during dinner to serve him a subpoena. They wanted his private messages with California legislators, college students, and former OpenAI employees.

Calvin is accusing OpenAI of intimidation tactics around a new piece of AI regulation, California’s SB 53. He says the company weaponized its legal battle with Elon Musk as a pretext to target critics, implying Encode was secretly funded by Musk. In fact, Calvin says he fought OpenAI’s opposition to California’s SB 53, an AI safety bill, and that when he saw the company claim it “worked to improve the bill,” he “literally laughed out loud.” In a social media thread, he went on to call Lehane specifically the “master of the political dark arts.”

In Washington, that might be a compliment. At a company like OpenAI, whose mission is “to build AI that benefits all of humanity,” it sounds like an indictment.

What matters much more is that even OpenAI’s own people are conflicted about what they’re becoming.

As my colleague Max reported last week, a number of current and former employees took to social media after Sora 2 was released, expressing their misgivings, including Boaz Barak, an OpenAI researcher and Harvard professor, who wrote of Sora 2 that it is “technically amazing but it’s premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes.”

On Friday, Josh Achiam – OpenAI’s head of mission alignment – tweeted something even more remarkable about Calvin’s accusation. Prefacing his comments by saying they were “possibly a risk to my whole career,” Achiam went on to write of OpenAI: “We can’t be doing things that make us into a frightening power instead of a virtuous one. We have a duty to and a mission for all of humanity. The bar to pursue that duty is remarkably high.”

That’s . . . something. An OpenAI executive publicly questioning whether his company is becoming “a frightening power instead of a virtuous one” isn’t on a par with a competitor taking shots or a reporter asking questions. This is someone who chose to work at OpenAI, who believes in its mission, and who is now acknowledging a crisis of conscience despite the professional risk.

It’s a crystallizing moment. You can be the best political operative in tech, a master at navigating impossible situations, and still end up working for a company whose actions increasingly conflict with its stated values – contradictions that may only intensify as OpenAI races toward artificial general intelligence.

It has me thinking that the real question isn’t whether Chris Lehane can sell OpenAI’s mission. It’s whether others – including, critically, the other people who work there – still believe it.
