Why Cohere’s Ex-AI Research Lead Is Betting Against the Scaling Race

AI labs are racing to build data centers as large as Manhattan, each costing billions of dollars and consuming as much power as a small city. The effort is driven by a deep belief in “scaling” — the idea that adding more computing power to existing AI training methods will eventually yield superintelligent systems capable of performing all kinds of tasks.

But a growing chorus of AI researchers say the scaling of large language models may be reaching its limits, and that other breakthroughs may be needed to improve AI performance.

That’s the bet Sara Hooker, Cohere’s former VP of AI Research and a Google Brain alumna, is taking with her new startup, Adaption Labs. She co-founded the company with fellow Cohere and Google veteran Sudip Roy, and it’s built on the idea that scaling LLMs has become an inefficient way to squeeze more performance out of AI models. Hooker, who left Cohere in August, quietly announced the startup this month to start recruiting more broadly.

I'm starting a new project.

Working on what I consider to be the most important problem: building reasoning machines that adapt and continuously learn.

We have an incredibly talent-dense founding team + are hiring for engineering, ops, design.

Join us: https://t.co/eKlfWAfuRy

— Sara Hooker (@sarahookr) October 7, 2025

In an interview with TechCrunch, Hooker says Adaption Labs is building AI systems that can continuously adapt and learn from their real-world experiences, and do so highly efficiently. She declined to share details about the methods behind this approach or whether the company relies on LLMs or another architecture.

“There is a turning point now where it’s very clear that the formula of just scaling these models — scaling-pilled approaches, which are attractive but extremely boring — hasn’t produced intelligence that is able to navigate or interact with the world,” said Hooker.

Adapting is the “heart of learning,” according to Hooker. For example, stub your toe when you walk past your dining room table, and you’ll learn to step more carefully around it next time. AI labs have tried to capture this idea through reinforcement learning (RL), which allows AI models to learn from their mistakes in controlled settings. However, today’s RL methods don’t help AI models in production — meaning systems already being used by customers — learn from their mistakes in real time. They just keep stubbing their toe.

Some AI labs offer consulting services to help enterprises fine-tune their AI models to their custom needs, but it comes at a price. OpenAI reportedly requires customers to spend upwards of $10 million with the company before it will offer its consulting services on fine-tuning.

“We have a handful of frontier labs that determine this set of AI models that are served the same way to everyone, and they’re very expensive to adapt,” said Hooker. “And actually, I think that doesn’t need to be true anymore, and AI systems can very efficiently learn from an environment. Proving that will completely change the dynamics of who gets to control and shape AI, and really, who these models serve at the end of the day.”

Adaption Labs is the latest sign that the industry’s faith in scaling LLMs is wavering. A recent paper from MIT researchers found that the world’s largest AI models may soon show diminishing returns. The vibes in San Francisco seem to be shifting, too. The AI world’s favorite podcaster, Dwarkesh Patel, recently hosted some unusually skeptical conversations with prominent AI researchers.

Richard Sutton, a Turing Award winner regarded as “the father of RL,” told Patel in September that LLMs can’t truly scale because they don’t learn from real-world experience. This month, early OpenAI employee Andrej Karpathy told Patel he had reservations about the long-term potential of RL to improve AI models.

These types of fears aren’t unprecedented. In late 2024, some AI researchers raised concerns that scaling AI models through pretraining — in which AI models learn patterns from massive datasets — was hitting diminishing returns. Until then, pretraining had been the secret sauce for OpenAI and Google to improve their models.

Those pretraining scaling concerns are now showing up in the data, but the AI industry has found other ways to improve models. In 2025, breakthroughs around AI reasoning models, which take additional time and computational resources to work through problems before answering, have pushed the capabilities of AI models even further.

AI labs seem convinced that scaling up RL and AI reasoning models is the new frontier. OpenAI researchers previously told TechCrunch that they developed their first AI reasoning model, o1, because they thought it would scale up well. Meta and Periodic Labs researchers recently released a paper exploring how RL could scale performance further — a study that reportedly cost more than $4 million, underscoring how expensive current approaches remain.

Adaption Labs, by contrast, aims to find the next breakthrough, and prove that learning from experience can be far cheaper. The startup was in talks to raise a $20 million to $40 million seed round earlier this fall, according to three investors who reviewed its pitch decks. They say the round has since closed, though the final amount is unclear. Hooker declined to comment.

“We’re set up to be very ambitious,” said Hooker, when asked about her investors.

Hooker previously led Cohere Labs, where she trained small AI models for enterprise use cases. Compact AI systems now routinely outperform their larger counterparts on coding, math, and reasoning benchmarks — a trend Hooker wants to keep pushing on.

She also built a reputation for broadening access to AI research globally, hiring research talent from underrepresented regions such as Africa. While Adaption Labs will open a San Francisco office soon, Hooker says she plans to hire worldwide.

If Hooker and Adaption Labs are right about the limitations of scaling, the implications could be huge. Billions have already been invested in scaling LLMs, with the assumption that bigger models will lead to general intelligence. But it’s possible that true adaptive learning could prove not only more powerful — but far more efficient.

Marina Temkin contributed reporting.