Anthropic’s Super Bowl commercial, one of four ads the AI lab dropped on Wednesday, opens with the word “BETRAYAL” splashed boldly across the screen. The camera pans to a man earnestly asking a chatbot (obviously intended to depict ChatGPT) for advice on how to talk to his mom.
The bot, portrayed by a blonde woman, offers some classic bits of advice. Start by listening. Try a nature walk! And then twists into an ad for a fictitious (we hope!) cougar-dating site called Golden Encounters. Anthropic finishes the spot by saying that while ads are coming to AI, they won’t be coming to its own chatbot, Claude.
Another one features a slight young man looking for advice on building a six-pack. After offering his height, age, and weight, the bot serves him an ad for height-boosting insoles.
The Anthropic commercials are cleverly aimed at OpenAI’s users, after that company’s recent announcement that ads will be coming to ChatGPT’s free tier. And they caused an immediate stir, spawning headlines that Anthropic “mocks,” “skewers” and “dunks” on OpenAI.
They are funny enough that even Sam Altman admitted on X that he laughed at them. But he clearly didn’t really find them funny. They inspired him to write a novella-sized rant that devolved into calling his rival “dishonest” and “authoritarian.”
“First, the good part of the Anthropic ads: they are funny, and I laughed,” Altman wrote.
“But I wonder why Anthropic would go for something so clearly dishonest. Our most important principle for ads says that we won’t do exactly this; we would obviously never run ads in the way Anthropic…”
In that post, Altman explains that an ad-supported tier is intended to shoulder the burden of offering free ChatGPT to its many millions of users. ChatGPT is still the most popular chatbot by a large margin.
But the OpenAI CEO insisted they were “dishonest” in implying that ChatGPT will twist a conversation to insert an ad (and possibly for an off-color product, to boot). “We would obviously never run ads in the way Anthropic depicts them,” Altman wrote in the social media post. “We are not stupid and we know our users would reject that.”
Indeed, OpenAI has promised ads will be separate, labeled, and will never influence a chat. But the company has also said it is planning on making them conversation-specific, which is the central allegation of Anthropic’s ads. As OpenAI explained in its blog: “We plan to test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation.”
Altman then went on to fling some equally questionable assertions at his rival. “Anthropic serves an expensive product to rich people,” he wrote. “We also feel strongly that we need to bring AI to billions of people who can’t pay for subscriptions.”
But Claude has a free chat tier, too, with subscriptions at $0, $17, $100, and $200. ChatGPT’s tiers are $0, $8, $20, and $200. One could argue the subscription tiers are fairly equivalent.
Altman also alleged in his post that “Anthropic wants to control what people do with AI.” He argues it blocks use of Claude Code by “companies they don’t like,” such as OpenAI, and said Anthropic tells people what they can and can’t use AI for.
True, Anthropic’s whole marketing pitch since day one has been “responsible AI.” The company was founded by two OpenAI alums, after all, who claimed they grew alarmed about AI safety when they worked there.
Still, both chatbot companies have usage policies, AI guardrails, and talk about AI safety. And while OpenAI allows ChatGPT to be used for erotica and Anthropic does not, it, too, has determined some content should be blocked, particularly in regards to mental health.
Yet Altman took this Anthropic-tells-you-what-to-do argument to an extreme when he accused Anthropic of being “authoritarian.”
“One authoritarian company won’t get us there on their own, to say nothing of the other obvious risks. It is a dark path,” he wrote.
Using “authoritarian” in a rant over a cheeky Super Bowl ad is misplaced, at best. It’s particularly tactless when considering the current geopolitical situation, in which protesters around the world have been killed by agents of their own government. While business rivals have been duking it out in ads since the beginning of time, clearly Anthropic hit a nerve.