AI Goes Supersonic: Anthropic's Power Surge, OpenAI's Record Raise & the Open-Source Coding War
⚡ Anthropic Taps SpaceX's Colossus — 300 MW of Compute and Counting
In one of the most significant infrastructure moves in AI history, Anthropic announced on May 6 that it has secured a deal to tap all available compute capacity at SpaceX's Colossus 1 data center in Memphis, Tennessee — gaining immediate access to more than 300 megawatts of power. The move is a direct response to an extraordinary period of growth: Anthropic's compute demands surged by 80 times in the first quarter of 2026 alone, straining reliability for Claude Pro and Max users — a problem that became public in April. The SpaceX deal effectively buys the company the headroom it needs to operate at frontier scale while its own infrastructure continues to expand.
The implications extend well beyond Memphis. As part of ongoing discussions, Anthropic and SpaceX are reportedly exploring the deployment of orbital compute nodes — essentially data centers in low-Earth orbit — to provide continuous, globally distributed inference capacity. While the orbital ambition remains forward-looking, the Colossus agreement is live and represents a fundamental shift in how frontier AI labs are thinking about compute sourcing: not just hyperscalers, but aerospace companies and power-adjacent industrial partners are now key players in the AI infrastructure stack.
💰 OpenAI Closes $122B Round at $852B — Anthropic's ARR Quietly Overtakes It
OpenAI has completed what is now officially the largest private fundraising event in history: a $122 billion round that values the company at $852 billion on a post-money basis. CFO Sarah Friar confirmed in an investor briefing that a portion of any future IPO shares will be reserved for retail investors, signaling that OpenAI is inching toward a possible public listing in the second half of 2026. The round dwarfs all previous records and cements OpenAI's position as the most valuable private company in the world by a substantial margin.
But buried beneath the headline numbers is a striking competitive inversion. Anthropic has surpassed OpenAI in annualized revenue for the first time, reaching a run rate of $30 billion versus OpenAI's $24 billion. Analysts attribute the reversal primarily to enterprise adoption of agentic workflows — complex, multi-step AI pipelines that companies are deploying in production environments — where Anthropic's Claude has found stronger product-market fit than consumer-facing chat products. The contrast tells the story of two divergent strategies: OpenAI expanding its valuation ceiling while Anthropic quietly dominates the enterprise revenue floor.
"A portion of IPO shares will be reserved for retail investors" — Sarah Friar, OpenAI CFO, May 2026
🔍 Google Kills Project Mariner — Browser AI Is Now Built Into Gemini
Google has officially shut down Project Mariner, its experimental AI-powered browser agent that could autonomously perform web tasks — booking hotel rooms, organizing email, completing online forms — on a user's behalf. The company stated that the underlying technology has been absorbed into mainstream Google products, specifically Gemini Agent and AI Mode in Google Search. Launched as a DeepMind research project in late 2024 and expanded to limited enterprise users throughout 2025, Mariner was one of the first publicly visible attempts by a major technology company to deploy a production-grade autonomous web agent.
The shutdown, while framed by Google as a graduation rather than a failure, reflects a broader pattern in Big Tech's approach to agentic AI: prototype externally, then absorb into flagship product lines at scale. The move also arrives at a symbolically charged moment — just as competitors including Anthropic (with its Computer Use feature) and OpenAI (with Operator and the Codex developer platform) are doubling down on standalone agent products. Google appears to be betting that embedding agent capabilities into Gemini and AI Mode — products with hundreds of millions of users — will ultimately outperform maintaining a dedicated agent application with a separate distribution challenge.
🇨🇳 Four Chinese Labs, Four Coding Models, 12 Days — Open-Source AI Accelerates
In a remarkable display of accelerating competition, four major Chinese AI laboratories released open-weights coding models within a single 12-day window: Z.ai's GLM-5.1, MiniMax's M2.7, Moonshot's Kimi K2.6, and DeepSeek V4 all landed within days of each other. According to independent benchmarks, all four models reach roughly the same capability ceiling on agentic engineering tasks — a performance tier that previously required Western frontier models — at meaningfully lower inference costs. The coordinated-looking release cadence, though likely coincidental, sent an unambiguous signal about the pace of capability development occurring outside the American AI ecosystem.
Perhaps more strategically disruptive than the benchmark scores is the pricing dynamic. All four models are available as open weights, meaning developers can run them on their own infrastructure without per-token API fees. At capability levels comparable to GPT-4-class and Claude Sonnet-class systems, the cost differential for high-volume enterprise applications is substantial. Western AI labs have responded with increasingly aggressive enterprise pricing tiers, but the open-weight competitive pressure from Chinese labs — which began with DeepSeek's original release in late 2024 — shows no signs of diminishing as 2026 progresses.