DeepMind's AI Co-Mathematician doubles FrontierMath scores

PLUS: Nvidia tops $40B in AI equity bets & Anthropic signs $1.8B Akamai compute deal. Mistral AI hits 20x ARR growth in 2026, Gemini 3.1 Flash-Lite rolls out to GA.

In today’s agenda:

1️⃣ Google DeepMind unveils an agentic math system built on Gemini 3.1 that topped the FrontierMath Tier 4 leaderboard at 48% and helped an Oxford professor solve an open problem

2️⃣ Nvidia has committed over $40 billion in equity investments in 2026, positioning itself as the AI ecosystem's dominant financial backer beyond chip supply

3️⃣ Anthropic signs a seven-year, $1.8 billion deal with Akamai for cloud compute capacity amid surging demand for Claude

  • Mistral AI achieved 20x ARR growth in one year and is on track to cross $1 billion in 2026, fueled by EU sovereignty demand

  • Google ships Gemini 3.1 Flash-Lite into general availability on Google Cloud, targeting ultra-low latency production workloads

MAIN AI UPDATES / 11th May 2026

🧮 DeepMind's AI Co-Mathematician doubles FrontierMath scores 🧮
DeepMind's new agentic math system opens researcher access to AI-powered proof strategies.

Google DeepMind published a paper on its AI Co-Mathematician, an agentic system built on Gemini 3.1 designed to help mathematicians tackle unsolved problems. Modeled after AI coding environments like Claude Code, it employs a coordinator agent that breaks research into parallel workstreams, with sub-agents writing code, searching literature, and attempting proofs. Oxford's Marc Lackenby resolved an open problem in the Kourovka Notebook after spotting a clever proof strategy inside one of the system's rejected outputs. On Epoch AI's FrontierMath Tier 4 benchmark, the system topped the leaderboard at 48%, more than doubling Gemini 3.1 Pro's raw 19% score — a capability jump with real implications for how frontier math research gets done.

💰 Nvidia tops $40B in AI equity bets 💰
Nvidia's $40 billion investment spree locks in hardware integration across the AI stack.

Nvidia has committed over $40 billion in equity investments in 2026 alone, aggressively financing the entire AI supply chain to ensure it runs on Nvidia hardware. The company, whose stock has surged more than 11-fold in four years thanks to the global GPU scramble, is now positioning itself as a dominant financial backer — not just a chip supplier — in the AI ecosystem. This strategy secures long-term hardware lock-in across cloud providers, startups, and infrastructure players, giving Nvidia unprecedented leverage over the industry's direction. The move reflects Nvidia's transition from pure component manufacturer to investor-operator shaping AI's architecture at every layer.

☁️ Anthropic signs $1.8B Akamai compute deal ☁️
Anthropic races to scale compute access with a landmark $1.8 billion Akamai partnership.

Anthropic has signed a seven-year, $1.8 billion deal with Akamai for cloud compute capacity as it scrambles to address widespread complaints about Claude usage limits. This deal comes alongside recently struck or expanded agreements with CoreWeave, Amazon, Google, Broadcom, and xAI — all within May alone. The aggressive multi-provider compute strategy highlights the extraordinary demand Anthropic faces and its determination to scale inference infrastructure rapidly. For Akamai, it represents a landmark AI customer win and validation of its expanding cloud platform. Compute scarcity remains a defining bottleneck for frontier labs, and model distribution increasingly depends on raw infrastructure spending.

INTERESTING TO KNOW

🇪🇺 Mistral AI hits 20x ARR growth in 2026 🇪🇺

Mistral AI achieved 20x growth in annual recurring revenue over the past year and is expected to cross $1 billion in ARR in 2026, with its pricing and integration strategy built around sovereignty and efficiency for regulated enterprises. Its customer base skews toward multinational organizations that prioritize jurisdictional control and are wary of vendor concentration risk — a profile common among European enterprises navigating EU data sovereignty concerns. The trajectory suggests the AI market rewards structural positioning, not just raw model scale.

⚡ Gemini 3.1 Flash-Lite rolls out to GA ⚡

Google launched Gemini 3.1 Flash-Lite into general availability on Google Cloud, targeting ultra-low-latency, high-volume production workloads in sectors like software engineering and financial services. The model delivers p95 latency of around 1.8 seconds at reduced pricing, competing directly with lightweight offerings from OpenAI and Anthropic. The rollout underscores the industry's shift toward specialized, cost-optimized models alongside flagship frontier systems.

📩 Have questions or feedback? Just reply to this email; we’d love to hear from you!

🔗 Stay connected: