OpenAI and Anthropic eye $2 trillion IPO wave

PLUS: Gemini launches Personal Intelligence for US users & Airbnb poaches Meta's GenAI lead as CTO. GPT-5.2-Codex ships via the Responses API, and Mistral releases the Ministral 3 model family.

In today’s agenda:

1️⃣ OpenAI, Anthropic, and SpaceX explore IPOs that could total nearly $2 trillion in combined valuations

2️⃣ Google rolls out Personal Intelligence in beta, connecting Gemini to Gmail, Photos, YouTube, and Search for deeply personalized AI assistance

3️⃣ Airbnb hires Ahmad Al-Dahle, former head of GenAI at Meta, as its new Chief Technology Officer

  • OpenAI releases GPT-5.2-Codex with 400K context window and four reasoning effort levels via the Responses API

  • Mistral publishes Ministral 3 technical report featuring 3B, 8B, and 14B parameter models with Cascade Distillation

MAIN AI UPDATES / 15th January 2026

📈 OpenAI and Anthropic eye $2 trillion IPO wave 📈
The AI industry's biggest labs begin exploring public market access.

OpenAI and Anthropic have reportedly begun early discussions about potential public offerings, joining SpaceX in what could become a historic wave of tech IPOs. The three companies carry a combined valuation approaching $2 trillion: SpaceX at approximately $800 billion, OpenAI at roughly $500 billion, and Anthropic in talks near $350 billion. If all three proceed, the combined capital raised could exceed the total raised by the roughly 200 US IPOs completed last year. This signals a major maturation moment for the AI industry, as its leading labs consider transitioning from private funding rounds to public markets, a shift that could reshape how AI companies are financed, governed, and held accountable.

🤖 Gemini launches Personal Intelligence for US users 🤖
Google's AI assistant now accesses your apps for contextual, personalized responses.

Google has launched Personal Intelligence in beta for US users, a significant new capability that connects Gemini to users' Google apps including Gmail, Photos, YouTube, and Search to deliver deeply personalized AI assistance. The integration allows Gemini to understand a user's personal context across Google services, enabling far more specific and informed responses than generic AI assistants can provide. Users maintain full control over which apps are connected, with privacy safeguards ensuring data remains secure. The rollout begins with Google AI Pro and Ultra subscribers, with broader access planned. This matters for distribution because it deepens user lock-in across the Google ecosystem while setting new expectations for personalized AI.

💼 Airbnb poaches Meta's GenAI lead as CTO 💼
Travel giant recruits top AI talent to accelerate platform integration.

Airbnb has appointed Ahmad Al-Dahle, formerly the head of Generative AI at Meta, as its new Chief Technology Officer. The hire signals Airbnb's strategic pivot toward AI-powered travel experiences as the company expands beyond short-term rentals into broader travel services. Al-Dahle brings extensive AI leadership experience from Meta and Apple, positioning Airbnb to integrate AI chatbots, concierge services, and intelligent recommendations into its platform. The move reflects ongoing competition for top AI talent and demonstrates how traditional tech companies are restructuring leadership to prioritize generative AI capabilities—a critical signal for competitive pressure across the travel industry.

INTERESTING TO KNOW

⚡ GPT-5.2-Codex ships via Responses API ⚡

OpenAI has released GPT-5.2-Codex, an upgraded model optimized for agentic coding tasks, with API access now available. The model features a 400,000-token context window and supports four reasoning effort levels, giving developers fine-grained control over the performance-versus-cost tradeoff. Pricing is set at $1.75 per million input tokens and $14.00 per million output tokens, marking another step in OpenAI's push to dominate AI-assisted development.
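If the new model follows the existing Responses API conventions, calling it from Python should look roughly like the sketch below. The model identifier comes from this announcement; the `reasoning.effort` field and the level name "medium" are assumptions based on how current OpenAI reasoning models expose effort settings, so check the official docs for the exact values.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

resp = client.responses.create(
    model="gpt-5.2-codex",           # model id as stated in the announcement
    reasoning={"effort": "medium"},  # one of the four effort levels (naming assumed)
    input="Write a Python function that merges two sorted lists in O(n) time.",
)

print(resp.output_text)
```

At the listed pricing, a request consuming 10,000 input tokens and 2,000 output tokens would cost roughly 10,000/1,000,000 × $1.75 + 2,000/1,000,000 × $14.00 ≈ $0.05, so the effort setting mostly matters for latency and output-token volume on long agentic runs.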

🇪🇺 Mistral releases Ministral 3 model family 🇪🇺

French AI company Mistral has released Ministral 3, a new family of dense language models in 3B, 8B, and 14B parameter sizes, optimized for deployment in low-resource environments. The models come in base, instruction-tuned, and reasoning variants with image understanding capabilities. A notable technical innovation is Cascade Distillation, which combines distillation and pruning to produce efficient smaller models, and the family is particularly relevant for European enterprises with data sovereignty requirements.
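If the weights are published on Hugging Face like previous Ministral releases, loading an instruct variant should follow the standard transformers pattern. A minimal sketch, using a hypothetical repository id for illustration (check Mistral's release page for the real names):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Ministral-3-8B-Instruct"  # hypothetical repo id for illustration

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Explain data sovereignty in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The 3B and 8B sizes are the ones most likely to fit on a single consumer GPU, which is the low-resource deployment case the release targets.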

📩 Have questions or feedback? Just reply to this email; we’d love to hear from you!

🔗 Stay connected: