Nvidia launches Nemotron 3 to compete with OpenAI and Google

PLUS: AI models ace the finance industry's top certification for investment management, Perplexity reveals agent usage patterns, Zoom tops a reasoning benchmark, and AI2 releases the fully transparent OLMo 3.

In today’s agenda:

1️⃣ Nvidia releases Nemotron 3 Nano (30B parameters) with a 500B Ultra model coming in 2026, challenging closed-source AI giants

2️⃣ Six AI models, including Gemini 3.0 Pro and GPT-5, now pass all three levels of the top professional certification for investment management (CFA)

3️⃣ Perplexity study analyzing hundreds of millions of queries shows users primarily deploy AI for cognitive work over simple automation

  • Zoom achieves 48.1% on Humanity's Last Exam, a challenging AI reasoning benchmark, using an innovative multi-model approach

  • AI2 publishes OLMo 3 with full training transparency, releasing complete data, checkpoints, and post-training infrastructure for researchers

MAIN AI UPDATES / 16th December 2025

🤖 Nvidia launches Nemotron 3 to compete with OpenAI and Google 🤖
Chip giant releases 30B-parameter model with 500B Ultra coming in 2026

Nvidia released Nemotron 3 Nano (30B parameters, 3B active) with benchmark scores rivaling closed-source competitors from OpenAI, Google, and Anthropic. Super (100B) and Ultra (500B) models are slated for early 2026, alongside the full training data and agent customization libraries. The move looks strategic: with major AI companies increasingly developing their own chips instead of relying on Nvidia hardware, the chip giant is now competing directly in the AI model space.
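The "30B parameters, 3B active" figure points to a mixture-of-experts (MoE) design, in which a router activates only a few expert subnetworks per token. Nvidia hasn't shared implementation details here, so the sketch below is a generic illustration of top-k expert routing in NumPy; every name and size is illustrative, not Nemotron's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 16  # total expert subnetworks (drives the "total" parameter count)
TOP_K = 2         # experts activated per token (drives the "active" count)
DIM = 8           # toy hidden dimension

# Each "expert" is a tiny feed-forward weight matrix (illustrative only).
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
router_w = rng.standard_normal((DIM, NUM_EXPERTS))  # router/gating weights

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-k experts only."""
    logits = x @ router_w              # router score for every expert
    top = np.argsort(logits)[-TOP_K:]  # indices of the k best-scoring experts
    gates = np.exp(logits[top])
    gates /= gates.sum()               # softmax over the selected experts only
    # Only TOP_K of NUM_EXPERTS weight matrices are used for this token,
    # which is why "active" parameters are a fraction of the total.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.standard_normal(DIM)
print(moe_forward(token).shape)  # (8,)
```

With 2 of 16 experts touched per token, only about an eighth of the expert weights participate in any forward pass; the same routing logic is what lets a 30B-parameter model run with roughly 3B active parameters.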

💰 AI models ace finance industry's top professional certification exam (CFA) 💰
Six frontier models pass all three levels of the elite investment management qualification

A new study found that six leading AI models now pass all three levels of the Chartered Financial Analyst (CFA) certification exams, the gold-standard professional qualification for investment analysts and portfolio managers, with Gemini 3.0 Pro scoring a record 97.6% on Level I. Researchers tested models including GPT-5, Gemini 3.0 Pro, Claude Opus 4.1, Grok 4, and DeepSeek-V3.1 across 980 questions spanning all exam tiers. GPT-5 topped Level II at 94.3%, while Gemini dominated the most difficult constructed-response section with 92%. For context, GPT-3.5 failed the first two levels in 2023; the leap to near-perfect scores took roughly 24 months.

📊 Perplexity reveals AI agent usage patterns 📊
Research shows cognitive work dominates simple task automation

Perplexity and Harvard published a study analyzing hundreds of millions of anonymized queries from Perplexity's Comet browser, launched in July. Over half of the queries involved research or workflow management, with common tasks including summarization, document editing, and coursework help. Tech workers, academics, marketers, and finance professionals generated the bulk of activity, and users who started with casual queries often migrated toward heavier knowledge work over time. The pattern suggests AI agents are primarily used for deep cognitive work rather than simple automation.

INTERESTING TO KNOW

🧠 Zoom tops reasoning benchmark with innovative multi-model approach 🧠

The company best known for video calling has achieved a new state-of-the-art result on Humanity's Last Exam, a challenging AI reasoning benchmark. Zoom scored 48.1% using a "federated" approach that combines the strengths of multiple models, a surprising display of serious AI research capability. The result shows that companies outside the traditional AI labs are making significant advances on complex reasoning tasks.
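Zoom hasn't published the details of its federated method here, so the snippet below is only a hedged sketch of the general idea: pose the same question to several models and aggregate their answers, in this case by simple majority vote. The model functions are stand-ins for real API calls.

```python
from collections import Counter
from typing import Callable

# Stand-in "models": in a real system each would be an API call to a
# different LLM provider; here they just return canned answers.
def model_a(question: str) -> str: return "42"
def model_b(question: str) -> str: return "42"
def model_c(question: str) -> str: return "41"

def federated_answer(question: str, models: list[Callable[[str], str]]) -> str:
    """Query every model and return the majority-vote answer."""
    answers = [m(question) for m in models]
    return Counter(answers).most_common(1)[0][0]

print(federated_answer("What is 6 x 7?", [model_a, model_b, model_c]))  # "42"
```

Production systems typically go beyond voting, for instance routing each question to whichever model is strongest in that domain or letting a judge model reconcile disagreements, but the core principle of pooling complementary strengths is the same.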

🔬 AI2 releases fully transparent OLMo 3 open research model 🔬

AI2 published OLMo 3 with a degree of openness unprecedented in the industry: all model checkpoints, the full training data, the code, and the entire SFT/DPO/RLVR post-training stack. The release includes the OlmoRL infrastructure, which cut training time from 15 days to 6. Notably, a reinforcement-learning setup with random rewards that had worked on Qwen failed on OLMo 3, a useful insight into how behavior differs across model families. The release sets a new standard for transparent AI research.
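Because the checkpoints are fully released, loading them should follow the standard Hugging Face pattern. A minimal sketch, with the caveat that the model ID below is an assumption; check AI2's allenai organization on the Hub for the exact OLMo 3 repository names.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model ID; verify the actual OLMo 3 checkpoint name on the Hub.
MODEL_ID = "allenai/OLMo-3-7B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

inputs = tokenizer("Fully open models let researchers inspect", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since intermediate checkpoints are part of the release, the same pattern applies to earlier training stages, which is what makes the full pipeline reproducible for researchers.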

📩 Have questions or feedback? Just reply to this email; we’d love to hear from you!

🔗 Stay connected: