- H-FARM AI's Newsletter
OpenAI ships Chronicle, always-on memory
PLUS: Kimi K2.6 tops benchmarks at fraction of cost & Brin forms Gemini strike team to beat Anthropic. DeepMind ships TIPSv2 encoder, Anthropic locks $25B Amazon compute deal.

In today’s agenda: 1️⃣ OpenAI ships Chronicle, an always-on memory feature that gives Codex persistent context via ambient screen capture on macOS 2️⃣ Moonshot AI releases Kimi K2.6, an open-source agentic model rivaling GPT-5.4 and Claude Opus 4.6 on key benchmarks at lower cost 3️⃣ Sergey Brin personally leads a new DeepMind strike team to surpass Anthropic's coding capabilities with Gemini
MAIN AI UPDATES / 21st April 2026
🧠 OpenAI ships Chronicle, always-on memory 🧠
Codex gains persistent memory through ambient screen-capture integration on macOS.
OpenAI has shipped Chronicle, a new memory-building feature for Codex, now available to ChatGPT Pro users on macOS. Chronicle runs background agents that capture on-screen context and build persistent memories, stored locally as unencrypted markdown files on the user's device. The goal: let Codex understand a developer's ongoing work without requiring context to be restated every session. The feature requires macOS Screen Recording and Accessibility permissions, and OpenAI warns of prompt injection risks from captured screen content, recommending that users pause Chronicle during sensitive work. The rollout marks a shift from reactive coding assistants to ambient, always-on AI collaborators that maintain continuity across sessions, and that continuous context could reshape daily developer workflows.
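OpenAI hasn't published Chronicle's file format, but the described behavior (persistent memories kept as local, unencrypted markdown) can be illustrated with a toy sketch. Everything here, the `MemoryStore` class, the `memories.md` filename, and the section layout, is hypothetical, not OpenAI's actual implementation:

```python
import tempfile
from datetime import datetime, timezone
from pathlib import Path

class MemoryStore:
    """Toy ambient-memory store: appends timestamped context
    snapshots to a plain markdown file (hypothetical format)."""

    def __init__(self, root: Path):
        self.path = root / "memories.md"
        self.path.parent.mkdir(parents=True, exist_ok=True)

    def record(self, source: str, summary: str) -> None:
        # Each captured observation becomes its own markdown section.
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        entry = f"## {stamp} ({source})\n\n{summary}\n\n"
        with self.path.open("a", encoding="utf-8") as f:
            f.write(entry)

    def recall(self) -> str:
        # A real system would rank and filter; this returns everything.
        if not self.path.exists():
            return ""
        return self.path.read_text(encoding="utf-8")

store = MemoryStore(Path(tempfile.mkdtemp()))
store.record("screen-capture", "User is editing tests in payments.py")
print("payments.py" in store.recall())  # True
```

Plain markdown on disk is what makes both the portability and the risk concrete: anything that can read the file can read the memories, which is why pausing capture during sensitive work matters.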
🔥 Kimi K2.6 tops benchmarks at fraction of cost 🔥
China's open-source coding model challenges frontier labs on pricing and speed.
Chinese AI lab Moonshot AI has released Kimi K2.6, an open-source agentic coding model that approaches or outperforms GPT-5.4, Claude Opus 4.6, and Gemini 3.1 Pro on key benchmarks at a fraction of their cost. The model tops the Humanity's Last Exam, SWE-bench Multilingual, SWE-Bench Pro, and BrowseComp leaderboards. The lineup includes variants for quick replies (Instant), complex reasoning (Thinking), document tasks (Agent), and large-scale processing (Agent Swarm), the last of which spins up 300 parallel sub-agents, triple its predecessor's count. K2.6 can operate autonomously for 12+ hours across 4,000+ tool calls. Weights are available on Hugging Face, putting serious competitive pressure on proprietary pricing and further closing the open-source performance gap.
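Moonshot hasn't detailed how Agent Swarm orchestrates its sub-agents, but the "fan out N parallel sub-agents, gather their results" pattern it describes is a standard concurrency shape. A minimal sketch, where `run_subagent` is a hypothetical stand-in for a real sub-agent invocation (e.g. an LLM API call):

```python
from concurrent.futures import ThreadPoolExecutor

def run_subagent(task: str) -> str:
    # Stand-in for a real sub-agent call; in practice this would
    # issue tool calls or model requests and return a result.
    return f"done: {task}"

def swarm(tasks: list[str], max_parallel: int = 300) -> list[str]:
    # Fan tasks out to parallel workers and gather results in
    # submission order, capped at max_parallel concurrent agents.
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(run_subagent, tasks))

results = swarm([f"subtask-{i}" for i in range(10)])
print(len(results))  # 10
```

Threads suit I/O-bound sub-agent calls; a production orchestrator would also need retries, budgets, and result aggregation across long-running sessions.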
⚔️ Brin forms Gemini strike team to beat Anthropic ⚔️
Google's coding AI rollout gets a dedicated strike team led by its co-founder.
Google co-founder Sergey Brin is personally leading a push to surpass Anthropic's coding capabilities with Gemini, forming a new internal "strike team" at DeepMind. Research engineer Sebastian Borgeaud heads the group under CTO Koray Kavukcuoglu. In an internal memo, Brin framed coding as the critical capability for achieving self-improving AI, systems that can train the next generation of AI. DeepMind researchers reportedly rate Claude's code-writing abilities above Gemini's internally, underscoring the urgency. Gemini engineers are now required to use Google's internal agent tools on complex tasks, with usage tracked on a company leaderboard called Jetski. Coding has become the decisive front in the AI race.
INTERESTING TO KNOW
👁️ DeepMind ships TIPSv2 for zero-shot visual tasks 👁️
Google DeepMind has released TIPSv2, a new vision-language encoder that improves multimodal AI integration by combining distillation, enhanced self-supervised objectives, and richer caption data. The rollout delivers strong gains in zero-shot segmentation and broader multimodal tasks, strengthening foundational representations that bridge visual and linguistic understanding. These improvements underpin capabilities from image understanding to complex visual reasoning, reinforcing DeepMind's edge in the multimodal space.
💰 Anthropic locks $25B Amazon deal for 5GW compute 💰
Anthropic and Amazon have expanded their partnership to secure massive compute access, with Amazon investing up to $25 billion more into Anthropic in exchange for over $100 billion in AWS spending. The deal secures up to 5 gigawatts of compute capacity for Claude's training and deployment at scale—one of the largest AI infrastructure partnerships to date. Compute access has become a decisive competitive advantage, and this positions AWS as central to Anthropic's scaling ambitions.

📩 Have questions or feedback? Just reply to this email; we’d love to hear from you!