Block kills middle management in its org, builds AI "world model" instead

PLUS: OpenAI solves three Erdős problems & Claude thinking-token cuts break coding workflows. Arcee ships open reasoning model, OpenMed trains mRNA models for $165.

In today’s agenda:

1️⃣ Jack Dorsey and Sequoia's Roelof Botha co-author a blueprint for replacing middle management with an AI-powered "world model" at Block

2️⃣ OpenAI publishes paper where its model autonomously solved three open Erdős mathematical problems

3️⃣ Analysis links Anthropic's thinking token redaction to measurable quality regression in complex Claude coding sessions

  • Arcee AI releases Trinity-Large-Thinking, an open-weight reasoning model under Apache 2.0 for agent workflows

  • OpenMed trains production mRNA language models across 25 species in just 55 GPU-hours for $165

MAIN AI UPDATES / 2nd April 2026

🏢 Block kills middle management in its org, builds AI "world model" instead 🏢
Dorsey and Sequoia's Botha co-author a blueprint for the post-hierarchy company.

Jack Dorsey and Sequoia's Roelof Botha published a joint essay arguing that middle management exists only to route information — and that AI can now do it better. Block is restructuring its entire organization around an AI-powered "world model" that continuously tracks decisions, projects, and priorities across the company, replacing the context that managers used to carry. All roles normalize down to three: individual contributors (ICs), directly responsible individuals (DRIs), and player-coaches, with no permanent management layer. An "intelligence layer" composes Block's financial capabilities (payments, lending, payroll) into personalized solutions for merchants and consumers in real time, driven by transaction data rather than product roadmaps. Co-signed by Sequoia, this isn't just internal philosophy — it's a public signal that one of tech's top VCs sees the manager-free org as the next standard.

🧮 OpenAI model solves three open Erdős problems 🧮
An internal OpenAI model autonomously cracks decades-old math problems.

OpenAI published a paper announcing that an internal model solved three previously open mathematical problems posed by legendary mathematician Paul Erdős. Each solution was discovered autonomously by the model, marking a capability jump in AI-driven abstract reasoning and positioning OpenAI's models as increasingly competitive in pure research domains — not just engineering and coding tasks. If validated by the broader mathematics community, the result could accelerate the adoption of AI as a tool for open-problem discovery and proof generation across STEM disciplines.

🔧 Claude thinking-token cuts break senior coding workflows 🔧
Extended thinking tokens prove essential, not optional, for complex engineering.

A detailed analysis shows that Anthropic's rollout of thinking content redaction correlates directly with measurable quality regression in complex, long-session Claude engineering workflows. The report finds that extended thinking tokens are structurally required for models to perform multi-step research, convention adherence, and careful code modification. When thinking depth is reduced, model tool-usage patterns shift measurably — producing the issues power users have widely reported. This matters for enterprise teams allocating token budgets. The findings suggest extended thinking is not a luxury feature but a load-bearing component for professional-grade coding, with direct implications for Anthropic's pricing and product strategy.

INTERESTING TO KNOW

🧠 Arcee AI ships frontier open-weight reasoning model 🧠

Arcee AI released Trinity-Large-Thinking, an open-weight reasoning model designed for complex agent workflows and multi-turn tool calling — available via API and on Hugging Face under the Apache 2.0 license for full commercial access. Described as likely the strongest open model released outside of China, it was trained for coherence across turns, reliable tool use, and instruction following under constraints. The rollout signals intensifying competition in the open-weight space, giving developers a strong replacement for proprietary models in agent-centric applications.

🧬 OpenMed trains mRNA models across 25 species for $165 🧬

OpenMed built an end-to-end protein AI pipeline covering structure prediction, sequence design, and codon optimization, and released it fully open with runnable code and results. The team scaled training to 25 species, producing four production models in just 55 GPU-hours at a total cost of $165. Their top model, CodonRoBERTa-large-v2, achieved a perplexity of 4.10, significantly outperforming ModernBERT. That price point makes high-quality biological language models accessible to smaller labs, potentially accelerating mRNA research and drug discovery.
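For context on that 4.10 figure: perplexity is just the exponential of a language model's average per-token cross-entropy, so lower means the model is less "surprised" by held-out sequences. A minimal sketch of the metric itself (the helper below is illustrative and not OpenMed's code; the log-prob values are made up for the example):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp(mean negative log-likelihood) over predicted tokens."""
    if not token_log_probs:
        raise ValueError("need at least one token log-probability")
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# A model that assigns every token probability 1/4.10 scores a perplexity
# of exactly 4.10 -- the level reported for CodonRoBERTa-large-v2.
log_probs = [math.log(1 / 4.10)] * 100
print(round(perplexity(log_probs), 2))
```

In practice the log-probabilities would come from the model's masked-token predictions over a held-out set; the formula is the same regardless of domain, which is what makes the metric comparable across models like CodonRoBERTa and ModernBERT.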

📩 Have questions or feedback? Just reply to this email; we’d love to hear from you!

🔗 Stay connected: