Figma launches Dev Mode MCP server for AI-powered code generation

PLUS: Figure 02 robot learns package handling, Rednote releases open-source 142B AI model & Apple Design Awards snub AI apps

In today’s agenda:

1️⃣ Figma integrates directly with AI coding tools via new MCP server

2️⃣ Figure's humanoid robot demonstrates autonomous package handling

3️⃣ China's Rednote debuts massive open-source language model with no synthetic data

Plus, some interesting news:

  • Apple Design Awards recognizes 12 winners but AI apps notably absent for second year

  • Dead Sea Scrolls mystery deepens as AI analysis dates manuscripts earlier than thought

MAIN AI UPDATES / 7th June 2025

Figma launches Dev Mode MCP server
Bridging design and AI code generation

Figma has released a beta version of its Dev Mode MCP server, which integrates the design platform directly into developer workflows for AI-powered code generation. The server works with AI coding tools such as Copilot, Cursor, and Claude Code through the Model Context Protocol (MCP) standard, providing design context that helps LLMs generate code matching both design intent and existing codebase patterns. The server exposes component metadata, screenshots, interactivity examples, and content details so AI tools can understand design systems more comprehensively.
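MCP is built on JSON-RPC 2.0: a client (the coding tool) sends `tools/call` requests to a server (here, Figma's) and receives structured results. A minimal sketch of the request shape a client would send; the tool name `get_code` and the `nodeId` argument are illustrative assumptions, not confirmed details of Figma's server:

```python
import json

def mcp_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request, the message shape
    the Model Context Protocol uses for invoking a server-side tool."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

# Hypothetical example: ask the server for generated code
# for a selected Figma node (tool/argument names assumed).
payload = mcp_tool_call("get_code", {"nodeId": "1:23"})
print(payload)
```

In practice the coding tool handles this plumbing for you; the point is that design context arrives as structured tool results rather than as a screenshot pasted into a prompt.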

🤖 Figure 02 robot learns package handling 🤖
Autonomous barcode orientation and flattening

Figure AI has demonstrated its Figure 02 humanoid robot, powered by the company's Helix vision-language-action (VLA) model, autonomously manipulating packages to orient barcodes and flatten items for scanning. The robot shows human-like adaptability in real-world tasks, including recovering from failed grasp attempts. The demonstration highlights the robot's advanced policy learning and closed-loop sensorimotor control, though questions remain about its tactile sensing capabilities. The technology represents a significant step toward practical, adaptable robots for warehouse and logistics applications.

🧠 Rednote releases open-source 142B model 🧠
Massive model with no synthetic training data

China's Xiaohongshu (Rednote) has released dots.llm1, a large-scale, open-source mixture-of-experts (MoE) language model with 142B total parameters (14B active per token) and a 32K context window. The model is notable for being trained on 11.2T high-quality, non-synthetic tokens and released under a permissive open-source license. The release includes intermediate checkpoints (one per 1T training tokens) and infrastructure support for efficient inference. The MoE architecture employs 128 routed experts with top-6 routing plus 2 shared experts, and reportedly performs competitively against much larger models like Qwen3 235B.
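The "142B total, 14B active" split comes from the routing scheme: each token passes through only its top-6 routed experts plus the 2 always-on shared experts. A toy NumPy sketch of one such MoE layer (tiny dimensions and linear-map "experts" are illustrative assumptions; the real model uses full FFN experts):

```python
import numpy as np

rng = np.random.default_rng(0)

D = 64           # hidden size (toy value, not the real model's)
N_EXPERTS = 128  # routed experts, as described for dots.llm1
TOP_K = 6        # routed experts activated per token
N_SHARED = 2     # shared experts applied to every token

# Toy experts: each is a single linear map for illustration.
routed = rng.standard_normal((N_EXPERTS, D, D)) * 0.01
shared = rng.standard_normal((N_SHARED, D, D)) * 0.01
router_w = rng.standard_normal((D, N_EXPERTS)) * 0.01

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector through top-6 of 128 experts plus 2 shared experts."""
    logits = x @ router_w                 # router score per expert, shape (128,)
    top = np.argsort(logits)[-TOP_K:]     # indices of the 6 highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()              # softmax over the selected experts only
    out = sum(w * (x @ routed[i]) for w, i in zip(weights, top))
    out += sum(x @ shared[j] for j in range(N_SHARED))  # shared experts always fire
    return out

y = moe_layer(rng.standard_normal(D))
```

Only 8 of 130 expert blocks run per token, which is why inference cost tracks the ~14B active parameters rather than the full 142B.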

INTERESTING NEWS

🏆 Apple Design Awards snub AI apps 🏆

Apple has announced the winners of its 2025 Design Awards just before WWDC, recognizing 12 apps and games across six categories. For the second consecutive year, generative AI apps were notably absent from the winners list, which primarily consisted of indie apps and startups. Notable winners include Watch Duty for Social Impact, which provides wildfire information during California emergencies, and Play for Innovation in developer prototyping. Apple emphasized how developers utilized its tools to create better user experiences rather than focusing on AI-centric applications.

📜 AI dates Dead Sea Scrolls earlier than thought 📜

An AI-assisted study suggests that the Dead Sea Scrolls may be significantly older than previously believed. Using pattern recognition and handwriting analysis, researchers identified subtle features in the manuscripts that point to earlier origin dates. The AI system analyzed script variations alongside other physical and contextual evidence to propose a revised chronology for these important historical documents. If confirmed, the finding has significant implications for understanding the development of religious texts and ancient Near Eastern history.

📩 Have questions or feedback? Just reply to this email; we’d love to hear from you!

🔗 Stay connected: