Napster launches AI concierge kiosks

PLUS: Meta's SWE-RL trains AI through self-made bugs & SoftBank finalizes $41B OpenAI stake. Waymo tests Gemini as a robotaxi companion; robotic NRE-skin mimics the human nervous system.

In today’s agenda:

1️⃣ Napster launches Napster Station, an AI concierge kiosk for crowded spaces.

2️⃣ Meta unveils Self-play SWE-RL, a system where coding models autonomously create and fix bugs using test feedback.

3️⃣ SoftBank completes $41 billion investment in OpenAI, securing an estimated 11% stake.

  • Waymo app code reveals plans to deploy Gemini as an in-car AI companion for robotaxis.

  • Researchers develop electronic skin with built-in reflexes and neuromorphic chips for safer human-robot interaction.

MAIN AI UPDATES / 31st December 2025

🧍‍♂️ Napster launches AI concierge service for crowded spaces 🧍‍♀️
Purpose-built kiosks bring high-fidelity conversational AI to noisy, traffic-heavy environments.

Napster has announced Napster Station, an enterprise AI concierge kiosk designed for places where voice assistants typically fail—hotel lobbies, retail floors, healthcare facilities, and airport terminals. Station pairs conversational video AI with purpose-built hardware, including a near-field microphone array that isolates a single user’s voice in loud environments, multimodal sensing to identify who is speaking, and high-clarity audio output intended to keep interactions understandable and authoritative even amid crowd noise. Napster positions the unit as “floor-deployable” AI—always-on, multilingual, and built for real-world customer service workflows—claiming a significant cost advantage versus human or traditional digital concierge setups. The product is slated for enterprise deployment starting Q1 2026, with live demos planned at CES 2026.

🧠 Meta's SWE-RL trains AI through self-made bugs 🧠
Reinforcement learning system improves coding autonomously without human-labeled data.

Meta has unveiled Self-play SWE-RL, an innovative system where a coding model improves autonomously by creating and resolving software bugs without human-written issues or labels. The model operates in dual roles: first injecting bugs into real codebases, then switching to repair them using tests as the only feedback mechanism. This approach achieved significant improvements, raising scores by 10.4 points on SWE-Bench Verified and 7.8 on SWE-Bench Pro, outperforming RL methods trained on human-generated data. The pricing implications are substantial—this could dramatically reduce the cost of training competitive coding models by eliminating expensive human annotation pipelines.
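The self-play loop described above can be sketched as a toy in Python. The helper names here (`inject_bug`, `repair`, `run_tests`) are hypothetical stand-ins for the model's two roles; the only faithful element is the core idea that passing tests is the sole reward signal. This is a minimal illustration, not Meta's implementation.

```python
def run_tests(code_fn, cases):
    """Reward signal: fraction of test cases the candidate passes."""
    passed = 0
    for x, expected in cases:
        try:
            if code_fn(x) == expected:
                passed += 1
        except Exception:
            pass  # a crashing candidate simply earns no reward
    return passed / len(cases)

# Ground-truth function standing in for a unit of a real codebase.
def original(x):
    return x * 2 + 1

TESTS = [(i, original(i)) for i in range(5)]

# "Bug injector" role: mutate the behavior (here, a fixed off-by-one).
def inject_bug(fn):
    return lambda x: fn(x) - 1

# "Repair" role: propose candidate patches and keep the one that
# maximizes test reward -- tests are the only feedback mechanism.
def repair(candidates, cases):
    best = max(candidates, key=lambda c: run_tests(c, cases))
    return best, run_tests(best, cases)

buggy = inject_bug(original)
candidates = [buggy, lambda x: buggy(x) + 1, lambda x: buggy(x) * 2]
patched, reward = repair(candidates, TESTS)
print(reward)  # 1.0 once the off-by-one fix is found
```

In the real system both roles are played by the same model and the loop runs over actual repositories; the point of the sketch is only the reward structure, where no human labels enter at any step.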

💰 SoftBank finalizes $41B OpenAI stake 💰
Massive investment secures 11% ownership and fuels infrastructure expansion plans.

SoftBank has finalized its massive investment in OpenAI, completing a final payment of approximately $22–22.5 billion and bringing its total commitment to roughly $41 billion. This secures SoftBank an estimated 11% stake in the AI company. The capital will support OpenAI's long-term infrastructure plans, including partnerships with Nvidia, AMD, Broadcom, Oracle, and the Stargate compute project. To fund the investment, SoftBank sold its $5.8 billion Nvidia stake and acquired DigitalBridge for $4 billion. The deal signals mounting competitive pressure in the AI infrastructure race, while talks continue over a potential $10 billion investment from Amazon.

INTERESTING TO KNOW

🚗 Waymo tests Gemini as robotaxi companion 🚗

Integration of Google's Gemini into Waymo robotaxis could transform the rider experience through conversational AI. A researcher discovered a system prompt in Waymo's app code revealing plans to deploy Gemini as an in-car companion that handles passenger queries, adjusts lighting and temperature, and provides reassurance to nervous riders—notably without vehicle control capabilities. The move signals a new application frontier for large language models in autonomous transportation.

🤖 Robotic NRE-skin mimics human nervous system 🤖

Researchers have developed NRE-skin, an electronic skin that enables robots to sense touch with human-like accuracy. The system uses rapid electrical spikes similar to biological neurons, firing faster as pressure increases. A key innovation is built-in reflexes allowing instant withdrawal responses when pressure exceeds pain thresholds, without central processing delays—a capability jump that could make robots significantly safer for human interaction. The modular design features magnetic pieces that snap on and auto-detect position for easy repairs. Powered by neuromorphic chips, the skin operates with minimal energy consumption, representing a major step toward more intuitive collaboration in healthcare and manufacturing settings.
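The two mechanisms described above, spike rates that rise with pressure and a local reflex that fires without central processing, can be illustrated with a small toy model. The gain and pain threshold values below are invented for the example, and the function names are hypothetical; this is a sketch of the concept, not the researchers' actual signal chain.

```python
def spike_rate(pressure_kpa, gain=50.0):
    """Toy rate coding: firing rate (Hz) scales with applied pressure,
    mimicking a neuron that fires faster as pressure increases."""
    return gain * pressure_kpa

PAIN_THRESHOLD_KPA = 8.0  # hypothetical threshold for the reflex

def sense(pressure_kpa):
    """Local reflex check: above the pain threshold, return a WITHDRAW
    command immediately, without routing through central processing;
    otherwise just report the encoded spike rate upstream."""
    rate = spike_rate(pressure_kpa)
    if pressure_kpa > PAIN_THRESHOLD_KPA:
        return ("WITHDRAW", rate)
    return ("REPORT", rate)

print(sense(2.0))   # gentle touch: ('REPORT', 100.0)
print(sense(10.0))  # painful press: ('WITHDRAW', 500.0)
```

The design point the sketch captures is that the withdraw decision depends only on a local comparison, which is why it avoids the round-trip latency of a central controller.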

📩 Have questions or feedback? Just reply to this email; we’d love to hear from you!

🔗 Stay connected: