Apple is moving into smart glasses: AR and beyond

PLUS: Higgsfield’s AI visual effects, LlamaFirewall for secure agents, OpenAI splits leadership, and SoundCloud updates AI policy

In today’s agenda:

1️⃣ Apple explores smart glasses with and without AR capabilities

2️⃣ Higgsfield Effects Mix blurs the line between reality and visual effects

3️⃣ LlamaFirewall introduces open-source guardrails for AI agent safety

Plus, some interesting news:

  • OpenAI splits leadership with new Applications CEO

  • SoundCloud updates terms to allow AI training on user content

MAIN AI UPDATES / 10th May 2025

🤓 Apple quietly prepares its move into smart glasses 🤓
AR and non-AR wearables in the works

Apple is reportedly developing a custom chip for next-gen smart glasses, as it eyes competition with Ray-Ban Meta’s AI-powered frames. According to Bloomberg, the chip builds on Apple Watch architecture but is optimized to manage multiple cameras, a key feature of future wearable interfaces. Apple is also pursuing a parallel path: developing AR-capable glasses reminiscent of Meta's Orion concept. Mass production could begin as early as 2026 or 2027, positioning Apple to span both the casual and high-end segments of the smart glasses market.

🎨 Higgsfield Effects Mix challenges visual reality 🎨
AI tools for surreal, creator-driven effects

Higgsfield, a new generative video platform, is pushing the boundaries of digital creativity. Its toolset allows creators to produce effects like melting bodies, metallic skin, or runners engulfed in flames—without traditional VFX pipelines. Described by users as "beyond anything I've seen before," this platform could democratize access to cinematic-quality transformations across short-form video, social media, and digital art. While still emerging, Higgsfield represents a notable shift in AI-assisted content creation, blending realism with surrealism.

🔒 LlamaFirewall secures AI agents 🔒
Security layer against injection and misalignment

LlamaFirewall is a newly released open-source framework designed to protect AI agents from common threats like prompt injection, misaligned behaviors, and insecure code execution. It offers modular protections that can be integrated with most LLM-based systems, including tools like LangChain or AutoGen. As AI agents become more autonomous, demand is growing for transparent, customizable safety layers—and LlamaFirewall may become a foundational piece in securing agent-driven applications.
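To illustrate the kind of "guardrail" layer described above, here is a minimal sketch of a prompt-injection filter that runs before a model call. This is a generic illustration of the pattern only: the function names, patterns, and scoring are hypothetical and are not LlamaFirewall's actual API.

```python
import re

# Hypothetical patterns for known prompt-injection phrasings.
# A real guardrail framework would use far richer detection.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard your system prompt",
    r"you are now (in )?developer mode",
]

def scan_prompt(user_input: str) -> dict:
    """Flag user input that matches known injection phrasings."""
    hits = [p for p in INJECTION_PATTERNS
            if re.search(p, user_input, flags=re.IGNORECASE)]
    return {"blocked": bool(hits), "matched": hits}

def guarded_call(user_input: str, llm_fn) -> str:
    """Run the guardrail check before forwarding input to the model."""
    verdict = scan_prompt(user_input)
    if verdict["blocked"]:
        return "[request blocked by guardrail]"
    return llm_fn(user_input)
```

The key design idea, which frameworks like LlamaFirewall generalize, is that the check sits as a modular layer between the user and the model, so it can be swapped or extended without touching the agent's core logic.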

INTERESTING NEWS

👨‍💼 OpenAI separates product and research leadership 👨‍💼

OpenAI has appointed Fidji Simo (former Instacart CEO and ex-Facebook exec) as CEO of Applications, while Sam Altman continues as overall CEO, now focused on research, safety, and compute. The restructuring formally splits OpenAI’s product development from its research division, likely in preparation for broader deployment of tools like ChatGPT, Voice, and Vision. The announcement includes references to “approaching superintelligence,” though some industry experts remain cautious about the timeline.

🎵 SoundCloud updates policy to allow AI training on user uploads 🎵

As discovered by TechCrunch, SoundCloud quietly modified its terms of use to allow AI training on user-uploaded content. The clause permits use of audio to "inform, train, develop or serve as input to artificial intelligence." Following criticism, SoundCloud clarified that it hasn’t used artist content to train generative AI and added a “no AI” tag artists can apply to restrict use. The change is currently aimed at improving platform features like recommendations, not building generative models—though the line is thin, and the debate is ongoing.

📩 Have questions or feedback? Just reply to this email; we’d love to hear from you!

🔗 Stay connected: