# AI safety
Latest news and articles about AI safety
Total: 16 articles found

AI Insiders Sound the Alarm as U.S. Start‑ups Pivot from Safety to Speed
Senior researchers exiting US AI companies have publicly warned that commercialization and IPO pressures are sidelining safety work, risking manipulative or harmful model behavior. The conflict between monetization incentives and the need for interpretability, privacy safeguards and robust alignment work has already produced real-world moderation failures and could invite regulatory intervention.

Musk’s AI Project in Retreat: Key xAI Founders Exit After SpaceX Rescue
Two prominent xAI founders quit within 48 hours after a series of earlier exits left half the original founding team gone, undermining Elon Musk’s AI ambitions. The exits, heavy cash burn, and product scandals around Grok have coincided with xAI’s absorption into SpaceX — a deal that looks like a financial bailout but raises fresh strategic and regulatory headaches.

OpenClaw’s Viral Rise Signals a New Age for Cheap, Deployable AI Agents — and New Risks
OpenClaw, an open‑source agent platform created by Peter Steinberger, has gone viral by turning chat messages into executable commands across multiple model APIs, accelerating demand for inexpensive, high‑throughput models and simple local hardware like the Mac Mini. The surge highlights opportunities for Chinese model providers such as Minimax and Kimi, while raising acute security, deployment and governance challenges.

Philippines to Lift Ban on xAI’s Grok After Promised Fixes for Sexual-Content Abuse
The Philippines will lift its ban on xAI’s Grok once the company implements promised fixes to stop the chatbot from being used to generate sexually explicit images, including alleged child-exploitative content. Authorities will continue close monitoring, following platform-level restrictions introduced earlier by X to block generation of real-person nudity.