# AI chips
Latest news and articles about AI chips
Total: 41 articles found

Nvidia Moves Up the Chain: Partnership with Qnity Electronics Targets Advanced Semiconductor Materials
Nvidia has announced a collaboration with materials supplier Qnity Electronics to co-develop advanced semiconductor materials, a move aimed at securing critical inputs for AI chip production and optimising chip-to-material integration. The partnership reflects broader industry efforts to diversify and localise supply chains amid rising demand for AI accelerators and geopolitical friction over semiconductor technology.

Cambricon’s Cash Gift: A Timely Dividend That Quietly Clears a Regulatory Hurdle
Cambricon reported its first annual profit in 2025 and proposed its first cash dividend, of about RMB632 million. The payout, however, followed a parent‑level accounting adjustment that used capital reserves to create distributable profits, positioning the dividend just above a regulatory 30% threshold and clearing the way for possible controlling‑shareholder sell‑downs even as the company faces sizeable cash‑flow and funding gaps.

Shanghai’s AWE Turns Trade Show Into a Visa‑Free Shop Window for China’s Hard Tech
Shanghai’s Appliance & Electronics World Expo introduced an “Oriental Hub” with visa‑free entry for invited foreigners and duty‑free handling for exhibits, turning a consumer fair into an efficient international marketplace for Chinese hard tech. The policy enabled overseas buyers to inspect and negotiate on AI chips, edge compute, optical interconnects and robotics on the spot, accelerating commercial engagement while signalling China’s push to export integrated technology solutions.

Amazon Taps Cerebras for Cloud Inference Push, Taking Aim at Nvidia’s Dominance
AWS will deploy Cerebras inference chips alongside its Trainium3 processors in a new service aimed at faster, cheaper AI inference for chatbots and coding tools. The move reflects a market shift from GPU‑heavy training towards specialised, lower‑latency inference hardware and intensifies competition with Nvidia’s GPU ecosystem.

Cambricon’s Breakout Year: Revenue Soars, Profitable Turnaround and Insider Adds Shares
Cambricon delivered a powerful 2025 turnaround with revenue up 453% to ¥6.497 billion and net profit of ¥2.059 billion, while announcing generous dividends and a large bonus share issue. Fifth-largest shareholder Zhang Jianping increased his stake in Q4 to about 6.81 million shares, worth roughly ¥7.5 billion at current prices, reinforcing investor confidence amid changing market classifications on the STAR Market.

Meta Accelerates Own AI Silicon Push with Four New MTIA Chips, Betting on in‑House Efficiency
Meta announced four new AI chips under its MTIA programme, with MTIA 300 already in production and three further models slated through 2027. The chips aim to accelerate both training and inference for generative features and ranking systems, reflecting a broader industry move toward custom silicon to cut costs and control performance.

Meta Accelerates Push for Custom AI Chips to Power Generative Models and Wean Off Nvidia
Meta revealed plans to roll out four in‑house MTIA chips through 2027, with MTIA 300 already in production and later chips slated for inference-heavy generative AI workloads. The move signals a deliberate strategy to diversify suppliers, lower operating costs, and pair continued purchases of Nvidia/AMD hardware with bespoke silicon aimed at Meta’s unique demands.

AMD Courts Samsung to Lock In HBM Supply as AI Chip Demand Soars
AMD CEO Lisa Su will meet Samsung chairman Lee Jae‑yong in Seoul to discuss collaborating on high‑bandwidth memory (HBM) supplies and will also explore AI compute infrastructure cooperation with Naver. The talks are a bid to secure scarce memory resources and to deepen regional partnerships as demand for AI accelerators intensifies worldwide.

Chinese Firm Pushes Indigenous AI Compute: A Bid to Build a Homegrown Foundation for the Digital Economy
Pinggao, together with Jiangyuan Technology, has launched a full-stack domestic AI compute system aiming to reduce reliance on foreign accelerators. The company urges policy measures during China’s 15th Five‑Year Plan to support chip R&D, talent, and ecosystem development, reflecting a broader strategic push to secure indigenous AI compute capabilities.

When Memory Rules: How HBM Is Rewriting the Economics of AI Chips
The AI chip competition has pivoted from raw compute to memory capacity and bandwidth as HBM and advanced packaging now dominate costs and performance requirements. Persistent HBM shortages and soaring prices favour cloud buyers who prioritise memory-rich GPUs and push chipmakers toward software and system optimisations to reduce memory demand.

Amazon Plays Both Sides: $50bn Bet on OpenAI while Doubling Down on Its Own AI Chips
Amazon said it will invest up to $50 billion in OpenAI and host substantial OpenAI workloads on AWS, including a pledge to run 2GW of its Trainium chips on OpenAI’s Frontier platform. The deal, which runs alongside continuing ties with Anthropic, strengthens AWS’s competitive position in the AI cloud market and validates Amazon’s push into custom AI silicon, though significant milestones and conditions remain unresolved.

Meta Retreats on Ambitious In‑House AI Chip — Turns to AMD, Nvidia and Google to Fill the Gap
Meta has paused work on a high‑end internal AI training chip, Olympus, after design and stability issues, opting instead for a simpler internal design and large purchases from AMD, Nvidia and Google. The move underscores the difficulty of competing with Nvidia’s performance and software ecosystem and signals a pragmatic industry shift toward external partnerships to secure AI compute capacity.