# Mixture of Experts

Latest news and articles about Mixture of Experts

Total: 3 articles found

Technology

NVIDIA’s Omni-Vision: Setting New Benchmarks for the Era of Autonomous AI Agents

NVIDIA has launched Nemotron 3 Nano Omni, a multimodal AI model that uses a Mixture-of-Experts architecture to deliver 9x the efficiency of competing open models. Designed for autonomous agents, the model integrates text, video, and audio reasoning to enable real-time digital interaction and reduce deployment costs.

NeTe · April 28, 2026 20:58
#NVIDIA #Nemotron 3 #Multimodal AI
Technology

Efficiency Over Scale: Bailing Unveils Ling-2.6-flash to Disrupt the Intelligence-Cost Curve

Bailing has launched Ling-2.6-flash, a 104B-parameter model that uses a Mixture of Experts (MoE) architecture to activate only 7.4B parameters per token. It achieves benchmark parity with larger models while consuming only 10% of the tokens required by competitors like Nemotron-3-Super.
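
How can a 104B-parameter model activate only 7.4B? In a sparse MoE layer, a small gating network scores a pool of expert sub-networks and runs only the top-k of them for each token. The following is a minimal, hypothetical Python sketch of top-k routing; the shapes, expert count, and names are illustrative and not Bailing's actual implementation:

```python
import numpy as np

def moe_forward(x, experts, gate_weights, top_k=2):
    """Route one token through only its top-k experts.

    Toy illustration: `experts` is a list of per-expert weight
    matrices and `gate_weights` projects the token to one logit
    per expert. Because only top_k experts run, most parameters
    stay inactive for any given token -- the mechanism behind
    "104B total, 7.4B active" style claims.
    """
    logits = x @ gate_weights                 # one score per expert
    top = np.argsort(logits)[-top_k:]         # indices of the chosen experts
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()                      # softmax over selected experts only
    # Weighted sum of only the selected experts' outputs
    return sum(p * (x @ experts[i]) for p, i in zip(probs, top))

# Toy usage: 8 experts, only 2 active per token
rng = np.random.default_rng(0)
d = 16
experts = [rng.normal(size=(d, d)) for _ in range(8)]
gate = rng.normal(size=(d, 8))
y = moe_forward(rng.normal(size=d), experts, gate, top_k=2)
```

Because only the selected experts' weights participate in each forward pass, total parameter count and per-token compute can scale almost independently.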

NeTe · April 22, 2026 02:28
#Bailing LLM #Ling-2.6-flash #Artificial Intelligence
Technology

Alibaba’s Qianwen Open-Sources an 80B Coding Model Optimized for Agents and Local Development

Alibaba’s Qianwen has open-sourced Qwen3-Coder-Next, an 80B-parameter model designed for coding agents and local deployment that combines hybrid attention with MoE to lower inference costs. The release aims to accelerate enterprise adoption in China by enabling on-premise use and customization, while raising questions about IP, safety, and the infrastructure needed to realize its claimed efficiency gains.

NeTe · February 4, 2026 01:50
#Qwen3-Coder-Next #Alibaba Qianwen #Mixture of Experts