awesome-ai-radar

Meta Reveals Four MTIA Chip Generations by 2027 for AI Inference at Scale

infrastructure backend mlops

What happened

Meta published its MTIA chip roadmap on March 11, revealing four new chip generations (MTIA 300, 400, 450, and 500) to be deployed by the end of 2027. MTIA 300 is already in production, and MTIA 400 has completed testing and will roll out soon. The chips are developed in partnership with Broadcom and primarily target GenAI inference workloads. Meta has shipped a new generation roughly every six months, significantly faster than the industry norm of one to two years.

Why it matters

Meta joining Google (TPU) and Amazon (Trainium/Inferentia) in deploying custom silicon at scale signals that the largest AI consumers are moving away from exclusive dependence on NVIDIA. For the broader ecosystem, this intensifies competition in AI inference hardware and could drive down inference costs industry-wide. Meta's six-month cadence also suggests that custom chip iteration speed is becoming a competitive advantage in its own right.

Who should pay attention

  • Infrastructure engineers planning GPU/accelerator procurement strategies
  • Teams building on Meta's open-source Llama models (likely optimized for MTIA)
  • AI hardware investors tracking the custom silicon trend
  • Cloud providers evaluating their own custom chip strategies