Meta wants to ditch Nvidia - and fast

Meta is testing its first in-house chip designed for training AI models, aiming to reduce its dependence on Nvidia.

The social media giant, which owns Facebook, Instagram, and WhatsApp, has started small-scale testing and will expand production if it proves successful.

The custom chip is part of Meta’s long-term plan to lower infrastructure costs.

AI investments drive much of its expected $114–$119 billion in expenses for 2025.

Unlike general-purpose GPUs, the new chip is a specialised AI accelerator, making it more power-efficient for training models.

Taiwan-based TSMC is handling production.

This is part of Meta’s broader Meta Training and Inference Accelerator (MTIA) programme.

The company has struggled with custom AI chips before, scrapping a previous version after poor results.

Here’s what you should know:

  • Meta is testing a custom AI chip to cut costs and reduce reliance on Nvidia.

  • The chip is part of the MTIA program, which has faced setbacks but saw progress with an inference chip in 2023.

  • The need for high-powered GPUs is being questioned as newer, more efficient AI models emerge.

Will this chip actually chip in?

Despite those earlier setbacks, Meta successfully introduced an inference chip in 2023 for its recommendation systems, and it now aims to expand into AI training, including for its chatbot, Meta AI.

Despite these efforts, Meta remains one of Nvidia’s biggest customers, relying on its GPUs for AI training.

However, there are growing doubts about whether scaling large AI models by adding more computing power is the best approach.

This debate intensified after Chinese startup DeepSeek introduced cheaper, more efficient AI models.

DeepSeek's debut briefly caused Nvidia’s stock to drop, though investors still see its chips as central to AI development.

Another day, another Big Tech company deciding it’s time to “DIY” AI.