Meta has started testing its first in-house chip for training its AI systems, according to Reuters. The move is part of the company’s plan to reduce its reliance on chip suppliers like NVIDIA and lower its AI infrastructure costs.
The chip is part of the Meta Training and Inference Accelerator (MTIA) series. If tests go well, Meta plans to increase production and use the chip more widely.
The company is working with Taiwan Semiconductor Manufacturing Company (TSMC) to manufacture it.
The report suggests that AI-related spending is a major driver of Meta's projected 2025 expenses of $114 billion to $119 billion, which include up to $65 billion in capital expenditures.
The new chip is a dedicated AI accelerator, designed specifically for AI workloads, which can make it more power-efficient than the general-purpose GPUs typically used for AI training.
Meta has struggled with its chip programme before. It scrapped an earlier inference chip after poor test results and in 2022 went back to buying billions of dollars' worth of NVIDIA GPUs. However, the company did deploy a custom chip last year to run AI inference for the recommendation systems behind Facebook and Instagram.
Meta executives have said they aim to use in-house chips for both training and inference by 2026. Last month, reports surfaced that OpenAI was also developing its own custom AI chips to lessen its dependence on NVIDIA. OpenAI was said to be nearing completion of the design for its first in-house chip, which it plans to send to TSMC for fabrication in the coming months.