Google Adds Marvell to Its AI Chip Team for Faster Inference

USA | Sun Apr 19 2026
Google is expanding its custom chip lineup by talking to Marvell Technology about two new AI processors. One would be a memory‑processing unit that works alongside Google’s existing Tensor Processing Units (TPUs). The other would be a TPU designed solely for inference, the phase of AI that answers user requests rather than learning from data. Marvell would help design these chips, much as MediaTek has helped with Google’s latest Ironwood TPU.

These talks come shortly after Broadcom secured a long‑term deal to supply TPUs and networking parts through 2031. Rather than replacing Broadcom, Google appears to be building a multi‑supplier system: Broadcom for high‑performance chips, MediaTek for cheaper variants, and TSMC for manufacturing. This approach mirrors how carmakers use many component suppliers to avoid over‑reliance on a single vendor.

The shift toward inference is changing the chip market. Training a large AI model is a one‑time, compute‑heavy event that lasts weeks or months. Inference runs continuously, handling every user query and scaling with demand. Because billions of users rely on AI services each day, even a small cost reduction per inference can save Google huge amounts of money.
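To see why small per-inference savings compound, here is a back-of-envelope sketch. All figures are hypothetical placeholders for illustration, not Google's actual query volumes or costs:

```python
# Back-of-envelope illustration of inference economics.
# Every number below is an assumption, not a reported figure.
queries_per_day = 5_000_000_000   # assumed daily inference requests
cost_per_query = 0.002            # assumed dollars of compute per request
savings_fraction = 0.10           # assumed 10% per-inference cost reduction

daily_savings = queries_per_day * cost_per_query * savings_fraction
annual_savings = daily_savings * 365

print(f"Daily savings:  ${daily_savings:,.0f}")
print(f"Annual savings: ${annual_savings:,.0f}")
```

With these made-up inputs, a 10% efficiency gain works out to hundreds of millions of dollars per year, which is why inference-only silicon is worth a dedicated design effort even when training hardware already exists.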
Google’s new Ironwood TPU, launched this month, already offers ten times the performance of its predecessor and can be deployed in massive superpods. Marvell’s chips would likely target different workloads or price points, adding flexibility to Google’s overall strategy.

Marvell has been a growing partner for cloud providers. In 2025 it built processors for Amazon, Microsoft and Meta, and it has a $1.5 billion run‑rate in custom silicon design. Recent investments from Nvidia and a $5.5 billion acquisition of Celestial AI have positioned Marvell at the intersection of the GPU and ASIC ecosystems, with its share of the custom AI chip market projected to reach about 25% by 2027.

Broadcom remains the dominant player, holding more than 70% of the custom AI accelerator market and targeting $100 billion in revenue by 2027. Yet Google’s diversified supplier base of Broadcom, MediaTek, Marvell and TSMC reduces risk and keeps the company in control of its silicon roadmap.

Overall, Google’s new partnership with Marvell signals a focus on inference‑optimized hardware that can serve billions of AI requests more efficiently, while keeping its supply chain robust and flexible.
https://localnews.ai/article/google-adds-marvell-to-its-ai-chip-team-for-faster-inference-4c759b05
