Anthropic has announced a major expansion of its collaboration with Google Cloud, signaling one of the largest compute scale-ups in the history of artificial intelligence infrastructure.
The company plans to deploy up to one million tensor processing units by 2026, a deal worth tens of billions of dollars that is expected to bring more than a gigawatt of computing capacity online.
According to Anthropic, the additional compute will fuel both AI research and commercial product development, including advancements to its Claude AI models. The expansion reflects Anthropic’s rapid growth: the company now serves more than 300,000 business customers, and its number of large enterprise accounts (those generating more than $100,000 in annual run-rate revenue) has risen nearly sevenfold over the past year.
“Anthropic’s choice to significantly expand its usage of TPUs reflects the strong price-performance and efficiency its teams have seen with TPUs for several years,” said Thomas Kurian, CEO of Google Cloud. “We are continuing to innovate and drive further efficiencies and increased capacity of our TPUs, building on our already mature AI accelerator portfolio, including our seventh generation TPU, Ironwood.”
Partners in prime
The agreement underscores the deepening relationship between the two companies. Google Cloud has been one of Anthropic’s earliest and most important partners, providing both infrastructure and ecosystem support.
The expanded partnership comes as hyperscale cloud providers compete fiercely to secure AI workloads, with companies like Anthropic, OpenAI, and Cohere driving massive demand for specialized hardware.
Krishna Rao, CFO of Anthropic, emphasized the move’s importance in sustaining the company’s growth trajectory. “Anthropic and Google have a longstanding partnership and this latest expansion will help us continue to grow the compute we need to define the frontier of AI,” he said. “Our customers — from Fortune 500 companies to AI-native startups — depend on Claude for their most important work, and this expanded capacity ensures we can meet our exponentially growing demand while keeping our models at the cutting edge of the industry.”
Anthropic’s compute strategy is notably diversified. Rather than relying on a single hardware vendor, the company employs a multi-platform approach that spans Google’s TPUs, Amazon’s Trainium, and Nvidia’s GPUs. This strategy allows Anthropic to balance cost, performance, and scalability while maintaining strong ties across the cloud ecosystem.
The company reaffirmed its ongoing commitment to Amazon Web Services, describing it as its “primary training partner and cloud provider,” and highlighting joint work on Project Rainier — a vast AI supercomputing cluster featuring hundreds of thousands of chips distributed across US data centers.
With this expansion, Anthropic aims to meet the growing computational demands of training and deploying ever-larger models. The company has said the new capacity will also support expanded safety testing, alignment research, and scaling work, core priorities as AI systems become increasingly capable and complex.
In a move that signals the rapid commoditization of high-end AI capabilities, Anthropic recently released Claude Haiku 4.5, its latest small, lower-cost model.