A new agreement with Google and Broadcom underscores how access to chips, power and data center capacity is becoming as strategically important as the intelligence of the models themselves.
Anthropic’s decision to expand its compute partnership with Google and Broadcom is more than a supply deal for Claude. It is a marker of where the artificial intelligence race now stands. For much of the last three years, the public conversation around generative AI has centered on model releases, benchmark scores and the rivalry among chatbots. But the announcement reported by TechCrunch on April 7 points to a deeper reality: the contest is increasingly being shaped by who can secure enough hardware, electricity, networking equipment and cloud capacity to keep frontier systems improving and reliably serving customers.
Anthropic said it had signed a new agreement with Google and Broadcom for multiple gigawatts of next-generation TPU capacity expected to begin coming online in 2027. Broadcom’s regulatory filing gave the market a more concrete sense of scale, stating that Anthropic would access approximately 3.5 gigawatts as part of that broader commitment. In an industry where compute is often discussed in abstract terms, that number matters. It suggests a level of infrastructure planning that looks less like ordinary cloud purchasing and more like the build-out of industrial capacity.
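To give the 3.5-gigawatt figure some texture, a rough back-of-envelope sketch helps. The per-chip wattage and facility overhead factor below are illustrative assumptions, not figures disclosed in the deal, but they show why analysts describe commitments at this scale in industrial rather than cloud-purchasing terms:

```python
# Back-of-envelope: what 3.5 GW of data center capacity could imply in
# accelerator counts. CHIP_POWER_W and PUE are assumed values for
# illustration only, not figures from the Anthropic/Broadcom agreement.

FACILITY_POWER_W = 3.5e9   # 3.5 gigawatts, per Broadcom's regulatory filing
PUE = 1.3                  # assumed power usage effectiveness (cooling, losses)
CHIP_POWER_W = 1_000       # assumed ~1 kW per accelerator incl. host/networking

it_power_w = FACILITY_POWER_W / PUE        # power left for IT equipment
accelerators = it_power_w / CHIP_POWER_W   # implied accelerator count

print(f"IT power: {it_power_w / 1e9:.2f} GW")
print(f"Implied accelerators: ~{accelerators / 1e6:.1f} million")
```

Under those assumptions, 3.5 gigawatts translates into accelerators numbering in the millions, which is why such commitments have to be planned years ahead alongside power transmission and construction timelines.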
The significance is not only the size of the commitment, but also what it says about the economics of modern AI. Training and running advanced models such as Claude requires enormous volumes of specialized chips, fast interconnects, resilient power supply and sophisticated data center design. Inference, the process of serving answers to users after a model is trained, has become a major cost in its own right as enterprise and consumer demand rises. That means the frontier is no longer defined solely by research breakthroughs. It is defined by whether a company can physically deliver enough computing power at a sustainable cost.
Anthropic’s own language reflects that shift. The company described the Google and Broadcom arrangement as its most significant compute commitment to date, framed around the need to support “extraordinary demand” from customers worldwide. It also said the vast majority of the new compute would be located in the United States, extending a previous $50 billion commitment to invest in American computing infrastructure. That framing is telling. The company is not presenting compute as a backend procurement issue. It is treating it as a strategic pillar of growth, resilience and national industrial capacity.
The numbers behind Anthropic’s demand help explain why. In its announcement, the company said its run-rate revenue had surpassed $30 billion, up from about $9 billion at the end of 2025. It also said the number of business customers spending more than $1 million annually had risen from more than 500 in February to more than 1,000. Whether those figures ultimately translate into durable long-term profitability, they indicate a customer base expanding quickly enough to force infrastructure decisions far in advance. Companies at this scale cannot wait for compute to become available on short notice. They have to reserve it years ahead, and often across multiple suppliers.
That, in turn, explains the structure of Anthropic’s broader infrastructure strategy. The startup has made clear that it is not relying on a single hardware stack. It says Claude is trained and run across AWS Trainium, Google TPUs and Nvidia GPUs, while Amazon remains its primary cloud provider and training partner through Project Rainier. This is not redundancy for redundancy’s sake. It is a hedge against scarcity, pricing pressure and operational risk. In a world of constrained supply, any frontier model company that depends on only one chip family or one cloud operator is exposed.
The Google-Broadcom side of the deal also highlights a second structural change in AI: the growing importance of custom silicon ecosystems. Broadcom said it had entered a long-term agreement to develop and supply custom TPUs for Google’s future generations, alongside a supply assurance agreement for networking and other components in Google’s next-generation AI racks through 2031. Anthropic is therefore plugging into more than just rented cloud capacity. It is connecting to a vertically coordinated chain involving chip design, systems integration, networking and deployment timelines that stretch several years into the future.
That matters because the AI stack is no longer modular in the way traditional enterprise computing often was. Performance, cost and availability increasingly depend on how well chips, racks, interconnects, software tooling and model architectures are optimized together. The closer a model developer sits to that integrated stack, the better its odds of improving speed, lowering serving costs and managing reliability at scale. The companies that control or deeply influence those layers gain leverage not just in product performance, but in the pace at which they can ship the next generation.
There is also a geopolitical and policy dimension. Anthropic’s emphasis on locating most of the new capacity in the United States aligns with a broader effort by major AI firms and cloud providers to present infrastructure investment as an economic and strategic national asset. The language around domestic jobs, American competitiveness and local data center build-out is becoming standard. That is partly politics, but it also reflects a practical concern: frontier AI development now depends on land, power transmission, construction timelines and supply chains that governments increasingly view through a national security lens.
The competitive implications are significant. OpenAI, Google, Meta, xAI and Anthropic are all effectively competing on two fronts at once. One is visible to the public: model quality, developer tools, enterprise adoption and consumer mindshare. The other is quieter and arguably more decisive: securing enough compute to sustain training cycles and inference demand without letting costs spiral out of control. The companies that win on the second front buy themselves more room to compete on the first.
This is why the Anthropic announcement should be read as evidence that the bottleneck in AI has migrated. There is still intense competition in algorithms, fine-tuning and product design. But the harder constraint increasingly lies beneath the software layer. A model can only be as scalable as the infrastructure that feeds it. If a company cannot obtain power, chips, networking equipment and cloud capacity on the necessary timetable, even the best research team may find its ambitions throttled by physics and procurement.
Broadcom’s filing contained a cautionary line that sharpened the point: Anthropic’s consumption of the expanded compute capacity depends on its continued commercial success. That statement is a reminder that the infrastructure race is financially risky as well as technically demanding. Multi-gigawatt commitments are enormous bets on future demand, pricing and customer retention. They assume that today’s enthusiasm for generative AI will translate into long-lived revenue streams large enough to justify industrial-scale build-outs.
For now, Anthropic appears confident that the demand is there. The company’s rising revenue run-rate, swelling roster of large-spending customers and increasingly diversified hardware footprint suggest a business trying to ensure that Claude’s next phase is not limited by lack of capacity. The message to the market is straightforward: frontier AI is no longer just about building smarter models. It is about building the factories behind them.
In that sense, the deal with Google and Broadcom may be remembered less as a supplier announcement than as a snapshot of the industry’s new center of gravity. The glamour of AI still belongs to the model. But the power increasingly belongs to the infrastructure.