Nvidia's Customer Risk: 40% of Revenue from Two Buyers

Nvidia’s latest quarterly earnings report showcased historic growth, but a detail buried in its SEC filing reveals the precarious foundation of its success. The company disclosed that nearly 40% of its record $46.7 billion in Q2 revenue originated from just two unidentified customers. This extreme revenue concentration highlights a significant vulnerability for the chipmaker, as its largest buyers, widely believed to be the same tech giants fueling this boom, are aggressively developing their own in-house AI chips to reduce this exact dependency. While the AI infrastructure arms race is padding Nvidia’s bottom line today, the disclosure brings Nvidia’s long-term customer concentration risk into sharp focus, demonstrating that the company’s biggest partners are also its most formidable future competitors.
Key Points
• Nvidia’s Q2 SEC filing documents that two direct customers accounted for 39% of total revenue (23% and 16%, respectively).
• This massive sales volume is a direct result of the AI infrastructure “arms race,” with hyperscalers like Microsoft, Google, AWS, and Meta investing billions in data centers.
• This heavy customer dependency, documented in Nvidia’s Q2 earnings, creates a double-edged sword: immense short-term revenue paired with significant long-term risk if a key customer alters its strategy.
• This risk is amplified by an established trend: big tech companies are developing their own AI chips to compete with Nvidia, with custom silicon like Google’s TPU and Microsoft’s Maia designed to reduce vendor reliance.
Six Buyers, One Fortune
The scale of Nvidia’s customer concentration is striking. According to its official Form 10-Q filing, “Customer A” was responsible for 23% of total Q2 revenue, while “Customer B” contributed another 16%. Combined, these two entities represent 39% of the company’s sales, a figure that underscores the immense purchasing power of a select few. The dependency runs even deeper, as the filing also notes that four other customers contributed an additional 46% of revenue.
This means a mere six direct customers, likely OEMs or distributors fulfilling massive orders, were responsible for an astonishing 85% of Nvidia’s total sales. While the filing doesn’t name Nvidia’s two biggest customers, the end users driving this demand are the handful of tech titans building the world’s AI infrastructure. This structure, with roughly 40 percent of Nvidia’s revenue coming from just two customers, forms the factual basis for analyzing the company’s current market position and its inherent risks.
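The disclosed percentages add up as follows. A minimal sketch using only the figures reported above (the customer labels are the filing's own anonymized designations; the variable names are illustrative):

```python
# Revenue-concentration figures disclosed in Nvidia's Q2 Form 10-Q,
# reproduced as simple arithmetic.
disclosed_shares = {
    "Customer A": 0.23,  # 23% of total Q2 revenue
    "Customer B": 0.16,  # 16% of total Q2 revenue
}
other_four_combined = 0.46  # four additional customers, combined

top_two = sum(disclosed_shares.values())
top_six = top_two + other_four_combined

q2_revenue_bn = 46.7  # total Q2 revenue, in billions of dollars

print(f"Top two customers: {top_two:.0%}")                    # 39%
print(f"Top six customers: {top_six:.0%}")                    # 85%
print(f"Top-two revenue:   ~${q2_revenue_bn * top_two:.1f}B")  # ~$18.2B
```

In dollar terms, the two unnamed buyers alone account for roughly $18 billion of a single quarter's sales, which is why a strategy change at either one would be immediately visible in Nvidia's results.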
Silicon’s Gold Rush: The Hyperscaler Billions
Nvidia’s sales figures are a direct reflection of a massive, industry-wide technology shift. A global “AI Capex Tsunami,” as described by Sequoia Capital, is underway, with a few cloud service providers (CSPs) leading the charge. These companies are engaged in an arms race to build data centers capable of handling the immense computational demands of next-generation AI models. Nvidia’s CFO, Colette Kress, confirmed that large CSPs accounted for about half of the company’s data center revenue, as reported by CNBC.
The spending figures are astronomical. According to a Reuters report, the four largest cloud providers are on track to spend nearly $200 billion in capital expenditures in 2025. Meta alone has forecast up to $40 billion in spending, driven almost entirely by AI. This context explains why Nvidia’s revenue is so concentrated: only a few organizations on the planet are buying AI hardware at this scale, funneling their historic investments through Nvidia’s product lines.
Dancing on the Edge of Dependency
Industry analysts view this customer concentration as a classic high-risk, high-reward scenario. On one hand, the immense and predictable orders from cash-rich tech giants provide Nvidia with unparalleled revenue visibility. As Gimme Credit analyst Dave Novosel noted, these customers are expected to “spend lavishly on data centers over the next couple of years.” This demand is what has propelled Nvidia to its current market capitalization and cemented its role as the primary arms dealer in the AI gold rush.
On the other hand, this dependency is a significant vulnerability. A strategic shift by even one of these key customers could have an outsized negative impact on Nvidia’s revenue. According to Morgan Stanley Research, the share of custom in-house chips is expected to grow significantly over the next 3-5 years as hyperscalers seek to optimize performance and reduce costs. This trend, coupled with the potential for a rival like AMD to gain traction in what some analysts call the AI chip wars, makes Nvidia’s reliance on a few buyers a critical point of concern.

When Customers Craft Rivals
The concentration risk is not theoretical; it is an active and well-funded strategic initiative within Nvidia’s largest indirect customers. These tech giants are investing billions to develop their own custom AI accelerators, a move aimed directly at reducing their reliance on vendors like Nvidia. This dynamic, in which big tech develops its own AI chips to rival Nvidia’s, represents the most tangible threat to the company’s long-term dominance.

Examples of this trend are well-established:
- Google’s Tensor Processing Units (TPUs) have been powering its internal AI workloads for years and are a core part of its cloud offering.
- Amazon’s Trainium and Inferentia chips are designed specifically for training and inference within the AWS ecosystem, offered as cost-effective alternatives to GPUs.
- Microsoft’s Maia AI accelerator was developed with the explicit goal of “optimizing performance and cost” for its Azure cloud services, as stated in its announcement.
These projects demonstrate that Nvidia’s biggest buyers are simultaneously its most motivated and well-resourced future competitors.
The Gilded Cage of Success
Nvidia’s Q2 earnings report paints a picture of a company at the zenith of its power, fueled by an unprecedented wave of AI investment. Yet, the same report reveals a structural fragility: its fortunes are tied to a very small number of customers who are actively working to build their own alternatives. This dynamic creates a fundamental tension between short-term success and long-term stability. The documented rise of custom silicon is not a distant threat but a present-day reality. The central question for Nvidia is no longer about maintaining its technological lead, but about how it will navigate a market where its most important partners are also planning for a future without it.
