At its core, the problem is a flawed forecasting methodology. Some people, much like poorly tuned language models, lean too heavily on historical data and project it into the future without understanding the dynamics underneath. This matters especially when looking at figures like Michael Burry and other skeptics, who may be missing a crucial distinction in today's market.
Historically, industries like semiconductors, consumer electronics, and commodities have been driven by human demand. Products like iPhones, CPUs, and memory chips face natural limits; once the market saturates in developed regions, growth necessarily slows. That produces a cycle of overcapacity and decline, as we saw with companies like Cisco or Intel. The landscape for GPUs, however, is fundamentally different. Here, demand may be effectively unbounded because the end user is not a human but AI systems. AI systems consume tokens to think, create, and generate wealth, which makes GPU consumption a function of compute demand rather than human utility. A person won't use dozens of smartphones, but an AI can use an ever-growing number of GPUs to produce ever-increasing value. Consequently, the risk of demand shortfall or market saturation that plagued earlier technologies doesn't apply in the same way. This is a paradigm shift that some traditional analyses may be missing.
Of course, this is a simplified framework; a detailed equity analysis would need to weigh factors like training versus inference workloads, power constraints, and scalability. But the core idea stands: GPU demand is driven by productive processes that create wealth, not by finite human consumption.
Most bears have little understanding of technology and business, focusing only on fundamentals. Burry’s depreciation argument doesn’t hold up when you consider that developers are still using A100s from 2020 and V100s from 2018. They’ll likely use the H100, H200, and B200 chips for five to six years without issue.
Most bulls don’t have a strong grasp of the technology either.
Unfortunately, you’re not wrong.
As one of those bears, I’m still cheering for Nvidia.
Your point about iPhone sales slowing once the developed world was saturated is accurate, and a similar dynamic applies to GPU sales. Constraints like electricity production limits and export controls will slow growth, and competition among manufacturers will increase, much like Android phones competing with iPhones.
Michael Burry’s challenge was timing the slowdown correctly. Such slowdowns are difficult to predict, and even when identified, they are temporary. Stocks of companies that produce in-demand goods tend to rise over time, despite occasional downturns or failures.
Generally, it’s more effective to invest in successful companies and hold those shares long-term.
The key difference is that iPhones are consumer products, while companies competing to develop functional agentic AI—not necessarily true AGI—are willing to invest heavily in more efficient hardware. As Satya Nadella noted, their data centers are fungible; they can rotate older hardware out of frontier development and still use it productively elsewhere.
The core issue is that many people fail to distinguish between AI and LLMs, with LLMs being just a narrow, lower-margin segment of the broader AI field. This confusion often results in shaky arguments about fundamentals.
People are concerned about the depreciation of older GPUs. Even if AI could theoretically use an unlimited number of GPUs, inefficient older cards become uneconomical to run, since GPUs incur ongoing electricity and cooling costs.
Additionally, as companies purchase new GPUs, their older models depreciate and lose value on their balance sheets. If the new GPUs don’t generate enough revenue to cover their costs or at least break even, demand will eventually decline. While AI has the potential to use vast numbers of GPUs, humans are currently responsible for allocating them, as we don’t have an active artificial superintelligence managing resources.
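To make the breakeven arithmetic behind this concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (purchase price, useful life, rental rate, utilization, power cost) is an illustrative assumption, not data from any company's filings:

```python
# Back-of-the-envelope GPU payback vs. straight-line depreciation.
# Every figure below is an illustrative assumption, not real pricing.

PURCHASE_PRICE = 30_000.0    # assumed cost of one accelerator, USD
USEFUL_LIFE_YEARS = 5        # assumed depreciation schedule
RENTAL_RATE_PER_HOUR = 2.50  # assumed revenue per GPU-hour, USD
UTILIZATION = 0.70           # assumed fraction of hours actually billed
POWER_KW = 1.0               # assumed average draw incl. cooling overhead
POWER_COST_PER_KWH = 0.08    # assumed electricity price, USD

HOURS_PER_YEAR = 24 * 365

annual_depreciation = PURCHASE_PRICE / USEFUL_LIFE_YEARS
annual_revenue = RENTAL_RATE_PER_HOUR * HOURS_PER_YEAR * UTILIZATION
annual_power_cost = POWER_KW * POWER_COST_PER_KWH * HOURS_PER_YEAR
annual_operating_margin = annual_revenue - annual_power_cost

# Payback period: how long until cumulative margin covers the purchase.
payback_years = PURCHASE_PRICE / annual_operating_margin

print(f"Annual depreciation:     ${annual_depreciation:>10,.0f}")
print(f"Annual operating margin: ${annual_operating_margin:>10,.0f}")
print(f"Payback period:          {payback_years:.1f} years")
```

Under these assumptions the card pays for itself in roughly two years, well inside a five-year depreciation schedule. The argument above is that if the revenue side of this calculation weakens, the demand side eventually follows.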
The real concern is a potential demand cap and overbuilding, which are the main factors making Nvidia questionable to some. Depreciation is just a temporary argument.
Depreciation isn’t critical because GPUs only need to recoup their costs, which they do. Any long-term earnings are an added bonus. Moreover, cloud providers’ need to upgrade quickly is actually a positive for Nvidia.
The concern isn’t about overbuilding GPUs—TSMC is fully booked, and demand continues to outpace supply. Instead, the issue lies with overbuilding in areas like data centers and excessive capital expenditure by companies such as Google, OpenAI, and Microsoft. Most AI ventures are not yet profitable and have not recouped their initial investments.
The idea that GPUs easily pay for themselves is currently inaccurate. Many large companies are subsidizing these costs using cash flow from other profitable areas, as they are unwilling to fall behind in adopting AI technology.
However, AI is already generating significant profits for megacaps like Google and Meta through improved recommendations and features like AI Overviews, which have revived search growth. For pure GPU hosting providers like CoreWeave, profitability is immediate since their capacity is pre-sold before construction, making any extended use pure additional profit.
Megacap profits continue to grow, but their capital expenditures and costs are rising at the same pace. While many speculate that AI will eventually become profitable, current valuations are not yet justified. The only company truly profiting from the AI boom is NVIDIA, which supplies the essential hardware.
What AI products are actually generating revenue? Services like Gemini, OpenAI, and Claude are all operating at a loss. Data centers hosting these models may appear profitable, but that’s only because the AI providers are heavily subsidizing costs to compete, fund R&D, and train new models. If these providers don’t become profitable, they can’t sustain these losses indefinitely.
The gold rush analogy still applies: the GPU users are spending heavily, enriching manufacturers and hosting providers. But this spending is based on the expectation of future returns. If those returns don’t materialize, the market for these resources could collapse.
Michael Burry and other bears are overlooking key facts. AI overviews and recommendations are already generating significant revenue. Only 40% of Meta’s GPUs are allocated to what people typically consider “AI”; the remaining 60% support operational machine learning.
Both OpenAI and Anthropic are highly profitable on a per-unit basis. Their reported losses stem from aggressive investment in research and development, not from unprofitability.
The argument that user spending is solely driven by speculative hopes and will collapse if unmet is too flawed to merit serious discussion.
The idea that GPU demand is infinite is incorrect. AI currently functions as an advanced machine learning tool, requiring substantial computing power due to the growing volume of data needed to generate responses. Limiting the data available to an AI reduces the necessary computation and speeds up response times. The quality of the output depends on the amount of data used.
In the future, the aim is to produce faster, accurate responses using less data and computation, which would decrease the need for GPUs. It’s uncertain when AI will reach this stage, but it may take several years. Until then, computing demands will continue to rise, though not infinitely.
Your argument lacks depth.
First, AI requires computational power, not specifically GPUs. While NVIDIA produces the leading GPUs for this purpose currently, they aren’t the only option. AMD also manufactures GPUs, and Google trained Gemini 3 entirely on its proprietary TPUs.
Second, AI doesn’t “want” anything at this stage. It’s quite possible that large language models won’t lead us to superintelligence, and future breakthroughs might rely on entirely different computing paradigms, such as bioelectric systems. In any case, NVIDIA GPUs are unlikely to remain the exclusive computational resource over the next 20-30 years.
Third, in the short term—which matters for shorting—there’s good reason to suspect we’re facing oversupply or overvaluation.
Ultimately, you haven’t demonstrated why being bearish is unwise now. While betting against AI long-term is misguided, near-term skepticism can be quite profitable.
We began using AI in the second half of 2023. How can a bubble form in just two years? These individuals have been pushing the bubble narrative every year since late 2023, and they will likely continue for the next decade.
Bubbles can form over any period. Focus on the rise in share prices rather than the duration. Market moves accelerate with faster information flow and greater investor involvement.
The current concern seems to be less about competition and more about a potential AI bubble. While competition is worth monitoring, it’s worth noting that TPUs have been used to train models from AlphaGo in 2017 to Gemini 2. TPUs held 50% of the market in 2023 but lost share in 2024 and 2025 as Nvidia’s dominance grew.
Regarding the bubble argument, there’s no short-term evidence of oversupply or overvaluation, especially given strong growth and revenues. My point is to challenge the assumption that growth will suddenly stop after 2027. This belief often stems from past experiences like the dotcom bubble, but it isn’t supported by current evidence.
There is definitely an AI bubble. AGI is a winner-take-all scenario—the first to achieve it will vastly outperform everyone else. This means capital expenditures by other companies are either a total waste or, at best, will provide additional computing resources that the AGI creator may purchase, assuming GPUs are even needed.
You’re also overlooking how macroeconomic factors could trigger a bear market in the short term, regardless of the technology’s potential.
I consider it a bubble because many companies, such as Meta and OpenAI, are spending heavily without a realistic path to AGI.
Current strong growth and revenues rely on future earnings from these companies, which are unlikely to sustain today’s valuations.
This isn’t baseless if you recognize the circular revenue streams and the winner-take-all nature of AGI. The real challenge is identifying the eventual winner.
You’ve presented five or six different bearish arguments, but none of them are particularly strong or well-explained. It seems like you’re starting from a gut feeling and then grasping at anything to justify that negative outlook.
You’ve made a series of questionable claims that don’t really merit a detailed response.
Michael Burry and other bears are often dismissed as making bizarre and specious arguments, but their concerns may have merit for several reasons.
First, AGI is a winner-take-all scenario. The advantage of a scalable, perfectly obedient workforce with 140+ IQ would quickly dismantle competitors’ advantages and compound rapidly. Companies not close to achieving AGI will waste resources and face rapid obsolescence.
Second, macro factors pose risks. Rising unemployment, shifts toward socialist policies, potential conflict with Venezuela, uncertainty around rate cuts, and China possibly restricting rare earths exports could trigger algorithm-driven selling. This may depress AI stock prices even if they are fairly valued now.
Third, OpenAI and Meta are not as strong as other AI firms. If either scales back capital expenditure, NVIDIA’s share price—and the broader AI ecosystem—could decline. NVIDIA faces significant customer concentration risk, likely with OpenAI as a top client. Disclosed risks like this could reduce NVIDIA’s market cap by 10–15%, potentially causing 20–40% drops in higher-beta AI stocks. Currently, AI economics are far from profitable, and any financing issues—such as a downturn in Meta’s ad business or inability to raise equity for OpenAI or xAI—could impact NVIDIA, which is priced for the most optimistic outcomes.
Achieving AGI is a winner-take-all scenario, but we don’t know if or when it will happen, and no one appears particularly close to it. In the meantime, highly capable AI agents can already generate significant value without needing true AGI. It’s shortsighted to overlook the potential between now and the distant possibility of AGI. Hyperscalers have long believed they must invest heavily now to avoid falling behind later. While AGI may or may not arrive eventually, these companies recognize that there is immense value to be unlocked in the interim.
Hyperscalers are compelled to invest heavily since it’s central to their business model. However, individual investors have the flexibility to invest in a variety of companies, including hyperscalers, or even take positions against them.
In the context of NVDA, the hyperscalers are all that matter. If they remain committed to buying from NVDA every quarter for years to come, nothing else really matters from NVDA’s perspective.
While this is off-topic, I’m sure you saw the discussion on X today about “transparency” and the overseas political influencer accounts. Reddit seems to be experiencing this as well. There are simply too many agendas at play, which makes meaningful conversation feel like pushing sand uphill.
There is a flaw in the “infinite demand” logic: it overlooks software efficiency and diminishing returns.
Current applications like LLMs for text and code are reaching a plateau. Adding more computing power to models is producing smaller gains, while optimization techniques such as distillation, quantization, and small language models are improving.
If we can achieve 95% of performance with only 10% of the compute using smaller, specialized models, the need for massive H100 clusters will decrease.
This argument could be wrong if entirely new paradigms emerge where AI must “think” for days to solve complex, novel problems, such as System 2 thinking. However, for current market demands in inference workloads, we are likely closer to saturation than many realize.
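As a rough illustration of the efficiency point, here is a hedged sketch of the "95% of performance at 10% of the compute" arithmetic. The query volume, per-query costs, and quality figure are all hypothetical numbers chosen to match the claim, not measurements of any real model:

```python
# Rough arithmetic behind the distillation argument: if a small model
# retains most of the quality at a fraction of the compute, the fleet
# needed to serve a fixed query volume shrinks. All numbers hypothetical.

QUERIES_PER_DAY = 100_000_000

# Hypothetical serving costs, in GPU-seconds per query.
LARGE_MODEL_GPU_SEC = 2.0          # assumed cost of the big teacher model
DISTILLED_MODEL_GPU_SEC = 0.2      # assumed 10% of the teacher's compute
DISTILLED_QUALITY_RETAINED = 0.95  # assumed 95% of teacher quality

def gpus_needed(gpu_sec_per_query: float, queries_per_day: int) -> float:
    """GPUs required to serve the load at 100% utilization."""
    seconds_per_day = 24 * 60 * 60
    return gpu_sec_per_query * queries_per_day / seconds_per_day

large_fleet = gpus_needed(LARGE_MODEL_GPU_SEC, QUERIES_PER_DAY)
small_fleet = gpus_needed(DISTILLED_MODEL_GPU_SEC, QUERIES_PER_DAY)

print(f"Fleet for teacher model:   {large_fleet:,.0f} GPUs")
print(f"Fleet for distilled model: {small_fleet:,.0f} GPUs")
print(f"Quality retained:          {DISTILLED_QUALITY_RETAINED:.0%}")
```

Under these assumptions, serving the same query volume needs a tenth of the fleet; whether total query volume then grows enough to offset that saving is exactly the open question this thread is debating.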
Please stop sharing AI-generated content.
Burry’s fund has already shut down.
Based on NVDA stock, I estimate a 70% chance it will trade sideways between $175 and $185 through December, followed by a 60% probability of an upside breakout above $190 after December 25, driven by positive earnings momentum.
Do you have any estimates for AMD?
AMD is currently range-bound and likely to remain flat through late December, trading between roughly $200 and $225 with a 65% probability. After the holidays, as liquidity returns and January catalysts such as earnings and hyperscaler capex updates emerge, there is a 55% probability of a bullish breakout toward $230–$245. A bearish breakdown to $190 or lower is less likely, with only about a 30% chance, given the bullish long-term structure and strong data-center growth trajectory.
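For what it's worth, here is a quick expected-value check on those post-holiday scenarios. The probabilities and price ranges come from the comment above; the use of range midpoints and the assignment of the residual 15% to "stays range-bound" are my own assumptions:

```python
# Quick expected-value check on the AMD scenarios above.
# Midpoints and the 15% residual "stays flat" bucket are assumptions;
# the probabilities and price ranges come from the comment itself.

scenarios = [
    ("bullish breakout ($230-$245)", 0.55, 237.5),  # midpoint of range
    ("bearish breakdown (<=$190)",   0.30, 190.0),  # top of breakdown zone
    ("stays range-bound",            0.15, 212.5),  # midpoint of $200-$225
]

expected_price = sum(p * price for _, p, price in scenarios)

for name, p, price in scenarios:
    print(f"{name:<30} p={p:.0%}  price=${price:,.1f}")
print(f"Probability-weighted price: ${expected_price:,.1f}")
```

That works out to roughly $219, barely above the middle of the current range, which is consistent with the cautious positioning described in the next comment.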
I am long on AMD but recently trimmed 500 shares to reduce downside risk while de-risking my portfolio until the sector shows a definitive upward breakout, which I believe is probable but not guaranteed in the short term.
The wealth generated isn’t infinite. AI doesn’t directly create wealth.
Electricity allowed stores to extend their hours and increase sales. The internet enabled businesses to reach customers globally, not just locally.
With AI, you aren’t generating additional revenue. Most companies are seeing returns by reducing their workforce, which means the potential upside is limited. If AI eliminates too many jobs, the economy could collapse. Who would purchase AI models if widespread unemployment left people without the means to buy anything?
Additionally, GPUs require electricity, and we currently lack infinite power generation. While Michael Burry may be considered misguided for shorting, taking the opposite extreme view is equally unwise.
Demand for GPUs is not infinite; the limit is what investors are willing to accept. Currently, OpenAI has a trillion dollars in obligations but earns less than $10 billion annually. Nvidia is distributing GPUs to companies in advance, relying on their future ability to pay. This situation resembles how Cisco operated during the dotcom bubble.
Nvidia is not giving away GPUs to everyone with a promise of future payment. Additionally, OpenAI does not have $1.3 trillion in debt or obligations.
I also don’t understand the claim about them netting $10 billion a year. They have stated they expect to operate at a loss through fiscal year 2027 and anticipate becoming profitable starting in 2028. You can choose to believe that or not, but the numbers being cited about their financial situation don’t align with the idea that they are netting $10 billion annually.
Finally, you are either accusing Nvidia of fraud by claiming they had nearly $23 billion in free cash flow in Q3, or you are making baseless comparisons to Cisco. They are not simply giving away GPUs with a promise of payment. Where do these claims originate?
OpenAI’s investors are fully committed, and as a private company, they have more flexibility than others. The other hyperscalers have strong balance sheets and ample cash, and they are determined not to fall behind in this race.
It appears that demand is robust enough that if one hyperscaler slows down, others will absorb it.
Michael Burry and other bears are often taken seriously because they correctly predicted the 2008 financial crisis. As for the claim that “demand for GPUs is not infinite,” who actually argued that it was? The same could be said for Ford cars or Apple iPhones—demand for any product has its limits. What is the point being made here?
Some of the initial data centers built with now-inefficient GPUs will eventually need replacement. In the long run, Nvidia could have a strong business just from upgrading these existing facilities.
Perhaps the metaverse represents infinity, directly linked to our consciousness.