What Is Compute as an Asset Class? Why AI Infrastructure Is the New Oil
AI compute is scarce, standardized, and price-volatile—making it a candidate for futures trading. Learn why compute is becoming a new asset class.
The New Scarcity: Why Compute Is Being Treated Like a Commodity
In 1973, an oil embargo changed how the world thought about energy. Overnight, something that had been treated as abundant infrastructure became a strategic asset—one worth hoarding, trading, and fighting over.
Something similar is happening with AI compute right now.
The raw processing power needed to train and run AI models—primarily delivered through high-end GPUs—is scarce, standardized, and price-volatile. That combination has pushed compute into territory that looks a lot less like “IT infrastructure” and a lot more like a financial asset class. Companies are stockpiling it. Hedge funds are investing in it. Governments are restricting its export. And early markets for trading it are beginning to emerge.
This article explains what “compute as an asset class” actually means, why AI infrastructure has taken on commodity-like characteristics, and what this shift means for enterprises building on top of AI.
What Makes Something an Asset Class?
Before calling compute an asset class, it’s worth being precise about what that means.
An asset class is a group of investments that share similar characteristics, behave similarly in markets, and are governed by similar regulations. Traditional asset classes include equities, fixed income, real estate, commodities, and cash equivalents.
For something to function as a commodity or asset class, it typically needs:
- Scarcity — Limited supply that can’t instantly scale to meet demand
- Standardization — Units that are interchangeable and comparable across sources
- Price discovery — A market mechanism that reveals what buyers will actually pay
- Fungibility — One unit of the thing is substitutable for another
- Store of value or productive capacity — It can be held or used to generate returns
AI compute—specifically GPU compute—meets most of these criteria. And that’s a relatively new development.
How AI Compute Became Scarce
For most of computing history, processing power followed Moore’s Law on a predictable curve. You could wait six months and get more compute for less money. Scarcity wasn’t really the point.
That changed when large language models proved that scale was the primary driver of AI capability. Training GPT-3 required on the order of several thousand petaflop/s-days of compute. GPT-4, Gemini Ultra, and similar models required significantly more. Each generation of frontier models has required a large multiple of its predecessor's training compute.
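The scale of these training runs can be sketched with the widely used back-of-envelope estimate of roughly 6 × parameters × tokens floating-point operations for transformer training. This is an approximation, not an exact accounting, but it shows why compute demand grew so fast:

```python
# Rough training-compute estimate using the common ~6 * params * tokens
# approximation for transformer training FLOPs (an estimate, not an exact figure).
def training_petaflop_days(params: float, tokens: float) -> float:
    total_flops = 6 * params * tokens
    petaflop_day = 1e15 * 86_400  # one petaflop/s sustained for a full day
    return total_flops / petaflop_day

# GPT-3-scale run: ~175B parameters trained on ~300B tokens
print(round(training_petaflop_days(175e9, 300e9)))  # → 3646
```

A model ten times larger, trained on ten times the data, needs roughly a hundred times the compute, which is why each generation strains supply harder than the last.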
The result: demand for high-end AI accelerators—especially NVIDIA’s H100 and now H200 and B100 chips—has massively outpaced supply.
Why Supply Can’t Keep Up
Building a cutting-edge AI chip is one of the most complex manufacturing feats in the world. NVIDIA designs the chips, but production depends on TSMC’s advanced fabrication processes in Taiwan. The entire supply chain—from chip design to packaging to data center deployment—takes years to scale.
Meanwhile:
- Data center buildout requires specialized power infrastructure that can take 18–36 months to construct
- High-bandwidth memory (HBM) used in AI chips has its own supply constraints
- Export controls introduced by the U.S. government in 2022 and expanded since have restricted which chips can ship to which countries, further fragmenting the global supply
The gap between what AI labs, cloud providers, and enterprises want and what’s actually available has created the conditions for genuine scarcity.
Prices Reflect It
When H100 GPUs launched, rental rates on cloud platforms ranged from roughly $2 to $4 per GPU per hour. By late 2023, spot market rates for H100 clusters had climbed to $8 or more per GPU per hour—sometimes higher on secondary markets.
This kind of price volatility, driven by supply-demand imbalances, is a hallmark of commodity markets. It’s the same mechanism that causes oil prices to spike when production is disrupted.
The Standardization Argument
One of the clearest signals that compute is functioning like a commodity is the degree to which pricing has standardized around specific units.
The market has converged on a few key benchmarks:
- GPU-hours — One GPU (usually an H100 or equivalent) running for one hour
- H100 clusters — Banks of 8, 32, 64, or more GPUs as a unit of capacity
- FLOPS (floating-point operations per second) — The raw computational throughput of a system
These units are comparable across providers. Whether you’re renting from CoreWeave, Lambda Labs, AWS, or Google Cloud, an H100 is an H100. The underlying capability is interchangeable, which is the definition of fungibility.
This standardization is what makes price discovery possible—and what makes futures markets feasible.
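Because the unit is standardized, quotes from different providers can be reduced to a single comparable number. A minimal sketch, using hypothetical per-hour rates and the H100's roughly 0.99 petaFLOPS peak dense BF16 throughput as the normalizing constant:

```python
# Hypothetical hourly quotes (USD per H100 GPU-hour); real rates vary by
# provider, region, and contract term.
quotes = {"provider_a": 2.49, "provider_b": 3.20, "provider_c": 4.10}

H100_DENSE_BF16_PFLOPS = 0.989  # ~0.99 petaFLOPS peak dense BF16, an approximation

# Normalize each quote to cost per petaFLOP-hour of peak throughput
cost_per_pflop_hour = {
    name: price / H100_DENSE_BF16_PFLOPS for name, price in quotes.items()
}
cheapest = min(cost_per_pflop_hour, key=cost_per_pflop_hour.get)
print(cheapest)  # → provider_a
```

This kind of normalization is exactly what commodity markets rely on: once units are comparable, price discovery follows.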
Compute Futures: The Emerging Trading Layer
The most concrete evidence that compute is being treated as an asset class is the emergence of compute futures markets.
Several platforms have begun offering or experimenting with mechanisms that let buyers lock in future compute access at a fixed price. The logic is identical to oil futures: if you know you’ll need a large GPU cluster in six months for a training run, and you’re worried about price increases or availability, you buy a forward contract now.
Why Compute Futures Make Sense
AI labs and large enterprises face genuine planning problems:
- Training a frontier model can require thousands of GPUs for weeks or months
- Securing that capacity on short notice is often impossible
- Price swings between contract negotiation and actual usage can be significant
Forward contracts and futures solve these problems by letting both buyers and sellers hedge. The seller (a cloud provider or GPU lessor) gets revenue visibility. The buyer gets cost certainty and guaranteed access.
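The payoff logic is simple to state in code. A minimal sketch with hypothetical prices and cluster sizes, comparing a forward contract locked in today against paying a later spot price:

```python
# Sketch of the hedging logic: lock in a forward price now vs pay spot later.
# All prices and cluster sizes are hypothetical.
def hedged_cost(gpu_hours: float, forward_price: float) -> float:
    return gpu_hours * forward_price

def unhedged_cost(gpu_hours: float, spot_price: float) -> float:
    return gpu_hours * spot_price

need = 512 * 24 * 30                 # 512 GPUs for 30 days = 368,640 GPU-hours
forward = hedged_cost(need, 2.80)    # price agreed today
spot_up = unhedged_cost(need, 4.50)  # spot has spiked by delivery time
print(spot_up - forward)             # cost avoided by hedging
```

If spot instead falls below the forward price, the buyer overpays relative to the market; that symmetry is what makes the contract a hedge rather than a bet.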
Who’s Building These Markets
Several companies are working on compute markets with futures-like characteristics:
- Foundry and similar platforms allow companies to trade reserved GPU capacity
- CoreWeave has signed long-term compute contracts with AI labs that function economically like fixed-delivery supply agreements
- Startups specifically building compute exchanges—essentially GPU commodity markets—have attracted venture funding
The infrastructure for a proper compute futures market is still early. But the economic logic is established, and institutional interest is there.
The Oil Analogy: Where It Holds and Where It Breaks Down
The comparison between AI compute and oil is widespread for good reason—but it’s not perfect.
Where the Analogy Works
Scarcity drives everything. Just as oil-producing nations gained outsized geopolitical leverage because they sat on reserves, nations and companies with large GPU stockpiles hold disproportionate power over AI development. NVIDIA briefly became the world’s most valuable company in 2024 in part because it sits at the chokepoint of the entire AI supply chain.
Compute is productive infrastructure. Oil powers physical machines; compute powers AI models. Both are inputs to economic activity at massive scale, which is why both attract serious capital.
Price volatility is structurally similar. Oil prices spike when supply is disrupted. GPU rental prices spike when a major model release or geopolitical restriction tightens supply. The mechanism is the same.
Reserves matter strategically. Countries like Saudi Arabia measure their power partly in proven oil reserves. Today, the U.S. export controls on advanced AI chips to China are explicitly designed to prevent China from building up compute reserves sufficient to train frontier models.
Where the Analogy Breaks Down
Compute doesn’t deplete. A GPU used to run inference on a model isn’t consumed by that use. It can run the next job immediately after. Oil, once burned, is gone. This makes compute more like an electricity-generating asset than a raw material.
Compute improves. Each generation of chips delivers more capability at lower cost per flop. Oil doesn’t get more energetic over time. This means holding compute as a long-term store of value is complicated—the asset you buy today may be obsolete in two years.
Software matters enormously. The value of a barrel of oil is fairly stable regardless of who holds it. The value of a GPU cluster depends heavily on the software, model weights, and systems running on top of it. A poorly utilized cluster is worth much less than a well-utilized one.
Despite these differences, the core point stands: compute has taken on strategic and financial importance that was previously reserved for physical commodities.
Why Enterprises Are Treating Compute as a Balance Sheet Asset
For large organizations, the shift in how compute is valued has practical implications beyond the investment thesis.
Compute as a Moat
Companies that secured large GPU allocations early—either by buying hardware outright or signing long-term contracts with cloud providers—now have a genuine competitive advantage. They can run more experiments, train more specialized models, and serve more users at lower marginal cost than competitors who are buying capacity on spot markets.
This is why hyperscalers have committed to spending hundreds of billions of dollars on AI infrastructure over the next several years. Microsoft, Google, and Amazon aren’t just building data centers to serve their own AI products—they’re securing the infrastructure that will power the next decade of AI-dependent software.
The CapEx vs. OpEx Trade-off
For most enterprises, compute is an operating expense: you pay for cloud resources as you use them. But for companies that expect sustained, large-scale AI usage, there’s a growing argument for treating compute as capital expenditure—owning or reserving capacity rather than renting it.
This mirrors what happened with cloud computing itself. Early adopters of reserved cloud instances locked in pricing that looked expensive at the time but proved valuable as on-demand rates rose.
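The reserve-or-rent decision comes down to a break-even utilization calculation. A minimal sketch with hypothetical effective rates:

```python
# Break-even utilization for reserving capacity vs paying on demand.
# Rates are hypothetical effective hourly costs.
def breakeven_utilization(reserved_hourly: float, on_demand_hourly: float) -> float:
    """Fraction of hours you must actually use for the reservation to pay off."""
    return reserved_hourly / on_demand_hourly

# e.g. a 1-year reservation effectively costing $2.00/hr vs $3.50/hr on demand
u = breakeven_utilization(2.00, 3.50)
print(f"{u:.0%}")  # → 57%
```

Above that utilization the reservation wins; below it, on-demand is cheaper. The harder part in practice is forecasting utilization two years out while the underlying hardware keeps improving.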
Government Policy as a Signal
Governments don’t regulate commodities unless they matter. The fact that the U.S. has imposed detailed export controls on AI chips—with specific restrictions on chip performance thresholds, memory bandwidth, and interconnect capabilities—signals official recognition that compute is a strategic resource, not just a technology product.
The EU’s AI Act includes provisions around compute access. The G7 has discussed coordinating on AI chip supply chains. These are the kinds of policy interventions typically associated with critical commodities.
The Infrastructure Stack: What “AI Infrastructure” Actually Includes
When people say “AI infrastructure is the new oil,” they’re usually talking about the full stack of assets that make AI compute possible—not just GPUs in isolation.
Layer 1: Silicon
The chips themselves—NVIDIA H100/H200/B100, AMD MI300X, Google TPUs, AWS Trainium/Inferentia, and custom accelerators from various AI labs. This is the raw compute substrate.
Layer 2: Data Centers
Purpose-built facilities with the power density, cooling systems, and networking required to run large GPU clusters. A modern AI data center requires 50–150 megawatts or more of power capacity. Building one takes years and billions of dollars.
Layer 3: Networking
Interconnect infrastructure—InfiniBand, high-speed Ethernet, custom optical networking—that allows GPUs within a cluster to communicate fast enough to function as a unified system. Training a large model across thousands of GPUs requires networking that most enterprise data centers weren’t built for.
Layer 4: Power and Energy
AI data centers are power-hungry at a scale that has become a genuine concern for grid operators. The energy infrastructure required to run large AI clusters—and the renewable energy needed to make it sustainable—is its own investment category.
Layer 5: Software and Orchestration
Model weights, training frameworks (PyTorch, JAX), inference engines (vLLM, TensorRT), and orchestration systems (Kubernetes, Slurm) that turn raw compute into useful AI capability. This layer is where most enterprise teams actually operate.
Understanding the full stack matters because investment and scarcity exist at every layer, not just at the GPU level.
What This Means for Teams Building AI Applications
If you’re not a hyperscaler or an AI lab, the compute-as-asset-class dynamic still affects you—just differently.
Cost planning has become harder. If you’re building on top of AI APIs, the models you depend on are built on compute that has volatile pricing. API prices have generally fallen as efficiency improves, but supply shocks can reverse that trend.
Model access may become a differentiator. Companies that have direct access to high-performance models—through enterprise agreements, custom deployments, or early partnerships—may outcompete those relying purely on public API access.
Abstraction layers become more valuable. As the underlying compute landscape gets more complex, tools that abstract away infrastructure management—letting teams focus on building applications rather than managing GPU clusters—become significantly more important.
This is where platforms that sit above the infrastructure layer deliver real value.
How MindStudio Fits Into the Compute Landscape
MindStudio sits at the application layer—above the infrastructure complexity that defines the compute-as-asset-class conversation. That positioning is intentional, and it’s worth understanding what it means practically.
When you build an AI agent or workflow on MindStudio, you’re accessing over 200 AI models—Claude, GPT-4o, Gemini, and more—without managing API keys, rate limits, retries, or infrastructure. The compute questions (which model to use, how to scale, what happens when a model is unavailable) are handled at the platform level.
For teams building enterprise AI applications, this matters because the compute landscape is changing fast. Models improve, pricing shifts, and new providers emerge. An abstraction layer that lets you swap models without rebuilding your application protects you from lock-in to any single compute provider.
If you’re thinking about deploying AI agents for business workflows, the ability to access multiple models through a single interface—and route tasks to the most cost-effective model for each job—is a practical hedge against compute price volatility.
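The routing idea can be sketched in a few lines. The model names, prices, and quality scores below are hypothetical illustrations, not MindStudio's actual catalog or API:

```python
# Illustrative cost-based router; model names, prices, and quality tiers are
# hypothetical, not any real provider's catalog.
MODELS = {
    "small":  {"usd_per_1k_tokens": 0.0005, "quality": 1},
    "medium": {"usd_per_1k_tokens": 0.003,  "quality": 2},
    "large":  {"usd_per_1k_tokens": 0.015,  "quality": 3},
}

def route(min_quality: int) -> str:
    """Pick the cheapest model that meets the task's quality bar."""
    eligible = {k: v for k, v in MODELS.items() if v["quality"] >= min_quality}
    return min(eligible, key=lambda k: eligible[k]["usd_per_1k_tokens"])

print(route(1), route(3))  # → small large
```

Routing cheap tasks to cheap models and reserving expensive models for hard tasks is the application-layer equivalent of hedging: it dampens the effect of any one model's price moves on your total bill.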
You can try MindStudio free at mindstudio.ai.
Frequently Asked Questions
What does “compute as an asset class” mean?
It means AI processing power—primarily delivered through GPU clusters—is being treated with the same strategic seriousness as financial assets or physical commodities like oil. It’s scarce, it’s standardized, it has volatile pricing, and large organizations are stockpiling it, trading access to it, and building financial instruments around it.
Can you actually trade compute futures?
Early compute futures markets exist, but they’re not yet mature. Several companies have launched platforms for trading reserved GPU capacity. CoreWeave and similar GPU cloud providers have also signed long-term forward contracts that function economically like futures. Fully liquid, exchange-traded compute futures are still emerging.
Why are AI chips so scarce?
High-end AI accelerators like NVIDIA’s H100 require cutting-edge semiconductor manufacturing (primarily from TSMC), specialized high-bandwidth memory, and complex packaging that takes years to scale. Demand from AI labs, cloud providers, and enterprises grew faster than the supply chain could respond. U.S. export controls have also reduced the global addressable supply by restricting which chips can ship to which countries.
Is AI compute actually like oil, or is it just a metaphor?
It’s a useful analogy with real limits. Like oil, compute is a scarce input to economic activity, subject to supply shocks and geopolitical control, and increasingly subject to strategic stockpiling. Unlike oil, compute doesn’t get consumed when used, it improves over time with new chip generations, and its value depends heavily on the software running on top of it.
How does the compute shortage affect businesses not in the AI industry?
Even businesses that aren’t building AI models are affected. The cost of AI APIs, the availability of AI-powered tools, and the pace of AI product development all depend on compute availability. Price volatility in compute markets flows through to API pricing, model availability, and the competitive dynamics of AI-powered software.
What is the “picks and shovels” investment thesis for AI compute?
It refers to the historical pattern from the Gold Rush, where the people who sold mining equipment (picks and shovels) often did better than the miners themselves. Applied to AI, it means that companies supplying the infrastructure for AI—chip manufacturers, data center operators, power providers, networking companies—may be more reliable investments than AI application companies, because they benefit regardless of which AI application “wins.”
Key Takeaways
- AI compute has taken on commodity-like characteristics—scarcity, standardization, price volatility, and strategic importance—that justify treating it as a new asset class.
- The comparison to oil is meaningful: compute scarcity creates geopolitical leverage, drives stockpiling behavior, and has attracted serious financial instruments including early futures markets.
- The full AI infrastructure stack includes silicon, data centers, networking, power, and software—scarcity and investment exist at every layer.
- For enterprises, the practical implications include harder cost planning, potential model access as a competitive differentiator, and increasing value for abstraction layers that hide infrastructure complexity.
- Tools that sit above the compute layer—letting teams access multiple models without managing infrastructure—provide a practical hedge against the volatility that defines compute markets.
If you’re building AI applications and want to abstract away the infrastructure complexity, MindStudio lets you access 200+ models and deploy AI agents without touching the underlying compute layer.