October 14, 2025


AGI will be solar powered: Part I

By Joel Jean

This is the first of a two-part manifesto on why solar will power the age of artificial intelligence.

AI and solar are both scaling exponentially, and they need each other more than anyone realizes. 

Solar + storage can power AI.

“But AI runs 24/7 and solar is intermittent—you can’t run a data center on sunlight.”

We’ve all heard it a thousand times. But it’s missing the point.

The physics works. 

The economics work.

What doesn't work is trying to build AGI on 20th-century energy infrastructure.

AI needs solar, storage, and the grid—not new baseload power

AI doesn’t need baseload the way you think

If you really want baseload solar, here’s a simple recipe: Overbuild PV capacity by 5x or more, add tens of hours of battery storage, and voila—24/7 solar.

Exhibit A: Masdar.

The UAE's state-owned renewable energy company is building the world's first gigawatt-scale solar plant delivering round-the-clock power. 5GW PV + 19GWh storage = 1GW of continuous power.

It costs ~$6/W in Abu Dhabi today—expensive, but still only about 12% of what an AI data center costs (~$50/W capex). 
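The Masdar arithmetic above can be sanity-checked in a few lines, using only the figures quoted in this post (not project data):

```python
# Sanity check on the Masdar-style baseload-solar arithmetic,
# using the figures quoted in this post (5 GW PV + 19 GWh storage -> 1 GW firm).

pv_gw = 5.0            # installed PV capacity (GW)
storage_gwh = 19.0     # battery storage (GWh)
firm_gw = 1.0          # continuous output target (GW)

overbuild = pv_gw / firm_gw             # PV overbuild factor (~5x)
storage_hours = storage_gwh / firm_gw   # hours of firm output from batteries

capex_per_w_plant = 6.0   # ~$6/W for the full PV + storage plant (quoted)
capex_per_w_dc = 50.0     # ~$50/W for an AI data center (quoted)

print(f"PV overbuild: {overbuild:.0f}x")
print(f"Storage: {storage_hours:.0f} hours at full output")
print(f"Plant as share of data-center capex: "
      f"{capex_per_w_plant / capex_per_w_dc:.0%}")
```

The last line is why the "$6/W is expensive" objection misses: the power plant is a rounding error next to the chips it feeds.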

And while overbuilding PV may sound land-intensive, the actual land needed is surprisingly modest. A 100MW data center covers ~100 acres. Powering it 24/7 with solar would take 2,500 acres, or 4 square miles. That's smaller than a single airport. High-efficiency perovskite tandems could then cut that by 30–50%. And solar doesn’t need to be next door—just on the same grid.
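The land math above is simple enough to write out explicitly. A minimal sketch, using this post's rules of thumb (~1 acre/MW for the data center, ~5 acres/MW for utility-scale PV, 5x overbuild):

```python
# Back-of-envelope land math for powering a 100 MW data center
# with round-the-clock solar, using this post's rules of thumb.

dc_mw = 100
dc_acres_per_mw = 1    # hyperscale data center footprint (~1 acre/MW)
pv_acres_per_mw = 5    # utility-scale PV (~5 acres/MW)
overbuild = 5          # PV overbuild factor for 24/7 output

dc_acres = dc_mw * dc_acres_per_mw                # ~100 acres
pv_acres = dc_mw * overbuild * pv_acres_per_mw    # ~2,500 acres
sq_miles = pv_acres / 640                         # 640 acres per square mile

print(f"Data center: {dc_acres} acres")
print(f"24/7 solar: {pv_acres} acres (~{sq_miles:.1f} sq mi)")

# Perovskite tandems at ~3 acres/MW (per Technical note 2) cut the footprint by 40%:
tandem_acres = dc_mw * overbuild * 3
print(f"With tandems: {tandem_acres} acres")
```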

Baseload solar is coming—but AI doesn’t need to wait for it. 

AI only needs massive amounts of new baseload power if you assume two things: flat inflexible load and capacity-limited grids. Both assumptions are fading.

  1. Not every AI workload is 24/7 or latency-bound. Real-time inference may need high uptime, but training, fine-tuning, and background compute can increasingly flex. Hyperscalers like Google are already implementing demand response. Load elasticity doesn’t eliminate the need for firm power, but it reshapes when and how much we need.
  2. Most data centers will stay grid-connected for minimum cost and maximum reliability. And the grid isn’t “full” 24/7—it’s only maxed out during peak hours. We only need new capacity to cover those hours (i.e., peakers), not baseload for the rest of the time when spare capacity sits idle.
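The flexibility argument in point 1 can be sketched as a toy scheduler that packs deferrable compute (training, fine-tuning, batch jobs) into whatever grid headroom exists outside peak hours. All numbers here are invented for illustration:

```python
# Toy illustration of load flexibility: shift deferrable training compute
# into hours where the grid has spare capacity. Profiles are made-up numbers.

grid_headroom = [30, 30, 25, 20, 5, 5, 10, 25]  # spare MW in each time block
flexible_load = 80                              # total MW-blocks of deferrable compute

# Greedy fill: run as much flexible compute as each block's headroom allows.
schedule = []
remaining = flexible_load
for headroom in grid_headroom:
    used = min(headroom, remaining)
    schedule.append(used)
    remaining -= used

print("Compute scheduled per block:", schedule)
print("Unserved flexible load:", remaining)
```

The point is not the scheduler (real demand-response systems are far more sophisticated) but the shape of the answer: the peak-hour blocks stay untouched, so no new baseload is required to serve the flexible load.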

Solar and storage can power the vast majority of hours

The winning formula is a diverse generation mix: ~90% solar, wind, and storage—with gas, geothermal, and nuclear filling the gaps. Flexible loads and grid diversity already make solar the cheapest incremental power for most new data centers. Soon it will be the obvious choice.

Solar won’t be the only energy source powering AI, but it will be the dominant one.

The economics: Competitive today, dominant tomorrow

Firm solar electricity—PV plus batteries or gas peakers to guarantee output at times of maximum capacity need—costs ~$100/MWh according to Lazard. That’s on par with gas combined-cycle plants. 

But the curves are moving in opposite directions. 

Solar and battery costs are falling ~20% with every doubling of cumulative production.

Utility-scale PV LCOE is dropping even faster. While silicon PV is approaching its efficiency ceiling, perovskite tandem technology extends the runway for efficiency gains and cost reductions for decades. Higher efficiency means fewer panels, less racking, and smaller footprints—cascading 20%+ cost reductions across the system.
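The 20%-per-doubling figure is a learning rate (Wright's law), and it compounds quickly. A minimal sketch with illustrative normalized costs, not a forecast:

```python
# Wright's law: cost falls ~20% with each doubling of cumulative production.
# Illustrative projection from a normalized starting cost of 1.0.

def cost_after_doublings(cost0: float, doublings: float,
                         learning_rate: float = 0.20) -> float:
    """Cost after a given number of cumulative-production doublings."""
    return cost0 * (1 - learning_rate) ** doublings

for d in range(5):
    print(f"{d} doublings: {cost_after_doublings(1.0, d):.3f}x today's cost")
```

Three doublings (8x cumulative production) roughly halve the cost, which is why an exponentially growing industry keeps outrunning static forecasts.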

Gas turbines are heading the other way.

Bloomberg reports over $400B of planned gas plants at risk because combined-cycle turbine manufacturers can’t keep up. And investors looking to expand turbine capacity need to think twice. Gas will remain an important backup resource—but that’s exactly the problem for new investment. When solar and storage are riding down an exponential learning curve, a flat or rising gas cost curve looks more like an insurance policy and less like a growth bet.

[Chart: New solar is cheap, clean, and firm (NextEra)]

So why isn’t every data center solar powered?

Three reasons—all temporary.

  1. Speed: Existing power is faster than any new power. AI companies are grabbing whatever they can get—coal, gas, hydro, nuclear. But the grid can’t keep up. Since 2020, wholesale electricity prices have more than doubled in some parts of the US—mostly within 50 miles of data centers—triggering backlash.
  2. Location: Data centers need fiber infrastructure and low latency, which doesn’t always align with the best solar sites. But as the cost of solar and storage falls, everywhere becomes a good solar site.
  3. Risk: No hyperscaler wants to bet a $500M training run on an untested solar + storage microgrid. Fair. But that’s an execution risk, not a physics problem. 

Behind-the-meter solar is starting to bypass the slowest part of interconnection: transmission approval. This isn’t a simple fix for the entire interconnection queue. But every project that generates power where it’s consumed lightens the load on an overstretched grid while accelerating deployment. 

As costs keep falling and solar-first data centers prove reliable, these barriers will evaporate.

The shift is already happening

NextEra has 8.8GW of solar projects lined up to serve data centers and other growing loads by 2027.

Google is building industrial parks powered by gigawatts of renewables—solar, wind, and storage running side by side with servers. 

Why? Solar deploys faster than anything else—12–18 months versus 3–5+ years for gas, wind, or nuclear. 

The AI infrastructure world is trending toward co-located and behind-the-meter renewables. Hyperscalers expect over a quarter of data centers to be fully powered by onsite generation by 2030.

Some will go even further—off-grid solar campuses with storage and gas backup. No interconnection queues. No transmission bottlenecks. No utility markup. One analysis found such systems could meet 90% of lifetime energy demand from solar for $109/MWh—cheaper than restarting Three Mile Island. 

AI doesn’t need miracle energy technology. It needs energy that scales. 

And nothing scales like solar.

Solar isn't just competitive—it's the only energy source that can scale fast enough and far enough to match the exponential growth of AI.

Stay tuned for Part II: Why solar PV will be the foundation of the AI age.

Technical notes

  1. Baseload solar costs: The $6/W figure for the 24/7 Masdar project represents near-optimal conditions—high irradiance and low construction costs. In cloudy or high-latitude US locations like Boston, we would need ~2.5x the solar capacity to generate the same average output (kWh/day) in worst-case winter conditions, plus additional storage or firm backup to handle extended cloudy periods (dunkelflauten). Seasonal lulls generally require geographic diversity or long-duration storage. Since solar represents ~45% of total system cost for a utility-scale PV + storage system (per NREL data for “normal” non-overbuilt systems with a similar PV-to-storage ratio), a US baseload solar plant might run ~$10/W today. Still expensive by solar standards, but only a 20% premium on AI data center capex—worthwhile for clean power that deploys in 12–18 months.
  2. Land requirements: A modern hyperscale data center typically has a total footprint of ~1 acre per megawatt of load—or a power density of ~250W/m2. A typical utility-scale solar project needs ~5 acres per MW = 50W/m2. To provide continuous (24/7) solar, we would need to overbuild PV by ~5x and add storage, so baseload solar would take up ~25x the land area of the data center itself. That sounds large—but solar doesn’t need to be adjacent. Even 2,500 acres of PV for a 100MW data center is only 4 square miles—smaller than many airports or industrial parks. And with perovskite-silicon tandems boosting module efficiency by 30–50%, the footprint could drop to ~3 acres per MW—roughly 15x the data center footprint, or ~1,500 acres of PV for the same 100MW load.
  3. Firm solar costs: The ~$100/MWh figure for firm solar (as of June 2025, per Lazard) represents the solar LCOE plus levelized firming costs, given the current US grid mix. Firming costs reflect the additional capacity payments—priced at the regional Net Cost of New Entry (Net CONE)—required to supplement solar and bring the combined system’s capacity contribution to 100% (i.e., equivalent to a fully firm resource). Net CONE represents the capital and operating costs of a new firm resource (gas peakers in most ISOs (MISO, SPP, PJM, ERCOT) or 4-hour battery in CAISO) minus expected energy market revenues.
  4. PV technology and cost impacts: Moving from 22% silicon to 30% perovskite tandem efficiency reduces module count by ~27% for equivalent capacity, which reduces area-related balance-of-system (BOS) costs (racking, wiring, labor, land) and O&M costs (cleaning, inspection, equipment replacements) proportionally per watt. Efficiency gains plus continued cost-curve improvements could drive 30–50% reductions in utility-scale solar costs over the next decade, even as silicon approaches theoretical limits.
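Two of the technical notes' figures follow directly from the stated assumptions and can be reproduced in a few lines (the 45% solar cost share, 2.5x multiplier, and module efficiencies are the notes' assumptions, not independent data):

```python
# Two quick checks on the technical notes' arithmetic.

# Note 1: scaling the ~$6/W Abu Dhabi plant to a cloudy US site.
# Solar is ~45% of system cost; a Boston-like site needs ~2.5x the PV.
capex_abu_dhabi = 6.0   # $/W for the whole PV + storage plant
solar_share = 0.45      # solar's share of total system cost (NREL-based assumption)
pv_multiplier = 2.5     # extra PV needed for worst-case winter output
capex_us = capex_abu_dhabi * (solar_share * pv_multiplier + (1 - solar_share))
print(f"Estimated US baseload-solar capex: ~${capex_us:.0f}/W")

# Note 4: module count at equal capacity falls inversely with efficiency.
eff_si, eff_tandem = 0.22, 0.30
reduction = 1 - eff_si / eff_tandem
print(f"Module count reduction (22% -> 30% efficiency): {reduction:.0%}")
```

The first check lands at ~$10/W and the second at ~27%, matching the figures quoted in notes 1 and 4.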

Thanks to Patrick Brown, Jon Lin, Dave Bloom, Nasim Sahraei, and Miles Barr for suggestions and for reading drafts!