The explosive energy demand from data centers is breaking our grid, pushing desperate developers to build their own on-site gas plants just to get online. To figure out how we avoid locking in decades of new fossil fuels, I’m joined by Camus CEO Astrid Atkinson and Princeton’s Jesse Jenkins to break down their proposed alternative. We dig into how adopting flexible grid interconnections and clean, battery-backed “power parks” can meet this massive load growth without abandoning our decarbonization goals.
David Roberts
Hello and greetings, everyone. This is Volts for March 25, 2026: “For data centers, a little flexibility goes a long way.” I’m your host, David Roberts.
By now, most Volts listeners are familiar with the crisis facing the electricity sector: after decades of plateau, demand is rising again, quickly. Giant data centers are banging on the door, demanding to be hooked up to the grid. Interconnection queues are clogged and some grids, like PJM’s, are basically maxed out. Everyone — utilities, PUCs, legislators, and the public — is scrambling to figure out how this should work, how the system can grow to accommodate the AI revolution without placing all the costs and risks on ratepayers.
One solution that is gaining considerable attention is flexibility in data centers. Conventional energy models, including those used by utilities to project demand, assume data centers are always on, always consuming at their rated level, which, in the case of some of these new data centers, is as much as a gigawatt. Guaranteeing a gigawatt of steady service 100 percent of the year, come what may, is no small thing. That’s one reason utilities have been so slow to connect these things.
But what if they didn’t have to guarantee 100% of service 100% of the time? What if, say, 10 to 20% of the time, data centers could survive on their own, running on their own batteries and backup generators?
That little bit of flexibility could enable utilities to avoid tens of millions of dollars in grid upgrades and connect many more data centers much faster — at least, that is the argument in a new study and white paper on the subject. It’s a team-up among the energy analysts at the Princeton Zero Lab, the energy modelers at encoord, and the flexibility-software startup Camus.
A couple of those names probably sound familiar to you! Indeed, today’s guests have been on Volts before and are some of my all-time favorite guests, and people. I have Astrid Atkinson, the CEO of Camus, and Jesse Jenkins, the head of Princeton’s Zero Lab. We’re going to discuss the model for data center flexibility and the benefits it promises.
With no further ado. Astrid Atkinson, Jesse Jenkins, welcome back to Volts. Thank you so much for coming.
Jesse Jenkins
Thanks, Dave.
Astrid Atkinson
Yeah, thank you.
David Roberts
There’s a lot to talk about here, so much going on in this space. But before we get to what data center developers ought to be doing, let’s talk briefly about what they are doing. My impression, and tell me if this is wrong, is that everybody involved is in a big old sprint for gas. I look out there, I see data centers scrambling for off-grid gas capacity. I see utilities scrambling to build gas capacity. I see gas companies gearing up for historic demand, no end in sight. This whole thing looks like a gas disaster to me.
Is that wrong, Jesse? Maybe you start. It looks to me like the only way these guys can figure out how to get data centers online at the moment is by surrounding them with a bunch of frickin jet engines. That seems crazy to me. Is that in fact crazy?
Jesse Jenkins
I would say that’s definitely where the zeitgeist is at right now. The default assumption is that you’re just going to build a bunch of gas power, whether it’s a combined cycle plant, a simple-cycle (open-cycle) gas turbine, the jet engine you mentioned, or even reciprocating internal combustion engines, basically big diesel gensets that can run on natural gas. Halcyon is tracking 85 gigawatts of gas plant additions currently planned across the US, and it is quite a potpourri of different designs. Some of those are grid-connected, utility-side projects. Some utilities are happy to say, “Yep, I’ll build 2 gigawatts of new gas generation if you connect to my transmission grid.”
But a good chunk of those, and a very rapidly growing piece this year, is behind-the-meter projects. Projects where they’re actually just trying to build these generators on site. One emblematic example is the xAI training center, the so-called Colossus facility, outside of Memphis, Tennessee. Parts of it are right over the border in Mississippi.
They basically rolled up with over a dozen gas turbines on trucks, on the back of a flatbed, plugged those in, and have been running these intended-to-be-temporary generators pretty much nonstop. They initially tried to sidestep air pollution rules, saying that they were not applicable because they were mobile generators. The EPA has since said, “No, we thought about this before. Those count.”
But the Trump administration’s EPA is not enforcing any rules. They are violating the rules but with no enforcement to rein them in. Now they’re asking to build over 20 permanent gas generators on site, to which the public resoundingly said “no” recently at a February open meeting in Mississippi. That’s emblematic.
They are the first ones to get to a gigawatt collective scale training capacity and have been touting that. They did that by building their own small, meant-to-be-temporary gas generators and just running them continuously in order to avoid having to wait for projects to come online on the grid and supply their energy. Once folks saw that and other projects have been announced, that model has started to spread. It seems to be the default stance of many developers right now: the grid process is broken, so forget it. I’m going to go around it and build my own generation on site. Even if that’s not an efficient combined cycle power plant, which is what you would want to do if you were using gas, it doesn’t matter, they’ll pay for it.
David Roberts
Before we move on, I just want to talk about, aside from climate change — there’s a lot of emissions — but I just want to ask about operationally. It’s crazy, isn’t it? This is not how you would run a data center if you were setting up to run data centers in a rational way. These turbines they’re using are not meant to do what they’re doing. Am I wrong about that?
Astrid Atkinson
Yeah. I can offer a little bit of perspective on this, because part of my past life was in the reliability organization at Google, which deals a lot with the intersection between software and physical infrastructure, so I have some experience working with data center infrastructure. Firstly, there is a real risk of locking in a lot of gas generation to meet this demand growth. On-site, fully off-grid natural gas generation is not great from a data center perspective. The reliability profile is not great. You need a lot of redundancy to be able to do this on site. Personally, I would be concerned about things like vulnerability to natural gas price fluctuations as well as deliverability challenges.
David Roberts
The more I think about this, the crazier it seems. These data centers want five nines reliability or whatever it is, so if one of these big off-grid gas generators goes out, you have to have on-site backup generation sufficient to cover it. You are building double the capacity you need in natural gas on site. That seems crazy.
Astrid Atkinson
Yeah. Apart from anything else, they’re not quick to build. It’s not clear to me that this is actually a really fast path to power solution in the way that folks on the tech side are hoping it is.
In practice, what we are seeing is that while there’s a lot of discussion of fully off-grid sites, what I’ve heard from utilities working with a lot of these sites is that what the utilities will say is, “There’s no such thing as a fully off-grid data center.” In practice, there’s still a lot of crosstalk and cross traffic looking at opportunities for grid connection. There may be a gap between exactly what people are hoping for and what’s happening on the ground.
Jesse Jenkins
I think it’s funny how the zeitgeist shifts. If you were talking to somebody in 2024 about whether data centers would go entirely off grid, the answer was absolutely not. Data centers need five nines reliability, and so they have to have grid connections.
David Roberts
That’s crazy, for obvious reasons.
Jesse Jenkins
Now, the reality of actually trying to connect to our grid and all the fragmented and broken processes that entails has started to sink in. Everybody’s searching around for an easy button solution with increasing levels of desperation. Going out and trying to refurbish old jet engines from decommissioned aircraft and get them back into service or use marine diesel engines, things like this. I look at this and I just see things are broken.
This is a sign that we are between a rock and a hard place. The rock is the growing demand, which is very real for 100 plus gigawatts of peak demand growth by 2030, most likely. The hard place is that our grid and its institutions have not been designed to keep up with both general load growth and the scale of these facilities. I always like to stress for folks that a gigawatt is bigger than the entire Pittsburgh metro area or something like that. It’s like plopping down an entire city.
David Roberts
These are like small cities.
Jesse Jenkins
Not even small cities. A gigawatt is a big city. You’re trying to connect a city’s worth of demand in one location on the grid, and that’s not a trivial thing to do even if our institutions were working well, which they’re not. It is a sign of increasing desperation. I don’t think many of those behind-the-meter or largely behind-the-meter projects will actually pan out as intended. The question is, what else could we be doing better than this? Because this seems nuts.
David Roberts
My worry is that while we’re trying to figure out a better way, we’re just locking in a lot of this gas. These are going to be stranded assets. They own all these turbines and jet engines and everything. They’re going to want to use them. I just worry that we’re digging —
Astrid Atkinson
I do think that’s a risk because there are a couple of factors going on here. One is that it’s likely that while there is a lot of real demand out there, the current queues, any individual project has some risk associated with it because not all of them will end up being built. That means we’ll likely build generation in places to support sites that don’t end up happening. Then we do have a lock-in problem. The other thing is that there’s an alternative path which could support a lot more development of grid-connected generation, whether that’s fossil or renewable. It gives us a path toward further decarbonizing our overall energy mix on the grid over time. I don’t want us to miss that opportunity. I see this as a fork in the road.
David Roberts
It does seem like a fork in the road. I was talking with Jesse about this off mic before. This is the rare problem. Most of the problems people in our world have encountered are regarding incumbents who feel no particular need to change. We’re just outside, beating on the doors, yelling at them. Here, everybody recognizes that this is broken, everybody recognizes that this needs to happen, and they’ve got lots of money to spend on solutions. This is a problem space where things are actually happening and everyone wants to solve it, which is not typically the kind of problem we deal with.
Astrid, let’s talk about this model that you are proposing here in this paper. There are two basic parts of it that I want to take in turn. There’s the flexible connection part and then there’s the power park part. There’s the bring your own generation part.
Yep, those are the two parts of this model that you’re proposing. Let’s talk about the flexible connection first because I alluded to that in my intro. Describe the typical connection and then how a flexible connection would be different.
Astrid Atkinson
Yep.
Yeah, so the typical connection for data centers, which has been the industry practice up until the start of last year and in practice is still the only way data centers are really getting built today, is to have an interconnection that supplies you with 100% of your peak power need. It gets you nameplate capacity from the grid. We’ve talked about there being potentially a lot of interest in bringing on-site generation or battery. That is not really something that is accounted for in that traditional interconnection mechanism. We’ve also talked about the fact that there might be a lot of additional capacity available if you’re able to be flexible around the few hours or even up to maybe a day or two a year where the existing grid is otherwise at peak and is full.
If you can avoid those peaks, there is an additional amount of current grid capacity that we can unlock. There are two ideas of what we’re talking about when we talk about flexibility here. Broadly, what we’re saying is that if we could update our interconnection model to allow a site to sometimes use less than their peak capacity, by agreement with the grid operator, whether that’s by going to flexibility or going to local generation or what, we would perhaps be able to interconnect that site more quickly with less transmission system infrastructure required and potentially also with less overall generation capacity required to support that kind of stacked peak demand.
David Roberts
This is, as far as I know, something that only ERCOT is doing currently. That is, they’ll say, “Sure, you can hook up with the proviso that a few hours a year we’re going to cut you off.” It turns out that just that few hours a year unlocks a boatload of more capacity. Is ERCOT’s current practice what you are talking about and basically is what you’re asking for just other utilities to adopt that same model?
Astrid Atkinson
It’s close. Firstly, ERCOT has a connect and manage model for generation, and that’s really what we’re looking at here for load. It’s basically saying —
David Roberts
They don’t have it for load, though?
Astrid Atkinson
Not yet.
David Roberts
Interesting.
Astrid Atkinson
It’s in progress. They do have a rule that enables loads to be curtailed as part of overall security-constrained economic dispatch. What they’re working on is a corresponding set of rules that enables that capability to be taken into account at interconnection time. It’s a work in progress there. But that’s basically the goal.
Jesse Jenkins
It’s worth breaking this out. There are two obstacles that have to be overcome in order to get a data center online. One is that you have to have sufficient transmission capacity to connect, both at your local substation and, more importantly, at deeper constraints that may arise throughout the grid when you add a city’s worth of demand at one point and reconfigure power flows in important ways. That’s a very localized thing. Generally, the approach to accommodating new loads has been, “Let’s run the study and make sure that we upgrade the grid so that that load can always be served and those constraints are never binding on the load side.”
That is a local transmission owner-driven process on the demand side right now, as opposed to a FERC, federal jurisdictional entity like the regional transmission organizations which do interconnection for generators. There is a proposal to shift that potentially to the federal level that FERC is considering. Right now that’s done differently in every transmission owner jurisdiction around the country. The general rule is, make sure that there’s no issues, no constraints that are going to bind our ability to serve.
David Roberts
Which is an incredibly conservative way of doing things.
Jesse Jenkins
It is, and it’s consistent with this idea that the utilities all have an obligation to serve and that everybody deserves reliability. That’s the traditional thinking. But these are not traditional players. It makes sense not to treat a giant data center the same way that you would treat my EV charger or a commercial office park. They’re not the same thing. That’s the first constraint. The second constraint is that you have enough generation adequacy or accredited capacity so that you can actually supply the data center with power when it needs it.
Traditionally, we rely on the grid for all of that. You have to go to the market and get a supplier to meet your needs. Those suppliers have to show that if you have 100 megawatts of peak demand, they can deliver 100 megawatts of accredited capacity or energy available whenever you need it. You could get some of that from the grid and some of that you could be supplied by your own ability to curtail your demands from the grid.
These two barriers, the transmission constraint and the generation adequacy or capacity constraint, are the two things that every data center has to overcome if it wants to connect to the grid. We need processes that recognize that there are other solutions besides just waiting for the utility to solve the whole problem or doing it entirely off grid, behind the meter with your own gas plant.
David Roberts
The two parts, as I said, this is a two-part plan. The two parts are to address those two constraints. The flexible interconnection thing that we’re talking about now, that’s to address the transmission constraint. If you’re transmission constrained, this makes more use of existing infrastructure.
Astrid Atkinson
The flexibility side can also help to address the generation constraint as well. As Jesse mentioned, that could be either the data center curtailing its own usage, the data center switching to battery, or even the data center procuring accredited capacity from a VPP. There are a few different ways that flexibility can support on the generation side as well as transmission.
David Roberts
When we talk about the generation side, client-side generation resources that can cover for when the grid is not supplying power, what are we talking about? As we said at the beginning, mostly we’re talking about gas. At least existing. At least the ones that are going up right now, they’re talking about gas. What do you envision? What is the solution space there? What kinds of things could they bring?
Astrid Atkinson
This is where we get to our fork in the road, because a lot of the conversation today is centering around gas. But there are alternative models that are live in the market today that look at what you referred to earlier as a power park approach, where you will potentially have a portfolio of resources supporting the data center. That could be solar and storage that’s nearby or on site, that could be renewable development under PPA that’s further away but is accessible via transmission connection to the site. That could be flexibility resources other than generation on site.
David Roberts
Batteries mostly.
Astrid Atkinson
Battery, load reduction, all of the above.
David Roberts
But mostly batteries.
Astrid Atkinson
Yeah.
Jesse Jenkins
Maybe solar. It depends on how much space you have.
Astrid Atkinson
Could be. Collectively, looking at the power challenge as a portfolio approach that uses some grid power and some new generation, which ideally is renewable along with a lot of battery, is the alternative to natural gas. We did look at this in the paper too, and I know Jesse will probably have some perspective on this kind of portfolio approach.
What we did when we put the paper together was for each of the sites, we modeled the flexibility requirement that would be necessary to get it connected to the grid at its target capacity. How often would you have to be flexible, by how much? Then we modeled what would be the stack of resources that would serve that additional need. Where would that flexibility come from?
David Roberts
Where did it come from? Tell, do tell.
Astrid Atkinson
For some of the sites, battery alone, as well as some component of on-site load reduction, was enough. That was true at three or four sites. At a couple of the sites, there is potentially a role for natural gas as a firming resource. That’s up to the developer as to the choices that they want to make about their power mix. We also, at the request of other participants in the study, modeled what it would cost to not do that. To say, “What if we just wanted to build a lot of battery instead of that natural gas plant?” It costs another billion or two dollars.
David Roberts
More to do batteries than to do natural gas?
Astrid Atkinson
A little bit. But the data centers are really expensive, and it still comes out net positive if you connect more quickly. There’s a lot of room for improvement here.
David Roberts
Batteries are going to get cheaper too. They’re getting cheaper and cheaper.
Jesse Jenkins
Yeah, they are, and that’s helpful. At the moment, gas turbines are getting more expensive and gas prices are going up. That helps close that gap a little bit. I wanted to highlight something as we talk about batteries. We typically think about shorter duration, four or six hour kind of dispatch.
David Roberts
Long duration plays a role here.
Jesse Jenkins
It might. I wanted to go back to something you said in your intro, which is, “What if data centers could be flexible 10 or 20% of the time? That would unlock a lot.” It’s actually less than 1% of the time.
David Roberts
You’re jumping ahead, Jesse.
Jesse Jenkins
But this is important to understand the role that batteries can play in relieving these constraints. When we looked at the constraints, encoord, which is one of the three organizations that did this, did actual optimal power flow modeling for a real transmission system that was provided and vetted by a utility, looking at six different sites across that system where you could potentially host a data center with 500 megawatts of nameplate capacity.
What they found was that two of those sites had basically no constraints, and in the other four sites, the annual curtailment in total was 7 hours, 11 hours, 13 hours, and 35 hours. At the highest end, that’s 0.4% of the hours of the year. The longest events at three of the four sites were five hours long, and the fourth was 16 hours. There were only three or four of these a year. With the exception of that 16-hour event, those are perfect for batteries. They’re the right duration for a battery to be able to ride through the entire length of the event.
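The arithmetic behind those percentages is easy to check. A quick sketch, using only the curtailment hours quoted above (everything else here is illustrative):

```python
# Annual curtailment hours at the four constrained sites, as quoted from the study.
HOURS_PER_YEAR = 8760
curtailed_hours = [7, 11, 13, 35]

for h in curtailed_hours:
    pct = 100 * h / HOURS_PER_YEAR
    print(f"{h:>2} h/yr curtailed -> {pct:.2f}% of the year")

# Even the worst site is curtailed less than half a percent of the year.
worst_fraction = max(curtailed_hours) / HOURS_PER_YEAR
assert worst_fraction < 0.005
```

The 35-hour site works out to about 0.40% of the 8,760 hours in a year, which is where the "less than 1% of the time" figure comes from.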
David Roberts
Let me clarify this for listeners. You did the model. The premise of the model is that these data centers are making deals with the utilities saying, “You guarantee service 80% of the time; the other 20%, we’ll cover ourselves.” But that’s on paper.
Jesse Jenkins
In practice, the concept is actually, “You connect us and you don’t charge us for any transmission upgrades because instead of making those costly and, more importantly, very slow transmission upgrades, you can dispatch us instead to reduce our consumption at the point of interconnection to avoid any of those constraints.” It’s an operational strategy in lieu of a wires upgrade.
David Roberts
The contract says up to 20% of the time. But in practice, when you run the model, they’re actually only called on, they’re actually only curtailed. It’s not even close to 20% of the time. It’s not even close to 10% of the time.
Jesse Jenkins
I don’t think we constrained it.
Astrid Atkinson
But the model that we used was one where there’s a firm capacity allocation and then a flexible capacity allocation. The firm capacity allocation gets served like normal load. You get it 100% of the time. It’s guaranteed. The flexible capacity allocation is a margin on top of that that’s available some of the time.
The reason that we split it that way was that allows the data center side to bound the problem. They can say, “I have 100% of this much, and then this other capacity is contingent on being available in terms of grid capacity.” On the occasion when it’s not, I will fill in the gap on that.
There are some further constraints around that, ideally within an interconnection agreement to say what kinds of contingency events or what kinds of flexibility events might be called. That’s where you get the “maybe up to 20% of the time for the flexible allocation only.” There are a couple of dimensions in here that increase the reliability on both sides of the conversation. That’s important.
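That firm-plus-flexible split can be sketched in a few lines. This is a toy illustration of the concept as described above, not the study’s actual model; all names and numbers here are made up:

```python
def grid_draw(demand_mw: float, firm_mw: float, flex_mw: float,
              flex_available: bool) -> float:
    """Grid power available to the site under a firm + flexible interconnection.

    The firm allocation is guaranteed like normal load; the flexible margin
    on top is only usable when the grid has headroom.
    """
    cap = firm_mw + (flex_mw if flex_available else 0)
    return min(demand_mw, cap)

# Hypothetical site: 400 MW firm + 100 MW flexible, currently drawing 480 MW.
assert grid_draw(480, 400, 100, flex_available=True) == 480   # normal hours
assert grid_draw(480, 400, 100, flex_available=False) == 400  # during a grid peak
# During the peak, the 80 MW gap is filled on site: batteries, load reduction, etc.
```

The point of bounding the problem this way is that both sides know exactly what is guaranteed and what is contingent.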
David Roberts
The thrust here is that just a few hours of flexibility a year — we’re not talking about a ton of time here. It’s only a few hours a year in practice where you need to bring this flexibility to bear. Those few hours get you so much.
Jesse Jenkins
It’s only a few hours and it’s only a few hours long each time it occurs. At most of these sites, and there’s variability, this is going to be different at every point in the grid, which is why you still are going to have to have an interconnection study. You’re going to have to say, “Where can I connect where there’s headroom?” Traditionally, what data center developers have done is go around looking for places where this isn’t a problem, where there is headroom, where you can sneak on the grid and connect under its existing capacity without causing any upgrades besides the substation you need to connect.
The problem is those sites are not available anymore, and the scale of data centers is not what it used to be: a 50 megawatt data center five years ago was a big data center. Now we’re talking about 500 megawatts, 1,000 megawatts. There’s no gigawatt of headroom lying around in most places. What we’re saying is there may be locations where, if you tried to connect, you would have to curtail 50% of the time, but maybe just don’t connect there.
There are other locations, and in this case, we found six of them within this one utility’s territory. There’s probably plenty more where with just a few hours of flexibility, you could bypass those kinds of transmission upgrades. Once you do that, now you have access to the broader grid and all of the energy and capacity that it could deliver from all the various options that can flow through that grid as opposed to just what you can build on site. That’s the other big unlock here.
David Roberts
You want the grid as backup.
Jesse Jenkins
There’s a reason we have a grid. We started with little microgrids, with every factory having its own diesel generator or its own coal-fired reciprocating engine. That worked okay, but the grid is much better.
Astrid Atkinson
This is where the inefficiencies that are baked into current grid planning become really interesting. The way that we design the grid today is with a significant amount of contingency capacity for being able to continue to serve either during peak time or during outages. That contingency capacity is rarely used. It’s really only called into play during an outage. Because of that, we have pretty low overall utilization of our grid as a whole.
David Roberts
This is a subject we’re going to be talking about here on Volts quite a bit in coming weeks. Say a little bit about what you’re studying because you did a study of this.
Astrid Atkinson
As part of the study that we did, looking at these data center sites and the role that flexibility can play, what we were really looking at is the unlock from being able to be flexible some of the time. How much additional capacity could we get? Our back-of-the-envelope calculation is that it’s on the order of 30% once you take into account regional variation and the practical considerations of how much flexibility you could really bring and what this looks like from region to region. That’s just a guess. Jesse’s Princeton team did some of the modeling around this.
The other thing that’s worth considering here is that the grid is somewhat over-provisioned for everyday usage. This is not the first time we’ve faced the challenge of needing infrastructure to do a lot more work. This idea of oversubscribing physical infrastructure, scheduling a bit more work than it could do at peak, is the foundation of how our other networks work. That’s how the Internet works; it’s how our telecom networks work.
David Roberts
Can you unpack that a little bit? That notion of oversubscribing? I don’t know that my head is totally wrapped around it.
Astrid Atkinson
Absolutely. The way that we capacity plan for the grid today, we’re looking at a strict worst-case scenario.
David Roberts
What’s the peakiest peak?
Astrid Atkinson
At the peakiest peak, how much capacity do we need to serve all of our demand?
Jesse Jenkins
Assuming that a few things fail at that moment. A big transmission line goes out or a nuclear plant goes offline, these contingencies that they also layer on top of that peak, that make it even harder.
Astrid Atkinson
Which means that in practice a lot of resources are sitting idle a lot of the time. This is something that we faced in other contexts as well. A really interesting example on this is actually from within the data center space. I used to work in reliability at Google and a lot of that was around thinking about systems and network and application infrastructure that would allow us to operate systems reliably, but about 50% or more of the work involved was actually looking at resource efficiency. How do you get more usage out of your really expensive capital assets that you can’t get more of in a hurry? Your data centers —
David Roberts
It’s all about flattening peaks. In whatever context, it’s about flattening those peaks.
Astrid Atkinson
It is. A lot of what we worked on in that context was, “Okay, we’ve provisioned N+2 capacity for data center resources.” Let’s say that you’re serving web search. You’ve got maybe 10 web search clusters in different locations. That’s how many you need to serve web search at its peakiest peak. Then you’ve got another two clusters in other data center locations in case you lose one or two. Now you’ve over-provisioned, you’ve got an N+2 capacity situation, which means that you have a lot of very expensive compute resources sitting idle a lot of the time.
The idea of oversubscription is that you could say, “Sometimes I need these resources occasionally, but the rest of the time they’re sitting around idle. That’s really expensive for us. It’s limiting our ability to do work. How can we get extra work out of those resources opportunistically?” The idea of oversubscription says we’ll schedule a bit more work onto those resources than they can do at the peakiest peak. We can do that because we have tools to remove some of that work at the times when the capacity is needed for its primary purpose.
That’s something that sometimes happens within data centers. Not all data centers are operated that way. It’s definitely something that happens within how we operate the Internet. The Internet has a notion of quality of service tiers where when it’s extra busy, it’ll drop the lower tiers in favor of only serving the important ones. It’s also how things like 5G and cellular networks work. They’re heavily oversubscribed. They have tens, maybe hundreds of times more traffic signed up to use them than the network can actually sustain. They cleverly drop load at certain times to make that possible.
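The quality-of-service idea Astrid describes can be sketched as a priority-based load shedder. This is a toy illustration of the general technique, not any particular network’s actual algorithm:

```python
def serve(requests, capacity):
    """Oversubscription with QoS tiers: admit work in priority order
    (lower tier number = more important) and shed the rest when the
    total subscribed load exceeds what the network can carry at once."""
    served, shed = [], []
    used = 0
    for tier, load in sorted(requests, key=lambda r: r[0]):
        if used + load <= capacity:
            served.append((tier, load))
            used += load
        else:
            shed.append((tier, load))
    return served, shed

# 150 units of traffic subscribed onto 100 units of capacity:
requests = [(1, 40), (2, 60), (3, 50)]  # (tier, load)
served, shed = serve(requests, capacity=100)
assert served == [(1, 40), (2, 60)]  # high tiers ride through the peak
assert shed == [(3, 50)]             # lowest tier is dropped
```

Off peak, all three tiers would fit; the shedding only kicks in during the rare hours when subscribed load exceeds capacity, which is the same shape as the grid flexibility being proposed.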
Jesse Jenkins
You don’t necessarily notice, but you’re getting throttled back in terms of your bandwidth all the time.
Astrid Atkinson
That’s right.
David Roberts
A couple of questions about this bring-your-own-capacity model. You guys didn’t go for BYONCE, sadly. I guess that’s because you’re trying to make room for gas, not just “bring your own new clean energy”?
Jesse Jenkins
Yes.
Astrid Atkinson
I think it was just a failure of imagination.
David Roberts
We use BYONCE around here at Volts. But this model — a couple of questions about it. One is, if you add to the complexity and cost of building a data center the additional complexity and cost of building a power park next to it, are you skewing the market in favor of big players? Are you at risk of wiping out the guys in the pickup truck in the lower end of the market? Should we care about that?
Astrid Atkinson
There’s not a lot of really low end of the data center market.
Jesse Jenkins
Not anymore.
David Roberts
Everybody’s big now.
Astrid Atkinson
I guess everybody’s pretty big.
Jesse Jenkins
In terms of bringing your own capacity, once you've solved the local transmission constraints with some amount of flexible interconnection, backed by compute flexibility at the data center itself and/or, typically, some combination of compute flexibility and on-site generation or storage, you have the ability to get around transmission constraints when they're relevant. The other 99% of hours of the year you have access to the whole grid. You can't go from Kansas to New Jersey, potentially, but you have a much wider catchment that you can draw on.
You don't need to build a co-located power park right next to your data center. That is an option, but it's really only an option in certain places like West Texas or Oklahoma where you can actually build a gigawatt's worth of supply in one location. That's what Intersect Power, which was just acquired by Google, is doing at a few locations in their pipeline. What my company, Firma Power, that I started recently, is trying to do is say: once you've got that unlock, where are the wind, solar, storage, demand flexibility, and VPP resources across the much broader grid that, assembled into the right optimized portfolio, could deliver 100% of the accredited capacity (meaning it can get you all the energy you need when you need it) and some customer-chosen fraction of the hourly energy they need as well, and deliver that to the site of the data center using that old-fashioned grid that we've built for a reason?
David Roberts
You’re assembling a virtual power park basically.
Jesse Jenkins
You could call it that. But it’s just the grid. This is just how we do power.
David Roberts
This is what you’re calling nearby but not on site. Both those are important. They’re not on site so you don’t have to build everything behind the meter. But you do need, if you’re going to buy capacity resources that are not on site, you do need them to be on the same grid.
Jesse Jenkins
There are differing constraints that utilities apply. There’s what we call the big C capacity, which is the accredited, the formally accredited capacity as designated by PJM or MISO or SPP or whatever the capacity market entity is in the region that you’re in. They each have different formal constraints about how far you can move capacity from one region to another. You can’t necessarily go from one side of their market to the far other side, but you generally can move it at least adjacent to adjacent zones across a fairly wide area.
Then there’s also the little c capacity. What actually is deliverable during operations? What will the utility that has to agree to connect this resource actually trust to show up when they need it? The basic picture is we have to widen our view out from just the constraints caused locally by the data center connection and think, are there important binding transmission constraints between where this wind farm is or this large battery project is and where the data center is? If there are, do those occur at times when we don’t have available resources on the other side of the constraint, when it actually is one of those binding capacity periods?
That's exactly what our company's trying to do. That's not a trivial thing to say definitively. It's not trivial to identify the right mix of wind, solar, and battery resources that operate collectively to meet your needs. Their capacity accreditation evolves over time because as the mix of resources in the grid changes and as the demand changes from electrification or data centers, the times when the grid is stressed move around. Solar might be great in the summer afternoon, but it's not good at night in the winter.
The values of these resources change over time. What we’re trying to do is design optimal portfolios that manage all of those risks for the customer and take care of that complexity. If we can do that, then we can unlock hundreds of gigawatts of resources in the queue already under active development, ready for deployment in 2028-2029, 2030. Exactly in this crunch time when there aren’t a lot of other options. It’s not an easy thing to do.
That’s why we felt like we had to start this company, because outside of Google and a handful of others that are painstakingly assembling these deals, it isn’t the easy button solution. What we want to do is, using optimization software and good commercial efforts from our supply team, actually make it easy for data centers, as easy as contracting with a gas station.
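The portfolio assembly Jesse describes can be caricatured as a cost-per-accredited-megawatt selection problem. Everything below is invented for illustration: the resource names, prices, fixed accreditation factors, and deliverability flags. Real accreditation (ELCC values set by PJM, MISO, or SPP) changes over time rather than being a fixed scalar, and the actual optimization is far more involved.

```python
# Toy greedy sketch: cover a data center's accredited-capacity need from
# deliverable grid resources, cheapest accredited megawatt first.
# All figures are hypothetical.

NEED_MW = 600  # accredited capacity the data center must bring

# (name, nameplate MW, accreditation factor, $/MW-year, deliverable?)
candidates = [
    ("wind-A",    400, 0.125, 40_000, True),
    ("solar-B",   600, 0.25,  30_000, True),
    ("battery-C", 900, 0.5,   50_000, True),
    ("wind-D",    600, 0.25,  20_000, False),  # behind a binding constraint
]

def cheapest_portfolio(candidates, need_mw):
    """Drop non-deliverable resources, then take the rest in order of
    cost per accredited MW until the need is covered."""
    usable = [c for c in candidates if c[4]]
    usable.sort(key=lambda c: c[3] / c[2])  # $ per accredited MW
    picked, covered = [], 0.0
    for name, nameplate, factor, _, _ in usable:
        if covered >= need_mw:
            break
        picked.append(name)
        covered += nameplate * factor
    return picked, covered

picked, covered = cheapest_portfolio(candidates, NEED_MW)
print(picked, covered)  # ['battery-C', 'solar-B'] 600.0
```

Notice that wind-D is the cheapest resource on paper but is excluded because it sits on the wrong side of a transmission constraint, the "little c" deliverability problem Jesse distinguishes from formally accredited "big C" capacity.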
David Roberts
Just to clarify what we’re talking about here, what you’re doing for data centers is you’re saying to them, instead of you having to build everything on site, which can often be the most expensive option, we can put together a portfolio of resources for you that we can guarantee are not going to trigger the need for transmission upgrades. That’s the whole point, to avoid those additional costs of transmission upgrades.
Jesse Jenkins
Or at least that they're already in the interconnection queue, sufficiently far along that the upgrades required to interconnect them can be completed before your online date.

That's not just any portfolio of resources. They have specific characteristics and require some digging and research and work.
And some optimization software that we’ve developed based on the research at Zero Lab. It’s the kind of thing that we were doing in a hypothetical way in this white paper: using this mix of on-site compute flexibility, flexible generators that are on site or close to the site that we can use to relieve transmission constraints, and then the broader mix of available resources across the grid region to solve this problem. This is basically how a utility does its planning.
If you think about a vertically integrated utility that has to do generation and transmission planning, maybe has some demand flexibility programs in their portfolio, they’re not turning to one resource to solve the problem. They’re developing a portfolio.
We’re trying to do that wherever the data center needs to show up, because some utilities will do that, but others aren’t capable of it, either because of the structure or the transmission owner doesn’t own generation or because they’re a small rural cooperative that has a peak demand of one and a half gigawatts. Now you’re trying to connect 600 megawatts to their grid or other circumstances like that, which we find pretty broadly across the country.
David Roberts
One piece of that portfolio, Astrid, that we have not dug into at all, but that I’m endlessly curious about, is the compute flexibility itself. The conventional wisdom as I understand it among the data center folks is flexibility via on-site power resources is easier and cheaper than trying to flex our actual compute load. The actual flexing of the compute load itself is going to be relatively marginal. But you included it in this paper pretty prominently.
Of the compute loads that run in these data centers, what percentage of them can be moved around in time or geography in a way that makes a data center more flexible? Is there a set answer to that question? Does anyone know?
Astrid Atkinson
We know a lot about this. It varies a great deal by provider who’s administering the cloud, and it varies a lot by resource type. Part of the reason why we included it in this paper was to take a forward stance on what’s possible. There are some cloud providers today that have the ability to move load around in space and time. Google’s capability to do this is probably the most advanced in the industry.
David Roberts
Google’s been beavering away on this more than anybody else for longer than anybody else.
Jesse Jenkins
Just as an aside on this, the reason they started doing that was actually to help optimize their emissions impacts. Is that not correct?
Astrid Atkinson
No, this work long predates that. The reason that Google does this is for reliability. Their cloud model was built from the earliest days to be able to move load flexibly between data centers, so that in the event of a network failure or a data center failure or a software failure, you’d be able to pretty much instantaneously move load to another location. They have since used that for emissions optimization, but not really in a large-scale way. It’s a core part of Google’s operating profile.
David Roberts
Can you give us a sense of the scale?
Astrid Atkinson
I could drain a data center in a couple of seconds when I was operating large-scale services there. Under a second is probably not true, but it's nearly instantaneous, and there are a few different ways to do it. Their ability to move load partially between data centers, or even completely take a data center out of circulation, is very advanced.
David Roberts
Just to give people a sense of scale. Do you think that’s going to be a contribution to flexibility that is commensurate with on-site power resources? Less, more?
Astrid Atkinson
It could be, but only some cloud providers have that capability. For example, Meta has some ability to do this. Microsoft and Amazon both struggle to do this, not from a technical capability perspective, but because a lot of their data center usage is from cloud customers. In those cases, they’ve often made commitments to provide a certain actual set of resources.
Jesse Jenkins
It’s a contractual constraint.
Astrid Atkinson
It's not a technical problem. It's a contractual problem. It's saying, "You've got this chunk of servers in this particular data center, you will always have that chunk of servers, those are yours."
David Roberts
This might be a side question here, but is there anything equivalent to a flexible interconnection in terms of cloud providers? In other words, you can have this amount of cloud 90% of the time, same sort of thing.
Astrid Atkinson
All of the cloud providers do provide that capability, but because a lot of cloud loads are a lift and shift from the older corporate on-site data centers, where it’s like, “I had a server in my office and now I have a server in the cloud,” there’s often a lot of difficulty in moving those types of loads around. You have to build differently for a flexible cloud.
The other thing is an accounting problem and we’ve actually run into this and needed to tackle it for our work with utilities. In some cases you need to be able to name and then depreciate an asset to fit into the accounting construct of the customer. You really need to be tied to a particular location. There are nested business problems in there.
David Roberts
If you’ve got all these hyperscalers, all these data center people that are panicked to get on the grid and they’ve got bags full of money, surely they are all working on this, pushing compute flexibility as far as they can.
Astrid Atkinson
Absolutely. The current ability to do this varies between provider. Typically, hyperscalers who actually operate a very large fleet have more ability to do this just because they have more locations to put things. There are companies that are offering that capability of being able to curtail in place in a single data center as a commercial option. Emerald AI, that I was an advisor for and Jesse currently is, does this as a commercial service.
David Roberts
Compute flexibility specifically?
Jesse Jenkins
They provide this sort of operating system and integration with the data center's job workload management system so that when they get that signal from the utility to curtail consumption, or to respond to one of these flexible interconnection curtailment events, they can then dispatch the compute loads in order of their priority. Just as the data center looks to the utility like a mix of firm and non-firm load, with the non-firm portion interruptible, the same thing is going on inside the data center. You have certain jobs that must be served. You have certain jobs in different tiers of reliability expectations.
If you’re in a lower reliability tier, you pay less for that service. There are many jobs in which you don’t need to be 99.9999% reliable. You’re happy to pay less and be curtailed six hours of the year or shifted to a different site if that’s available. Because there’s a broad range of players in the data center space, besides the hyperscalers, the big names that we’re all familiar with, there are all of these other players that build data center shells and then lease out the space inside them to put racks. There are people who lease the GPUs on those racks. There are people who build the entire data center and offer cloud as a service.
There’s a huge range of players here, and it’s growing. Their capacity is all growing. Having the ability to have a standardized way to do that across all of these data centers is what Emerald’s trying to supply to the market. There are other people trying to solve this problem from different angles as well.
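The priority-ordered curtailment dispatch Jesse describes can be sketched as follows. This is not Emerald AI's actual system; the job names, tiers, and megawatt figures are invented, and a real implementation would throttle jobs partially (for example, by capping GPU power) rather than pausing them whole.

```python
# Sketch: when the utility calls a flexible-interconnection curtailment
# event, pause compute jobs from the lowest-priority tier upward until
# the requested megawatt reduction is met. All figures are hypothetical.

def curtail(jobs, reduce_mw):
    """Return (paused job names, MW actually shed). Tier 0 is must-serve
    and is only touched if every lower-priority job is already paused."""
    paused, shed = [], 0.0
    for job in sorted(jobs, key=lambda j: j["tier"], reverse=True):
        if shed >= reduce_mw:
            break
        paused.append(job["name"])
        shed += job["mw"]
    return paused, shed

jobs = [
    {"name": "checkout-api",    "tier": 0, "mw": 40},   # must-serve
    {"name": "model-training",  "tier": 2, "mw": 120},  # deferrable
    {"name": "batch-analytics", "tier": 3, "mw": 60},   # cheapest to drop
]

# Utility asks for an 80 MW reduction during a constrained hour.
paused, shed = curtail(jobs, 80)
print(paused)  # ['batch-analytics', 'model-training']
```

Whole-job pausing overshoots the request (180 MW shed for an 80 MW ask here); that coarseness is one reason partial throttling and shifting jobs to other sites matter in practice.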
Astrid Atkinson
The other thing to think about when we think about the ability to curtail power usage within the data center is that most data centers also operate at a low capacity factor relative to their nameplate requirement. The grid may be 40% utilized on average. The data center is probably 40% utilized on average for the majority of sites.
David Roberts
They’re not running at their full rated capacity all the time, even though that’s what shows up in the planning models. In practice, they’re not actually doing that.
Astrid Atkinson
Depending on the type of data center, but in general, that's true. For training data centers and also crypto mining facilities, they are peaking to their full power capacity for a period of time and then potentially turning that off for a period of time. For training specifically, they'll use 100% of their nameplate capacity, and for crypto in general, they will also use all of the resources in that data center.

But that's far from all of the data centers that are out there. Most of the data centers serving anything from inference loads to cloud loads, which is the majority of sites that are out there and the majority of sites that are in queue, have a much more organic variability in the load that they're serving, and they use less of the facility's resources on average during their serving period. Unlike the training facilities, they don't just turn off sometimes. In both cases, the average utilization is lower than nameplate. But it does matter what kind of data center it is.
Jesse Jenkins
It's also worth pointing out that these all have big cooling systems that are a part of that load. If you have, say, 800 megawatts of compute load, you might have 150 to 250 megawatts of cooling system load at peak, depending on how efficient the system is. That cooling use, just like your air conditioning at home, varies with the weather as well. That also contributes to some of the extra headroom that you might have when it's not the peak, hottest, most humid day of the year, when your chillers are running at their least efficient set point.
That’s also another potential source of flexibility, making use of that extra headroom when the cooling loads are not at peak, but also changing the cooling load. I’m working on a project that the former National Renewable Energy Laboratory, now the National Lab of the Rockies, is leading on using ground source geothermal heat exchange like we have at the Princeton campus here, to shift around the cooling loads and to make them much more efficient when it’s hot out so that you can drop your peak cooling load and make extra headroom for compute.
There are a lot of interesting ideas. There’s trillions of dollars being spent here. They’re all running into the same problem of how do I actually get what I need from the grid. That’s driving a lot of creativity about different ways to squeeze more out of it.
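Jesse's cooling-headroom point can be put in back-of-envelope terms using his example figures (800 MW of compute, cooling that swings with the weather). The linear chiller model and the 1,000 MW interconnection limit below are made-up stand-ins; real chiller curves are nonlinear.

```python
# Back-of-envelope: cooling load varies with weather, so mild days free
# up interconnection headroom that peak-day planning leaves on the table.

GRID_LIMIT_MW = 1000   # interconnection / substation size (assumed)
COMPUTE_PEAK_MW = 800  # compute load at full tilt

def cooling_mw(outdoor_temp_c):
    """Hypothetical linear chiller model: 100 MW floor, climbing 4 MW
    per degree C above 15 C, so about 200 MW on a 40 C day."""
    return 100 + 4 * max(0, outdoor_temp_c - 15)

def headroom_mw(outdoor_temp_c):
    """Spare interconnection capacity at a given outdoor temperature."""
    return GRID_LIMIT_MW - COMPUTE_PEAK_MW - cooling_mw(outdoor_temp_c)

print(headroom_mw(40))  # 0: no slack on the hottest day
print(headroom_mw(20))  # 80: a mild day frees 80 MW for extra compute
```

In this toy, sizing to the hottest day leaves 80 MW of the grid connection idle on a mild one, which is the headroom the geothermal cooling project Jesse mentions is trying to capture permanently by flattening the cooling peak.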
David Roberts
This brings me to perhaps my most beloved question, which you two are probably the best two people in the world to ask about, even though I'm probably going to do several pods on this general subject in coming months: everybody needs grid capacity. Jesse, you're involved in a company now that is going out and gathering grid capacity and selling it to data centers.
As you both know, one of the things that I would very much like to see happen is for the spare grid capacity in households and buildings and businesses to be rounded up and to count for capacity that data centers can buy, because then you can get some of that trillions of dollars diverted into the distributed energy resource (DER) and VPP market, which desperately needs money. I would like some structured way for distributed capacity to count as capacity for data centers so that we can bring some of their resources to bear in developing those distributed resources. This is something I go on and on about.
Jesse, you are now doing this. You’re rounding up capacity, you’re selling it to data centers. Do you have the ability, is it easy, is it something you can do to round up distributed capacity and then on the other side of it, do the data center people and do the utilities you’re working with, are they comfortable with distributed capacity counting as capacity?
Jesse Jenkins
We definitely look at distributed resources and demand response, or you can call them virtual power plants if you want, as a potential part of our portfolios at Firma Power. We don’t go and round those resources up ourselves. But there are a growing number of players in the market who are doing exactly that.
David Roberts
Aggregators, basically.
Jesse Jenkins
There's Voltus. There's Basepower, which does this with distributed batteries. There are a bunch of others as well. We would basically go to them and say, "How many megawatts can you give us, and by what date?" One of the nice things about them is that they can be done in pretty small increments. You can refine as you go, as the load of these data centers ramps up.
That’s another thing to point out to listeners. You don’t turn on a gigawatt data center overnight. You turn it on in stages over the course of probably two or three years. Their demands are ramping up over that period as well. There’s some value to being able to fine-tune and incrementally add capacity.
David Roberts
But you trust that that capacity is real. When you sell it to data centers and utilities, they trust that it’s real?
Jesse Jenkins
I trust it if contractually they say it’s real and there’s liquidated damages if they don’t provide it. That’s the way everybody should be entering business arrangements. On the utility side, this is where it’s important to differentiate between the big C capacity and little c. If you’re talking about PJM’s capacity market or MISO’s capacity secondary market, they recognize demand response as a resource and you can participate in those markets. If we can buy 50 megawatts of accredited capacity from demand response, we can get that accredited and use it in our portfolio and supply a data center.
Where it’s trickier is on the transmission owner side, where the load interconnection happens, where they are under no obligation to accept that as a legitimate solution to their transmission constraints. Traditionally, in most places, they don’t think of those as alternatives.
David Roberts
That’s half the promise. That’s half the whole promise of these resources.
Jesse Jenkins
Exactly. Ideally you’d be getting both value streams. You can use them both to solve transmission constraints and to solve accredited capacity challenges. Right now we’re pretty confident we can do the accredited capacity piece, but there’s still more work to be done to convince utilities that they can count on these operationally to resolve local transmission constraints. There’s a wide range across the industry of some utilities happily embracing this and others that are saying, “If it’s not a gas plant, I don’t trust it.”
Astrid Atkinson
This is also part of where we come in as well.
David Roberts
This is your whole thing. What’s your take on how much utilities trust distributed capacity resources to do the kind of work we’re talking about here?
Astrid Atkinson
The distinction that Jesse drew between using distributed capacity for capacity versus for managing transmission constraints is really important. There’s increasing acceptance of distributed resources as a capacity asset and good track records within the industry of folks packaging and providing that service, whether that’s the utility itself or third-party aggregators. There’s a really good opportunity even in that, to get a lot more money put into the VPP space.
When we look at the bigger picture here, this is a different take on the bigger orchestration problem. The goal of employing all of those little resources was to be able to aggregate up small-scale demand changes, to be able to increase the capacity of the grid and accommodate a lot more demand. When we first started talking about this, we talked about the source of that demand being electrification. The demand in that model is a bit more distributed and the solution is also a bit more distributed.
David Roberts
That’s still coming.
Astrid Atkinson
That is, firstly, that's still coming. But secondly, the data center itself does present a really interesting larger-scale version of that problem, where it can bring some of its own tools to mitigate its impact on the grid, in the same way that we long imagined all of the little household devices mitigating the impact of increased household-level demand. The data center can also choose to pay for capacity resources or other services from local distributed resources. That provides, as you mentioned, a much-needed potential additional revenue stream that makes it financially viable to leverage those. That's the way that I still think about this as an orchestration problem.
Ultimately, what we need to do is understand what’s happening on the grid and then be able to dispatch resources, demand and supply, in a way that makes the most out of the supply that we have and out of the wires that we have and will build. Connecting this back to our role and what we offer in this space is that understanding of the grid that can translate into signals to folks participating in it, whether that’s distributed resources at the household level or to the data center itself to say, “Here’s how much grid you have, here’s how much you can safely use. Turn it down next Tuesday.”
David Roberts
Seems so simple, but it’s real hard. We’ve been lacking that basic information to do this.
Astrid Atkinson
It’s difficult, but not because of the technology requirement. It’s difficult because the system has to change.
Jesse Jenkins
Dave, I know you love VPPs and I love them as a solution to these problems as well. I want to second everything Astrid said, 100% endorsement. This is a critical piece of the toolkit that's being neglected right now. I just want to make sure everybody who's a VPP stan out there is also clear: if you're relieving transmission constraints 1% of the hours of the year or less, the vision is that we then make better use of the existing grid. That's great for the transmission lines, but it also means the generators. It's important to remember which generators are sitting there underutilized, ready to ramp up if you can make better use of them. It's very inefficient and dirty coal and gas plants that are not currently being dispatched, because they're less efficient and more costly than the generators that are being dispatched.
This is why we have to do both things. We have to pair the flexible interconnection and distributed VPPs, which solve local targeted transmission constraints and the very needle-peak generation adequacy events that we might have to deal with (they can contribute to that too), with the bring-your-own-new-clean-energy piece of the puzzle. Because the other 99% of the hours of the year, you have to consume a bunch of megawatt hours.
If you’re not directly contracting for new clean energy that can come onto the grid and meet your demand when you’re consuming it in a place that’s deliverable to where you are, the grid is going to ramp up idle resources to meet your demand. That’s going to be inefficient gas and coal plants that are sitting on the sides.
David Roberts
If I can summarize, unless we explicitly design it otherwise, these new data centers are going to get served with gas and coal plants that are currently —
Jesse Jenkins
Not exclusively gas and coal, but the majority is gas and coal. We saw that in our study when we looked at what happens if you don't require new resources to be brought online.
David Roberts
We should say, I can’t believe we haven’t said this yet, Jesse and Astrid. This is in some sense the whole point of the thing, but part of the advantage of the model you’re describing is that the data centers are paying for the new capacity that they need when they come online. Which means that you and I are not paying. That is —
Jesse Jenkins
— very important to say. A key part of the solution. Putting climate aside, the principal concern most people have right now about data centers is, “Are these going to jack up everybody’s utility bills?” The answer to that is exactly these two pieces of the puzzle. Avoid a bunch of those unnecessary costs and delays through flexible interconnection and make sure that data centers are paying bilaterally through a large load tariff or directly through a retailer arrangement for all of the new energy and capacity that they need. Ideally, that should be clean energy if we care about the emissions and air pollution impacts.
David Roberts
Flexible interconnection plus bring your own capacity. We should at least address the subject of what needs to happen to make this happen. Astrid, maybe you can talk about this. What needs to happen to make this happen? It seems to me as I think about this model, there’s a lot of parties involved in this that are going to have to change the way they do things to make this work. The first and most obvious thing is utilities need to more explicitly offer flexible interconnection. What other regulatory or legal changes or reforms do we need to make this model the default?
Astrid Atkinson
This does vary a lot by location. As we mentioned earlier, there are rules in Texas that support this reasonably well today. There are also new and proposed rules in SPP, called "hills, chills, and spills" if you want to Google it, around conditional high-impact load additions. There are also a lot of very active conversations in PJM around updating the interconnection model to potentially allow for that idea of firm versus conditional loads. Some version of that is required to make this model work, but it's not necessarily a huge change from the way that things work today. That's pretty doable.
The other piece of this is that typically the interconnection agreement for transmission capacity happens somewhat separately. It’s usually between the data center and the grid operator, the local utility, to make an agreement that accounts for deliverability of power and the actual ability to plug into the grid. There you need the utility to be willing to have a conversation about potentially offering contingent capacity or a flexible interconnection model. You also need the data center to be excited about engaging in that conversation.
When we do this now, we often go in with the data center operator who’s interested in doing this work to go talk to the utility that they’re talking to and say, “This is real.” One of the things that we ran into when we started looking in this space and looking at providing flexible interconnection for data centers is if you go to the data center, they’d be like, “We’re interested in that, but will any utility really do it?” Then you go to the utilities, and the utilities are like, “We’re interested in that, but would any data center really do it?” That’s actually a big part of the reason why we released the study that we did, to help move that conversation forward, because it was just like two shy people at a dance. They just wouldn’t talk to each other.
David Roberts
My impression is that the model is pretty sensible. It seems to address everybody’s concerns. Seems like a win-win. Is the major barrier here just introducing these two shy kids who are standing on the side of the dance floor, or is there anyone against this? Are there opponents?
Astrid Atkinson
The opponent is really the status quo.
David Roberts
Always the main opponent of everything.
Astrid Atkinson
It requires making a different sort of interconnection agreement. Most utilities aren’t very well equipped to do that kind of analysis today because you have to, instead of looking at a point in time and a couple of contingencies for an interconnection analysis, you might need to look at thousands or tens of thousands of scenarios. That’s where this is a data and software problem.
David Roberts
That’s what Camus does. That’s what you’re answering.
Astrid Atkinson
There’s a bit of an industry tool evolution that’s required to make this possible. But it’s always the process change that’s hard. You need to actually sit down and work with the planning team at the utility and understand what are their real considerations, how would this fit into their current process? That’s a lot of what we’ve been doing as we’ve been working on making this real.
David Roberts
If there’s anything that can overcome the inertia of the status quo, it is giant corporations with huge bags of money. This actually seems to be happening. It does seem to be shaking loose this incredibly sclerotic industry and sector. It’s super interesting to follow.
One question, Astrid, maybe just quickly: a lot of people who read about data centers know there are already data centers that are building on-site power. We were just talking about them; they're building on-site gas. How's that different from this model you're describing? I know the flexible interconnection might be different, but in terms of bringing your own capacity, there are already a bunch of data centers that are not quite completely offline or completely online. They have some of their own capacity. How's this different?
Astrid Atkinson
The real difference is that we’re talking about incorporating as much of the grid mix as possible into the power footprint for the site.
David Roberts
Your on-site resources are just for contingencies, just for outside events.
Jesse Jenkins
There are two flavors of this. One is yes, all data centers have backup generation for contingencies. Usually those are diesel gensets that are only allowed to run a limited number of hours of the year, or they'd violate their air pollution permits, etc. Those are really just for backup. They also have uninterruptible power supplies, which are very short duration batteries that can handle little blips in grid service and things like that and keep running through. The kind of gas rush that we're seeing is often either entirely off grid or it's in addition to the grid connection. You might get a 200 megawatt firm grid connection and then have 300 megawatts of gas on site separately.
There's this large project in Pennsylvania that's at the site of an old, I think two gigawatt coal plant, and they want to have four and a half gigawatts or something like that of total capacity of combined cycles on site. Most of those are going to be running all the time because you can't get service from the grid. What we're talking about is: if you've got a gigawatt-scale data center, you have a gigawatt-size substation and grid connection. It's just that a third of that might be curtailed at some part of the year. The key difference is we need to move beyond this binary world of 100% firm or entirely non-firm, not supplied at all by the grid. Remember that those transmission constraints are very rare over the course of the year, and they can be managed operationally if the utility is willing to do it and has the capabilities to do that.
David Roberts
Got it. Flexible interconnection, bring your own capacity, and quit building stupid on-site gas plants. From what I’m hearing, the pieces of this model are basically in place. The actors have all the motivations they need. It’s more or less just a process of hashing it out at this point, getting people on board and signed up. It sounds like there’s a lot of open runway here to make this work. I’m curious whether there’s been any reaction from the industry. I know this thing just came out, but it seems like they would welcome it.
Astrid Atkinson
This can be done. We have the tools that we need today. It is more complicated than just going fully off grid. You have to get multiple parties to agree to it and maybe put together a coalition of the willing. That’s where efforts like what Jesse’s company is doing can be really helpful here. I also hope that —
David Roberts
Make it a little easier, just a little more turnkey for everybody.
Astrid Atkinson
Likewise, stepping into that tooling gap with our data center interconnection product also hopefully makes this a little bit easier. We have definitely seen a lot of increasingly serious interest from data centers in pursuing this path. What will really make this happen at scale is proving beyond a shadow of a doubt that it’s definitely going to be quicker, because once that is clear, there will be no debate about which direction to move.
David Roberts
Quick is the trump card.
Jesse Jenkins
The value of getting online a month earlier is so much higher than all of these things we’re talking about. It’ll be a competitive necessity.
Astrid Atkinson
Our number for the opportunity cost or corresponding value of getting a data center online a year earlier is about $7 billion for a gigawatt of capacity per year.
David Roberts
That buys you a lot of capacity.
Astrid Atkinson
It buys a lot of everything.
David Roberts
You could buy a really big VPP for $7 billion.
Astrid Atkinson
When I mentioned that our modeling showed putting in batteries instead of a gas plant costs just a couple of extra billion dollars, that’s not super important in the context of that opportunity cost number.
David Roberts
If it shaves a year off —
Astrid Atkinson
— it pays back in under a year.
David Roberts
— it pays for itself a hundred times over. That’s wild. Faster, cleaner, and a more stable grid. An excellent model for data centers. Lots of companies and entrepreneurs springing up to make various parts of this easier. A really cool and interesting area to watch. Thank you both for coming on and walking through it with us.
Jesse Jenkins
Our pleasure. Thanks.
Astrid Atkinson
Yeah, thank you.
David Roberts
Thank you for listening to Volts. It takes a village to make this podcast work. Shout out especially to my super producer Kyle McDonald, who makes me and my guests sound smart every week. It is all supported entirely by listeners like you. If you value conversations like this, please consider joining our community of paid subscribers at Volts.wtf, leaving a nice review, telling a friend about Volts, or all three.
Thanks so much and I’ll see you next time.