In this episode, I’m joined by Marissa Hummon, whose team partnered with NVIDIA to tuck a credit-card-sized GPU computer with AI software into the humble electricity meter. We discuss how that edge computing digests 32,000 waveform samples per second, spots failing transformers, and orchestrates VPPs — plus the guardrails that keep it from becoming Skynet.
David Roberts
Hello everyone, this is Volts for May 16, 2025, "Embedding intelligence at the edge of the grid." I'm your host, David Roberts. One problem that we discuss frequently here on Volts is that there is a tidal wave of distributed energy devices heading for the grid out at the edge, at the distribution level. Utilities currently have very little visibility into that level, at least on any granular real-time basis.
Many people are taken with the idea of gathering real-time data at the grid edge and using AI to analyze it in real time, providing utilities and device manufacturers with valuable insights on where and how to improve performance.
One company that has been at it for a while now is Utilidata, which has created a special version of an NVIDIA chip, an AI module specifically designed to gather and analyze real-time data on electricity flows. It's already being integrated into smart meters and used by utilities in the field.
Today, I'm speaking with Marissa Hummon, the chief technology officer at Utilidata, about why grid-edge data is needed, how to protect its privacy, what it enables grid operators to do, and whether an AI-infused grid is going to lead to a Terminator-style dystopia.
All right, with no further ado, Marissa Hummon, welcome to Volts. Thank you so much for coming.
Marissa Hummon
Thank you, David. I'm very excited to be here.
David Roberts
Let's just begin by telling me what this thing is. I tried to sort of briefly describe it in my intro without saying anything too specific, because I don't know that I totally have my head around it. So, the best I can tell is you have these NVIDIA chips and you have sort of made a customized version of that. Is that what's going on?
Marissa Hummon
Yeah, that's correct. We've collaborated with NVIDIA to create what they call a module. So, the chip part is the silicon; the rest of it, you know, surrounding it, is the module. And that module is something that is specifically designed to be embedded in grid-edge devices. Things that sit outside, things that need a high degree of security and reliability. They need to be able to sit on the grid for, you know, 10, 15, 20 years without, you know, sending a truck out to repair it. And so, all of those kind of like physical characteristics are part of what we did with NVIDIA to bring that compute into the utility space.
David Roberts
Yeah, and this might be a futile question since I don't know anything about this stuff, but just like when I think of chips, I just think of sort of computations, you know, sort of like raw computations, and then I think of software as, like, things you tell arrays of chips to do. So the whole idea that you've packed software onto the chip itself is blowing my mind a little bit. Is that something that other people do for other reasons? Like, is that a thing?
Marissa Hummon
In the field of embedded intelligence or embedded systems, that has been around for a long time, but for the most part, people have built, you know, single-purpose chips that do one thing, and they do that one thing really well and really efficiently. But what we saw a need for, and NVIDIA agreed, was a real computer that you could program, and reprogram later if you wanted it to look at something else, or calculate something differently, or take a different action. And in order to have a real computer, you need a lot more than just a chip that does one thing well. So you need an operating system, and you need memory, and you need storage for data, and connectivity to things.
David Roberts
And all that is on the chip?
Marissa Hummon
Yeah, and it's less than the size of your credit card, and about as thin.
David Roberts
Yeah, the size involved in all this is a little bit mind-boggling. So, it's a programmable computer with sort of like your proprietary software on it, embedded on a chip?
Marissa Hummon
Yes, and proprietary software being — I think that's probably the wrong term for us. I think what we have done is build a platform on top of NVIDIA's tools. So, we incorporate all of the NVIDIA tools that allow you to utilize the GPU part of that chip efficiently. And we've just made it really easy for somebody else to build an application, manage that application, make sure it's secure, make sure the data is secure. Because all of those building blocks, the things that you were mentioning at the beginning — this device can't create a security risk for the grid, and it can't be something that the utility has to worry about servicing at a hardware level over a long period of time.
And so, we took all of those requirements and turned them into a platform that is easy to embed in a meter, a transformer, or a switch on the system.
David Roberts
Mm. So, the idea here is you will sell these chips to entities that will then program them to do specific things.
Marissa Hummon
Yeah, your use of the word programming is very similar to how we would think about an IoT system. Right. So, this idea that you could take a sensor, you could program it to look for, maybe it's a temperature sensor, and you say, "Okay, I'm going to take temperature sensor readings continually and every one hour, I want you to send me the average for the last hour." That's a program. And this is much closer to your computer. So, you have an operating system, you have some libraries and services, and then you have applications that maybe you wrote your own applications, but maybe somebody else wrote an application and said, "Hey, if you run this, you'll get this really cool thing out of it."
David Roberts
So, the analogy, or maybe it's not even an analogy, is you would sell these to people who would then put their own apps on them, basically? Apps to do specific things. And so when you say it can be embedded in any electronic device, does that mean any "any"? What is the class of devices you're targeting? It can't just be all things that are electronic.
Marissa Hummon
So, the class of devices we target for the electricity system are the ones measuring properties of the electricity system. The meter is measuring the voltage and current at the house or the building premise. A transformer also measures voltage and current, and then it has additional things that it's trying to control for. We're suitable for anything on the grid that is making a measurement you would like to turn into better information.
David Roberts
And just to be clear, because this is one of the things I wondered about: is this chip just measuring? In other words, if you want to do something with the insights you've gained, you can't act on them with this chip; it only measures. Is that right?
Marissa Hummon
No, actually, the meter is measuring and we're taking the data feed off of that meter and bringing it into the chip. Then, the chip actually computes things like: Does it see if an EV is starting to charge? And if so, was there a degradation in power quality that the utility wants to be aware of? Or has it increased the loading on the transformer near that premise above 50%? And again, the utility might want to start to take action or at least, you know, start to manage for that new event. So, it really is the piece of the meter that is going to understand what those measurements mean.
And because it has local communication and backhaul communication capabilities, it can take that information and either create the right event packet back to the grid operator's room, system operator's room, or maybe it actually does something locally in real time to make the grid more reliable, more secure.
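The edge logic Hummon describes, watching the meter's data feed and raising sparse event packets when something crosses a threshold, can be sketched as a simple rule loop. Everything below (the thresholds, field names, and event format) is an illustrative assumption, not Utilidata's actual platform API:

```python
# Illustrative sketch of edge event detection on a meter's data feed.
# Thresholds, field names, and the event format are assumptions made
# for this example, not Utilidata's real interface.

def detect_events(reading, transformer_rating_kva):
    """Turn one meter reading into zero or more event packets."""
    events = []

    # Flag transformer loading above 50% of its rating, per the example
    # in the conversation.
    loading_pct = 100.0 * reading["kva"] / transformer_rating_kva
    if loading_pct > 50.0:
        events.append({
            "type": "transformer_loading",
            "loading_pct": round(loading_pct, 1),
        })

    # A sustained step up in current is a crude proxy for a large new
    # load starting, e.g. an EV beginning to charge.
    if reading["amps_delta"] > 20.0:
        events.append({
            "type": "large_load_started",
            "amps_delta": reading["amps_delta"],
        })
    return events
```

A real deployment would backhaul only these small event packets to the operator, rather than the raw waveform data.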
David Roberts
Like if it measures, you know, if it's measuring an EV charging and the voltage is getting too high or something, or something's happening? Can it intervene in that and dial back the voltage? Can it do things, in other words?
Marissa Hummon
Yes, absolutely. It can do things. But what it's allowed to do is part of that programming you were talking about. Like, the utility can decide how it wants the intelligence at the edge of the grid to manage the actual grid.
David Roberts
Right. So, in theory, if a utility wanted it to, it could measure the EV charging happening, determine that something's going wrong, and it could respond with an automated routine that then dials back the amount of power going to the EV. Like, you can program it to take action in response to the things it is measuring.
Marissa Hummon
You could choose to have it do that. And I would say that might be — I think that is something that will be necessary when we have enough EVs on the grid that the utility needs to manage their charging in order to keep infrastructure up and running.
David Roberts
Yeah, I guess what I was just trying to get at is this purely to show what's going on in the grid, or is this also a tool to manage the grid and make the grid do things? That's kind of what I was trying to get at. So, you call this an AI module. You probably decided on that terminology before the current war over AI broke out. And AI sort of has gotten a bit of a — at least in my social media circles — gotten a bit of a bad reputation, mostly associated with large language models confidently telling you false things, that kind of thing.
This, I think, is what people associate AI with now. The other thing that makes people mad about calling everything AI is that a lot of the nerds in my social media circles are saying, "Look, what you're doing here is advanced sensing and measurement and automated routines and analysis and telemetry software. This is all just software. This is all normal stuff. None of this deserves the term AI." So, you get people mad at you from both sides. So, maybe you could just tell us a little bit: what do you mean, and what do you not mean, when you say this thing is an AI module?
Marissa Hummon
So, it's a great question. I think you can kind of divide AI into pre-ChatGPT and post-ChatGPT. AI, at its kind of basic elements, is using data to build a model, not based necessarily on the physics that would explain the phenomena, but just letting the data speak for itself.
And what large language models did was take that way, way further than we've seen in the past, building those huge inference models based on reading the Internet, for lack of a better term. And the computational infrastructure you need to build a really big language model is these graphical processing units, these very specialized processors that can compute in parallel, massively parallel, across a whole range of things. And so the reason why we call ourselves an AI module is because that module has the capability to do that. We have a CPU and a GPU.
We have that graphical processing unit that allows you to do some really computationally intense modeling. I think this is the part that I think is really interesting and was not available five years ago, is that it's really power and thermally efficient. So, like, you can get 100 or a thousand times more math problems done for the same amount of power delivered on a GPU. And that's important because — well, first of all, inside of a meter, it's a closed space. So, you've got to be really cognizant of how much heat you generate.
David Roberts
Yeah, true.
Marissa Hummon
But also, you don't want to use a lot of excess power to do something if you don't need to.
David Roberts
Yeah, what is the power draw on this relative to a normal NVIDIA chip? Like, it's also microscopic. It seems like it couldn't be that much, but I guess it adds up.
Marissa Hummon
It's actually really slim. So, the off-the-shelf version is in that kind of 10 to 15 watt range; it probably peaks out at 20 watts. We've pulled that back: right around 5 or 6 watts is what we're aiming for. And that comes mostly from how we handle the operation of the computer, and a little bit from how we handle the data going into it.
David Roberts
And so, that is technology, basically, that was not around five years ago, like making things this small and this power-thrifty.
Marissa Hummon
Yes, and the credit goes to NVIDIA for this, not Utilidata. We just happened to pick the right base technology.
David Roberts
Yeah, they sort of really grabbed the AI chip thing a few years ago and kind of ran with it. And now they own the world. This is kind of my — and just tell me if this sounds right to you — this is the way I've tried to conceptualize AI: it takes a bunch of data, finds patterns in the data, uses the patterns to sort of build a model of how the data works, and then uses the model to predict the future. Predict what will happen next.
Marissa Hummon
Correct, yes.
David Roberts
And so in this case, what is the raw data?
Marissa Hummon
So, the raw data that you have at the edge of the grid is the voltage measurements and the current measurements, and it's across any of the legs that are being measured. Most of the houses in the United States have a split phase. So, it'll measure the voltage and current on all of those pieces that are coming into the house. Upstream, you'll get that full three-phase measurement. The thing about the digitized voltage and current data is that it resolves the waveform at a really high level of resolution. So, we have about 32,000 measurements per second.
And at that sampling rate, you can see things about the grid — if you analyze it correctly — that you wouldn't be able to see if you needed to bring that data back to a data center. Right. So, like, to get that kind of data back from the edge of the grid would require a massive pipeline.
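The on-chip analysis Hummon alludes to, pulling harmonic content out of the high-rate waveform, is classically done with a Fourier transform. Here is a minimal sketch: a simulated 60 Hz waveform sampled at the 32,000 samples-per-second rate quoted in the episode, with an injected fifth harmonic that the spectrum recovers. The signal, the 8% harmonic level, and the distortion measure are illustrative assumptions, not Utilidata's algorithm:

```python
# Sketch: extracting harmonic content from high-rate waveform samples.
# The 60 Hz fundamental with an injected 8% fifth harmonic is synthetic
# test data; only the 32 kHz sampling rate comes from the episode.
import numpy as np

FS = 32_000          # samples per second, as quoted in the episode
F0 = 60.0            # US grid fundamental frequency, Hz

t = np.arange(FS) / FS                          # one second of samples
wave = np.sin(2 * np.pi * F0 * t)               # clean fundamental
wave += 0.08 * np.sin(2 * np.pi * 5 * F0 * t)   # 8% fifth harmonic

# Amplitude spectrum; with N = FS samples over one second, each FFT bin
# is 1 Hz wide, so bin index equals frequency in Hz.
spectrum = np.abs(np.fft.rfft(wave)) / (FS / 2)

def amplitude_at(f_hz):
    return spectrum[int(round(f_hz))]

fundamental = amplitude_at(F0)
fifth = amplitude_at(5 * F0)

# Total harmonic distortion over harmonics 2..13, relative to the
# fundamental; here it is dominated by the injected fifth harmonic.
thd = np.sqrt(sum(amplitude_at(n * F0) ** 2 for n in range(2, 14))) / fundamental
```

In practice the interesting part is tracking how these harmonic amplitudes drift over days and weeks, which is the kind of signature that can flag degrading equipment.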
David Roberts
Right. It's a lot of data, and you have to send it back to the cloud and then send it back to the meter. Presumably, that lag is too long; it will no longer matter by the time it gets back.
Marissa Hummon
And too expensive, frankly. So, having the compute right there allows us to extract things from that waveform data that we haven't been able to in the past. We can see specific harmonics in the power quality that are an indicator for the utility of when other equipment is going to break down sooner. We can actually see when a transformer's insulation is starting to fail. We can see things like tree branches rubbing against a line, or when a power line starts to become a risk.
David Roberts
So, when you say you see tree branches rubbing on a line, what you see is several hundred thousand data points about current and voltage. And the AI is looking at those and inferring from that there must be a tree rubbing on a line somewhere.
Marissa Hummon
Yes, and at the beginning of the journey, it will not know that it's a tree branch. It will just know that something is wrong.
David Roberts
Right, right.
Marissa Hummon
And after, you know, after enough data and enough connections, then it will be able to identify that as like, "Oh, that's a tree branch and you should send a truck that can trim trees," as opposed to some other kind of truck.
David Roberts
Well, here's a question. If you're measuring like 32,000 times a second, you're measuring sub-second events, basically. And I wonder, do we have technology that can intervene in sub-second events? Do you know what I mean? To harmonize the three phases, are we capable of doing that, like on a sub-second level?
Marissa Hummon
We are, and we do it right now for transmission lines on a regular basis. It's part of the protection of power flow at the transmission level, and the measurement device there is called a PMU, a phasor measurement unit. That has about the same resolution, but it's looking for very specific faults and events in order to protect the system, so the grid can take action that quickly. The other thing that's important about measuring at that higher resolution isn't so much how fast you could take action, but that if the event itself is very, very short, and you know it's forecasting a problem in two minutes or five minutes, you want to be able to capture that very short event that is going to become a problem.
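Catching a very short event in a high-rate sample stream, even when the response can come minutes later, can be sketched with a half-cycle RMS sweep. Below, a synthetic 60 Hz waveform sags to 70% of nominal for just 50 milliseconds, far too brief for a slow poll to notice, and the sweep flags the span. All the numbers here are illustrative assumptions:

```python
# Sketch: catching a sub-second voltage sag in a high-rate sample
# stream. The 50 ms dip, sag depth, and threshold are synthetic test
# values; only the 32 kHz sampling rate comes from the episode.
import math

FS = 32_000                    # samples per second
F0 = 60.0
HALF_CYCLE = int(FS / F0 / 2)  # ~266 samples per half cycle

def synth_wave(seconds=0.5, sag_start=0.2, sag_end=0.25, sag_depth=0.3):
    """A 1.0 per-unit sine that sags to 0.7 per unit for 50 ms."""
    out = []
    for n in range(int(seconds * FS)):
        t = n / FS
        amp = 1.0 - (sag_depth if sag_start <= t < sag_end else 0.0)
        out.append(amp * math.sin(2 * math.pi * F0 * t))
    return out

def find_sags(samples, threshold_pu=0.9):
    """Return (start_s, end_s) spans where half-cycle RMS drops below threshold."""
    nominal_rms = 1.0 / math.sqrt(2)
    sags, in_sag, start = [], False, 0.0
    for i in range(0, len(samples) - HALF_CYCLE, HALF_CYCLE):
        window = samples[i:i + HALF_CYCLE]
        rms = math.sqrt(sum(s * s for s in window) / len(window))
        low = rms < threshold_pu * nominal_rms
        if low and not in_sag:
            in_sag, start = True, i / FS
        elif not low and in_sag:
            in_sag = False
            sags.append((start, i / FS))
    return sags
```

An edge device could log this span (with its waveform snippet) as evidence of a developing fault, even if the corrective action only comes minutes later.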
David Roberts
I see. Well, that's sort of my other question, what are you learning from these sub-second events that we couldn't know before?
Marissa Hummon
The first kind of bulk of work is really — you asked me earlier if it's just visibility or if you can take action as well. That first piece of visibility has been an amazing adventure. So, as in, I think there's a lot of things that are happening on the grid that the utility just wasn't aware of. And now that they are, they can manage to that.
David Roberts
What we always hear about the grid is that it's this super finely tuned and balanced machine where, you know, it has to be balanced exactly. You have to be putting on exactly as much as is being consumed, and keep the waves in phase. So, like, it sounds intuitively like it's already dialed down to the sub-second level, you know what I mean? But now you're telling me that there's all this stuff happening out there that grid operators just didn't even know was happening at all.
Marissa Hummon
Yes, and I think the example is that — okay, I'll take my neighborhood as an example. So, this is a 1950s neighborhood. The grid is about 75 years old now. It's above ground. So, it's wires strung between poles. And when they originally built this neighborhood, no one had an air conditioner and no one had an EV charger. I have both. I know all my neighbors have air conditioners, and I think probably 10 or 15% of them have an EV. And when they originally designed the equipment for this neighborhood, they anticipated some level of load growth, but they probably didn't anticipate that level of load growth, at least not uniformly across the whole area.
David Roberts
I mean, in the 1950s, there literally were not consumer devices available that could consume that level of power.
Marissa Hummon
Right. So, their design criteria fit the needs at the time. But now, the utility needs the insights into where those hotspots are. Which transformer is routinely at 75% of its capacity and peaks at 100% of its capacity, and which transformers are not, because those surrounding houses or buildings don't have as high of a load? And so, having that granular information allows the utility to invest in infrastructure upgrades where they know they need it right now and defer the ones that they can defer till later.
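Hummon's upgrade-or-defer example, a transformer routinely at 75% of capacity that peaks at 100% versus a lightly loaded one, is easy to sketch from per-interval loading data. The thresholds and the decision labels here are illustrative, not a utility's actual planning criteria:

```python
# Sketch: classifying transformers as upgrade-now vs. defer from
# per-interval loading percentages. Thresholds are illustrative.

def classify(loadings_pct, routine_limit=75.0, peak_limit=100.0):
    """loadings_pct: loading as percent of rating, one value per interval."""
    ordered = sorted(loadings_pct)
    routine = ordered[len(ordered) // 2]   # median ("routine") loading
    peak = ordered[-1]                     # worst interval observed
    if routine >= routine_limit or peak >= peak_limit:
        return "upgrade now"
    return "defer"
```

Run across every transformer in a service territory, this kind of ranking is what lets a utility target capital spending at the real hotspots.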
David Roberts
I see. Does it allow them to do anything in the moment? Do you know what I mean? Like, they can go build more capacity to handle the higher capacity, obviously. But like, if that one transformer is getting dangerously overloaded, do they have the tools to route power away from it? Do you know what I mean? My sense is that their control over the grid is much more kind of chunky and not quite as sophisticated as people have in their heads, you know what I mean?
Marissa Hummon
Yeah, I think the tools for really fine-tuning power flow, those tools have not been invested in. And frankly, if you were to invest in them, you would want to do it on the basis of where you need them most. But back to your question of, "Hey, could we turn down your EV charger because the voltage is peaking too high or the transformer load is reaching its capacity?" Today, you could do that if you had the right information and you had the right agreements with the customers.
David Roberts
That's what a VPP is, right?
Marissa Hummon
Exactly. But this would be a VPP that would be directed at managing local power flow to alleviate a congestion problem, as opposed to most VPPs, which are like a big supply/demand balance, part of the balance equation.
David Roberts
Right, but presumably, if you had the VPP, you could do either thing with it or both.
Marissa Hummon
Yes, absolutely.
David Roberts
So, let's talk about privacy a little bit, because I feel like now when people hear AI, the first thing they think of is just data mining and all this kind of stuff to like customize ads at you and things like that. So, it sounds like when I threw this out on social media, I'm like, "Oh, we're putting AI at the edge of the grid to gather data." Everybody immediately is like, "Oh, they're going to know when I turn my stove on and when I'm driving my car," and you know, their heads go to privacy invasion. It sounds like these devices could know that stuff, like by measuring current and voltage could be like, "Well, now Dave's turning a stove on."
So how do you think about privacy? Who has access to this data? Does the homeowner have ownership over the data? Who gets the data and is that kind of a little bit downstream of you? Like, what privacy mechanisms are you thinking about and building into the thing?
Marissa Hummon
So first, I'll kind of set the stage. Data privacy and data access rules are state by state, because this is governed at the state regulatory level. But I'll use California as one example because they are probably the most strict on privacy. Their rules kind of mimic what's going on in Europe. And I think the ability to do that kind of analysis at the edge of the grid gives you two things. One is that the data doesn't have to leave your premise. Right. It's on the side of your house. And then I think both the utility and the customer now have a choice as to how that data is used.
Maybe that data goes over Wi-Fi just to your phone, and you can see the information. But maybe the only thing that makes it back to the utility is events that are going to cause a grid reliability issue. And so, if you have that computation infrastructure, you can program that in. Does that make sense?
David Roberts
To your app? To return to our previous analogy, your app can have privacy built into it?
Marissa Hummon
That's right. And basic things like encryption of the data at rest, encryption of it in transit, all of that is built in from a security standpoint. But what you were asking for was, "Who will know what's going on in my house?" And I think that by having a more sophisticated computer at the edge of the grid, you can actually keep that data local if you want to. And you can, you know, your application can have specific permissions from maybe the customer or the utility about where that data goes and how it's used.
David Roberts
You could program your app that way. You could program your app to be invasive and terrible, but basically, like, the app programmers are downstream from you, I guess, is what I'm trying to get at. Like, do you have any influence over them? Are there things you can hard code to prevent them from doing? Because it's just like everybody's very paranoid about this now, about misuse of their data.
Marissa Hummon
Yeah, so what we have built in on the platform side is some basic frameworks for handling different kinds of data: This is PII (personally identifiable information), this is not. And again, the utility could use our platform to grant a particular application access to only non-PII data, or they can grant access to all of it, but what is sent back to the cloud has to be non-PII. And we could check that that data is actually non-PII. Does that make sense? The platform can provide some governance and some oversight.
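The governance pattern Hummon describes, where an application can see everything locally but only non-PII fields leave the premise, amounts to a gatekeeper on the outbound payload. The field classification and payload shape below are assumptions for illustration, not the platform's real schema:

```python
# Sketch: stripping fields classified as PII before a reading is
# backhauled to the cloud. The field names and classification are
# illustrative assumptions, not Utilidata's actual schema.

PII_FIELDS = {"account_id", "premise_address", "appliance_signatures"}

def outbound_payload(reading):
    """Return only the non-PII portion of a reading for cloud upload."""
    return {k: v for k, v in reading.items() if k not in PII_FIELDS}
```

The same check can run as platform-level enforcement, so that even a misbehaving application cannot push PII off the premise.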
David Roberts
I want to talk a little bit more about what these insights allow you to do. What do you do with this information? You said you can look for particular transformers that are getting overloaded. You can look for voltage or current flaws or fluctuations that indicate some sort of developing problem. What are some other things? Maybe talk about the EV study just as an example of what you can do with this information.
Marissa Hummon
I think one of the biggest use cases driving utilities to look for more advanced computers at the edge of the grid is distributed resource management. And that's both things that produce power, like batteries or solar panels, and things that consume power in a variable way. Your air conditioner has some leeway in exactly when it turns on, and with your EV charger, you could decide, "Oh, I want to start that at 12:35 am instead of 12:30 am." And what the utility has noticed is that for some of those devices, especially devices that push power back onto the grid, like solar, the local conditions are the most important thing to solve for. And you do need that reaction time to be very fast.
So, you don't want to send the data back up to the cloud for analysis, especially if the analysis could take place locally; then you don't have to worry about the analysis not making it back down to the device in time. So, the utility is looking at this AI compute to manage the coordination of those distributed energy resources in a highly localized and highly real-time way. There still needs to be something that coordinates across locations and over broader spans of time, the next day or something like that.
But it's that autonomous kind of grid operations that the utility thinks having an intelligent decision-maker and orchestrator will really help them better utilize and better incorporate those resources into their grid.
David Roberts
Well, here's a question: Are distribution grid operators equipped to deal with an exponential increase in incoming information? Are they equipped to do anything with this data? Because one of the things we discuss here on Volts a lot is a bottom-up grid and distribution system operators like they have in England, utilities dedicated to distribution systems that can do more detailed control and management, because, you know, managing them from an ISO a million miles away is... So, are there utilities ready to handle this amount of information?
Marissa Hummon
For real-time, high-volume insights, I actually think that they're probably not. And maybe I should say, I don't know if their system operations room needs to be. So, what happened, or is happening, on the grid is that the amount of change at the edge of the grid, new types of loads, new types of resources, has evolved much faster. And if there was a technology that could manage that, one complementary to system operations, the coordination of generators and loads, the coordination of the substations, I think that's the system we're trying to get to.
And I think it's complementary to what the utility's toolbox is today. This is adding a new layer of information and controls, and hopefully, the information that comes back into the system control room is fairly sparse in terms of really only sending back the things that the system operator needs to know in order to run the whole grid. Does that make sense?
David Roberts
In other words, these local problems will be sort of diagnosed and solved locally without the utility having to hear about it at all in most cases. Like these sort of micro, little micro events, micro corrections.
Marissa Hummon
I think we might be like a couple of months, maybe a year away from that. But, yeah, that is where we're going.
David Roberts
But the idea is that eventually, the grid edge will have enough intelligence and computing power on it that it will be sort of continuously diagnosing and smoothing out these micro-fluctuations at the edge of the grid. And you're not going to need some central person pulling levers, basically. It's going to be automated; this is the shorter way of saying all that.
Marissa Hummon
Yes, I mean, imagine it's very analogous to your Internet service, which is managed by routers and distribution devices for all of those packets. And those packets have a little bit different quality than electricity. But the concept is the same, that there is automated management of the system.
David Roberts
One of the things I think is difficult for people to wrap their heads around is, I think people have it in their heads that we already have something like this. You know what I mean? I think people have it in their heads that the grid is more sophisticated than it is. But it is a little wild to me as I learn more about the grid, how kind of analog it remains even in 2025. Like, how kind of crude the information is and how much of it still involves, like, making phone calls to people and asking them to turn things off.
So, this is just all part of the march toward digitization and automation. So, you've got a meter company that is building these into its meters now? And you've got those meters installed in some numbers. What have you learned from actual field deployment of these equipped meters?
Marissa Hummon
Well, I think the first thing we learned is that you can have an AI meter. So, you know, meters traditionally are designed to accurately measure how much electricity a premise is using and then to bill that customer accurately. Right. So that is the primary purpose of it. And I think there was not skepticism, but hesitance to turn that device into multipurpose: This is not just for billing, but this is for billing and for grid operations. And so I think the first major achievement is just getting, frankly, it was us and Aclara, NVIDIA, the utility, to all work together to make sure that that specification was going to meet everybody's needs and then to build it.
David Roberts
Because it's a very basic building block for utilities. I mean, it is sort of the foundation of everything they do. So, you can understand why they're nervous about messing with it.
Marissa Hummon
That's right. And we're working on putting that meter through UL right now (I should say Aclara is doing that), and we will be deploying it this summer. The devices that we have in the field today, where we've been getting our insights, are actually in a meter collar, or meter adapter, which is basically a device that you can put between the meter and the socket. It has its own measurement device. It has its own communications network. And that's where we've been embedding Karman as a way to test or trial. And that's because we didn't want to get in the middle of the billing system right away.
David Roberts
Why not just stick with that? Because you could put a collar on any meter. You know, you could just go stick your collars on all the meters in the world. Now, why build a custom meter?
Marissa Hummon
So, when you roll out meters with Karman in them, it is a little bit cheaper than putting — actually, it's probably a lot cheaper than putting a collar behind it. And then it's an extra piece of equipment that the utility wants to not worry about. So, I think that it's a great way for utilities to get comfortable with the technology, to understand how they want to use it, and to write their specification into their next meter RFP. The meter collar is a good way to do that. And then, I think it makes a ton of sense to not repeat the metrology, not repeat the communication network, and instead just add the right computer into the meter.
David Roberts
Yeah, I mean, there is a certain logic to putting all this stuff in the meter. Like, it is the one piece of equipment that literally every building has; all the electricity goes through it. Like, you know, it is a pretty obvious gateway for putting a lot of intelligence in. Although, I'm sure utility executives everywhere are listening to this and cringing in fear. But let's talk about, I mean, one of the things that actually brought this all to my attention is the sort of application of this same model. So the model here is that on the distribution grid, you've got all these little micro-events and flaws and fluctuations that could be smoothed out to make a more efficient grid that would do the same amount of work with less energy input, basically.
And your idea that they originally emailed me about is that the same basic structural problem is replicated in data centers. Tell me a little bit what you mean by that.
Marissa Hummon
So, the design for power delivery in a data center is very akin to how the distribution system is designed. They are looking for reliability and risk management. The traditional data center, especially back when power wasn't the constraint, was delivered 2x what it anticipated using. It would have a full failover: if it lost one power source, it could fail over entirely to another power source.
David Roberts
And when you say 2x, you mean the pipe. Basically, the electricity capacity reaching that building is 2x what they typically use on a day-to-day basis.
Marissa Hummon
Yeah, so if it was a 50-megawatt data center, 100 megawatts would be delivered. But then, and actually, I'm not even sure what a 50-megawatt data center means exactly. Because in addition to that kind of like site-level reliability, there's also over-provisioning of power for every server rack row. And that is because of the anticipation of power spikes at full capacity utilization of the servers.
David Roberts
So basically, yet another system — I end up discussing these on almost every Volts episode, it seems — yet another system that's basically built to satisfy the peak. In this case, the peak being demand for compute, which has these weird spikes. I mean, maybe I'm wrong about this, but when you look at electricity demand in a distribution area, there's variation, but there are patterns day to day, month to month; it's semi-predictable. Is that true in data centers? Like, is there any rhyme or reason to the spikes in demand for data centers, or is it just like anything could happen anytime?
Marissa Hummon
No, no, I think there are patterns, and I think there are qualities you can predict. And we've started to do a little bit of that with a data center proof of concept. And you're right that when we build power delivery for that very worst-case scenario, we're leaving a lot of opportunity on the table, basically an opportunity to use that excess power for additional computation. The parallel that we've seen between the distribution system and data centers is that it's the lack of visibility and the lack of controls that are preventing somebody from changing the way their data center is operated.
Now, there's probably also like a bunch of resistance to change and fear of risk. But when you get into kind of that server level or rack level architecture, the crossover from like power delivery to power utilization is not well coordinated. So, the power that comes into the power distribution unit and then the way that power is distributed amongst the servers, those are not well coordinated and they could be. And we could save quite a bit of power that is wasted right now.
David Roberts
So, the long and short of this is a lot of the power that is going to current data centers is being wasted by inefficient distribution. Basically, maybe the way to think of it is like inside the data center there is another distribution system, another electrical distribution system that basically replicates the flaws of the larger distribution system that it's embedded in and thus could have the same types of solutions, basically.
Marissa Hummon
Absolutely, yeah.
David Roberts
And so the idea is you would put these AI-enabled chips all throughout the data center and it would just read the power demands of the servers more closely, distribute it more accurately, like what exactly would it be doing in the data center?
Marissa Hummon
So, the first thing is, you do want to pair the measurement of what's going on, so that is the measurement of power flow, with the compute. Because you do want to immediately take that high-resolution information, turn it into a forecast. And that's that AI model building that we can do on chip that we can't do on a regular computer. So, we'll build that model, you know, the inference model of like, "What's coming in the next 2 seconds, 5 seconds, 5 minutes?" And then yes, connecting that up to the control system to change the way the data center is operating in order to get more compute for the same amount of power delivered to the site.
And that's kind of the metric that is used for efficiency, like site delivered power and the amount of compute you could get out of that.
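The loop Marissa describes (take high-resolution power measurements, build a model on the device, forecast the next few intervals, and feed that to the control system) can be sketched as a toy example. The AR(1) fit, the synthetic sine-wave load, and all function names here are illustrative assumptions, not Utilidata's actual on-chip model:

```python
import math

def fit_ar1(samples):
    """Least-squares fit of x[t+1] = a * x[t] + b over a window of samples."""
    xs, ys = samples[:-1], samples[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def forecast(samples, a, b, steps):
    """Roll the fitted model forward `steps` intervals from the last sample."""
    out, x = [], samples[-1]
    for _ in range(steps):
        x = a * x + b
        out.append(x)
    return out

if __name__ == "__main__":
    # Synthetic rack-level power draw (kW): baseline plus a slow oscillation.
    window = [40 + 5 * math.sin(t / 10) for t in range(200)]
    a, b = fit_ar1(window)
    # "What's coming in the next few intervals?" for the control system.
    print([round(p, 2) for p in forecast(window, a, b, steps=5)])
```

In practice the model would be refit continuously as new waveform samples arrive; the point of doing it on the embedded chip is that the raw high-resolution data never has to leave the device.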
David Roberts
Right. So, give us some sense of the magnitude here. You've got your 50-megawatt data center, 100-megawatt pipe coming to it. Could you, through your clever use of AI, cut that down to like a 75-megawatt pipe, could you cut it down to like a 51-megawatt pipe? You know what I mean? How efficient is efficient? How much energy could we save from these data centers?
Marissa Hummon
We're estimating that we could increase compute utilization at the rack level by about 30%.
David Roberts
Interesting. That's big. That's a big chunk.
Marissa Hummon
It's a big jump. And that one is very low risk. Taking away the redundancy of power delivery is a different calculation. And I think visibility and forecasting come into that one, but we're not currently trying to tackle it. I've seen some universities put together models and demonstrations suggesting that that could also come down by about what you were saying, like 25%. So instead of delivering 100 megawatts, they could deliver 75. And then there's basically a risk calculation going on about how you manage the compute load to that new failover capacity.
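For a sense of the arithmetic in play, here is a back-of-envelope sketch using the figures from the conversation (2x provisioning, a roughly 25% reduction in delivered capacity, a roughly 30% rack-level utilization gain). These are the numbers discussed above, not measured results, and the function names are hypothetical:

```python
def provisioned_mw(nominal_mw, redundancy_factor=2.0):
    """Traditional sizing: deliver a full failover's worth of extra power."""
    return nominal_mw * redundancy_factor

def trimmed_mw(provisioned, reduction=0.25):
    """Forecast-informed sizing: shave part of the failover margin."""
    return provisioned * (1 - reduction)

def compute_gain(baseline_compute, utilization_gain=0.30):
    """More compute from the same delivered power at the rack level."""
    return baseline_compute * (1 + utilization_gain)

if __name__ == "__main__":
    site = provisioned_mw(50)  # 100 MW pipe for a "50 MW" data center
    print(site, trimmed_mw(site), compute_gain(1.0))  # 100.0 75.0 1.3
```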
David Roberts
Right. But basically, if your system is smarter, you're going to end up needing less redundancy. It's sort of the take-home there. And is that like, are you doing that in a data center currently? Like, are you getting data back about performance? Like, are you seeing this at work in any data centers yet?
Marissa Hummon
So, we have a proof of concept running on our own NVIDIA servers, the rack type that would go in a data center. But we're not fully deployed in a data center, and that probably won't happen until 2026. We're working through the research phase. This is very new, compared to our product that's going out on the grid, which is fully commercialized, productized, ready today.
David Roberts
Right. Let me ask you another, what is probably another dumb and naive question about AI. So, say I have this AI and I put it to work in this data center. It is learning, as you say, so like its performance will improve over time as it learns the details of how the servers work and how the power is distributed, etc. And I've just always wondered about an AI, will it get better and better and better forever, amen? Is there some asymptote we're reaching of like perfect reliability that it can't get any better than? Or is that just like different from system to system?
You know what I mean? This whole thing about their learning, is there an upper bound on the learning or how much better can they get? Do they just keep getting better forever or is there like a ceiling?
Marissa Hummon
So, there are two factors, in my mind, that go into that. One is the quality of the data you're training on. So, we'll take the data center analogy. Let's say this is deployed on the power flow, the distribution of power inside of a data center. That's a fairly limited set of data. I mean, you're probably going to see the full breadth of power flow characteristics in a three to six-month timeframe. And so, that will probably limit how far that piece can advance. And then, the other side of it is: what's the size of the computation that can put that model together?
And this is where I think we had that kind of big breakthrough in language models because we really increased the amount of compute that we could give it.
David Roberts
So, there's the data and the compute. Would there be any way? Obviously, the chip in the data center is learning about that data center that it's in. But presumably, there would be something like shared learning across data centers, patterns that hold across data centers that could inform the operation in a particular data center. Do these things have ways of sharing their learning and knowledge?
Marissa Hummon
In theory, absolutely. And we can definitely transfer learnings without transferring private information. And I think we'll see how that plays out in data centers because I think —
David Roberts
I'm sure they're very paranoid about any information leaving the data center.
Marissa Hummon
Yeah, I think any sort of operational devices are entirely isolated inside of the data center. They're not connected to the internet.
David Roberts
But maybe to take it back out of the data center, just in normal distribution systems, like presumably if you have these AIs, you know, they're learning all the granular details of like Dubuque and then, you know, Tacoma and etcetera, there's going to be commonalities and patterns. Are those able to share or are distribution utilities also paranoid about information?
Marissa Hummon
Yeah. So, today our load forecasting algorithm builds the model on the device, shares model parameters to the cloud. So, that's very obscured from the actual data that it used to get those model parameters. And we use those model parameters across devices to build a better load forecasting model, and then we push that model back out to the edge. And that sequence is a way to share insights and share learnings without sharing the actual raw data.
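The sequence Marissa describes (fit a model on the device, send only the parameters to the cloud, average them across devices, push the averaged model back to the edge) resembles federated averaging. A minimal, hypothetical sketch, with toy linear-trend models standing in for the real load forecasters:

```python
def fit_local(readings):
    """Device side: slope/intercept of a linear trend over local readings.
    The raw readings never leave the device; only the parameters do."""
    n = len(readings)
    ts = range(n)
    mt, mr = sum(ts) / n, sum(readings) / n
    slope = sum((t - mt) * (r - mr) for t, r in zip(ts, readings)) / sum(
        (t - mt) ** 2 for t in ts
    )
    return {"slope": slope, "intercept": mr - slope * mt}

def aggregate(param_sets):
    """Cloud side: average parameters across devices; no raw data involved."""
    return {
        k: sum(p[k] for p in param_sets) / len(param_sets)
        for k in param_sets[0]
    }

if __name__ == "__main__":
    # Three "meters", each with its own private load readings (kW).
    devices = [
        [10, 11, 12, 13],  # steadily rising
        [20, 20, 21, 21],  # mostly flat
        [5, 7, 9, 11],     # rising fast
    ]
    local = [fit_local(r) for r in devices]
    shared = aggregate(local)  # only these numbers cross the network
    print(round(shared["slope"], 3))  # the model pushed back to the edge
```

The privacy property is structural: the cloud only ever sees model parameters, which are a heavily compressed summary, so the raw household-level data stays on the device.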
David Roberts
And my other final, dumb, naive question about AI is, you know, everybody's favorite sci-fi vision of AI: eventually AI no longer needs you running it, because it has, in a sense, learned enough to start self-improving. Is that possible here? I wonder, at some point, is the level of data and information they're getting going to be so granular and so complex and so voluminous that it's going to be a little bit of a black box to us, and we're just going to have to trust the insights that come out of it?
And I think that's kind of what makes people nervous. Do you know what I mean? Like, does that apply here, or am I just rambling at you?
Marissa Hummon
I think that concept is really far in the future compared to where we're at today.
David Roberts
I keep hearing podcasts telling me it's like a couple of months away.
Marissa Hummon
Well, I can tell you, on the grid, it's not a couple of months away. There's still a lot of humans and paperwork in the loop, so don't worry there. And really, it comes down to — you put guardrails around what you're allowing, what are the decision parameters that this device has to stay within? So, as humans, we have full control over how we see this implemented.
David Roberts
For now, okay, final question. You've built this Grid Edge Advisory Board. I spend so much time on Volts on this grid edge stuff. To me, it's just the most exciting thing going on, the coolest thing going on. And I'm just sort of curious — it's another difficult question to answer — what the vibe is among grid edge people. As you've noticed, I keep raising privacy, because I did this pod with Cory Doctorow (I don't know if you've followed his work) about the enshittification of tech platforms. Basically, about the way tech platforms and modern life kind of capture customers by promising them all sorts of great service and then, once they're captured, start degrading the service to the users in favor of serving advertisers or corporate users.
And then, you know, the whole nine yards. So, I've become very nervous about tech platforms — you know, it's one thing if one traps you on a social media site, but it's another thing when these things are controlling your home and your hot water heater and your furnace. It seems to me like the possible consequences of platform capture and abuse are much worse when we get into this area. And I just want someone to tell me that the people who are coming together and talking about this stuff are appropriately sensitive to that danger, and to the possibility of consumer blowback if they're not.
Marissa Hummon
So, these are utilities and their risk aversion is very high, and they're regulated.
David Roberts
It's also true.
Marissa Hummon
And they can't sell your data to an advertiser. They are very limited in what they're allowed to do with your data in order to serve you better, you know, by either making the grid more efficient or providing you directly with services. But there's a huge amount of governance over the utility, and then over us as vendors trying to meet those governance requirements. And when we get those utility execs in a room to talk about AI on the grid, of course what's top of mind for them is security and safety. And then, you know, obviously, this has to pose no threat to the reliability of the system.
So, we spend a lot of time talking about that. And the utilities that are there, they're there because they see the possibility of serving their customers better if they had better tools.
David Roberts
My other question is about the Grid Edge Advisory Board. Everybody involved in this area has mixed feelings about utilities. I'm just sort of curious, in your experience, when you're talking to utilities about the general subject of grid edge computing, grid edge energy, grid edge energy management, all this stuff: my sense, at least in the early years, is that utilities were very slow, very averse, very... they didn't like distributed energy, it takes away from their revenue model, et cetera, et cetera.
Are they catching on to the sort of scale of what's possible? Do they view it with dread or are some of them actually like excited about doing cool stuff with it?
Marissa Hummon
I agree with you that two years ago, if you'd asked me that question, I would have said that utilities still had a lot of questions and needed a lot of convincing. If you walked the DTECH floor this year (that's the trade show), the idea that AI is inevitably going to be part of grid operations was highly apparent. I think it's no longer a question of whether they believe it's the right technology for the edge of the grid, or believe that it's going to help them. I think they've concluded that it will, and now they're working on all of those things: governance and implementation and execution of that strategy, and obviously coming up with the right benefit-cost analysis to justify the investment in that technology.
I think the other shift that happened is that distributed resources, at least the ones that produce power, were initially seen as maybe a threat to the utility. But as adoption has occurred, it has also become apparent that the grid is essential. Even if you want to be off-grid, you still want to be connected to the grid. The grid is the backbone of the country.
David Roberts
Well, these entities were on a self-identified death spiral just a few years ago. And now we're like, "Guess what, not only are you not dying, you're like the hot center of literally everything in the world now." Like, "You're the hot molten core of tech advancement in the world," which is like vertiginous and odd in a different way, I imagine, for utilities. What a decade for utilities, right?
Marissa Hummon
Absolutely.
David Roberts
So, they're clued in on this and active. Yeah, I've just been wondering about that. Also, here's a final question, which maybe is outside of your area of expertise, but similarly about big institutions. One of my theories is sort of that the hyperscalers, these big data center people, are just going to be a forcing agent to drive all kinds of change. That has been sort of in the works for a while, but now all of a sudden, like there's big moneyed corporate people here demanding this stuff. And as I've said many times on this pod, the logic to me is they need energy fast.
That is their number one thing now, right? It used to be like energy, siting, water, whatever. Now it's just like energy, energy, energy. And it's just inexorable in the logic of the power system that the slowest way to get new power is a nuclear plant. You know, the next slowest is gas, then wind, then solar. But if you really want it fast, the fastest way to get it is by exploiting the spare unused capacity that we already have lying around everywhere. And that is the kind of thing that you're doing on the grid edge. So I'm sort of curious, I've been wondering: are the hyperscalers eventually going to realize that the fastest way they can get power is through the VPP kind of stuff? In your conversations with them, are you getting a sense of that?
Marissa Hummon
I think the answer is definitely yes. There will come a point where they either have to wait a long time for a new interconnection, or they can take an existing site and get more compute out of it. The investment in the technology to get more compute out of it is going to be pennies compared to what they can actually make on it. And so, I think it's going to be a no-brainer for them. And I really liked your description: the cheapest and fastest capacity is the capacity that is already there and is just not being utilized.
David Roberts
Yeah, I feel like that logic is inexorable and eventually all the hyperscalers are going to show up demanding VPPs and AI-enabled computing at the edge of the grid and all this stuff that I'm so into. Eventually, they're going to realize this is our golden goose here. This is where the most capacity fastest can be found. Fun and exciting stuff, Marissa, thank you for coming on and talking through it. I'm sorry if my questions were dumb. I always feel like I'm asking dumb questions about AI, but I guess probably everybody feels like that.
Marissa Hummon
There are no dumb questions about AI.
David Roberts
All right, thanks for taking the time.
Marissa Hummon
Thank you.
David Roberts
Thank you for listening to Volts. It takes a village to make this podcast work. Shout out, especially, to my super producer, Kyle McDonald, who makes me and my guests sound smart every week. And it is all supported entirely by listeners like you. So, if you value conversations like this, please consider joining our community of paid subscribers at volts.wtf. Or, leaving a nice review, or telling a friend about Volts. Or all three. Thanks so much, and I'll see you next time.