Volts

Wrapping our heads around AI and climate

A conversation with Alp Kucukelbir of Fero Labs.

In this episode, I have a lively conversation with Alp Kucukelbir, co-author of a recent “Artificial Intelligence for Climate Change Mitigation Roadmap,” about the strengths and limits of AI in relation to climate, where it all might be headed, and how concerned we should be about the energy use of data centers.


Text transcript:

David Roberts

The hype around artificial intelligence has reached a deafening pitch lately. Most of it is focused on large language models that seem designed to steal the work of, and destroy the jobs of, creatives. And to fuel this, we’re told, data centers must proliferate to the point that they overwhelm the grid.

It's pretty irritating. But there is more to artificial intelligence than the goofy consumer-facing stuff, and its impact on climate change, writ large, is more complicated and interesting than a lot of public discourse reflects.


So I thought I would talk to someone who both thinks about and works with artificial intelligence in service of reducing carbon emissions. Alp Kucukelbir is an adjunct professor of computer science at Columbia University and the co-founder and chief scientist at Fero Labs, which develops software to optimize and economize industrial processes.


Kucukelbir was a co-author on a recent “Artificial Intelligence for Climate Change Mitigation Roadmap,” so I thought he'd be a great person to talk to about what exactly AI is doing for climate mitigation, what it can’t do, what it might do in the further future, and what we should make of the energy burden of all those data centers.

With no further ado, Alp Kucukelbir, welcome to Volts. Thank you so much for coming.

Alp Kucukelbir

Thanks for having me. It's a delight.

David Roberts

I think every pod I've ever heard on artificial intelligence starts this way: Let's just start with defining a few terms. Let's maybe help listeners distinguish machine learning from artificial intelligence, from large language models. Let's just get a sort of lay of the land here.

Alp Kucukelbir

Yeah, that's perfect. So, all of these are types of software. Traditional software has been programmed explicitly by computer scientists and engineers. Machine learning is the stepping stone where we develop software that detects patterns from large and sometimes messy data without explicit programming on the part of the person writing the software. Machine learning has really been the foundational block enabling a lot of the activities that we then associate with the term artificial intelligence, which is loosely just the science of making computers perform tasks that we traditionally associate with human intelligence. So, we'll dive into some of these, like optimizing a steel recipe or figuring out how to route power optimally from producers to consumers.

That's where the AI term comes in. Large language models are a recent development in AI where, through the consumption of enormous amounts of text data, we have achieved effectively a piece of software that finds patterns in historical text data to produce new text data: sentences, paragraphs, documents that really capture, again, things that we associate with human intelligence, such as summarizing a document or explaining something. And so, if I were to draw that kind of cascade, it's helpful to think of machine learning as the core block, working with patterns and data; AI as the thing using machine learning to do things like forecasting and optimizing; and large language models as a form of that, specifically focused on text data.

David Roberts

Got it. And am I right in saying — and this is something that's always bugged me a little bit — am I right in saying that the way you describe it, there's not really a clean, sharp line between sort of just very sophisticated programming and machine learning? Like, is it a binary, or is this sort of, like, when it gets smart enough, you start calling it machine learning? Like, is there a clear distinction?

Alp Kucukelbir

For that particular line, yes. Between the line of machine learning and AI, less so. So, let me give you an example from a computer program that plays a game like chess. Standard traditional software would involve a computer scientist programming the rules of chess and further programming good ideas to pursue in chess. Like, if you can take your opponent's queen for free, take your opponent's queen. That's explicitly written by a human. It's in the source code of that software. Machine learning approaches will say, "Give me a historical database of everyone who's played a chess match on some online platform."

Okay, here's 20,000 historical matches. Infer from that dataset what are the rules of chess. What are good ideas to pursue in terms of tactics and strategies?
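To make that contrast concrete, here is a minimal, hypothetical Python sketch: a hand-written chess heuristic on one side, and on the other a model that infers a playing policy from (synthetic, stand-in) historical game data. The position features and labels are illustrative assumptions, not anything from the conversation.

```python
# Explicit rules vs. patterns learned from historical games (synthetic stand-in data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Traditional software: a human writes the heuristic directly into the code.
def handcrafted_policy(queen_capture_available: bool) -> str:
    return "capture_queen" if queen_capture_available else "develop_piece"

# Machine learning: infer a policy from a database of past matches instead.
rng = np.random.default_rng(0)
n_games = 20_000
# Hypothetical position features: material balance, king safety, piece mobility.
X = rng.normal(size=(n_games, 3))
# Stand-in labels for "what strong players did" (1 = attack, 0 = defend).
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)  # the "rules" are never typed in by a programmer; they are inferred

new_position = rng.normal(size=(1, 3))
print(handcrafted_policy(True))     # explicit rule
print(model.predict(new_position))  # learned suggestion for an unseen position
```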

David Roberts

Right. So, what immediately comes up here is that with that shift, just to take the chess example, you have kind of a higher ceiling, it seems like, in that the program can learn, and potentially even learn things about chess that you don't know. But the problem is, because you have not made the rules of chess explicit in the program, there's also the chance that it could make some wrong inferences. And so, this might be jumping ahead a bit, but this is where I wanted to start: I think lay people have learned two things about artificial intelligence, mainly through interacting with these large language models.

One, AI hallucinates, right? This is a feature of the large language models, which is that they will very confidently tell you something that is not true. They will take that immense body of data and infer something from it that is not true. And of course, the large language model can't distinguish between true and false, so it hallucinates. Two, it is opaque, in the sense that even the people who wrote the program cannot tell you, after it produces something, why it produced the thing, exactly how it produced the thing. The reason I bring these two features up is that we are going to be talking today about taking machine learning beyond sort of cheating on your college paper and into stuff like running a factory, where all of a sudden hallucinations, you know, can get people killed, can cost you a lot of money, are a big deal.

And opacity seems like if I'm running a factory, there's no way I'm turning over control to something I don't understand, basically to something that I cannot tear apart and diagnose and figure out why it's doing what it's doing. So, I guess I wanted to start with those two features, and just are those intrinsic features of AI, or is that just a large language model thing that I've got stuck in my head?

Alp Kucukelbir

Yeah, it's such a great point that you raise, and I'm so glad that we're starting off on this point. So, it is not an intrinsic property of all types of machine learning and AI systems. Another way of phrasing that is that not all AI systems are created equal. The community of academics, researchers, and companies, who now play a huge part in the development of these methodologies, have focused on different parts of the overall task of detecting patterns from historical data and then using those patterns to do something like make a prediction or provide a forecast, and so on and so forth.

The very complex models that are powering things, like large language models at the moment, suffer precisely from these two challenges that you raise. And there is a huge flurry of activity seeking to provide additional insight and interpretability into those. But those remain beyond the horizon for those types of models. But what's exciting is that there is a huge unspoken, I would say, other set of applications of machine learning and AI, where we can explain what these algorithms have detected in terms of what was in your historical data and can quantify their uncertainty when they are making forecasts and providing recommendations.

David Roberts

They'll say something and they'll say, "This is how confident I am that what I'm saying is true," basically.

Alp Kucukelbir

And it's essential because — we'll get into this, I think, in a bit as well — the number one impediment that I've seen in my career trying to get AI into mission-critical settings like you're describing, factories, so on and so forth, is trust.

David Roberts

Right.

Alp Kucukelbir

People will not adopt a tool into their workflow, especially when the risks of things going south include safety, let alone huge financial cost, if they don't trust the tool.

David Roberts

So, those two features are not features of AI as such. It's just that certain models and certain applications have those features. It's possible to make machine learning systems that will tell you their degree of confidence and that you can see into, basically, and make sense out of.

Alp Kucukelbir

Precisely.
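As a rough illustration of what "telling you its degree of confidence" can look like in practice, here is a minimal Python sketch using Gaussian process regression on synthetic data. The meaning of the input, the confidence threshold, and the escalation rule are all assumptions made for the example, not a description of any particular product.

```python
# A model that reports its own uncertainty alongside each forecast.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X_train = rng.uniform(0, 10, size=(40, 1))                      # e.g. a process setting
y_train = np.sin(X_train).ravel() + 0.1 * rng.normal(size=40)   # observed outcome

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

X_new = np.array([[2.5], [25.0]])               # second point is far outside the training data
mean, std = gp.predict(X_new, return_std=True)  # prediction plus uncertainty

for m, s in zip(mean, std):
    if s < 0.3:  # the confidence threshold is illustrative
        print(f"forecast {m:.2f} +/- {s:.2f}: act on it")
    else:
        print(f"forecast {m:.2f} +/- {s:.2f}: low confidence, ask a human")
```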

David Roberts

Okay, so we're going to talk about AI in the sort of realm of climate mitigation. And I want to start with sort of a few distinctions. One of them that you bring up a lot is sort of the "incremental" versus "necessary" thing. Let's get into that a little bit. There's sort of two families of things we want, things we envision AI doing for us. Explain that distinction.

Alp Kucukelbir

Absolutely. So, AI, machine learning, these are all general-purpose technologies. So as such, it's helpful to think about what we can deploy them towards and what we expect the result to be when we do so. If you think about applications of software in real-world scenarios, again, let's take a factory, for example. You can easily imagine that there are some inefficiencies. Maybe we're using a little bit more energy than we need to, maybe we're generating a little bit more waste than we expect. But in terms of just regular operations, I'm running my plant the way that I always do.

There are opportunities for so-called incremental gains. And so this is measured, I'll put a number on it, somewhere between 5% and 15% better, whatever the metric. The necessary applications of AI are where I'm really excited, because these are scenarios where, without the software present, you simply cannot do the thing that you are trying to do. And this ties in really well into a few objectives that scientists have been telling us about for, at this point, decades that we're trying to achieve globally as a society, such as increasing material circularity, otherwise known as just recycling better. Recycling is a super accessible, capital-G Good Idea.

David Roberts

Right. And it is something we are doing. I mean, we are doing some version of it.

Alp Kucukelbir

Most certainly, but we need to do much more at levels where, without technology like AI, to me, it's unclear how we will do so.

David Roberts

Got it. Interesting. So, places where AI is helping us do something we're already doing better, and then places where AI is enabling us to do something we couldn't do before, couldn't do without AI.

Alp Kucukelbir

Exactly. Both have value, but they have different properties.

David Roberts

Yeah, yeah, yeah. And I'm guessing the latter is a little fuzzier and more speculative. We'll get into some examples. And so then the other distinction that would be good to start with is sort of tiers of what AI can do for you, sort of from the simplest to the most complex. You have these three tiers, maybe walk through those real quick.

Alp Kucukelbir

So, one property of machine learning, we've said, is to handle data sets. We've discussed them being messy and large, and finding patterns from them. That large component is important in that AI and machine learning have really enabled a new form of search. Meaning that there is some historical record of interest, and these types of software technologies really allow us to extract what's important or valuable or informative from those vast data sets. So, with something like a large language model, that's just the text data of everything that you feed into it. In more down-to-earth examples around climate change, it could be weather data, historical weather data, electricity consumption.

David Roberts

Every time AI comes up, I think about weather. I was like, "This is a classic example of a huge body of data that's very chaotic, out of which we are able, sort of only crudely able, to make predictions now." It just seems like a perfect place for AI. I think about that a lot.

Alp Kucukelbir

Absolutely. And we'll see weather forecasting, if we dive deeper, helping with not only climate change mitigation around predicting power demand, power generation from renewables, but also helping with climate change adaptation.

David Roberts

Yeah, right, right, right.

Alp Kucukelbir

It's invaluable to have high-quality forecasting. And again, this is exactly looking through historical data. But let's not get distracted. So there are three forms of how AI can be used. The simplest is sifting through large amounts of data, finding what's relevant, and making that useful for the rest of us. Great. The second canonical example of AI and machine learning is around forecasting, sometimes also called prediction. The idea here is, can I, in a virtual environment, use these patterns from data to estimate what is going to happen, either in the future or under certain circumstances, really just using this data to provide that visibility.

And based off that visibility, we'll take actions that are beneficial for whatever application. And that ties into the last body of how machine learning can be used, and that's around optimization. So, you can think about forecasting and predicting as going from the past into the future. Optimization is saying, "I know what my future is, I know what I want it to look like. Tell me what I need to change now to get there." So, it's inverting the problem, so to speak, providing recommendations of how should I operate my plant right now to reduce my carbon footprint, how should I route my power right now so that I maximize the amount of renewable generation that I have? Things like that.
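To make the forecast-then-optimize idea concrete, here is a minimal Python sketch on synthetic data: fit a model that maps controllable settings to an outcome, then invert it by asking an optimizer which settings reach a desired target. The two "knobs" and the target value are illustrative assumptions.

```python
# Forecasting (past -> future) followed by optimization (desired future -> settings now).
import numpy as np
from sklearn.linear_model import Ridge
from scipy.optimize import minimize

rng = np.random.default_rng(2)
settings = rng.uniform(0, 1, size=(500, 2))   # two controllable knobs, synthetic history
outcome = 3 * settings[:, 0] - settings[:, 1] + 0.05 * rng.normal(size=500)

forecaster = Ridge().fit(settings, outcome)   # forecasting: learn the pattern from history

target = 1.5                                  # the future we want
def gap(x):
    # squared distance between the forecast outcome and the target
    return (forecaster.predict(x.reshape(1, -1))[0] - target) ** 2

# Optimization: which settings, chosen now, get us to that future?
result = minimize(gap, x0=np.array([0.5, 0.5]), bounds=[(0, 1), (0, 1)])
print("recommended settings:", result.x)
```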

David Roberts

All right, so roughly, search, forecasting, and optimization are kind of the three basic families. That's what you'll find AI out in the field doing these days. Volts listeners will remember my interview with KoBold Metals. I always think that's a good example of, especially these first two. They have a big messy body of data about previous mining, about subsurface, sort of the conditions on the subsurface, messy data like stuff that was written down on vellum in the 1800s type of stuff that they're feeding into this huge database. And then the AI is chewing all that up and forecasting where they can find metal deposits underground.

It's a great example of the first two of those, but we're going to get into optimization a little bit later. So, with all that foundation laid, let's then talk about what AI is doing for us now in terms of climate mitigation. I wanted to start with the two areas where I think it's probably doing the most, and two areas where I'm really sort of keenly interested, which is power and heavy industry. Let's start with the power sector then. My very favorite thing in the world, the generating, moving, storing, managing electricity. What's going on there? Maybe give us some examples of how machine learning is being applied here.

Alp Kucukelbir

Absolutely. Let me divide the examples here into generation, transmission, and storage. Maybe we'll take a bit to discuss where the challenges of adopting AI in these sectors are. So, the planning of generation infrastructure is, as you know, complex. It's technically complex, it's politically complex. It is a challenging task. But if we look at the optimization category where AI shines, this is a great opportunity for determining, for example, the optimal size and location of solar projects; siting for wind farm planning, adapting to the morphology of the terrain, historical wind speeds and directions, and what kind of turbine types we should use; and even modeling the economics of what we can expect in terms of when this will pay for itself, and so on and so forth.

So, these are tasks where there are, again, messy, potentially large, disparate sources of data that all factor into that optimization. And machine learning and AI can be quite helpful on that front.
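A hypothetical sketch of that planning-style use: train a model on (synthetic, stand-in) historical project data, then rank candidate sites by predicted output per unit of cost. The features, the cost model, and the numbers are assumptions for illustration, not anything from an actual siting tool.

```python
# Ranking candidate wind-farm sites with a model trained on past projects (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
# Hypothetical features: mean wind speed (m/s), terrain roughness, distance to grid (km).
X_hist = rng.uniform([4, 0, 1], [11, 1, 80], size=(300, 3))
capacity_factor = 0.05 * X_hist[:, 0] - 0.1 * X_hist[:, 1] + 0.02 * rng.normal(size=300)

model = GradientBoostingRegressor().fit(X_hist, capacity_factor)

candidates = rng.uniform([4, 0, 1], [11, 1, 80], size=(5, 3))   # sites under consideration
build_cost = 1.0 + 0.01 * candidates[:, 2]                      # farther from the grid = pricier
score = model.predict(candidates) / build_cost                  # expected output per unit cost
print("best candidate site:", int(np.argmax(score)))
```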

David Roberts

Let me ask, because this is something I want to keep in mind as we're walking through examples throughout which is, is that something that you think machine learning could do or is that something that someone is out there using machine learning to do right now?

Alp Kucukelbir

Right. Great question. Yeah, so these are already being used. And the rule of thumb, I would say, if I omit clarifying that, is that anything that sounds more planning and executionary is much more accessible because you're working with typically highly sophisticated teams. They're centralized, they're used to doing these types of optimizations using other tools, and they're just reaching for these new and better tools to do that. Whenever I talk about AI or machine learning in operations, that's where there are a lot more footnotes in terms of adoption and what's prohibiting them and so on and so forth.

David Roberts

Got it. And then, transmission next.

Alp Kucukelbir

Exactly. Transmission and distribution. So again, if you look at transmission and you think about transmission expansion planning, where we're trying to figure out, you know, the optimal location and capacity of new transmission lines, which are going to cost a lot of money, but we really need to build them and make sure we don't make mistakes. And in building them, same thing. AI is being used. It's collaborations between utilities and academics; methods are published, they're open source. These are all technologies that are available and being used, which is good.

So, these are complicated problems, similar to the generation problem. On the other hand, optimal power flow. So, this is a huge problem. As we think about increased renewables, we have to think about how we're getting the energy to the people who need it, depending on when and where we are generating it. Lots of fantastic work is being done here. A bunch of my colleagues as well have been working on this. The actual application of this: way more challenging. So, this involves the integration of AI into antiquated legacy software systems.

David Roberts

Yeah, whenever this comes up, I always sort of make a point of saying, it's funny to me, like, as I've learned more about how grids are actually run, it has been sort of continuously shocking to me how low-tech it is. How often it's like Bob, who's worked there for 20 years, has a feeling and goes and throws a physical switch. I think it's much less space age than people imagine when they envision something like this.

Alp Kucukelbir

I'm so glad you brought that up, because it's so true. Right. And we have this inherent dependency on folks with an enormous amount of expertise, literally enabling energy to be delivered. And we just take that for granted. You might think it's a super sophisticated, high-tech situation. But —

David Roberts

Yeah, it's just Bob.

Alp Kucukelbir

It's just Bob. It's just Bob. And we all appreciate Bob and Susan and all the Bobs and Susans who help get power to us. Yeah. So that's challenging, right? And it's legitimately challenging because it's a regulated sector. We rely on it. If you have a brownout and you don't deliver power to a hospital, that is a legitimate societal concern. So we take it very seriously as a society to vet this technology and adopt it once it's been proven. But that, of course, inhibits the speed at which we can adopt this type of technology. And so: challenging.

David Roberts

Is that happening, like, are there utilities using this stuff right now to sort of dispatch power in real time and manage the grid in real time? Or is that still kind of a little ways off?

Alp Kucukelbir

Little ways off, I would say. At least not to my knowledge, in terms of the most state-of-the-art kind of machine learning AI that really very quickly, optimally gives you a solution to the flow problem, and then you can really adapt and then immediately get that information where you need to, where it's automatically executed. Like, we're not there yet, but there are a lot of conversations around this.

David Roberts

Yeah, yeah, got it.

Alp Kucukelbir

And then, maybe another thing to throw in here is fault detection. So again, the idea here is that we have a huge amount of data coming out of transmission kind of infrastructure. Faults are never good. Reacting to faults: never good. Proactively addressing faults: better. And this is a scenario where AI has a higher rate of adoption, I would say, because it isn't part of that legacy kind of core infrastructure of the software that's routing the power. The utilities have maintenance schedules anyhow, and so machine learning and AI can complement how they decide to maintain and what schedule they follow in maintaining their assets, what to focus on when, and that's a lot easier to adopt into their workflows.

We can talk about storage briefly; it connects into material science as well, so maybe that could be a transition. But with renewable power, non-consistent power generation, a great idea is to store it when we make more of it and use it when we need it. Another opportunity for AI there is to think about the amount, the location, and the operations of energy storage as we continue to invest in energy storage. So, what kind of batteries, if they are batteries or other types of storage, where, how much should we invest? AI is, again, helpful in that planning exercise and is being adopted.

David Roberts

Honestly, this is the place I'm most excited about the whole AI thing, is on the grid. Because I think people don't appreciate just how much slack there is in the grid, how much wiggle room we have to build in because we're using these crude systems, because we're using Bob and Susan. So, there's just a lot of unused capacity there out of caution. And just optimizing all that stuff, I think, is going to be really, really huge.

Alp Kucukelbir

There are applications that are a little bit more experimental, but I think equally exciting, as we think about the number of electric vehicles that are effectively becoming part of vehicle-to-grid or vehicle-to-everything protocols. How does that get optimized? You're going to use your vehicle even in your home electric grid. None of that was designed for any kind of application like that. And is software going to be the thing that we can deploy now to plug that gap while the hardware catches up?

David Roberts

Yeah, EVs, I mean, EVs, water heaters, all DERs, all distributed energy resources. This is a classic area where there's, I think, pretty soon going to be so much data coming in that only something like machine learning is going to be able to handle it. It's going to overwhelm Bob and Susan once they're dealing with hundreds of thousands of separate devices all behaving independently.

Alp Kucukelbir

Precisely. Precisely.

David Roberts

Okay, so that's power. The power sector, which I think is a real low-hanging fruit in a real promising area for this. Let's talk about industry and manufacturing. So industry, there's a whole long history in industry of sort of efforts at optimization. You know what I mean? It kind of is the history of industry. There's famous books, these CEOs who come up with their new schemes for optimizing this or that. So the whole idea of optimizing an industry is already ripe. So this seems like, again, just like a perfect place for machine learning to slipstream in.

So, let's talk about industry and manufacturing. And maybe, if you want to, we can do this through the lens of Fero Labs, which you run, and what it's doing at the steel plant. And maybe that would give us like a concrete way of wrapping our heads around what happens here.

Alp Kucukelbir

So, no need to describe to your audience how big a slice of the pie manufacturing constitutes in terms of the carbon footprint. Steel, cement, and chemicals make up two-thirds of the industrial global footprint. And that's what we focus on at Fero. The main two approaches where AI makes a difference in the manufacturing sector fall into that incremental versus necessary classification I made. And so, when you were talking about optimization, the manufacturing sector historically has been really focused on getting super sharp on those incremental optimization opportunities. So, how do I run my plant as stably as possible?

David Roberts

"Six Sigma," I don't even know what the hell that means, but I know people say it.

Alp Kucukelbir

Exactly. That's one protocol, I'll tell you, the production system, there's all — yeah, exactly. There's a long history of that. And machine learning and AI has a role to play there as well. And the applications there are going to be things like, "How do I use less water, less energy to produce the same amount of goods?" They are still high quality, no problem there. "How do I run my plant to get a little bit more yield?" So that's basically minimizing your waste or your losses of converting raw ingredients into products like cement or chemicals or steel. But the necessary applications of AI fall around that idea of circularity, material circularity, recycling.

And that's where I can give an example that hopefully you might find interesting. So, steel is a wonder material in the sense that it is really, really easy to recycle. This I say, in contrast to plastics, for example, where we have a lot of public awareness around recycling and there are a lot of challenges — again, very interesting movements in increasing circularity in plastics as well, we can talk about that. But steel is actually chemically really easy to recycle. We also have great markets for scrap metal.

David Roberts

Am I right in thinking it's like a third of steel is recycled or something like that? It's higher than for any other metal, right?

Alp Kucukelbir

Yes, so steel is very valuable in terms of the kind of market incentives for recycling. So, if you get rid of a car, the metal in that car will make it to a steel producer. If you tear down a building, the rebar that went into the reinforced concrete will make it to a steel producer. There are well-established markets and workflows for that. The other nice thing is that not only is it easy to recycle so we can take scrap metal and make new steel, the process of doing so uses electricity. And we collectively are figuring out how to make more green electricity.

And that's fantastic, right? We're not trying to replace a chemical process where burning fossil fuels is, in some sense, "necessary," because the chemistry of the fuel itself is providing a value to the production. This is pure electricity, and these are called electric arc furnaces.

David Roberts

Right. I don't have the schedule in my head, but I think when this comes out, roughly a week earlier, we'll have heard from a company called Boston Metal, which is eliminating that chemical process that produces the carbon, replacing it with electricity. So, it's making steel even more dependent on pure electricity, with no waste. So, this very much ties into recent episodes.

Alp Kucukelbir

Absolutely. Yep. And we love Boston Metal. We share investors, actually, with Boston Metal. So that's great. So, exactly, we've established that you can make new steel from old steel really easily. You use electricity to do so. So that's a great pathway to making that type of steel manufacturing green. So where's the challenge? What's stopping us from using 100% recycled steel? So here's the problem. I get a batch of steel from a scrapyard, and it has a bunch of Fords in it today. And tomorrow, I get another batch of steel from maybe the same scrapyard or a different one, and it has a bunch of Toyotas.

The day after, I get a bunch of metal and it's dishwasher parts or rebar from an old building. So, manufacturing as a whole is trying to remove variability. If there's one enemy of all manufacturers, it's variability.

David Roberts

Yeah. Right.

Alp Kucukelbir

24/7/365, you want consistently high-quality products, nothing to change.

David Roberts

Right.

Alp Kucukelbir

High-variability scrap is a nightmare. Because now a raw ingredient of your manufacturing process, ideally the primary raw ingredient, is highly variable, and it's really hard to control, because scrapyards are messy and you are recycling this material from the market. So, what do manufacturers do today? They basically blend the unknown recycled scrap with higher-quality, reliable, virgin material. So, you'll see steel manufacturers limiting the amount of recycled steel that they're using, because we don't really know what's in this. And based off of that, you have a few moments during the production when you've melted the steel with, let's say, 80% of the virgin stuff, 20% of the recycled stuff.

You add a little bit more stuff when it's molten to make sure that things look good, and then you cast and roll the steel. So, this is an approach that basically limits the amount of recycled material you can use at your plant.

David Roberts

So just to clarify, when I'm envisioning the recycled steel, we don't know what's in it. You're not talking about like chunks of rock or something. We're talking about more like down at the kind of molecular level, what's mixed in with the steel, right?

Alp Kucukelbir

Yeah, so steel is steel, but it might have slight variations, right? Some steel might have a residual of some chemical, higher nickel, lower tin, things like that. And so, this is a buffering approach. Right? This is very common in manufacturing. You basically say, "I can't control for the source of variability, so I'm just going to either limit it or add the stuff that I can control as a buffer around it." Buffers are margins of loss. So, scrap metal is way cheaper than the virgin stuff. So financially, steel manufacturers are actually incentivized: it's good for the environment, and it's also good for their bottom line, to increase the amount of scrap that they're using.

So, instead of going from 20%, they can go to 50%. That's a huge win. Now, when you go to 50%, you basically no longer can buffer that variability. So, what do you do? The paradigm that we present is you adapt to the variability. So, if you measure the chemical composition of the steel as you are melting it coming from the scrap, you now have an opportunity to say, "Okay, I'm looking at 25 different things that are variable in my steel recipe. How do I optimize how I produce this batch of steel so that I have a high-quality, strong product once I cast and roll this particular beam of steel?" This decision of optimizing, figuring out what to do with that recipe of steel, happens once every ten minutes.

And so, this is a scenario just like you were describing, where Bob or Susan — there are equivalents of Bob's and Susan's at steel mills, I know a bunch of them — they can't do that. The plant is running 24/7. They can't, every ten minutes, figure out a 25-dimensional optimization problem. But do you know what can? Machine learning. So, machine learning and AI now sit in the pulpits where the operators are operating on new batches of steel every ten minutes. And it tells them, "Hey, here's how you need to process this batch of steel. And we're adapting to what's in it, so you don't need to worry. It's a dynamic recipe."
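Here is a highly simplified Python sketch of the kind of per-batch decision being described: given the measured chemistry of a melt, choose alloy additions that a quality model predicts will meet a strength spec at minimum cost. The chemistry values, the stand-in model, and the spec are all illustrative assumptions, not Fero Labs' actual method.

```python
# Per-batch recipe optimization against a (stand-in) learned quality model.
import numpy as np
from scipy.optimize import minimize

measured_chemistry = np.array([0.12, 0.8, 0.03])   # e.g. C, Mn, residual Cu (wt%), hypothetical

def predicted_strength(additions):
    # Stand-in for a model trained on the mill's historical batches.
    chem = measured_chemistry + additions
    return 400 + 900 * chem[0] + 80 * chem[1] - 50 * chem[2]

addition_cost = np.array([5.0, 2.0, 0.0])          # relative cost per unit of each addition
spec = 520.0                                       # required strength (MPa), illustrative

result = minimize(
    fun=lambda a: addition_cost @ a,               # spend as little on additions as possible
    x0=np.array([0.05, 0.2, 0.0]),
    bounds=[(0.0, 0.3)] * 3,                       # physically sensible ranges
    constraints=[{"type": "ineq",
                  "fun": lambda a: predicted_strength(a) - spec}],  # meet the spec
)
print("recommended additions for this batch:", np.round(result.x, 3))
```

In the scenario described above, a calculation like this would be rerun for every new melt, so the recipe adapts to whatever chemistry actually arrived from the scrapyard.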

David Roberts

Right. So, the steel coming out of this in one ten-minute period could have 10% recycled steel in it, and in another ten-minute period, could have 30% recycled steel in it. Just depending on what's in the particular recycled steel that's coming out of the line, basically.

Alp Kucukelbir

Absolutely. And with a technology like this, you can keep on increasing that amount of recycled steel. So on average, you are hitting 40%, 50%, 60%, you are increasing the amount of recycled steel that you're using.

David Roberts

Got it.

Alp Kucukelbir

And that has a marked impact on the per-ton carbon footprint of that batch of steel.

David Roberts

Got it. So, it saves them money, saves emissions just through reducing the amount of virgin steel you have to make.

Alp Kucukelbir

Exactly. And you see this story in, for example, cement as well. So in cement, the thing that's bad in final cement is called clinker. That is the component with the highest carbon footprint in the final product. And so there is a huge movement towards doing two things. One is creating final cement products where you reduce the amount of clinker from 80% down to 50%. So you're substituting materials in your final cement product, trying to minimize the component that has a high carbon footprint. It's a very similar analogy to steel. But then you can also say, "Well, how do I produce this clinker, this thing that is the glue in my cement itself, with the lowest carbon footprint that I can?"

And this is where alternative fuels come in. So historically, to make clinker, you burn fossil fuels, you need to reach enormous amounts of heat, and you need the fuel to be clean and you need to know how to handle it. But now there's a huge movement towards burning biomass, towards burning alternative fuels in that furnace, which is great, but then anything that's in that fuel, all the variability in that fuel, gets into your clinker. And if that gets into your clinker, you have low-quality clinker, you can't use it. That's no good either. So, you need to optimize in real time.

David Roberts

Right? And so, I think listeners can use their imaginations and picture something similar in almost any industrial process, right? I mean, optimization by Bob and Susan is limited by their time, cognitive capacity, and their tools. And in almost any industrial process you can imagine, just knowing more, with more frequent sensing and calculation, allows for real-time optimization in a way that a human couldn't do.

Alp Kucukelbir

Absolutely. And to connect this maybe to the original point, none of this can happen if Bob or Susan can't trust what the machine learning AI output is.

David Roberts

Yes, yes. You do not want to mess with concrete and steel. You really don't. You do not want approximations in your concrete and steel.

Alp Kucukelbir

Nope. Very hot, explosive, very dangerous. Right. So, you need to quantify the uncertainty. You need an algorithm that says, "Bob, I've seen this type of steel that you're melting. It's kind of similar to these past few times that you've melted like this. Those past few times, the machine learning algorithm said, hey, you got away with just like processing this a little bit cooler. Great, you can do that again now." But then, Susan needs to know, like, "Hey, I've never seen anything like this. Like the batch of steel, the Toyotas that you just melted, has a huge amount of titanium in it. There was some contamination," and says, "call someone."

"Right? And that's what our algorithms do. They say, like, "Call someone, call your quality manager, because someone needs to look at this that's smarter than an AI."

David Roberts

Interesting. And it follows, I mean, just intuitively, that the performance of these systems improves over time, right? As they gather more data. And I had another question about this as it applies to industry. You know, the best way for machine learning to learn and improve is just to get more data. But when you start talking about industry, you're talking about companies that have a lot of IP, a lot of intellectual property, a lot of processes and things that they don't want to share. It seems like the way machine learning could work best is by operating in a bunch of diverse plants and pooling that knowledge.

Like, that would be the fastest way to have it get better and better. But, like, unlike us poor journalists, owners of factories are not going to freely give up their information to AI. Yeah, how much of a limiting factor is that? The sort of straitjacket that IP and those types of concerns impose.

Alp Kucukelbir

It's a great question. So, two things there. I would say, in general, that is a challenge that we try to raise awareness of, and I'm so glad that you did, in terms of the broader application of AI in hard-to-decarbonize sectors. We think of data as whether it's available and whether it's accessible. And you're talking about data accessibility, which is whether it is internal to specific entities, or whether it is sold at cost, or whether it is completely open source and shared. And really, moving towards making that data accessible is going to enable broader adoption of AI, for example, in the power sector, emissions monitoring, weather forecasting, and things like that.

David Roberts

Yeah, yeah. Are you trying to, I mean, do you find yourself trying to make the case to, like, the owner of a steel factory, like, "It's scary for you to share, but the improvement in the machine learning that results will compensate?" Is that the kind of argument you're making?

Alp Kucukelbir

So, in industry, the interesting kind of footnote there is that each mill, each manufacturing plant actually has its own idiosyncrasies. And in fact, if you think about where the opportunity gap for machine learning or AI beyond the physics is, you've got chemical engineers, metallurgists running these plants who have decades of knowledge, who've ingested every textbook on this field, they understand what they're doing. So why can't they do it perfectly? Well, it's just because every plant has its own oddness. They're at scale, equipment ages, they make slightly different types of steel. So in fact, we almost have the alternative problem.

A lot of manufacturers think that they have enormous amounts of data. They're like, "Oh, you know, we don't know if your AI is going to be —" Most manufacturers' data fits on my watch. It's actually small data, you know, because they're only coming from that plant, you know, so you need technology that really adapts to their specific production and provides insights for them. And that's a huge component of the trust. Right. You go to Bob, you go to Susan. They've been working there for 30 years. They want to see how your technology, AI, machine learning, the silver bullet, works for them on their data, on their equipment.

David Roberts

So, there's a bespoke aspect to it in industry.

Alp Kucukelbir

Indeed, indeed. And the nice thing is, again, with machine learning, you write the template, almost, right? There's nothing in my software that has steel-specific knowledge. There's nothing in my software that has cement- or chemical-specific knowledge. But there are features in our software that know how to handle industrial production data, that understand the common optimizations that manufacturers want to do. And it's all about marrying that and empowering the existing workforce. This is another thing: there simply isn't enough machine learning and AI talent in industry to build this from the ground up.

David Roberts

Yeah, well, this was going to be a question I was going to ask later, but like, if I'm a factory owner, I have to find someone who is an expert at my thing and then also an expert at AI. Like, do such people exist? Are those people out there?

Alp Kucukelbir

Few and far between. And they are concentrated in a few R&D divisions of maybe some of the largest companies in each sector. And that's about it. And so that's why our vision has always been to build a software tool that non-computer scientists, people with deep expertise in what they do, can use. And that's where we hit scale. It would have been easy for me to just build a consultancy and go work with a few steel plants and teach them how to do machine learning to reduce their costs and reduce their carbon footprint.

But if I'm going to hit the scale of making the entire steel sector more cost-efficient and carbon-efficient, I need to empower them with a tool. So, it's actually possible to engage a workforce with some amount of training, which is actually quite lightweight, and a powerful software tool that is in their language. There's almost nothing in my software that is machine learning jargon, AI jargon. It's all described in terms that chemical engineers, quality managers, and operations people can actually understand.

David Roberts

Right? So, you just embed the AI expertise in the software, and then all you need to use it is subject matter expertise, basically. So, let's move on. We talked about power, we talked about manufacturing, and I think in both those areas, it's real easy to see, just like, the sky is the limit here. Like, there are so many areas where there are large, messy datasets and optimization waiting to happen. Let's talk briefly about material science, because this is the one that I'm sort of like — you know, what do I know? But just, like, personally, this is kind of my dark horse.

Like, I am excited about the applications of artificial intelligence in materials science just because people are trying to come up with new materials, and they generally have to make the new materials and test them to do that, and it's just hard to make a new material. It takes a long time. And so, as I understand it, AI is sort of enabling a lot of this to be done virtually. So, say a little bit about material science and some of the things that are being done now with machine learning there.

Alp Kucukelbir

Yep, absolutely. I share your enthusiasm. It's very exciting. So, the energy transition is going to require new materials. The easiest way to think about this is, like, more efficient batteries: good idea. More efficient solar photovoltaic cells: good idea. Lighter materials, stronger materials for buildings, for cars —

David Roberts

For transmission lines!

Alp Kucukelbir

For transmission lines, you got it. All this infrastructure that we're going to need to build. First of all, it's important to note that these make a difference, right? Like, if we think about what's happened in lithium-ion battery technology, it's really a material science innovation process that has led to it becoming better, cheaper, lighter, you name all the properties that we want. But what you described is sometimes called the Edison method. There are lots of stories around whether Edison himself was perspiring, as he famously said, but, you know, it's a lot of trial and error, most likely with a bunch of people that he was working with, to discover materials that have certain properties.

That's the objective. How do I manufacture a specific type of material that has certain properties that either needs to conduct electricity in a certain way, it needs to bend in a certain way, it needs to be lightweight and bear some load in a certain way? And so, machine learning and AI — coming back to that search and forecasting, two components — is ideally suited here. So, you can think of machine learning, AI basically searching over possible materials, materials that either don't exist yet, or we have very preliminary data about, because there are novel materials that have only been manufactured in labs in a few places.

Or, you know, there are historical materials that we know very well and are familiar with. Searching through that data and using those measurements, we can say, "Okay, let's discover materials for batteries or hydrogen storage or nuclear fuels or semiconductors," all of these kinds of things that have certain properties.

David Roberts

I always think about graphene, right? This is the example that always comes up in this —

Alp Kucukelbir

Bang on. Bang on. Absolutely.

David Roberts

Graphene, it's like the wonder material, but as I understand, it's just kind of too expensive to make. Now, it seems like machine learning could figure out ways to, I don't know, make it cheaper.

Alp Kucukelbir

Yep. So, you can think about existing materials and say, "Okay, what is the bottleneck here?" Right? What is the aspect or property of this material that makes it impossible to manufacture at scale? Okay, fine. So now, I'm going to search over a bunch of materials that do not have that property. That's a bad thing. I want to avoid it, but I like these other properties of it. It's strong, can conduct electricity, all that. So, how do I find that happy medium? And of course, there's no free lunch. There isn't some sort of guarantee.

But minimizing the amount of physical processing that's required to explore new materials is huge, because, as you described, you have to manufacture the material. But then once you manufacture it, you also need to use very expensive equipment to be able to measure its actual properties in the real world. It's like an MRI machine in a hospital that is being optimally used at capacity: it's a bottleneck. And so, same thing. All of these pieces of equipment that are used in materials science, in the production of materials and in the quantification of their properties, are bottlenecks. So, you want to try as few materials as you can, but you want to be as close to the target as you can be as well.
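As a rough illustration of trying as few materials as possible, here is a hypothetical Python sketch of surrogate-model-guided search: fit a model to a handful of measured candidates, then pick the next candidate to "measure" where the model is both promising and uncertain. The property function, the candidate features, and the acquisition rule are assumptions for the sake of the example.

```python
# Surrogate-guided search: spend as few costly "lab measurements" as possible.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(5)
candidates = rng.uniform(size=(200, 4))            # hypothetical composition features

def lab_measurement(x):                            # stand-in for slow, expensive equipment
    return -np.sum((x - 0.3) ** 2) + 0.01 * rng.normal()

tried = list(rng.choice(200, size=5, replace=False))   # a handful of already-measured materials
scores = [lab_measurement(candidates[i]) for i in tried]

for _ in range(10):                                # a small experimental budget
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(candidates[tried], scores)
    mean, std = gp.predict(candidates, return_std=True)
    ucb = mean + 1.5 * std                         # favour candidates that are promising AND uncertain
    ucb[tried] = -np.inf                           # don't repeat experiments
    nxt = int(np.argmax(ucb))
    tried.append(nxt)
    scores.append(lab_measurement(candidates[nxt]))

best = tried[int(np.argmax(scores))]
print("best material found with 15 measurements:", candidates[best].round(2))
```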

David Roberts

Right? So, material science, tons of stuff possible there. And this is one of those areas, and I think this is true for every area we've talked about, but it seems particularly true for material science, where there are lots of fairly obvious ways you can see it applying, but then there are also these tantalizing possibilities hovering out there on the horizon: genuinely new materials that can make cars super lightweight, that can transmit electricity without resistance. We can't predict now what those will be, obviously, but there are some real prizes out there on the horizon in this area, it seems to me.

Alp Kucukelbir

Absolutely. And this might be one area where I will defend, and I'm surprised that I'm going on record for this, I will defend the hallucinations as being a possible good thing, right? Because this is an application where we're actually asking a machine learning or AI algorithm to hallucinate, to hypothesize materials that are completely outside the realm of imagination, and that actually could be interesting, right? That's exploration rather than exploitation. Could be interesting to get some ideas through there.

David Roberts

Yeah, yeah. Okay, so that's power, industry, and materials science. That's probably enough in terms of examples of what's happening. Let's briefly just talk about, because we've made a lot of big claims here about what machine learning and AI can do, if you're trying to rein in the hype and keep it realistic, what can't AI do? What do you caution people about the limits here?

Alp Kucukelbir

So first and foremost, AI is software. Software is virtual. Virtual things can't move things in the real world. AI can't move molecules around. We have to move a whole lot of molecules in a very specific way to win this climate war. So that has to happen. AI is not the silver bullet for this, and that's really important. So because AI itself is not moving things around, we have to think about how AI complements all of the necessary investments we need to be making that are painful and going to cost money and are not politically straightforward. And think about complementing those two ideas.

And so the advantage, the way to think about AI, is that because it's software, because it's virtual, we have it now. It's not some sort of future thing that we are banking on. And the second thing is you can copy/paste it. So scaling software is good. So, whenever I think about use cases, let's say, for example, weather forecasting, and we briefly alluded to that being important for power generation and demand forecasting, and so on and so forth, as well as adaptation. This is something where you want to scale.

It's a data problem. It is a software problem. You want everywhere in the world to have access to high-accuracy weather forecasting: good idea. That's a good idea. Now, where you can expect challenges, where AI can't, again, make things different, is the physical world. And so, it can only change how we interact with the physical world, whether it's agriculture, whether it's how we design our built environments, again, how we generate and route power. It can help with how we do that, but it's not going to fundamentally do the thing.

David Roberts

Anyone who's followed the clean energy transition or climate politics at all, or is aware of politics, or has ever met a human being, knows that a whole family of problems impeding us in climate mitigation are not technical. Some are technical, but a lot are human, a lot are just sociological, a lot are about psychology, about how to organize people, how to change behavior, all that kind of stuff. And that's just the fuzziest of fuzzy data. But also, we have a lot of that data. So I wonder, are there examples of people trying to use machine learning on that? You can imagine, like, a database of transmission siting examples, and you go over that data: which ones worked, which ones were easy, which ones were faster and which were slower?

What are the characteristics of the communities that were welcoming to transmission versus the ones that were NIMBYs, et cetera, et cetera? Is anyone trying to use machine learning or AI on that kind of fuzzy social and political stuff?

Alp Kucukelbir

Yes, it's such a good point. So, it's a lot harder, but it's a lot of —

David Roberts

Harder for humans, too.

Alp Kucukelbir

Of course, it's kind of a challenging problem. So, for example, let's take nuclear permitting. So that's an interesting direction. We're actually expanding this report that you're mentioning with a few new chapters this year, to be presented at COP 29. One chapter is going to be on nuclear, and we're discussing this. The amount of time it takes to navigate a process like nuclear permitting, due to the huge amount of regulation that goes into it, would benefit from the ability to, for example, sift through huge amounts of text data to glean, let's say, insights like the ones that you were describing.

Now, that's a regulated field where you're thinking about the application of a technology that has potential pitfalls like these hallucinations. It might, you know, fail to capture something that's important and omit it. So, what are the lower-hanging fruits here? Well, the lower-hanging fruits that are potentially interesting, and you see some academics thinking about this, is how text data can inform climate sentiment, so to speak. So, the generative production of images is one where, for me, it's always been a little bit of a question mark. I'm like, okay, that's cool, but, like, what is the actual, you know, where's the win here?

And here's an idea, right? So, folks who are struggling to understand what the impact of climate change means for them can actually be provided with AI-generated imagery that shows the impact of rising water levels in their neighborhoods, with street names that they recognize, landmarks that they recognize. And so, from a psychological standpoint, I've seen these computer-generated images that one of my co-authors, David Sandalow, likes to show, of what's going to happen to Shanghai under different water-rising scenarios. It's extraordinary, right? From just an emotional perspective, looking at these images and thinking, "Oh, my God."

David Roberts

You know, what you're talking about basically is using AI for persuasion, for which you can imagine positive examples, but also, boy howdy, can you imagine lots of ways to get people to believe false things. It can learn what words and images lead people to false conclusions, too, and pump those out. Like, do you worry about, I mean, this is a little bit off our topic, but just in terms of the information atmosphere, are we entering a future where it's just good AIs battling bad AIs forever, and the rest of us just being baffled and confused by all of it?

I mean, what's — I don't know what, I don't know, what is there to say about that? But is this something that haunts you?

Alp Kucukelbir

I'm certainly worried. I mean, disinformation powered with AI keeps me up at night.

David Roberts

Yes.

Alp Kucukelbir

The only thing that I'm relying on is that there have been technological advances in the past that have made disinformation easier and broader. And I hope that as humans, we've developed the intuitions to protect ourselves against them, whether it's technological or not.

David Roberts

You sweet summer child.

Alp Kucukelbir

I know. This one seems really hard, to be honest. Like, with half the world voting in 2024, just the amount of disinformation that's powered with AI does keep me up at night.

David Roberts

Yeah, yeah. Well, anyway, that's thankfully outside our ambit here. Quickly, I wanted to touch on policy. I mean, one of the reasons you originally got in touch with me is that the DOE handed out these industrial decarbonization grants recently. We did a pod on it, and they were mostly about building factories, building foundries, stuff like that. And your point was like, "Hey, what about AI?" So, talk a little bit about policy and what you'd like to see in terms of positive machine learning examples, maybe getting more support or just more attention or what kind of things would you like to see policy-wise?

Alp Kucukelbir

Yeah, for sure. I think both. Right. So, when you think about what the DOE grant does, it helps the operators, whether it's a utility, a factory, whatever it is, de-risk adopting a new technology that the DOE deems is good for society. So, the really high-risk things are, of course, these hardware investments that need to be made, right? Like, we need to stop making steel using blast furnaces. In the steel world: obvious. For steel manufacturers: less so, if that's what you've operated and that's your world.

It's easy to say from the outside, "Hey, stop doing this and build an electric arc furnace." Who's going to pay for that? All these types of things. So, the DOE is providing an essential resource on that front. But these are medium-term impacts. So obviously, we're looking at 2030, 2050, and all of this is great, but a gigaton of carbon dioxide mitigated today is valuable. We can't just be waiting for some 2030 solution where we've figured out the cement problem or figured out the steel problem. Having AI be part of that narrative, I think, is important for those applications that I described that are necessary. Think about manufacturers who want to produce things cheaper, which is what AI and machine learning typically enable: how do they then de-risk the adoption of these technologies?

And same thing for grid operators, with things like power flow. What does the DOE need to do to de-risk a utility adopting technology that will help get power cheaper to where it needs to go?

David Roberts

Right. And that's very much a public policy thing, like anything utilities do, is going to have to pass muster with PUCs and all that. There's definitely going to have to be public administration involved in it.

Alp Kucukelbir

Precisely. So, there is a rising awareness. The DOE now has an AI office, which was newly formed last year, led by a wonderful, wonderful team of people. And so, this awareness is increasing. For me, it's been, I mean, let me be completely frank: the hype and the craze around large language models, while it's captivated the public imagination, actually hurts those of us who are trying to get machine learning and AI out into these real problems. You know, because it just increases this distrust. It's pitched to people as a silver bullet. It's going to replace you.

It's not — this isn't what AI is here to do. AI is here to complement the energy transition, the hardware and the big capex investments we need to make, with technology that enables us to operate these facilities in a way that's circular, that adapts to variable power, alternative fuels, all these types of initiatives.

David Roberts

If I could insert a quick rant here, because it's something I've come back to a lot. I wrote a piece on it in Vox a long time ago (no one noticed it when I wrote it), but I've always clung to the thesis. There's this sort of Vaclav Smil school of thought that energy transitions are very slow and very physical, and they take a long time. And from that perspective, it's sort of silly to think that we can do this massive energy transformation by 2050, et cetera. But my whole point is, it's not all physical this time, right?

Like, there's a huge element of it that is digital, that is about dematerialization, where we're substituting intelligence for material, basically. And that's what this is all about. All this machine learning and artificial intelligence stuff is so that it's not this overwhelming physical task. It's a much lighter lift, I guess, is the way I would put it.

Alp Kucukelbir

I'm very glad that you made that point, because when you think about, call it, the operating system for adopting this whole set of new technologies that are going to enable the green transition, the status quo won't work. Bob and Susan, it's just not going to work. We don't have consistent power to consistently generate and consistently deliver; we have variable power, we have alternative materials, we have recycled feedstocks. That's exactly what you're describing. The one thing I will say, however, is, as you know, Vaclav is very optimistic and positive and says "yes" and "great idea" to everything. And we presented this report in Japan.

He sits on the panel that sponsored this report. I'm very pleased to say that we got a thumbs up from him, which I consider maybe the greatest compliment in my entire professional life.

David Roberts

The Smil badge of approval.

Alp Kucukelbir

The Smil thumbs up.

David Roberts

Well, I've gone over time, but there's one main thing I still wanted to ask you about, and maybe we can just touch on it briefly. Of course, the thing people are most worried about here, the thing that's thrusting AI into the energy-world news, is just the extraordinary energy demand of this sector in the form of data centers. And, you know, there's all this worry that we need all these data centers, that they're going to overwhelm the grid, and that we're going to have to build a bunch more gas plants because we don't have the technology to keep up yet, et cetera, et cetera. What do you make of all this?

What do you make of the basic demand forecasts? Because one of the things that's always struck me is, yes, you need all these data centers, but part of what machine learning is going to be doing is making data centers more efficient. So, it's going to be taking the edge off of that. How does that all balance out in your head? Is it worth it? Is it going to be the crisis that we think it's going to be, et cetera? How do you wrestle with that?

Alp Kucukelbir

Yeah, it's such a good topic, and it's really complicated. The first thing I'll say is that there are huge uncertainty bars. I'll take a page out of my own book here and quantify my own uncertainty. There are so many projected scenarios, and a huge amount of uncertainty. So here are some facts. This recent craze around large language models has created unprecedented demand for energy and for compute. That is fact. Where that will go from here is unclear, but that's fact. Another interesting fact, I would say, is that the people building these data centers, largely the tech titans Microsoft, Google, Amazon, are also obsessed with procuring green electricity.

David Roberts

Yes.

Alp Kucukelbir

I mean, obsessed. Just truly walking the talk. Remarkable engagements. In fact, as Vijay Vaitheeswaran wrote in The Economist, Microsoft and Google have even partnered with Nucor, the steel producer, to collectively procure green electricity.

David Roberts

Interesting.

Alp Kucukelbir

A steel manufacturer and a software company —

David Roberts

Walk into a bar.

Alp Kucukelbir

Exactly. Trying to get some green electricity here. So, okay, that's another factor. I would say what you describe is absolutely true in two ways. One is that machine learning and AI are going to help optimize how we run the data centers. But the second is that, unlike something like Bitcoin or cryptocurrencies, AI actually is getting more efficient through clever algorithms and mathematics. And so we increasingly see, and this is active research, academics focusing on "How do I get something like a large language model to produce equally 'accurate' sentences or summaries, but using less energy by just doing less computation?"

Now, these all get intermingled into a kind of Jevons paradox, the rebound effect.

David Roberts

Yes, I was going to ask about that, too.

Alp Kucukelbir

"If it gets cheaper, will they use it more?" and all that kind of stuff. And so, there is so much uncertainty kind of around that as well. Here are some other kind of, I would say, facts. At the moment, the amount of energy that AI is currently drawing is something, globally, it's like something of the order of about 10% of what Americans consume in terms of just watching television. Right. You know, so Americans watching television consume ten times more energy than all AI consumption. So, fine, it's easy to point to something in the past and say, like, okay, maybe it's going to be similar in the future, but really in terms of quantities, that's what we're looking at.

My belief is that there's going to be a kind of balance of increased demand and integration of this type of technology into things like search and things that we do a lot of at scale, with it becoming cheaper, it becoming more energy efficient, and the electricity being used to generate those results coming from green, renewable resources. Therefore, the carbon footprint at least will remain manageable.
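To make that rebound logic concrete, here is a minimal back-of-envelope sketch in Python. Every number in it (the energy per query, the efficiency gain, the usage growth) is an assumed illustration, not a figure from the conversation:

```python
# A minimal sketch of the Jevons/rebound question discussed above.
# Every number here is an assumed illustration, not a figure from the episode.

energy_per_query_wh = 3.0   # assumed: watt-hours for one model query today
efficiency_gain = 4.0       # assumed: queries become 4x more energy-efficient
usage_growth = 6.0          # assumed: query volume grows 6x as the tech gets cheaper

old_total = energy_per_query_wh                              # energy at today's usage level
new_total = (energy_per_query_wh / efficiency_gain) * usage_growth

print(f"Net change in energy demand: {new_total / old_total:.1f}x")
# -> 1.5x: if usage grows faster than efficiency improves, total demand still rises,
#    which is why the grid mix supplying it matters for the carbon footprint.
```

Under these made-up numbers, efficiency gains alone don't shrink total demand; whether the footprint stays manageable depends on how green the supplying electricity is, which is the balance described above.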

David Roberts

And so, I guess maybe this is obvious, since you have devoted your life to this subject: you think that net-net, machine learning and AI are going to be a vast help here. Like, there's no scenario where the energy demand is somehow worse for the energy transition than what it's doing to optimize various processes and whatnot. Is that fair?

Alp Kucukelbir

I couldn't agree more. I think there's more risk of machine learning and AI, as a general-purpose technology, being used for ill than there is coming from its own carbon footprint.

David Roberts

Yes, if you want nightmares to worry about, that's where you look.

Alp Kucukelbir

That's where you look. Right. If you think about, like, does machine learning and AI also make drilling for oil and gas cheaper and more effective? Yes. Right. Like, can machine learning and AI be used as a weapon, you know, in various — yes. It's a general-purpose technology. So it's up to us to frame the entire policy environment to make sure that we steer all entities, private, public, to using this technology for good. I don't see the carbon footprint of machine learning and AI as being a major threat to that adoption.

David Roberts

Right. Well, speaking of threat scenarios: when you talk about Bitcoin and crypto, those are tied to compute; you compute to do them. So, sort of by definition, you can't produce the same thing with less compute, since the compute is the thing. But with AI, you can accomplish the same things with less computation, less time, less material, whatever, because the AI is getting smarter and more efficient. My question is, are we now at a stage where there are AIs making themselves more efficient? In other words, are AIs improving themselves continuously? Because this is the thing, of course, that everybody freaks out about, where you end up with the Terminator or whatever, Skynet, all that. Is that happening?

Alp Kucukelbir

No, I think is the short answer. The idea of using technology to make it more energy efficient: yes. But it's being done by researchers who are saying, "Hey, AI can optimize factories. Can it optimize energy consumption of things that look like itself?" Sure, but that's still a program that a human wrote. We don't have some world in which AIs are figuring out how to draw less and less energy and ingest more and more data and go out of control. But I really appreciate that you brought up that point about crypto because it gets a lot of comparisons.

The global crypto consumption in 2022 was something like 100 terawatt-hours, which is equivalent to the annual output of the Netherlands, or some country of that size. But then in 2022 or 2023, Ethereum passed this change where they said, "Hey, look, we're going to do this thing that reduces the amount of power Ethereum is drawing, because we have this mandatory compute aspect to it." That reduction was 81 terawatt-hours.

David Roberts

Good God.

Alp Kucukelbir

They decided to make a change that reduced Ethereum's annual energy consumption by an amount the size of Belgium's. That's the delta.

David Roberts

Oh, my God.

Alp Kucukelbir

We're nowhere near these numbers with AI, and it doesn't require this much compute.
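For scale, here is a rough arithmetic sketch of that crypto comparison in Python, using the approximate figures quoted above plus an assumed value of roughly 80 TWh per year for Belgium's electricity consumption:

```python
# Order-of-magnitude arithmetic behind the crypto comparison above.
# The first two figures are the rough numbers quoted in the conversation;
# Belgium's annual electricity consumption is assumed at ~80 TWh, for scale only.

crypto_2022_twh = 100     # ~global crypto electricity draw in 2022 (roughly the Netherlands)
eth_merge_cut_twh = 81    # ~annual reduction after Ethereum dropped proof-of-work
belgium_twh = 80          # assumed annual electricity consumption of Belgium

print(f"Merge savings vs. Belgium: {eth_merge_cut_twh / belgium_twh:.2f}x")
# -> ~1.01x: a single protocol change freed up roughly a Belgium's worth of electricity,
#    which is the "delta" referred to above.
```

The point of the sketch is only the order of magnitude: because proof-of-work bakes compute into the protocol, one design change can move country-sized amounts of electricity, whereas AI's draw today is nowhere near that scale.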

David Roberts

I don't know if I want those guys, those particular guys wielding power, to have Belgium-sized effects. It's profoundly unsettling. But, yeah, as you say, totally separate thing. Okay, I think we've covered the waterfront here. This is super fascinating. Just by way of concluding, what do you see as the immediate barriers to all these things we're talking about coming true: improving and dematerializing processes, making power distribution more efficient, all these great things it can do? What are the barriers?

Is it a human resources thing, like not enough people trained in this, or policy attention, or just available compute? What are the bottlenecks to getting where we want to go with this stuff?

Alp Kucukelbir

Yeah, that alone can be its own podcast. Let me focus on two. I would say data and people are the two primary barriers I see to broader adoption. We need to bring AI and machine learning literacy into the workforce of folks who are in the domain areas where we want AI and machine learning to have an impact. We need the manufacturers, the power sector operators, the agricultural sector to get that awareness so that they can adopt digital technology and really start incorporating it into their workflows. How's that going to happen? My perspective is that professional societies are a great venue for that, because the experts in those fields already have these networks and this thought leadership, venues where they share their insights.

Bringing training into those networks is a great way to build that literacy scalably. But of course, when we think about fundamental education, the next generation of the workforce is already much more digitally literate. And of course, we want to encourage them to go work not just for large technology companies (they're nice companies to work at) but also for other companies, such as chemicals companies, manufacturing companies, things like that. But then, data. I talked about data accessibility, but there's a much more fundamental aspect: data availability. Data availability is whether the stuff we need to power machine learning and AI is measured, digitized, calibrated, and stored.

This is a major impediment to the majority world really being able to use the technologies that we're developing here in the West. When you think about it: here's a weather forecasting model, it's machine learning-based, it's amazing. It requires this kind of radar infrastructure, these kinds of inputs and whatnot. Do I have that in the majority world? If not, how do I then think about the requirements for where I want to deploy this technology? What are the data availability problems there? And of course, once the data is available, how do I access it? The most free and empowering version of that being open-source data.

I'll give an example. You know the technology that recommends the next movie you should watch or the next article you should read. All of that was catalyzed by Netflix releasing a ton of data and putting a money prize on it (the Netflix Prize), basically encouraging the academic computer science community to start tackling the problem: I've got people consuming movies; how do I recommend new movies to the same people, or new movies to new people? That availability and accessibility of data engendered a whole branch of machine learning that we now get to take advantage of, and it's providing value.

Can we do the same for other aspects of climate change? In fact, at Climate Change AI, a not-for-profit I'm also involved with, we have an initiative called Data Gaps, where we're literally trying to ask, across all of the high-impact applications of machine learning and AI: where are the data? Are they available? Are they accessible? If not, let's raise awareness. Who's sitting on this data? Can they share it? Can we incentivize them? Can we buy it off them? Things like that.

David Roberts

Well, I have two reactions to this, and at this point, I just apologize to listeners. I can't shut up about this stuff. But I just have two more quick things. One is, it strikes me that alongside more sophisticated computing these days, we're also getting a lot more sophisticated sensing: tiny little sensors, drones, self-powered sensors. Our capacity for data gathering, just in a technological sense, is improving alongside computing. So it seems to me that more data is incoming no matter what we do, just a ton of data. These are complementary developments, basically: our ability to gather more data and our ability to chew it up and make sense of it, both advancing alongside one another.

The other thing I just want to touch on briefly, because it would be criminal for us to do this whole pod and not mention it, is the equity implications of all this. This is raised by your answer: the people, places, and industries that are wealthy and sophisticated enough to have data are going to be the ones who benefit from this stuff, while the poorer or developing nations that don't have the data don't benefit. How do you think about equity and what to do about it?

Alp Kucukelbir

Absolutely. So, the entire approach that we take in developing these types of technologies needs to consider, as you described, the requirements of the broadest set of stakeholders that we can incorporate. And for me, once we've developed a technology, ironed out all the wrinkles, and gotten it into a form where it's cheap and reliable, that's when I see a huge opportunity for its adoption, in a very efficient and cost-effective way, in, for example, the majority world. An example here would be western nations figuring out exactly what type of data and what kind of hardware you need to operate your field, to maximize the crop yield of the food you're growing there.

So, here we can do all of the testing. We can figure out, "Hey, do I just need drone footage? Do I also need satellite data? Do I need ground measurements? Do I need people going out and taking soil samples? What do I need? Okay, I've got all these experiments." I could afford to do it because I'm the West; I'm rich. Okay? Now I've gotten down to the bare minimum: all I need is a drone, and with a drone I get 80% accuracy. Great. Now we ship that solution, which has been tried and tested, into an environment where we can immediately start delivering value, effectively providing leapfrog abilities for the majority world to catch up.

David Roberts

That was exactly the word that was on my mind. We talk about developing nations leapfrogging fossil fuel development, but this opens up the possibility, at least, of them also leapfrogging inefficient, materials-intensive, wasteful industry and agriculture, et cetera.

Alp Kucukelbir

And it'll be cheaper. And that's the core thing, right? When you think about machine learning and AI, it's not capex. We're not building — I mean, yeah, they're building data centers, but the AI that we're running runs on your phone, on your laptop. This is just making everything cheaper, more profitable, and that's great, right? They want high crop yields. You want to produce cheap steel in the majority world. You want cheap concrete, cheap materials. So it all goes hand in hand.

David Roberts

Yeah, yeah. Okay, well, I have to shut up or, like, someone's gonna — a cane is gonna come out and drag me. Drag me out of my office or something. Thank you so much for coming on. This has been enormously helpful and demystifying for me. I think this is gonna give people a way better way of thinking about all this stuff other than just, like, how annoying Sam Altman is. So, I appreciate you coming on, and, you know, maybe we can check in in a few years and see what sort of magic and wonders have sprung up in the meantime.

Alp Kucukelbir

I would love to do that. David, thanks for having me.

David Roberts

Thank you for listening to the Volts podcast. It is ad-free, powered entirely by listeners like you. If you value conversations like this, please consider becoming a paid Volts subscriber at volts.wtf. Yes, that's volts.wtf, so that I can continue doing this work. Thank you so much, and I'll see you next time.
