Energy in 30: Harnessing the power of generative AI

Tune in to Energy in 30, hosted by Joan Collins and David Meisegeier. In this episode, hear from ICF innovation strategy and services lead Nick Lange. Together, they discuss generative artificial intelligence (GenAI) and its impact on the energy industry.

Topics in today’s episode include:

  • Three reasons why ChatGPT went viral in 2023
  • Is GenAI trustworthy? How to assess the reliability of GenAI output
  • Working to be good stewards of powerful responsibility
  • What does a GenAI-fueled future look like?

Full transcript below:

Joan: Welcome to Energy in 30. We'll use the next 30 minutes to explore how utilities and the industry are reacting to forces that are shaping new offerings for customers in order to meet decarbonization goals.

David: If you are a utility manager, consultant, technology provider, or just curious about energy, we hope to push your thinking about the changes that are happening in the energy industry with me, David Meisegeier.

Joan: And me, Joan Collins. And David, what more topical subject than to push our thinking around generative artificial intelligence—that's a mouthful; GenAI, I think, is how people are referring to it—and the impact that it's having on our industry. I don't know about you, but all summer, I felt like every turn I took, I was talking about ChatGPT [Chat Generative Pre-trained Transformer]. Did you experience the same?

David: It's showing up everywhere: "How to make $100 an hour using ChatGPT."

Joan: "How to create book titles."

David: Yes, it is everywhere.

Joan: It really is. So we've invited our colleague, Nick Lange, to dive into this world and share some of the practical advantages of GenAI and some of the testing he's been a part of with some of our industry partners.

David: Nick's an innovation strategy and services lead at ICF. He has 20 years of experience at the leading edge of energy policy, program, product, and people-based solutions. He started as an engineer, and his journey has evolved to include the emergent space of the social sciences. And today, he's working intensely on solutions that leverage generative AI. So, Nick, welcome to the show.

Nick: Thanks, David. And thanks, Joan. It's great to be here.

Joan: We are so glad to have you here because AI has been around for a while, but it's like, what happened? Why is this all of a sudden such a big deal?

Three reasons why ChatGPT went viral in 2023

Nick: Yeah, you talk about hearing about it all summer; I've been hearing about it all year. And I think the big tipping point was this past February. You might've seen news that it was the fastest-growing tool in history, reaching 100 million users. And the question is: Why did it go viral? Those who have been watching this closely understand that under the hood, it didn't seem like much had changed, but there was a lot that came together at the right time.

There were sort of three big reasons that people have identified as to why ChatGPT created such an impact, even though, technology-wise, the leap wasn't that big. AI, as you mentioned, has been around for a while, but there was something different about ChatGPT, and it boils down to three big things.

The first is that there was a significant jump in capabilities. I'll boil that down: it seemed like a line was crossed where this tool could be as good as or better than many humans in a wide array of areas, and that surprised a lot of people. So, capabilities was one big reason.

The next big reason was ease of use. With a lot of the historical AI that had been around, you had to have a Ph.D., or a million dollars, or access to people with millions of dollars and Ph.D.s to get to play with it. And here was a chatbot that seemed to be this good—the cutting edge, the best of the best—and it was available for free on a public website, and you could do something [with it] and share it with friends. Word-of-mouth is mostly how they got to 100 million users as quickly as they did, within a month, frankly.

So capabilities was number one. Number two was how easy it was. And the last big piece was cost. I mentioned it was free to use, and early on, it wasn't just locked up in a laboratory. Developers and the industry could start to tap into these powerful new capabilities that were easier to use than ever.

So all of that came together in one moment. Any one of those things would've been big news on its own, but really, the reason why you heard about it so quickly and why we're still hearing about it is what happens when all of those come together.

David: And it is not just ChatGPT; we're seeing a lot of companies coming out with their own variations of generative AI. Is that leveraging the same foundational technology?

Nick: Mostly, yes. So if you want to go very far into the weeds, there are really great resources available online. But there was a breakthrough three to five years ago in a new architecture, and the key to that was just how far you can get if you have a really good model for language. If we think about it, that's the way we get around the world, the way we understand what's in pictures, the way we understand computer code. There's a lot of different things that have the structure of language: names for things, relationships to things, how things work together.

Language encodes quite a lot about our world. And that new model was hooked up to more data than ever before, in terms of how much language was fed into it. So the relationships that the model could understand and learn about the world and how it operates, that's really the fundamental underpinning of many of the advancements you've been hearing about. Whether it's OpenAI or Google, these are [the GenAI tools] using that new architecture.
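
[Editor's note: as a toy illustration of what a "model for language" means here, the sketch below learns which word tends to follow which from a tiny invented corpus. Real systems use transformer networks trained on vastly more text; everything in this snippet is an illustrative assumption.]

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for the web-scale text real models train on.
corpus = "heat pumps move heat heat pumps save energy energy codes change".split()

# Count which word follows which: a crude stand-in for the statistical
# relationships a transformer learns across billions of documents.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word seen in training."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("heat"))  # -> "pumps": a learned relationship between words
```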

David: And ICF has a version now as well. Is that right?

Nick: That's exactly right.

One of the things that's important to recognize is that there's a limit to what was used to train these models. When I talked about [new GenAI models] hooking up to more [data] than ever before, there's also a lot they weren't trained on. There are a lot of humans that out-of-the-box AI is not better than, and that's in these niche areas—or sequestered areas—where the data isn't part of a publicly available resource, say, on a webpage, but sits with expert scientists in particular fields that aren't well represented in the training data sets. We've been tapping into the same architectures and extending these models to be able to work with private data, tapping into the same intelligent capabilities but working with a different resource.
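
[Editor's note: Nick doesn't describe ICF's implementation, but the pattern of extending a model to work with private data is commonly called retrieval-augmented generation (RAG): fetch relevant passages from a private corpus and hand them to the model as context. A minimal sketch, assuming invented documents, naive keyword scoring, and a stubbed model call:]

```python
# Minimal retrieval-augmented generation (RAG) sketch. The documents, scoring,
# and llm_complete stub are illustrative assumptions, not ICF's system;
# production setups typically use vector embeddings and a hosted model API.

private_docs = {
    "program_manual.txt": "Rebates for cold-climate heat pumps rose in 2023.",
    "field_notes.txt": "Installers report longer lead times for ductless units.",
}

def retrieve(question: str, docs: dict, k: int = 2) -> list:
    """Rank documents by naive keyword overlap with the question."""
    words = set(question.lower().split())
    return sorted(docs.values(),
                  key=lambda text: len(words & set(text.lower().split())),
                  reverse=True)[:k]

def llm_complete(prompt: str) -> str:
    """Stub standing in for a call to a generative model provider."""
    return "[model answer grounded in the supplied context]"

def answer(question: str) -> str:
    # Hand the model private context it was never trained on.
    context = "\n".join(retrieve(question, private_docs))
    return llm_complete(f"Using only this context:\n{context}\n\nQuestion: {question}")

print(answer("What changed for heat pump rebates?"))
```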

WYSIWYG is now more about what you say than what you see

Joan: We've heard a lot about this as kind of almost revolutionary, that it's democratizing access, and I think that's so powerful. And when you look at this, it's being used in so many different industries. I think what would be really interesting to those listening, and even just to me, is how is it affecting the energy industry? How are we using this? And, I think, you are really in a nice position to be able to talk a little bit about that.

Nick: Yeah, it's a good segue. Briefly, the ease of use, being able to just ask the AI for what you want, is probably even more revolutionary than the capability leap. For those of us who are old enough to remember the earlier days of computers, there was a time when you didn't have windows to click on, and you had to write it all into what's called a command prompt. There was a big change with “what you see is what you get”—WYSIWYG—which opened up computers to a lot of people, so you didn't have to be a coder to word process or do spreadsheets. That was significant.

A lot of people are talking about this new era as stepping into a “what you say is what you get.” That ability means experts who might not be data scientists can simply talk through the process and how we might use the AI; the interaction is very similar to the way you talk a new hire through the process and the way you do things. And now, the new hire is a computer program, which opens up all sorts of questions.

Joan, I know you wanted to talk about the way we're using it [in the energy industry], but one thing that I really want to stress is just how critical the approach you take with a new technology or tool is. The metaphor I've been using, I borrowed from healthcare, frankly, not the energy industry—but let's say there's a new wonder drug out, and we really want it to be able to cure illnesses. First, though, it's really important to make sure that it's safe and effective and approved for use. I think AI raises a lot of concerns around safety, yes, and I think we should talk about that. And effectiveness. And separating those is important. Safety, first and foremost, is: Can you trust these things? For many people, this new powerful architecture is a bit of a black box.

We've probably all heard stories over the summer about how these language models went off the rails and started talking crazy talk. Safety is also about the things we trust in society: if change happens too quickly, it can be very disruptive, and there can be unintended side effects. Like a drug, there are side effects, and if you want to treat this illness, you want to make sure that the cure is not worse than the illness itself. So we're thinking about that a lot when we talk about what types of projects make sense to start, and we're being very incremental about where we look to deploy this, watching closely to test its effectiveness.

Is GenAI trustworthy?

David: Biases, right? I mean, that's a concern, because depending on what this was trained on, [the output] could result in biases. So that goes to kind of the accuracy and safety, right?

Nick: Absolutely. And this is a key issue when you think about what a lot of these models were trained on. They're trained on what was available, and there [may be] misrepresentation or overrepresentation. Let's say you ask for “a successful professional”: it's going to give you what is in its data, what's associated with those words. And where there's bias in the training data, there will be bias in the outputs, which is why the extensions and training on your own data need to control for bias, and also for what's called hallucination, where the model confidently makes things up. Those are key barriers to using this for productive work. And that's where I'm very excited to say we have some early results to talk about, and what that looks like. That's a huge concern, yeah.

Joan: Can you expand?

Nick: Gladly. One thing, though: it's really fun to ask the AI to just answer questions for you, and it will gladly answer them, but that's not the way we're starting to use it. I'll give one example. As an industry partner, we often are drowning—or swimming, at least, before we drown—in all the different overlapping policies and programs; there's a lot going on. That's sort of a good problem to have in our industry: there is a lot of investment in innovation and new equipment, new measures, and new rules, and making sense of that and informing strategic plans is a big, hard problem. Our current way of doing that is having a lot of good experts read all of those materials and make sense of them. For this project in particular, we wanted to know if we could help our experts by doing some of that initial reading, the same way a law office might hire an intern to help organize information so that you, as the expert, can come in and have easier access to it.

We used these technologies to read and look for specific indications of, say, heat pumps or new regulatory standards, to crosswalk where those showed up across all these different data sets, and to organize that into a research aid, so that when we come to a larger body of knowledge, we can look things up in one place. We did some early tests on how well this would work and how susceptible to hallucination or bias it was. We included citations for everything and basically asked the AI to read through and then restructure key bits of information as it related to key themes and key questions we had. Then we had our experts work from that. It saved probably at least a few weeks of time when we didn't have weeks of time. The tight timeframe is really why we turned to this, as a way to look at how AI could be part of our team as we sought to tackle some of these challenges.
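
[Editor's note: the transcript doesn't show the actual pipeline. The sketch below illustrates one plausible shape for that "first read": prompt the model per document for theme mentions and require a verbatim quote as a citation so experts can verify each finding. The themes, prompt wording, and stub are assumptions.]

```python
# Sketch of an AI "first read" that extracts theme mentions with citations.
# Themes, prompt wording, and the llm_extract stub are illustrative only.
import json

THEMES = ["heat pumps", "new regulatory standards"]

def llm_extract(prompt: str) -> str:
    """Stub for a generative-model call expected to return JSON text."""
    return '[{"theme": "heat pumps", "finding": "Rebate raised", "quote": "..."}]'

def first_read(doc_id: str, text: str) -> list:
    prompt = (
        f"Read the document below. For each mention of these themes {THEMES}, "
        "return a JSON list of objects with 'theme', 'finding', and 'quote' "
        "(an exact sentence from the document, so a human can verify it).\n\n"
        + text
    )
    findings = json.loads(llm_extract(prompt))
    for finding in findings:
        finding["source"] = doc_id  # keep the citation trail for expert review
    return findings

print(first_read("docket_114.pdf", "...document text..."))
```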

Joan: Okay. So does that fall in the category of analyzing or organizing, or both?

Nick: In this case it was both. We did a lot of close co-creation with our experts. We worked with our analysts, who were not developers, not software writers, and we asked them how they would do the job. We asked them what types of questions they would be asking of each document and what sorts of things they would be looking for. We took a lot of their expertise, and some of what we brought to the actual programming side came from stakeholder interviews.

So it was a really wonderful blend of qualitative research from our experts, tapping some of their professional judgment about the way they do their work, and trying to embed that as instructions for the AI about what it should be looking for in these documents. And then when it found things, we also asked the team: Okay, once you have it, what do you want to do with it? So we worked to structure the output as a sort of atlas of cross-linked connections around the area of inquiry, again getting lists of questions and key themes, and also what would be useful to them, and packaging that up. It turned out to be an Excel spreadsheet with filters and tables, but it was custom to their need.
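
[Editor's note: a minimal sketch of that last packaging step, using pandas to write the kind of filterable Excel "atlas" described above; the column names and sample rows are assumptions.]

```python
# Turn extracted findings into a filterable Excel "atlas" for analysts.
# Requires pandas plus openpyxl; column names and rows are illustrative.
import pandas as pd

findings = [
    {"theme": "heat pumps", "finding": "Rebate raised to $2,000",
     "quote": "...", "source": "state_plan_2023.pdf"},
    {"theme": "new regulatory standards", "finding": "Code update effective 2025",
     "quote": "...", "source": "docket_114.pdf"},
]

df = pd.DataFrame(findings)
# Excel's built-in column filters then let experts slice by theme or source.
df.to_excel("research_atlas.xlsx", index=False)
```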

How to assess the reliability of GenAI output

David: So how did you know that the output was acceptable or good?

Nick: This is one of the most important questions we asked early on, because these tools can look like they're doing a good job. Early on, we try to really pin down success criteria and testing standards, to make sure that what it's producing is going to be reliable; I guess this is the safety and the effectiveness, right? It's unsafe if it's not going to be a reliable agent producing things. So we developed some tests early on. We always worked on one initial document, or a few, to validate the approach before we ran it on everything. At some point, though, you do worry whether, if you spend all this human time making sure the AI is doing well, you've saved yourself any effort.

The standard for success and quality assurance is distinct to each use case, but we are building out processes to check against some of the bias issues you mentioned, some of those concerns. Ultimately, it begins as a pretty closely scrutinized assessment of early outputs, and we always have the non-AI source linked, too. So, citations in a text: if you write a term paper and you make a claim, it's cited. We'll check on those citations, and if we're not getting 100%, we go back to tune it up a little bit more. That's basically how we set the standard of acceptability for the outputs of these models.
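
[Editor's note: a minimal sketch of the citation check Nick describes, assuming each finding carries a verbatim quote and a source ID; anything under 100% sends the team back to retune. Names and sample data are invented.]

```python
# Citation spot-check: every quoted passage must appear verbatim in its
# cited source, mirroring the 100% standard described above.

docs = {"docket_114.pdf": "... the code update takes effect in 2025 ..."}
findings = [
    {"quote": "the code update takes effect in 2025", "source": "docket_114.pdf"},
]

def citation_pass_rate(findings: list, docs: dict) -> float:
    """Fraction of findings whose quote appears verbatim in the cited source."""
    if not findings:
        return 1.0
    hits = sum(1 for f in findings if f["quote"] in docs.get(f["source"], ""))
    return hits / len(findings)

rate = citation_pass_rate(findings, docs)
if rate < 1.0:
    print(f"Only {rate:.0%} of citations verified; retune before scaling up.")
else:
    print("All citations verified.")
```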

Joan: So you're still involved, humans are still in the loop?

Nick: That is essential at this stage. Maybe someday in the future, we will get to a place where we feel comfortable stepping back in a few key ways. But for all of our work—not just to know that we're doing a good job—the upside of democratizing access to these capabilities is that it pairs them with the expertise of the people who know what's good. Who knows what's good? These experts do. So wherever we're partnering, the experts stay involved at the beginning, in the middle, and at the end, and we do, too, just to make sure that we're really not outsourcing much of anything other than the drudgery of these tasks at this stage.

David: In the energy industry, the initial CFL [compact fluorescent lamp] experience was underwhelming. It took years to overcome that and to convince customers that the new CFLs were in fact the quality they should have been in the beginning. Do you see a similar thing happening here? Everybody's jumping in on this, and, as you said, you're spending quite a bit of time making sure it's doing what it's supposed to be doing. Do you fear that we might experience a little pushback? Or do you think that, just as a switch seemed to be thrown in February and all of a sudden this new algorithm did something we couldn't do before, it'll just keep getting better, faster and faster, and we'll get to where it needs to be incredibly quickly? Which way do you think it's going to go?

Working to be good stewards of powerful responsibility

Nick: There's a lot in that question. I'm really glad you asked it, and if I miss a few key parts, please let me know. The short answer is that I'm very concerned about that. You mentioned CFLs and that painful lesson learned, and perhaps forgotten by some: that efficiency measures need to be, at the beginning at least, as good as whatever we say they should replace. In my medical analogy earlier, I mentioned safe and effective drugs; FDA approval of that seems reasonable. But let's say a person is about to give you an injection of an FDA-approved drug. There's something called the hygiene factor: Is the needle clean? I'm very concerned that even if the technology is capable, there will be people who misuse it. In a competitive race, people will do different things. It's so accessible, and we've already seen a number of examples.

We take very seriously the responsibility for the future opportunity of this technology to help us do what we do. That is why our approach is this co-creation with experts. A lot of people are worried, perhaps rightfully so, about disruption to their current work and how it operates. Rather than sticking our heads in the sand or putting our fingers in our ears out of fear, though, we feel the best way is to try to make that future and to be good stewards of this powerful responsibility, which includes not tainting it. There have been a lot of learnings already about what these tools are good for, and we're trying to protect ourselves, our clients, and our partners from "oopsies."

Humans make mistakes too, but especially when it comes to AI, I think everyone's on a hair trigger about what those mistakes could look like, especially the ones that we might not see coming, the side effects that might take a little while to show up. We haven't trademarked this, but a lot of us take a human-centered design approach to the work we do: this idea of working incrementally, cross-functionally, and collectively on a problem, testing it, and then coming back and observing what happens. We are borrowing a lot of those methods in our early applications here and going step-by-step, making sure we understand how it works, and always, always, always having humans very much in the loop at these different stages to try to catch those mistakes.

Joan: And I would think, too, that the ability for companies to customize also helps a little bit with that. It seems like adoption of customized models has increased over the last year.

Nick: Yeah. And, actually, one way that shows up is in the assistance I mentioned we wanted to give our team. A lot of our early efforts that we can be confident will be useful are in this sort of assistance role. I mentioned people are worried about the disruption AI can bring to jobs and [question] whether AI will take their job away. It's been said a lot, but I think it's true: it's not that AI will take your job away, but that people using AI will outcompete you. And the customization lets you leverage these tools to help you do what you already do now, but better, faster, or more cost-effectively. That's really what we're trying to do.

So when we talk about safe and effective: safety we've discussed a little bit now, but how are we looking at effectiveness, and what does it mean to get quality and value from these tools? We're working very closely with the people who are feeling those pains most acutely right now. These conversations often begin where there's the most constrained resource within an existing area. Where can we relieve some of that pain by designing some of these tools to work with that general "brain," but customized to your use case? The affordability, the flexibility, and the ease of use we talked about earlier all allow us to build those custom solutions much faster and much more cost-effectively than we could before. And that's the language side of it.

What does a GenAI-fueled future look like?

David: I read one person's theory that we'll see variations of generative AI that are specific to applications. So maybe there's a healthcare generative AI, maybe there's an energy one. Do you think it'll get down to a company level? Like IBM had Watson, and maybe still has Watson. Would there be an ICF one that we name, and our competitors would have the same thing? Will it be at a company level, is what I'm trying to get at, or will it be more at a sector level?

Nick: It'll be an ecosystem. We're already seeing custom applications at those levels in these early days. We have them internally at the enterprise level, but also at the project level and at the market level. Wherever there's data and an application where a tuned approach would beat a general approach, we'll see those specialized solutions thrive.

I'm reminded that people who predict the future often get it wrong. In the early days of computers, who could ever have imagined wanting a computer on your desk, back when computers took up an entire room? And now computers are everywhere, probably in places many of us don't even suspect, in part because cost came down, but a lot of it was specialization. And with the compounding effect of iterative cycles of improvement, I see no reason to think we won't see the same sort of recipe here, where we'll have many different types of agents working on many different types of data sets.

Some of those need to stay private, secure, and focused, and some may be more generalized. We'll be seeing a proliferation and explosion of different applications for the reasons I talked about: the combination of capability, ease of use, and affordability. And we are still in very early days.

It's very intimidating to think about that, and I've been humbled when I've been asked to predict the future, but our strategy is to be humble, to explore, and to safely validate the value, so we can help ourselves understand this as we go. The pace of change can be too much if you're not surfing that wave as it's rising. And that's really the only approach I think we can take right now, and that includes knowing where it's not safe to use yet. So we're not in a hurry to create things quickly that don't have humans in the loop.

I think we've got good reasons for that. It's a matter of time before we invite these tools into other parts of our lives in different ways, but there's a lot of good they're doing already, internal to our own organization right now. I'll just share one more case study. We are a significant organization with resources, but we have constraints, and legal resources for contract review are a pain point for us. We want to give high-quality review, and we want to make sure that data is protected and secure. One of the ways we thought we could make that even better was by adding an extra layer of review, again by AI and trained by experts, to ask, interrogate, and then surface things for us.

We still have humans playing that role, but it helps us catch what we might otherwise not be able to spend as much time on. So every contract that comes through is now treated that way. And there are lots of other examples like that, some already in place and more to come. Wherever there's a painful, high-value effort and we can maintain the level of quality, it's ready today, I think, for examination.
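
[Editor's note: the episode doesn't detail the legal-review tool. This sketch shows one plausible shape: an AI first pass over an expert-written checklist, with humans reviewing every flag before sign-off. The checklist questions and model stub are invented.]

```python
# Sketch of an AI first pass over contracts that surfaces flags for human
# lawyers. The checklist and llm_review stub are illustrative examples.

CHECKLIST = [
    "Does the contract include a data-protection clause?",
    "Are liability terms consistent with the statement of work?",
]

def llm_review(contract_text: str, question: str) -> str:
    """Stub for a model call returning a short answer with a supporting excerpt."""
    return "UNCERTAIN: no matching clause found"

def first_pass(contract_text: str) -> list:
    """Run every checklist question; humans still review all flags."""
    return [(q, llm_review(contract_text, q)) for q in CHECKLIST]

for question, flag in first_pass("...contract text..."):
    print(f"- {question}\n  AI flag: {flag}")
```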

Joan: Nick, I love your viewpoint on the ecosystem, and I think initiating conversations with stakeholders in that ecosystem is such a great start. You talked about the evolution of this, and it seems it starts with those conversations and identifying those pain points and figuring out a way to maybe shortcut the system to get some sort of resolution with that. It seems like a really good, reasonable approach.

Embrace it: GenAI is accessible and workable for everyone

Nick: I hope this conversation helps start other conversations. I think that's right, Joan. We've had a lot of valuable early conversations that helped demystify this and take away some of the fear. A lot of people don't know just how workable this space is. They think you have to be a research lab, or you have to be a data scientist. That's not the case. But they might also think that the future in which it could be relevant to them is far away. So we're trying to have these conversations, help people build awareness, and surface the concerns around safety. What are those issues? And then step into them together, looking at what tests we can perform to explore in these early days, before the complexity makes this harder to do.

Right now, as sophisticated as the underlying AI is, applying it and putting it to use is still relatively basic. So we shouldn't feel intimidated; we should have those conversations, and we should raise those concerns now, because there are things we can do about them, and there is progress we should be making to inform our future efforts.

David: I love your approach of, start with the conversation, right? Understand where the pain points are and then start to explore how generative AI might play a role in solving some of those pain points. It doesn't have to solve all of it, but even if it can add some value, it's learning, right? As you said, stepping stones, it's taking you in the right direction.

Joan: Well said.

Nick: Exactly. Yeah. And learning by doing: some people might be able to read a book and fully understand physics, but a lot of us, as toddlers, dropped blocks to learn about gravity. That type of incremental learning, as organizations, as industries, as markets, is something I would encourage all of us to embrace, perhaps with some trepidation. What does this mean for us? Early on, in February, it became most of my day job to wrestle with the implications of this. And it wasn't abundantly clear that we could necessarily see all of them. But what is clear is that this is here; the proverbial genie is out of the bottle. And in many ways, I think that's a wonderful thing. I think it's a call for all of us to understand the responsibility that comes both in the form of action and in the form of inaction.

And I think when a new capability comes along, it's incumbent upon us to talk to others. What are you doing with this? How are you looking at this? What about this concern? All of that is the right type of conversation to have. This stuff is moving quickly, but that's not a reason not to act. Going back to CFLs for a moment, David: back in my earlier life, I remember a lot of people didn't want to replace their incandescents because they'd heard LEDs were coming along in a little while. "LEDs are too expensive now, but I don't want to replace my incandescents with CFLs, because I'm just going to replace them once LEDs come around."

And the math said, well, that's foolish, because the CFLs will pay for themselves ten times over. And at that time, CFLs had good enough color quality, and they were very affordable. So that hesitation to act because "it's only going to get better" is one of the myths we want to work through, the same way we have in the energy industry for a while. And in many ways, if we do the design right, we can avoid some of the missteps of a light bulb that was too big, that lampshades didn't fit on, that took five minutes to warm up, and that made your clothes look funny.

Joan: Oh, we're so fortunate to have you out front on this, Nick. You've got all the years of experience and challenges behind you. If there was one thing you could do, though, to change the future, to change the industry or GenAI and its impact, no limits, what would you do?

Nick: That is a huge question. Again, I'm getting older, right? I've been doing this for two decades now, and I've had the opportunity to get to know a lot of great people in many different parts of the industry. And the one thing that occurs to me now, when I think about the challenges we're facing at this moment—all the different types of challenges—is how hard they are and how much harder they get with some of the infighting, the tensions, and the mistrust; some of it's earned. We talked about the lessons of CFLs, but really what I would change is the extent to which we avoid getting in the same room, the extent to which we avoid collective problem-solving efforts because we don't attribute good intentions to other stakeholders, or struggle to navigate that as individuals.

I think what I would change is for us to try to be on the same team and recognize there's a lot of significant upside we can find by collectively working on this together. And now we've got some bigger tools that might help us look at it differently. Innovation often comes when a constraint is relaxed, and in the category we've been talking about, the relaxation is significant; we don't even see all the different ways yet. So collectively looking at some of our common problems in new ways, and cutting through the things that get in the way of that, is what I would change.

Joan: It's real, Nick. It's here.

David: That's awesome. Nick, thank you so much for taking the time to talk with us. This has been a fascinating conversation, and I can't wait to see how we apply this to new problems. So I'm looking forward to continuing working with you.

Nick: Thank you very much. I've really enjoyed it, and I'm sure we'll have a number of opportunities in the near future to keep this going.

Joan: Agree, absolutely. More discussions to come. Nick, thanks so much.

David: If you've enjoyed this conversation, we'd sure appreciate you liking, sharing, and even subscribing to our podcast.

Joan: And thanks to you all for listening in to this episode. And here's to our next Energy in 30.

Meet the authors
  1. David Meisegeier, Vice President, Finance and Smart Homes Programs

    David helps innovate customer-centric energy programs that meet utilities’ current and future needs, with nearly 30 years of experience in the energy industry.

  2. Nick Lange, Innovation Strategy and Services Lead

    Nick is an expert at helping people through strategy enablement, human-centered design, and technology solutions.