Meeting

From AI to Microchips to Robotics: Frontier Technologies and the Changing Geopolitics

Wednesday, February 26, 2025
Speakers

Mark Horowitz
Fortinet Founders Chair of the Department of Electrical Engineering, Yahoo! Founders Professor in the School of Engineering, Professor of Computer Science, Stanford University; Faculty Council, Stanford Emerging Technology Review

Herbert Lin
Hank J. Holland Fellow in Cyber Policy and Security, Hoover Institution, Stanford University; Director and Editor in Chief, Stanford Emerging Technology Review

Allison Okamura
Richard W. Weiland Professor of Mechanical Engineering in the School of Engineering and Senior Fellow and Science Fellow, Hoover Institution, Stanford University; Faculty Council, Stanford Emerging Technology Review

Presider

Amy Zegart
Morris Arnold and Nona Jean Cox Senior Fellow, Hoover Institution; Co-Chair, Stanford Emerging Technology Review; Member, Board of Directors, Council on Foreign Relations

Emerging technologies, from AI to microchips to robotics, are transforming societies, economies, and geopolitics in profound ways. In light of these transformations, the Council on Foreign Relations (CFR), in collaboration with experts from the Stanford Emerging Technology Review (SETR), discusses how the United States can seize opportunities and mitigate risks in these fields—with a particular focus on AI, microelectronics, and robotics—and ensure America’s innovation ecosystem continues to thrive.

CFR and SETR are excited to launch The Interconnect, a new podcast series that features leading minds in cutting-edge technology and foreign policy who explore recent groundbreaking developments, what’s coming over the horizon, and the implications for U.S. innovation leadership.


SIERRA: All right. Good morning, everybody.

AUDIENCE MEMBER: Good morning.

SIERRA: Oh, thank you. (Laughs.) Welcome to “From AI to Microchips to Robotics: Frontier Technologies and the Changing Geopolitics.” Quite a morning. It’s going to be great. I’m Gabrielle Sierra, and I am the director of podcasting here at the Council. I want to thank you all for coming today, and in particular thank our panelists for joining us for what is sure to be a super-cool conversation.

And what makes it double cool—which is a phrase that I’m assuming you’ll all adopt by the end—is that it also celebrates a joint adventure between CFR and Stanford Emerging Technology Review. We’ve embarked on this together in the form of a new limited podcast series called The Interconnect. I’ll tell you a little bit about the show and then we’ll play a trailer for about a minute or two, and what I’m saying is that you all have more than enough time to subscribe before you have to silence your cellphones and put them away. So at this point I want to see all those phones out, you know, subscribing on the QR and, you know. Thank you.

Each episode of The Interconnect brings together experts from critical fields of emerging technology to explore recent groundbreaking developments, what’s coming over the horizon, and how the implications for American innovation leadership interconnect with the fast-changing geopolitical environment. The show is hosted by—I’m going to put him on the spot right now even though he just walked in—Martin Peter Giles, the managing editor of the Stanford Emerging Technology Review, who we have right there. Martin is a fantastic host. And—added bonus—he has a British accent, so who wouldn’t want to listen to that?

Our first episode, which dropped on February 13, focused on semiconductors. Upcoming episodes, including one dropping this week, will feature discussions on space, robotics, AI, and biotech. All three of today’s stellar panelists—Mark Horowitz, Herbert Lin, and Allison Okamura—and your fabulous presider, Amy Zegart, are actually featured on the show as guests. So if you’re a fan of what they say today, there’s way more of that where it came from. So subscribe on CFR.org, SETR.stanford.edu, right here, or any platform where you prefer to get your audio.

As promised, here is the trailer for the show, and then Amy will take over. Thank you all very much. We hope you like it. (Applause.)

(An audio presentation is played.)

SIERRA: Pretty cool, huh? (Laughs.) Thank you.

ZEGART: Good morning, everyone.

(Off-side conversation.)

So I’m Amy Zegart. I am so delighted to be here wearing two hats—actually, three; I can’t keep one job—co-chair of the Stanford Emerging Technology Review, Hoover senior fellow, proud board member of the Council on Foreign Relations. I hope you’ll all tune into the podcast. We’re really excited about it.

I want to give a shout out. There are a couple people in the audience who are in the first episode that dropped last week: Martin Giles, who’s the fabulous British voice you hear hosting on the podcast—(laughter)—who’s the managing editor of the Emerging Tech Review; and Sebastian Elbaum—where’s Sebastian? Ah—the first technologist in residence here at the Council and a professor of computer science at the University of Virginia.

So this is, as Gabrielle mentioned, a partnership between the Council and the Stanford Emerging Technology Review, but it’s also a partnership of partnerships. So the Stanford Emerging Technology Review is the first-ever partnership between Stanford’s School of Engineering and the Hoover Institution. Each of these institutions is a hundred years old, and this is the first time we have worked together in this way. We’re really excited that this effort brings together a hundred faculty from across forty different departments and institutions on the Stanford campus, bringing engineers and the labs and scientific discovery teams together with social scientists like me and policy experts to better understand emerging technologies—what’s happening in those labs, what’s happening in companies, what are the implications of those inventions, what are the opportunities of those inventions, what are the vulnerabilities or pitfalls or perils of those inventions.

I’ll just say a word about why are we doing all this, right? Really, there were two guiding principles that led to the creation of this effort.

The first, as you probably know, is that technology is changing everything. Technology has always changed a lot, from the time of Roman aqueducts to nuclear weapons, but not like this, right? It’s the convergence—and Herb will talk a lot more about this—it’s the convergence of technologies. It’s not just AI. It’s not just robotics. It’s not just semiconductors. It’s not just satellite technologies. It’s that they’re all converging and interacting in dynamic ways at the same time, transforming economics, geopolitics, and societies. So that’s the first driving idea, is that this is a unique technological moment.

The second is that policymakers need help. Policymakers aren’t just Washington folks here. Policymakers are inventors, investors, leaders of technology companies. We’re kind of living this with DOGE today, right? What five things did we do this week? We could talk about that. (Laughter.) And so policymakers in the public and the private sector need better information, they need faster information, they need more continuous information about these technologies so that they can make better decisions.

And so that’s our goal. I know Washington—we’ve been going around Washington for the past couple days, and we were on Capitol Hill yesterday, and everybody says what’s your ask in this town. That’s a very strange question for academics, what’s your ask. Our ask is: How can we help? Right? That’s what we’re here to do.

So with me here today are my fabulous colleagues. We’ve been—we’ve been doing the roadshow and gotten to know each other even better, which is one of the great things about this effort.

So Dr. Allison Okamura, who you heard a little bit in the trailer. She’s professor of mechanical engineering. She does incredible work on robotics at Stanford. She’ll talk about, I hope, her soft robots and some pretty amazing things she’s doing in medical robotics.

Dr. Herb Lin, who is a physicist by training, a cyber expert, and the director and editor in chief of the Stanford Emerging Technology Review. And he’s really the brains behind how we think about cross-cutting things, how do we think about what’s happening across all these technologies.

And Dr. Mark Horowitz, who, as you heard, he’s in the first podcast with Sebastian and Martin. Professor of electrical engineering, so semiconductors and quantum.

And so I want to ask them a few questions. And we have a great crowd today. I want to thank you for getting up early, caffeinating, and wanting to hear about technology. We want to make sure we get to your questions as well.

So, Mark, I want to start with you. One of the things we want to do in this effort is to demystify and debunk things that are in the news. There’s been a lot in the news in your world. The DeepSeek deep freak, number one.

HOROWITZ: Yeah.

ZEGART: Right? And quantum in the news with the breakthrough announced by Microsoft. So talk about both. What do you think is important? What’s been overhyped about those developments? How should we make sense of those two breakthroughs?

HOROWITZ: Thanks so much for the question.

I read a book a very long time ago—because, as you can tell, I’ve been around for a while—called I Can Tell You Anything, and I think it came out in the ’70s or maybe ’60s. And it was—you know, it said something about what advertisers do and how they talk about things. And so when announcements come out from various organizations, you have to think about what’s their view on what they’re trying to say. And so the DeepSeek announcement which, you know, rocked the world is actually a very significant advance in many aspects, but the notion that that whole model was put together for $5 million or something like that is misleading. It’s not wrong, because in advertising you don’t—you don’t say something wrong; you just don’t say the whole truth. The whole truth is that the last part of the model, the reasoning part—which is the fine-tuning on the model—was probably done for about $5 million. The model itself cost a lot more, right? They don’t talk about that. And so the implication, which is the whole thing in advertising, is you try to imply—you can’t say something that’s not true; you just try to imply it. So the implication is that, you know, we did this whole thing, OK? And then you go pounding on your chest about how much better we are than the other people, right?

So one thing that I will just suggest to you is, when you read something, think about what they are not saying, right? Because that’s often as important in the announcement as what they are saying. So that’s just, you know, generally true for most companies that are announcing things, or for anyone who has a cause they’re trying to push. All right. Does that make sense?

So DeepSeek is very interesting because they put something out in the open-source community. It basically is nearly on par—I’m not going to argue about who’s a little bit better, but it’s a very good model, and that model now has open-source parameters. It showed that by factoring the model in a certain way that you could build it and be a little bit more efficient. That’s also very important because the cost of running these models and inference is very expensive, and ways of making that less expensive is going to be very important. And we can talk about that more if you’re interested.

So there are many things of significance there. It’s also an indication that the United States is not, like, leaps and bounds ahead of where other people are. And frankly, it also indicates that the rate of growth of large language models, at least from my impression, is slowing down, because if it was continuing to grow at a large rate then the fact that you got there first would mean that you were continually getting better things. So I think there’s some acknowledgement that the scaling laws that underlie large language models are no longer holding up.

Now, what that means is invention and creativity is extremely important. We’ve gone to now reason on inference, which has been very effective. And I’m sure there are new inventions and ideas out there that will continue to improve the space. So I’m very optimistic; it’s just the nature is changing a little bit.

ZEGART: So say a little bit more about reason on inference so everybody here knows what you’re talking about.

HOROWITZ: Oh. Oh, OK. So, basically, the way large language models work is you have this training phase; you take a whole bunch of data and you create a model. That model you then use for inference: you ask the model a question, you talk to it, and it produces an answer. Its producing an answer when you’re talking to it is the inference phase. In the training phase you’ve created the model, and then in inference you ask it and it produces an answer.

What reasoning is, is instead of the model producing an answer it produces a whole bunch of answers, and then it kind of asks itself some questions about the answers it produced. And it turns out when you do that—and you know, I’m not the expert; there are other people here who are probably more expert than me, so I—Sebastian, if I say something wrong—(laughs)—it can reason about its own answers and therefore produce a better answer. And that was a discovery that happened relatively recently; I think, you know—I don’t know, within the past couple years. And that’s now been pushed out into the current models because it produces much better responses. So that’s reasoning in the—

ZEGART: Thank you.

I want to turn, Allison, to you. We’ve talked a lot in the past few days about developments in AI, and really the frontier that people aren’t talking enough about is the developments between AI and robotics. So talk about embodied AI, and spatial intelligence, and how AI is affecting your work, and what a robot actually is.

OKAMURA: Yeah. Sure.

So the large language models that many of us are familiar with interacting with—because on a daily basis we can query ChatGPT, so we’re used to that now as an interactive technology—there is what, if I can channel my colleague Fei-Fei Li, the godmother of AI, she says is the next generation of AI and these large models, which is going to be around spatial AI: things that are 3D, video, motion, physicality. These are aspects that aren’t captured in large language models. So she sort of thinks of it as an orthogonality. Spatial intelligence is everything that large language models and textual intelligence are not.

So moving into spatial intelligence, in addition to being able to allow us to analyze videos and understand 3D spaces much better, is the physicality aspect of it, right? So what robots can do is they can take this intelligence and bring it in to do useful physical acts in the world. And this could, you know, be a great thing for scientific discovery. It can be a great thing for human health. It can be a great thing for making many things that we do more efficient and safer.

But there are also challenges because the physical world, it turns out, is really difficult to model and fully predict. And so when robots use intelligence they’re not simply going to take information and then wait for a human to press a button to say, act on that information, the way we do with outputs from systems like ChatGPT. Instead, they directly take that information and then use it to plan a physical act.

The robots also have to use real-time information that they get about the world to handle the uncertainty in these physical interactions. And that reaction depends on minute interactions—even little frictions between robot fingers and the thing they’re manipulating—and the system may come out with results, outcomes, behaviors that aren’t exactly what we expect. So this is an area—using spatial intelligence and making it embodied in physical systems—that has great promise, great potential, but also challenges in being able to predict the outcomes and ensure that they’re safe and effective in the ways that we want them to be.

The other point I’ll make about robotics is that right now there’s a lot of hype and investment in humanoid robots, right? The idea that robots should be anthropomorphic, that they should mimic human bodies. And the argument for that does make a lot of sense, in that if you would like to have robots live in worlds that we built for humans, having a robot body that mimics the human body might be an efficient way to go about things. But there are really huge challenges to actually making these humanoid robots function—exactly the types of challenges I mentioned in spatial AI in general. And in many cases the hardware is not quite there yet.

So I think what we will see in robotics is that, while a lot of long-term investment and interest is happening in the humanoid space, we are also going to have task-specific robots—robots with different morphologies, different bodies—that allow us to get into spaces and not think so much about how do we replace a human or replicate human capabilities, but how do we do things that are entirely different, things that people can’t do and maybe can’t even imagine, because it’s not something we can do with our own bodies. And what are the new industries, the new healthcare applications, for example, that could come out of that?

ZEGART: So one of the things that we look at in the report—and that cuts across all of these technologies—is the innovation ecosystem. What exactly is the model of innovation in the United States? And what role do universities play? A topic that has become of renewed interest in this town of late. So I want to spend a little time talking about that, and, Allison, ask you to share an example of what you do. So as many of you know—I see Jane Harman; you know this better than anybody—the model of innovation that led the United States to be the innovation superpower of the world started with Vannevar Bush in the 1940s.

And that model consisted of two fundamental pieces. The federal government invests in long-term, risky, generational, fundamental research in national labs and universities. Both pieces of that are important, right? The federal government is the only patient investor at scale. It’s the only organization that can make those generational commitments to research. Fundamental research in universities and national labs isn’t about commercializing a product next quarter or next year. It’s about big, hairy questions on the frontiers of knowledge. How does the immune system work? What are the laws of physics? That’s part one. You have to have that under—that underpinning.

Part two is universities publish that research openly. And then the private sector does its thing. And it’s amazing—commercializing, innovating in ways that the folks doing the fundamental research may never have envisioned. You have to have both of those pieces. And so many of the innovations that we think about today—Mark was talking about this in the car. You know, we don’t think about how important fundamental research at universities on digital libraries was decades and decades ago, but we all use Google. And Google wouldn’t have existed without those decades of investment in fundamental research. So this is the model, but increasingly—you hear this in the valley a lot—you hear that all investment is the same, all research and development is the same. Why can’t companies just do what universities do? And the reality is that we need both.

So, Allison, you have this wonderful example of what fundamental research is in the lab. And I want you to share what you’re working on.

OKAMURA: Yeah, sure. I’ll give one example. And I think it highlights one of the special things about universities: they bring together a very interdisciplinary perspective, right? Where else do you have engineers, and basic scientists, political scientists, doctors, lawyers, all under one roof, essentially, allowing us to work together to develop new ideas and also understand their implications for society? So one example I can give is an ARPA-H project. At Stanford we’re currently one and a half years into this four-year project. And the goal of this project is to 3D print a heart, a biological heart. So this would be a lifesaving technology, as we have an aging population within the U.S. and around the world, and an increased prevalence of disease. There really is a need to be able both to heal and, when necessary, to replace parts of the human body when they become damaged.

So the heart is an interesting case. And, you know, I’ve learned from my colleagues in biology that part of the reason we start with the heart as the first organ, the first complex organ to 3D print, is it’s actually the simplest of the complex organs because it’s primarily a muscle. It’s a mechanical action, which as a mechanical engineer I really enjoy. And it also requires fewer cells than some organs that do maybe more difficult filtering or chemical processing. But nonetheless, even though it is the maybe easiest organ to recreate, it’s still extremely complicated.

So what we’re doing is we have biologists who are designing novel bioreactors to generate the millions of cells and differentiate the different types that need to go into the heart. And then myself, as a mechanical engineer—we’re building on technologies that my lab developed almost twenty years ago for minimally invasive surgery, with very small, dexterous manipulators that can now become novel 3D printheads. Instead of printing in layers, like you might be familiar with from a traditional 3D printer, you print from multiple sides with dexterity and kind of print inside out, like the layers of an onion.

And what this allows you to do is to speed up the 3D printing process, which is especially important for printing something like a heart because the biological cells, the millions that you created, will die if you don’t print them and get the heart working together and beating quickly. So the goal of this ARPA-H project is to 3D print a heart in under an hour, and then by the end of four years be able to actually implant that in a pig and demonstrate its performance.

So this is going to be, first of all, something that builds on a number of technologies that came before it. But it’s also not something where at the end of four years we’re going to have a product, right? Part of our mission with ARPA-H is eventually to create products. Eventually maybe it will be actual human hearts. But it could also be some of the fundamental technologies, like novel 3D printing technologies, that could be used to make other things outside of biological hearts.

So being part of this interdisciplinary ecosystem where I can work with surgeons, pediatric cardiologists, biologists, and engineers to put all of these pieces together and do these creative, moonshot projects—that is something that is very difficult to do outside of academia, because we benefit not only from the people within our university but from collaborators and the scientific literature and the work that we cite that comes from all over the world. So that unique ability to bring together people, tackle projects where it is just going to be a long time before we see the dividends, and build on work that has been done decades before as well is really the hallmark of the kind of research that we can do at universities.

ZEGART: I’m sure I speak for everyone in this room when I say hopefully not that long to have a 3D printed heart. (Laughter.)

Herb, let’s talk about cross-cutting themes. What do you—are the most important things that we should be thinking about that cut across all of these technologies that most people tend to overlook?

LIN: Well, I think the first thing, as Allison has mentioned, is the interdisciplinary nature, which is just another way of saying that a lot of different technologies and fields come together to enable an advance. The idea behind interdisciplinary work is what you care about is the problem, not the discipline. And we see this in many, many places. As an example, we now know that it’s possible to land a launch vehicle—send stuff up into orbit, and then retrieve the vehicle. We used to throw them away, right? Not a very inexpensive way of doing it. But now that we can land a rocket back on the same—you know, go up and then come back down and land itself—that’s a really important innovation.

But there’s no single technology breakthrough that enables that. It’s actually the sum of a bunch of incremental advances in a variety of technologies that have come together now. We’ve known for fifty years that it was cheaper not to throw it away, but now we have the technology along many lines of advance to be able to bring a rocket stage back down safely and have it be reused. So the idea that this is an example of many different technologies converging to enable an advance is a big deal. This rocket is also a good example of what we refer to in the report as a frontier bias: many policymakers often think of the cutting edge of technology, which—we agree that’s really important. I mean, ChatGPT and large language models, and stuff, big advance, no question about that, OK?

But there’s also lots of advances that come from little things together that don’t make the front page of the—of the New York Times. SpaceX is—the reusable launch vehicle is one example of that. The assembly line, arguably one of the greatest inventions of the twentieth century, right? Completely revolutionized manufacturing. A new way of making things, OK? No new technology, just a new way of organizing them. The shipping container. A steel box, and completely revolutionized the shipping industry. Dropped—made costs go down by a factor of fifty, OK? Many examples like this. And there’s a lot of innovation to be had not at the frontier of cutting-edge technology.

And I guess the last thing to talk about—especially near and dear to the heart of academics, but it should be to everybody—is the importance of talent, of cultivating talent. To have new ideas for organization, for new technology, or what have you, you need people. People are the source of this. And for a country there are only two places to get it. You can either grow it indigenously or you have to get it from somewhere else. And in terms of the indigenous supply, the U.S. ain’t doing very well. Thirty-fourth in math in the world, and going down. Not a good sign. And with our foreign talent coming in, we bring them in for several years, we educate them, and then we throw them out. They want to stay, and then we throw them out.

And incidentally, we are not as attractive to the rest of the world as we used to be. So we’re letting our infrastructure—you know, what makes us attractive—sort of decay. Have you told the story of your—of DeepSeek? You should do that.

ZEGART: No. So I have—(laughs)—we woke up early this morning, can you tell? I have a research assistant. And the DeepSeek paper came out. I asked her to look at open-source information about every author on that paper—211 authors on that paper. She, of course, smarter than I am, said that’s not one paper; that’s five papers that DeepSeek has released since DeepSeek was created. So she looked at every author on every paper released by DeepSeek. And she tracked, based on what was publicly available, where those authors trained, where they worked, where they went to school, and for how long. And the headline is that this is a homegrown talent story. So we often think that the model that China has is the best and brightest from China come to the United States, get trained up, take our ideas, and go back to China. That’s not the DeepSeek story. Half of the DeepSeek authors were educated and trained nowhere outside of China—half, OK?

And then there are other interesting things from that data. Of the forty-nine that spent time in the United States, only seven stayed. Those are the ones—the others are the ones that got away. So this idea that it’s our decision whether we let the world’s best and brightest study in American institutions of higher education, not true anymore. There’s good news and bad news here. I talk about the knowledge power map. If you were to look at the world as a knowledge power map, knowledge is a much more important source of national power today than it’s ever been because of technology. That knowledge power map looks very different today than it did ten, twenty, thirty years ago. It’s the rise of education in the rest of the world. That’s undoubtedly a good thing. But it also means that the best and brightest have options outside of the United States, including institutions—STEM institutions that are world class in China. So that’s one of the implications.

I can’t resist, since I’m a national security, dark-corner-of-the-room girl: the implication of what Herb talked about, frontier bias, is incredibly important from a national security perspective. If we think about the most important thing that our intelligence agencies do, it is preventing strategic surprise. That’s what they’re supposed to do, right? Strategic technical surprise is a thing. We are not organized to prevent it. And if we only look at the frontier as where strategic technical surprise comes from, guess what? We’re going to be surprised, because transformational innovations aren’t just on the frontier. And if we’re only looking there we’re going to get blindsided by transformational developments, by the accumulation of technologies that Herb just mentioned.

OK, I want to make sure we have time for all of your great questions, so—since I talked about something so depressing—let’s end on an up note. (Laughter.) Preview of coming attractions. In 2025, what are you looking for? What technological developments in your field are you most excited about? Allison, let’s start with you.

OKAMURA: Yeah, let me start. In robotics I’ll riff a bit on some of the things I highlighted earlier. We’re going to see this rise in spatial intelligence—embodied intelligence in robotic systems—where robots can more directly take advantage, first, of large language models, in order to sort of do semantic reasoning about the world and embody that in physical action, but also going beyond that to models that themselves describe motion in the physical world. So we will see more of that. And then I think also the robot bodies themselves are changing, right? New materials and new applications are driving different morphologies. So I think we’ll continue to see new and interesting and fascinating, sometimes bio-inspired, ideas in those areas.

ZEGART: Mark.

HOROWITZ: So within the semiconductor and quantum space, I think the areas where you’re likely to see changes are more computing specialized for these machine-learning applications, and evolutions of those applications to be more energy efficient in their jobs. Because we’re currently in a space where, if you project the growth in the uses of machine learning and the computational requirements that would be needed, it’s kind of inconsistent with the infrastructure that we have. So we’ll see a large build-out of datacenter computing. That’s for sure going to happen. But I think along with that you’re going to need to invent ways of doing it more efficiently, from an economic perspective.

There’s such a large economic gain in doing that, there are lots of people chasing it. If I could tell you which was going to win, I wouldn’t tell you; I’d just invest. (Laughter.) But I don’t know. There’s a whole other story we can get to, if you want, about why it’s actually impossible to choose what the winning technologies are—because if you could do it, someone would have figured it out, because there’s a lot of money to be made in it. But the venture capitalists, who are all about money, they fund ten things and one of them works. So, you know, it’s a hard job.

ZEGART: OK. Why don’t we open it up for your questions? If you could raise your hand, I think—do we have mic—we do have microphones. Yes, right here.

Q: Hey. Thanks for the great conversation. Tom Antonelli, former Pentagon.

Mark, you just hit on this. So with all these exciting new technologies—robotics, launch vehicles, AI—something needs to provide the power. We don’t necessarily have incredibly efficient, clean, reliable means. But we haven’t talked about the role of advanced nuclear, small modular reactors. Do you see that as part of the solution? And not just for you, but for any of you to comment on the future of nuclear as a power source to help fuel all of this technology? Thanks.

HOROWITZ: Do you want to do it?

LIN: I’ll take that one. If you look at the record of production of nuclear reactors in the United States—that is, how many we can deploy, and on what time scale—we’ve deployed precious few in the past couple of decades. I think the answer is zero. (Laughs.) You can argue about whether the number is one, but I don’t want to get into that argument. To get the kinds of power that we need by 2050, if you take some goal for that, like the Paris conference numbers, you’re looking at hundreds of deployments of nuclear reactors. And the average time to bring a reactor online, from saying go—you know, having the commitments and so on—to having it online is twenty years.

So let’s say we could cut that in half, to ten years. Even ten years—I mean, you know, in the last two years we’ve seen projections of energy demand go up a lot, because of datacenters and the like. So I think there are people who could credibly say that, in the long run, nuclear power is part of the energy mix we need in order to reduce emissions. But in the short run, I don’t know anybody who thinks that we can actually make a big difference with nuclear.

ZEGART: Mark, did you want to add anything?

HOROWITZ: No. I 100 percent agree. (Laughs.)

ZEGART: I will say, it’s a great question. Read our energy chapter of the report. There are ten emerging technology areas that we have in the report. Each of those is led by a faculty person who is an expert in that technical area. So Allison is our faculty Council member for robotics, and Mark for semiconductors. We can draw the lines in different places of those technologies. Nuclear last year was a separate chapter. Now it’s part of the energy chapter. But I encourage you to read the report. And we try to highlight exactly those kinds of questions, about what’s coming over the horizon, how realistic do we think those assessments are.

I’ll just add one other thing, which is we don’t have any policy recommendations in that report. And that’s by design. We have very strong ideas individually about what policies should be developed, but we want our report to be the best nonpartisan scientific analysis of emerging technologies and their implications, without a dog in the policy fight. So there certainly are implications. And we have suggestions individually. But collectively, we felt it was important to have something that was really seen as authoritative.

OK, other questions? Yes, right here.

Q: Thank you. Good morning. My name is Mercedes Fitchett, Department of Defense.

With regard to what you were saying about the innovation economy: as a former student at National Defense University I read America, Inc.? by Linda Weiss, which covers the U.S. government’s long-term investments across the whole spectrum, for anyone who’s interested. Although the report you mentioned is not going to have policy recommendations, as we think about how to better build an innovation economy, what types of conversations are you having with thought leaders here in Washington, D.C.? Thank you.

ZEGART: Do you want me to take that?

LIN: Sure.

OKAMURA: Yes, you start.

ZEGART: OK. We’ve had—I mean, I will say, I know it’s a crazy time in this town. We have had really terrific conversations across the aisle, including in the executive branch, in the legislature yesterday. So there is a hunger for understanding what can we do in innovation and the economy. The one thing, if I could say—this has come up repeatedly in conversation—what’s the one thing we most need to power the innovation economy? Compute. Compute, compute, compute. So Princeton last year spent precious funds to buy 300 of Nvidia’s most advanced chips. This was a big deal. The same year, Meta announced it was buying 350,000 of the same chips. That’s the gap. I’m not suggesting that universities have to have parity, but we need minimum viable compute.

If you think about compute as a national resource, the crucial, federally provided infrastructure for economic development and national security of the ’50s was the highways, right? Eisenhower highways. In the ’70s it was the Strategic Petroleum Reserve, both for our economy and our national security. The equivalent of highways and oil today is compute. And the federal government is not actually investing nearly enough to enable folks across the country, in universities and small startups, to advance scientific discovery and commercialize products for tomorrow. They can’t do that without compute. So that’s, I think, the most important message in terms of if you do one thing, national compute. I don’t know if you all have anything you want to add.

OKAMURA: Maybe I’ll chime in. Definitely something I’ve learned over the last couple of days. This is really my first time in D.C. doing this kind of thing. And I’ve just learned how important it is for—at least for me as a member of the university faculty—to be a part of these conversations. And so, for example, in robotics there was at one time a Robotics Caucus which enabled conversations between companies, both big and small, as well as universities, and the government. And that’s fallen by the wayside. So what I would love to see is more discussions—you know, not just university folks coming and saying what’s important, but having an exchange both ways. And whether it’s the Robotics Caucus or the AI Caucus, which is active but could probably also use more university input, these are the types of activities that I think some of us have to get out of our comfort zone and participate in.

ZEGART: Yeah. Jane Harman.

Q: So I’m a huge Amy Zegart admirer. That’s my qualification. (Laughter.)

ZEGART: That’s why I called on you, Jane. (Laughter.)

Q: And I—and I have an analog brain, I have to confess. (Laughter.) And my question is really around that. Those of us who were/are policymakers, do we have enough sophistication and understanding of this area to make informed policy decisions? And, related to that, is one of the important policy decisions keeping a person in the loop, as we talk about what these technologies could be used for?

ZEGART: So yesterday we were meeting with Senator Hickenlooper. He said he thinks he’s the only scientist in the United States Senate. He said, now, there are doctors and there are engineers, but he’s the only scientist in the United States Senate, to your point, Jane.

Q: And he’s a geologist.

ZEGART: Yes. And that’s exactly how he put it. (Laughter.)

HOROWITZ: He pointed—he pointed that out.

OKAMURA: And he’s a geologist. (Laughter.)

ZEGART: He actually said that. I’m a scientist, but I’m a geologist. What you just articulated is the reason why I pitched this to my colleagues, and why I’m so heartened that our engineering colleagues and friends have said, yes, we want to help. Because there aren’t enough folks for whom this is a native language, talking about engineering, including me. I’m a political scientist. We have too many political scientists talking about technologies we don’t understand. We need the people who are actually involved in those technologies and understand them to help us understand what’s real and what’s not. But that’s not enough. We have to work across disciplinary lines to be able to understand what’s the policy environment.

So one of the—I was so excited to hear, Allison, you say, you know, like this is your first trip to Washington, and what she’s learning. Like, that is the point. We think the policy networks in this town have to include leaders in technology spaces. And I’m a little biased, but I think Stanford has the best engineering school in the world. And we also have the Hoover Institution, which has a deep and rich history of policy engagement and expertise. And we should marry those two things. And that’s what we’re trying to do.

So my ask to all of you is, how can we do that better? Because we are doing the Silicon Valley prototype. We’re trying things. We’re, you know, iterating. Some things will work, some things won’t. But we want to have that connectivity and have it be more continuous engagement, rather than one off, so that we can solve the analog problem that you just raised. So I’m a bigger Jane Harman fan than you are—(laughter)—just for the record.

Yes.

Q: Mark Kennedy, Wilson Center.

And since Jane and I both served in Congress, we can tell you that Congress responds to the people. And when you think about the pace of change in technology, and how fast it’s going to accelerate, the people’s acceptance of or resistance to that will probably be as big a determinant of how fast a technology is allowed to go as the technology itself. To shape policy, how do we shape the will of the people to accept the technology you’re talking about?

LIN: Yes. (Laughter.) One of the cross-cutting themes that we identify is that non-technological factors are as important to innovation as technological ones, maybe even more so. The headline about the scientific breakthrough is important. That’s number one. Once you’ve done that, you’ve got to prove that you can actually turn it into something useful that actually does something. Some sort of engineering feasibility, for lack of a better term. Then you have to be able to produce it in an economical way, so that it’s affordable. Then you have to show that there’s a market. And people have to be willing to accept it. And so on and so forth.

So between the headline that says “scientific breakthrough” and widespread deployment, there’s a very, very long way. And there’s lots of scientific “innovation” that has died on the vine because it failed one or more of those tests. An example is genetically modified organisms in certain parts of the country: even though they seem to be reasonably safe, and could save lives and relieve hunger, there’s a cultural or some sort of social objection to them. Many, many such examples. And so you have to successfully traverse all of those tests. We understand that.

ZEGART: And we’ve talked a lot about this. Allison, I’d love for you to chime in about robotics, because it’s a great example of this: perception. People’s fear, right? Entertainment, science fiction. We’ve talked with our AI colleagues about how people are terrified of AI. They’ve been scared to death about AGI, and Skynet, and the Terminator coming for you. There are lots of risks that are near term, and lots of opportunities that are ignored, because of this perception issue. But, Allison, why don’t you talk about the acceptance, or non-acceptance, of robotics?

OKAMURA: Yeah. So I think one of the trickiest conversations we’ve been having over the last couple of days has been the question of how you protect and regulate in a way that’s appropriate, but don’t stifle innovation. My understanding is that at the Paris AI Summit that happened recently, a lot of the discussion was about what the regulations on AI in Europe have done, and about attitudes that may have actually stifled innovation and acceptance. Although there have been some really important AI breakthroughs and activities happening in Europe, they just haven’t been publicized in the way that, say, DeepSeek was, even though similar technologies were developed in France. There’s a fear of communicating that work, and difficulty actually doing it.

I think in robotics some of the things I’m also learning about are the intersections between different policies and the need for robots. So one problem we have is in healthcare, especially in elder care, but also in other fields like agriculture, where there is just a shortage of workers. As we have an aging population, we know that when older adults go into nursing homes it can actually cause a precipitous decline in health. So having people age in place in the home could be really advantageous for health and quality of life, and mental health as well. And that’s what people would like: to age in place in the home.

But what you need to accomplish that is people who are willing to take jobs at the appropriate wage, with the difficult labor that that entails. You would like that to happen, but we just don’t have the population of workers to take those jobs. And so other countries, like Japan, have really embraced, say, robotics research and implementation for this more intimate kind of human-robot interaction. But I think a lot of Americans find this idea difficult, that a robot will be taking care of your parents or your grandparents in their homes. But a main point is that many of those types of jobs are also taken up by recent immigrants to our country, right? And not necessarily the ones with high-tech education.

And so there’s this complexity that I’m learning about, and we don’t have the answers yet. But these are the conversations we need to have: if we don’t develop people in our own country, or take in others who are willing to take those kinds of jobs, we will need to find technological solutions. We can’t have it both ways. (Laughs.) And so, again, I don’t have an answer, but those are the kinds of conversations that are worth having. And these are things that won’t necessarily affect people’s lives next year, but we need to plan for the things that will happen in the coming decades.

ZEGART: Heidi.

Q: Hi.

ZEGART: Wait, hold on. We want to make sure you get a microphone, Heidi.

Q: Hi. Heidi Crebo-Rediker. I’m with the Council on Foreign Relations. This was fantastic.

You’re in a policy town where a lot of the technologies you’re talking about are dual use. And very quickly conversations here will flip to, OK, well, what are the implications for technology competition, particularly with China? So I’m interested how much of what you talk about to the policymakers here goes towards the what we can do with elder care and keeping people at home, what we can do for a whole range of beneficial parts of the technology space, versus just tripping right into the defense tech conversation.

ZEGART: So it’s a—thank you for that question. You know, I think of every technology as a weapon. (Laughter.) So the national security crowd naturally goes there. It’s wonderful to be around technologists who are so optimistic, because both are true, right? Dual use has an upside and a downside. It depends on the conversation. This afternoon we’re going to the intelligence community. We’re going to hear a lot, I’m sure, in our conversations about dual use. But as you know better than anyone, Heidi, this is an economic competition, too. It’s not just about defense tech. We think about who’s going to lead in adoption, not just invention, back to DeepSeek, right?

So, you know, DeepSeek’s adoption by Y Combinator startups in San Francisco was like that: within twenty-four hours. It spread like wildfire, with people switching from the models they were using in the valley to DeepSeek. So the economic piece—and I know you’re doing a lot of work on that—is complex. How do we integrate technology, economics, and national security, from an intelligence perspective and from a policy perspective? What are the organizations of Congress, and of the executive branch, to fuse thinking and policy in those areas? This is a really challenging area.

It’s not at all like the Cold War. I find the reference to a new Cold War deeply misleading and problematic, because there wasn’t that kind of integration in the Cold War. It was a military and political competition, not an economic one. This is different, and it requires new organizations and really new ways for the intelligence community to think about who’s a customer, right? Who are they serving when they provide intelligence? What is intelligence? Where do they get it from? I go right to the intel piece of it but you know the policy piece is just as important.

I think we have time for one more question. Yes, sir. Right here.

Q: Thank you. My name is Marc Rotenberg. I’m with the Center for AI and Digital Policy.

It’s nice to see you, Herb. I remember thirty years ago when you were working on the CRISIS report for the National Research Council, which had a big impact on crypto policy in the U.S. and around the world. And that moment was interesting because the U.S. government was trying to regulate encryption, and many of the leading scientists in the field were saying this is a mistake; we need to promote privacy and security. And, of course, that gave way to the rise of the commercial internet and, on balance, I think it was a very important shift in direction. And you really did the work. And I just wanted to acknowledge that.

But thirty years later, we seem to be at a different point. The government, at least in Washington, is resisting regulation, whereas many of the leading innovators in the field—and I’m thinking of Geoffrey Hinton and Yoshua Bengio and Stuart Russell—are actually saying, now we need regulation. California passed seventeen AI-related bills last year and, frankly, regarding DeepSeek, the Navy, NASA, and Congress have all sought to prohibit it, as have many governments around the world. So I was wondering what your thoughts are about AI governance. To Mark’s earlier point, I actually think this is key to establishing public trust and support for what is clearly a transformative technology. And it seems like there’s a lot of work to do.

LIN: I will do my best at simulating Fei-Fei Li, who is our AI person, the godmother of AI. On that particular question you ask regarding AI governance, I’ve heard her say—don’t take this literally—that the idea of regulating AI as a technology enabler per se, in terms of the number of floating-point operations it can carry out and so on, doesn’t make any sense at all. But she thinks that if there’s going to be regulation, it should be applied to specific applications. So AI in X, and you choose the X: AI in cars, AI in biology, whatever, OK? And you regulate the applications of it. That’s what I’ve heard her say, and I think she would endorse that capsule summary. But as you know, it’s a very controversial issue. And there are respectable people on many sides of it.

ZEGART: Allison, you get the last word.

OKAMURA: Yeah. I’ll just mention that another question about regulation is on the input side, right, the supply chain aspects, which we haven’t had the chance to discuss. How do economic and security-based regulations shape the exchange of goods and materials? Right now it’s the Shenzhen area that supplies almost all of the components we use in robotics. And many of the superstar companies run by tech titans are also relying on products from other parts of the world. So if we are going to regulate the interactions with those materials, we’ll also have to think about how that might affect innovation. So there’s the innovation in the middle, there’s the endpoint, which is regulating the applications, but there’s also the beginning point of regulating where the materials come from. And all of those are going to impact our ability to compete.

HOROWITZ: So—can I just say—for both this question and the previous question, I think it’s really important to understand that there are always two edges to what you try to do. On the one hand, you’re trying to provide protection and guarantee more safety around these tools. But that also has the problem of decreasing the amount of expertise that you have throughout the country in that area, and your ability to make use of it.

So, you know, one of the—when you get into national security, I think we need to be very cognizant of the fact that the country is better off, especially in this knowledge and economic power, to have a very bright, vibrant group of people within the country who understand that technology and can leverage it in various ways. Now, some of it is very scary, because you’re worried about China and all the rest. But let’s be honest, China has the expertise locally to do this. They have a group of people that are expanding in this. And they are less worried about some of these issues than we are.

And the competition—you know, everybody talks about the race. If you’re going to use that metaphor, it’s not really a race. It’s a marathon. I don’t know what it is. It doesn’t have an end. It is not that we need to beat people to a particular place. We need to have the knowledge workers and the expertise in this country to continue to innovate and create the next generation of solutions, not just for tomorrow, or the day after, but for the foreseeable future. And I think if you’re very shortsighted about where we are and how to protect ourselves in some way, what you do is stifle the basic engine that has driven the world forward.

Research works. You know, I’ve been in a company; I started a company. The thing is, if you’re starting a company you don’t want to innovate everywhere. The point is to figure out where your advantage is and work on that, because if you require too many miracles, they will not happen, right? One is still a miracle, right? Two is a miracle squared. (Laughter.) So the point is, we really move the country and the world forward by being able to tap all the people in the United States and the world—that’s the way research works—by being able to leverage ideas that people had in various different places.

And I think the shutting down, the nationalization, the attempt to contain that and protect it from other people, is basically shooting yourself and the whole world in the foot. And I understand—I’m not oblivious to national security issues. I will take on any of you in a conversation about what the tradeoffs are. And I’m a technologist. I’m a techno-weenie. I love this stuff, OK? But I want you, as a policy person, to understand the ramifications of your policies in the greater context. Because I think people get narrowed in on the thing they’re thinking about, they lose the context, and as a result they don’t do the right thing. So I implore you to ask people who really understand what the ramifications of your actions are in the greater system, because I think it really matters.

ZEGART: We have come to the end of our time. Thank you so much for coming. It’s lovely to see everybody. (Applause.) Thank you, guys.

(END)

This is an uncorrected transcript.
