Global Affairs Expert Webinar: How Tech Firms Shape Geopolitics

February 5, 2025

Adam Segal, the Ira A. Lipman chair in emerging technologies and national security and director of the Digital and Cyberspace Policy program at CFR, leads the conversation on how tech firms shape geopolitics. Carla Anne Robbins, senior fellow at CFR, moderates the discussion.

 

Speaker
Adam Segal
Ira A. Lipman Chair in Emerging Technologies and National Security and Director, Digital and Cyberspace Policy Program
Council on Foreign Relations

Presider
Carla Anne Robbins
Senior Fellow
Council on Foreign Relations

 

Transcript

ROBBINS: Welcome, everybody, to today’s session of the Winter/Spring 2025 Global Affairs Expert Webinar Series on “How Tech Firms Shape Geopolitics.” I’m Carla Anne Robbins. I’m a senior fellow here at the Council on Foreign Relations.

Today’s discussion, as we said, is on the record, and the video and transcript are going to be available on education.CFR.org, and we encourage you to share them with your colleagues or classmates. And, as always, CFR takes no institutional positions on matters of policy. I can’t tell you the number of times I have said that on our podcast. I think I probably have it tattooed on my arm.

So, we are delighted to have Adam Segal as our speaker today. Dr. Segal is the Ira A. Lipman chair in emerging technologies and national security and director of the Digital and Cyberspace Policy program at CFR. He’s also a former senior advisor in the State Department’s Bureau of Cyberspace and Digital Policy, where he led the development of the United States’ international cyberspace and digital policy, and before that he served as project director for the CFR-sponsored Independent Task Force reports Confronting Reality in Cyberspace; Innovation and National Security; Defending an Open, Global, Secure, and Resilient Internet; and Chinese Military Power. And he’s also the author of The Hacked World Order: How Nations Fight, Trade, Maneuver, and Manipulate in the Digital Age, which was published in 2016.

And just a word on format: Adam and I are going to chat briefly and then we’re going to open up our conversation to the group. 

So, Adam, there is a long history of companies wielding geopolitical power. We’re talking about the British East India Company, which at its height controlled a large swath of the Indian subcontinent and had an army of more than 200,000 men. United Fruit in Central and South America even managed the postal service in Guatemala at one point and, of course, you know, pushed the U.S. government to overthrow a democratic government there.

Here in the U.S., JPMorgan bailed out the U.S. government twice and underwrote the pay of the entire U.S. Army in 1877, and that power was reined in by governments with reforms in the twentieth century.

So, I think the question that I’m going to ask here—and it’s not one of those “there’s nothing new under the sun” questions, but it is a pretty basic question—is: are modern tech firms wielding, or on their way to wielding, that measure of power over government and geopolitics? They don’t have armies, but they do have huge influence in election campaigns and on the battlefield in Ukraine. They’re at the core of the strategic competition with China. Are they really sort of the modern-day version of this?

SEGAL: Thanks, Carla.

I think they are. I think you started touching on some of the reasons why they have similarities to the geopolitical role that firms played in the past. I think, you know, we can distinguish between, as you said, kind of influence on domestic politics, which we are certainly seeing through individuals right now. You know, the DOGE and Elon Musk’s impact on USAID and U.S. foreign policy is pretty large.

But I don’t think that’s the same category, per se, as what we’re thinking about with the firms and geopolitics. I think we can think about it in kind of three main categories, and the first one is, of course, their influence on the digital space, which, you know, makes the most sense, given that this is where they are, and I think we’ve seen two waves of that, in the Obama administration in particular.

You know, they promoted an idea of an open internet, the free flow of data, multi-stakeholder model, and in the last seven to eight years we’ve certainly seen a backlash against that as countries around the world basically have questioned the idea that the internet would be completely open, have worried about content moderation, have wanted access to data for law enforcement purposes and also for harassing their own citizens, and so we see the tech companies really playing a role in that space. 

Second is, as you mentioned with Ukraine, we see commercial firms, really, on the front line. Here, we saw AWS (Amazon Web Services) and Microsoft providing really critical services to the Ukrainian government and moving data and protecting networks, and on the positive side, Starlink also provided internet services to Ukraine. That was really important for defending Ukraine in the first few months of the war. 

And then third, which you also mentioned, is the increasing importance of these companies for national security and innovation that feeds into national security. So, we’ve seen this real shift where, you know, in the 1940s, ’50s, ’60s, ’70s, it was the federal government that funded the basic R&D that provided the technological innovations that we’re still living off of and that were important for stealth and radar and other national security technologies. Now the federal government is playing a much smaller role; federal funding as a percent of GDP has been flat for decades, and the slack has been picked up by the private sector.

We see that on the government side, where we have things like the Defense Innovation Unit and In-Q-Tel and other things that are trying to tap into the private sector. But we’re also seeing this shift in how the private sector talks about supporting that role, and this has been very clear in the AI companies over the last couple of weeks, where many of them—well, as far as I can tell, almost all of them—have adjusted their statements of principles from saying, we are not going to work on national security issues, to now saying, democracies have to win this space and we’re going to support U.S. national security interests in this space.

ROBBINS: Of course, we’re caught between two things. We’ve had this period of time, certainly when AI first came out, when, you know, there was this race and, you know, we have to win, which we’re now hearing about again. At the same time, you had workers at Google saying, we don’t want to do anything that supports the defense industrial complex. You know, this was sort of the back-and-forth.

And then you also had this: we’re really worried about our robot overlords; we have to have some sort of regulation. But there hasn’t ever been this broader question—are these companies too big, are they going to get bigger than governments—which was the impulse in the nineteenth century.

How much of that is because they’re just so powerful, they’re just so rich, and how much of it is because they are seen as so intrinsic to our national security that nobody really wants to even engage in that conversation?

SEGAL: I think we’ve had that conversation a little bit. I mean, certainly, in the Biden administration and Lina Khan at the FTC in particular. In a number—

ROBBINS: The Federal Trade—yeah.

SEGAL: Yeah. Sorry. The Federal Trade Commission in a number of instances made the argument that big tech and the concentration of economic power are in fact a threat to national security. They make that argument, I think, for example, by showing that consolidation in the defense industrial base has affected the supply chain, right, for providing weapons to Ukraine, and if there’s any breakdown there, we don’t have the flexibility.

And then they’ve also made that argument around innovation: the big companies have tended in some cases to, you know, buy their competitors and create moats around innovation, and that, they have argued, will slow the U.S. down and, you know, China will gain the edge.

I think you do see people making the argument that you made which is that, no, in fact, we—you know, we need the big companies to succeed to help defend U.S. national security. You know, OpenAI and Facebook and others have made a version of that argument which is, you know, if you regulate us then China will take advantage. 

So, I think that those arguments have been made, and so far, the—you know, the big tech side of it has probably dominated. 

ROBBINS: So, we’re going to want to turn this over in a few minutes, but I can’t resist—(laughter)—the prerogative here. 

Can we talk a little bit about DeepSeek here? You know, AI certainly has become the new Cold War arms race here, and the Biden administration tried to maintain the advantage by denying China access to advanced chips. 

Now, the Chinese have announced they’ve developed an AI model they claim was built infinitely cheaper—is it the Temu of AI?—and despite those restrictions. So what are you thinking about this?

Did they do it the way they claimed, on the cheap and with less advanced tech? Did they eat our lunch on this? Is regulation pointless? Is the genie unbottleable? Or did they lie about it and did they actually sort of back door it?

SEGAL: Yeah. So I think, you know, DeepSeek poses questions on a number of fronts, and all of those are right, right?

So there’s the model of AI development that’s been kind of the dominant one in the political sphere, which is, you know, you need more compute, you need more energy, you need more data. DeepSeek seems to question that.

There’s the open source versus proprietary model question, which DeepSeek also seems to raise, and then, finally, the impact of export controls on chips. On the export control side, I don’t think it totally undermines the argument about, you know, the need for export controls, right.

I think the Biden administration and supporters of that would say, look, we knew that there was going to have to be adjustment. We adjusted between 2022 and 2023. There are still some loopholes about using cloud services for training that we’re going to need to close. But there’s going to be some stockpiling. There’s going to be evasion with fake front companies.

But I think it’s important to also think about, you know, the purpose of the export controls: they are not just to try to slow down certain breakthroughs, but also to try to slow down the larger ecosystem and the Chinese AI innovation system. And you have the head of DeepSeek saying, look, our biggest bottleneck is chips, right? We did make these breakthroughs, but we’re still going to need more chips. So I think that strengthens the argument about why we would still keep them.

Now, we shouldn’t expect that it will stop China, right. As you said, they’re going to make breakthroughs and they are, you know, dedicating resources to it and they’re, you know, smart and they’re going to do it. 

I think it does put some strength in the argument that, you know, we can’t expect to totally control AI, right, both for good and for bad, and so export controls need to be accompanied and strengthened by policies that are focused on resilience and on preparing American society for the diffusion of AI, so we’re better prepared for that.

ROBBINS: So, I’m sure everyone has an infinite number of questions, so I will not dominate here as much as I would like to. We’re going to open it up. Please click the raise hand icon on your screen to ask a question; on an iPad or tablet, click the “more” button to access the raise hand feature. When you’re called on, accept the unmute prompt and tell us who you are: state your name and affiliation, followed by your question.

You can also submit a written question on the Q&A icon—we already have several there—or vote for other questions you’d like to hear answered in your Zoom window at any time. 

And, Deanna, you have infinite power and it’s over to you. 

OPERATOR: We will take the next raised hand from Beverly Barrett, who is a lecturer in business at the University of Houston. 

Q: Thank you very much to the panelists and for hosting this program. 

So, my question is following up on some comments from Dr. Segal. I’d like you to illuminate further, please, and comment on the uniqueness of the argument you made—that large size supports national security at these U.S. mega firms—versus the accountability balance. The U.S. mega firms such as Apple, Google, and Microsoft each have more than $3 trillion in market value, I understand.

And where—will there be a balance for oversight and competition policies in the U.S. as compared to in the European Union, which is much more rigorous on competition and anti-monopoly? What checks for accountability and competition can be applied to these firms in the U.S. and in light of the market power but also the voice that they’ll have, you know, on the internet and in our society, given their very large size? Thank you very much. 

SEGAL: Yeah. If I understand your question—I mean, I think, look, it’s clearly a political decision, and the Biden administration for a long time talked about moving through, and they did move through with some, you know, anti-monopoly cases against Google and others about limiting some of big tech’s power.

Now, I don’t think they got as far as they were hoping and, certainly, under the Trump administration, the signs have all been that, you know, regulation slows innovation down and that they, you know, see that as threatening U.S. national security.

Now, there are voices within the Trump administration that, you know, seem to me close to what Lina Khan was talking about, and, you know, J.D. Vance in some cases had some views that were parallel, too.

Now, a lot of that was driven by concerns about content and what they saw as censorship of conservative voices, but there, I think, was some overlap. I think in Europe, you know, there has been, as you were suggesting, this kind of long argument that Europe’s impact in this space rests on regulation, what’s known as the Brussels effect.

But I think you are beginning to see more European voices questioning whether that’s sustainable long term—that without having its own tech firms and its own European stack, Europe will not be able to compete. It will not, in fact, in some ways even be able to regulate.

And so, you saw that report that came out last fall from the former prime minister of Italy basically saying, you know, we need to have our own industrial policy, we need to promote our own firms, and you saw something this week as well making those arguments.

You know, I think regulation is the tool, competition is the tool, and so far the U.S. has chosen not really to use them in the way that other countries have.

OPERATOR: We will take the next raised hand from Joseph Nye, who is a distinguished service professor emeritus at Harvard University.

ROBBINS: And so much more than that. (Laughs.) 

OPERATOR: Please accept the unmute prompt.

Q: Can you hear me now? 

SEGAL: Yeah, Joe. 

ROBBINS: Hey, Joe.

Q: OK. I say you should have added, “and a follower of Adam Segal.”

ROBBINS: And not me. Thanks, but—(laughs)—

Q: And Carla Robbins. (Laughs.)

ROBBINS: Thanks, Joe. (Laughs.)

Q: But, seriously, is there anything that can be done internationally? Some of the people who know a lot about AI argue that it doesn’t do any good for one country to try to regulate if you have so many of the new algorithms going open source, and that the dangers have to be dealt with in an international context. On the other hand, when Biden and Xi agreed—(inaudible)—to basically have a discussion, I’m told that the discussion went nowhere. The Chinese were trying to get access to U.S. technology and the U.S. was trying to keep it at a high and abstract level.

What, if anything, can be done in terms of international cooperation—governmental cooperation—as you look at these large firms and particularly in regard to AI?

SEGAL: Yeah. Thanks, Joe. 

I mean, I think—look, we see a range of things happening at the, you know, multilateral, multi-stakeholder, and bilateral level. None of it seems to be keeping up, but there is a kind of proliferation of these discussions.

So next week is the continuation of the meeting that started in the UK with the Bletchley Park process and then went to Seoul. Those two meetings were very much focused on trust, accountability, and safety, and the French want to talk more about innovation and inclusivity, so kind of addressing some of the concerns of the Global South, as well as giving Macron a chance to promote French views about AI governance and the French AI industry.

The UN, right, has a process where there have been some discussions about how to control AI, and on the positive side both the U.S. and the Chinese submitted resolutions, and we both signed on to those resolutions.

I think that, you know, on the U.S.-China bilateral side there were some discussions about keeping humans in the loop, especially around nuclear command and control systems, which is, you know, a thing that lots of people have talked about. It’s not a difficult accomplishment, but I think one that at least is, you know, some step of progress.

But I think, you know, people have played with the idea of an IAEA for AI or other international organizations that would regulate AI. I think those are unlikely.

I suspect we’re going to end up in a world like we are with cyber right now, which is not a lot of constraints, some self-restraint that goes into place, some accepted norms which, you know, shape behavior at the margins and then a lot of expectations on actors to do the right thing.

OPERATOR: Wonderful.

We will take the next written question from Geri Sawicki, an adjunct professor at Modesto Junior College. She asks: How do we get oversight to know they are protecting our interests without interference at all? Can we trust these firms to protect national interests, or will they follow the money?

SEGAL: Yeah. So, it’s a good—it’s a very good question. I think the tension has been and the argument has been about regulation and innovation. The Biden administration managed to get these voluntary commitments from the firms. 

But it’s very hard to figure out whether the firms are holding to those commitments, and there have been several scorecards that have shown, you know, some things being upheld more than others.

And part of the issue has to do with just the difficulty and newness of AI regulation—you know, how do we test a model? How do we know it’s transparent enough? Do the firms know? How do you red team?

And so, part of the issue is that that science is not as developed as it should be, and so we need a lot more investment from the firms and from the federal government in doing that.

The Biden administration also established, inside of NIST, the National Institute of Standards and Technology, the AI Safety Institute, but I think I just saw an article this morning that the person leading it has been put on administrative leave.

So, I think we’re going to—you know, we’re definitely going to need some outside regulation and we’re going to need some buy-in from the firms about what actually is useful in ensuring that the systems are safe. 

OPERATOR: We have a raised hand from Christopher Ankersen, who is a professor at New York University. 

Q: Thanks very much for the opportunity to listen to this.

A question for you, Professor Segal. Is it—and I think this gets back to maybe what Ms. Robbins was mentioning at the beginning—is it really still viable to speak of a division between a commercial entity and a strategic or national entity here? 

Have we actually come closer to this notion of the British East India Company than perhaps we’re willing to admit? When the entity is able to instrumentalize strategic arguments—like, we’ve built it, now you need to protect us—or to somehow get us into a situation where the nation-state has to kind of follow the lead of those companies, is it actually useful to talk about this in the way that we have for so many years, with this framework of a difference between a commercial entity and a national or strategic entity, or are we talking about something that needs a whole new understanding of what’s going on here?

SEGAL: I mean, I think that’s an interesting question. 

I think the alternative has often been this kind of idea that we’ve now entered a technopolar world where the tech firms are their own centers of power, and I think, you know, the Ian Bremmer piece in Foreign Affairs several years ago kind of tried to set out a framework for that.

I guess I’m somewhat skeptical, and, if I’m not misjudging or misusing your argument, I think there is this kind of spectrum of how those firms are thought of as, you know, commercial or national interests, and that changes over time. What we’re seeing is that we’re entering a stage where the relationship is heightened compared to past periods, and we are seeing reactions from the state to try and reassert its authority and autonomy.

Now, a lot of that depends on state capacity. A lot of that depends on who you are compared to the firms and everything else. I don’t think we’re at the end of this. I think we’re at a point where we’re going to begin to see, as I said, a kind of reworking of that relationship.

Just to put all of this in a larger context, this is part of a, you know, project that I’ve been thinking about, about U.S.-China competition and how Beijing and Washington have thought about their own firms, and we saw in China, you know, a very intense period of techlash where the Chinese state very quickly reasserted authority over what seemed to be some early signs of tech firms pushing back or asserting some type of autonomy.

Now, these are two different states, two different sets of relationships, but I do think that thinking about it as kind of that spectrum of relations, or as mutually constituted, as opposed to states versus these firms, is probably analytically more helpful.

OPERATOR: We have a raised hand from Tanisha Fazal, who is associate professor of political science at the University of Minnesota. 

Q: Can you hear me? 

SEGAL: Yes.

Q: Great. I’m going to turn it over to my student William to ask a question.

SEGAL: Great. 

Q: How can the United States effectively balance its strategic interests with its technological competition with China in the Gulf region? 

SEGAL: So, in the Gulf region. Yes. 

So, you know, here the issue is access to data and to energy as well as to investment. You know, as you probably know given your question, the U.S., through Microsoft, has tried to kind of come to an agreement with G-42, the partner there, about access to U.S. technology in return for G-42 not engaging with Huawei and other Chinese tech firms moving forward.

I think there was an attempt at the end of the Biden administration, especially with the executive order on AI diffusion that came out right before the administration ended, to essentially say, look, if you want to have access to the U.S. AI stack then you’re going to have to make a choice.

We don’t really know how that’s going to play out. I have been, I think, somewhat skeptical of the “you have to make a choice” argument because the Chinese, certainly, on a range of technologies can provide assistance, and I think it is also going to be harder under the Trump administration at least, given early signs about the willingness to use tariffs against enemies and friends. 

So, if I was, you know, the head of another country I would be thinking about trying to create autonomy between me and the U.S. and me and China. So I’m going to try to play all sides against each other. 

OPERATOR: We have a written question from Sokol Celo, who is the chair and associate professor at Suffolk University. He asks: Will the attempts of tech firms to reshape geopolitics lead to a shift away from democracy as we know it?

SEGAL: So I think the kind of influence that the tech firms are having over domestic politics in a range of countries right now is a symptom of, and an amplifier of, larger trends in democracy, right?

We’ve been talking about the undermining of trust in institutions and the lack of shared facts, which, you know, are important for democracy—tech firms played a part in that, but they weren’t entirely the cause of it.

You know, the role that Musk has right now through the DOGE is not just because he’s a tech entrepreneur. It has to do with—I think the piece in the Times today did a good job of capturing this—regulatory capture and the increasing role of companies and wealthy individuals generally in democracy.

So I would say it’s all part of the trend but I don’t think the tech firms themselves are the catalysts behind it. 

OPERATOR: We have a raised hand from Mojúbàolú Okome.

Q: Thank you very much. 

I wonder whether the argument that if there’s regulation it would either stymie or kill innovation is valid because the Chinese regulate, I think, and there’s DeepSeek. And I also wonder the extent to which lack of regulation will unleash these tech firms. 

I mean, OK, so we assume they’re benevolent, but if they’re inclined to be malevolent what’s going to stop them if there’s no government regulation? And then under this administration I think I am somewhat confused about the logic of decisions that are being made and whether those really serve the national interest, you know. So, I wonder what you think.

SEGAL: Yeah. I didn’t mean to suggest that I support the argument that no regulation is better for innovation. I think the tension is because there’s no way that policymakers or regulations are going to be able to keep up with the edge of technological change. 

You have to be able to regulate in a way that doesn’t foreclose technological change or innovations that policymakers are not prepared for, and the way to do that is to, you know, try to be as clear as you can about the outcomes you’re trying to get but to be tech agnostic.

And so, you know, we had a debate in the U.S. about AI regulation in California this summer. There was a lot of input from the tech industry there, and there was a division in the tech industry: some of the AI companies said they could live with it; some of them did not.

But that seemed to me to be getting closer to the types of regulations we could hope for in this space because, as you hinted at, I don’t think we can trust the firms, right? The firms are driven by commercial interest.

There is, I think, a lot of reason to think that there could be a race to the bottom—that, you know, they’re afraid that competitors are going to move a model out more quickly, so they would push out models that they haven’t completely tested—and I think we’ve seen some of them admit that they moved faster on models than they necessarily wanted to because competitors were getting them out. And so I do think that we’re going to need some form of regulation there.

I don’t know if I can really speak to, you know, what the logic is right now, to be quite honest. I think I could probably map the logic of some actions to different parts of the incoming administration—for example, those who, you know, are clearly China security hawks. They have a view about where technology is going to fit in that competition.

But that’s just one voice in the administration right now, and also, as we know from the first administration, it often just comes down to the president, and the president often changes his mind a lot, and so it’s probably too early to tell which direction we’re going.

OPERATOR: We have a raised hand from Michael Strmiska, who’s a professor of world history at Orange County Community College. 

Q: Yes. I’m actually in Lithuania right now. I’m here as a Fulbright fellow teaching at Vilnius University. 

It seems to me so much of the discussion today is about the potential or the lack of potential for any kind of effective regulation, and I’m afraid my question will be more of the same so you may have to repeat more of the same answers you’ve already given. But what I’m really concerned about is the capacity for the increased production and dissemination of misinformation for political purposes and how tech companies are involved in that, and it seems that with the Trump administration being almost religiously dedicated to anti-regulation there’s little hope that America can do much or would do much in this matter, at least for the immediate future. Do you see any forces on heaven or Earth that can be aligned to put any limits on the technological high-tech production of disinformation? 

SEGAL: Yeah. So, I fundamentally—well, I essentially agree with your characterization. I mean, right now we’re at a period where I think it’s very unlikely. You know, as you said, the Trump administration saw most content moderation as being ideological or driven to limit conservatives.

The tech firms themselves have backed away—you know, we saw the statement from Zuckerberg and the changes to moderation on Facebook. You know, Musk has essentially turned X into a platform for lots of different voices and has cut back on all content moderation. A lot of the civil society groups have been under political attack, and they have cut back some of their work in that space. So, I think it’s very unlikely, certainly from the U.S. side, that we’re going to see any more regulation there.

On AI, we might see something around, you know, needing to put digital watermarks on AI-generated content, and certainly some work around child sexual exploitation material. But that’s probably really going to be at the margins.

So I think the big question will be—and this is driven in part by larger issues about the trade war and transatlantic relations—is, you know, do the Europeans move forward with the Digital Services Act and fines against X and other companies for not taking down or not regulating the material, and here we’ll have to wait and see if they’re going to go forward with that. 

But that’s where we’re more likely to see it than on the U.S. side. 

OPERATOR: We have a written question from Asha Rangappa at Yale University, who asks: What do you anticipate will arise at the Paris AI Summit and what role will VP J.D. Vance play? 

SEGAL: Yeah. I’ll take the—I don’t really know why Vance is going, quite honestly. I was a little bit surprised to see that, you know, his first international trip is going to be there. 

ROBBINS: It is Paris. 

SEGAL: I mean, I guess so. But, you know, it doesn’t go with the messaging about, you know, being anti-elite and anti-globalism.

But I suspect the messaging is about, as I said, this kind of feeling that the AI discussions were overly focused on safety and regulation and on slowing the U.S. down, and I suspect the messaging will be about, no, we have to innovate as quickly as possible, the biggest threat out there is China, and you should all join us in that competition. That’s what I suspect the messaging will be. But I’m kind of interested to see what it is.

Like I said, I think the French really did want to expand the aperture to inclusivity and innovation and also give a chance for French firms to showcase their wares, and there are, you know, between Mistral and some labs in Lyon and some other places, some real centers of AI excellence in France. And I think to some extent it is useful for the U.S. to embrace some of that messaging about inclusivity and growth because, you know, China very much has been arguing that, well, it’s the U.S. that’s hyping these risks, it’s the U.S. that’s controlling these technologies, and China is the one that’s going to provide these services to the emerging economies and the developing world.

And so I would hope that we could, you know, message that as well where we want to include global voices. We want Global South voices. We want to make sure that we are—these systems also address their development and social needs. But I doubt that J.D. Vance is going to send that message, but I would hope maybe Rubio would.

OPERATOR: We have a raised hand from Michael Poznansky, who is an associate professor at the U.S. Naval War College. 

Q: Great. Thanks for this discussion. 

Adam, you alluded in your opening remarks to tech companies and their involvement in Ukraine during the Russia-Ukraine war. Just wondering if you could go a little bit deeper on that on kind of two fronts.

One, what lessons have they learned from their experience there, both on the Microsoft-Google-Mandiant side and then kind of the Starlink side, and how might they respond differently or similarly in a potential conflict over Taiwan? Obviously, their business interests in China are a lot different than their business interests in Russia were before the war.

So I would welcome any thoughts you have on that. Thanks. 

SEGAL: Yeah. Thanks, Michael. 

So, I think there’s been a lot of thinking about whether this can be recreated in a Taiwan scenario, and so I think the lessons have been about kind of running the scenarios before it happens and creating a lot of organizations to kind of keep the contact and information sharing going.

You see on the private sector side, you know, this association of the companies that are still kind of talking about these things and coordinating the distribution of aid and thinking about Taiwan scenarios there. And I think the Taiwanese have learned that they need to make a lot of the personal connections with the firms before anything happens if it does happen. 

So, you know, what was really important in the Ukraine scenario is that a lot of the people had been involved with Ukraine since 2014 if not beforehand, and so there was a long history of people cooperating, and the Taiwanese seem to have learned that.

I think there are some procurement lessons about, you know, licensing and how to do that quickly, because the firms, you know, can’t do it pro bono for long periods of time, and so you have to be ready to flip over so that, you know, either the U.S. government or others start paying for it—and how do you do that. The Europeans were more flexible about it, and I think the U.S. government learned some lessons about how to do that.

But, you know, as you alluded to the—because of the commercial ties to China I think a lot of this is going to happen under the radar and the preparation is going to happen under the radar so people don’t get, you know, drawn into the sights of the Chinese government for some possible retaliation. 

ROBBINS: Can I follow up with that and ask a question about Starlink? 

SEGAL: Yeah.

ROBBINS: How dependent are the Ukrainians on Starlink, and how much of a conversation is there about one company and one person having the ability to give freedom of action on a battlefield but also the ability to turn it off?

SEGAL: Yeah. So, I think there’s a lot of discussion about it, especially in the Taiwan scenario, because Musk has also said that Taiwan is like Hawaii and it should go back to the Chinese, and there were rumors that Putin reached out to him and said that Xi had asked him to tell Musk not to use Starlink in China.

So, the Taiwanese are not going to use Starlink. They’ve said they’re not going to use Starlink. You know, they are right now contracting with a European provider. They’re trying to get their own microsatellites into space. 

But I think from a competitiveness perspective and from a U.S. national strategy and interest perspective, it would be great if Bezos and Amazon could get their microsatellites up more quickly as well.

I think we’re at a window now where there aren’t as many choices as we need. But kind of speeding up that competition would be really important.

ROBBINS: Thanks.

OPERATOR: We have a written question from an attendee who asks: Ian Bremmer identifies three classifications for measuring the impact of big tech on the state—globalism, nationalism, and techno-utopianism. What are the implications for state sovereignty and state capacity in the context of the expansion and influence of big tech?

SEGAL: Yeah. So I don’t know if those categories fit any longer, in a way, right, because, for example, some of the companies that you might have considered utopian have now said, we’re going to cooperate with the Defense Department on national security interests. Or, again, the globalist companies, because of the wars, have been forced to choose a side more than they would have wanted to in the past.

So, I don’t know, and I’d have to go through the article again to make sure that those categories still work, but I probably think that they don’t, at least in some of the examples that he gave. 

So, I think the capacity question is really the kind of critical one, and a hard one. As we see in the U.S. government right now, it’s not just that the tech firms are creating capacity that, you know, the U.S. government needs. It’s that we’re also making a decision to undermine U.S. capacity, right?

So, if DOGE has its way with, you know, expertise in the U.S.—you know, civil servants and the government—we’re going to lose a lot of important capacity that would help us both reassert some autonomy and authority over the tech firms as well as channel the capacities of the tech firms, and, you know, USAID is a great example.

USAID does lots of work around digital capacity building and things that we would want to bring the tech firms into for competition with China on 5G or cloud and all those other things.

And so, you know, stalling that, defaulting on that, you know, rolling it up totally is not going to be great for U.S. national interests.

OPERATOR: We have a raised hand from Eleanor Fox, who is a professor at NYU. 

Q: Hello. Thank you very much for this fascinating conversation.

I want to return to the conversation about private commercial firms versus are they part of the state. My own expertise is in the antitrust side and I would have to say that mostly in the antitrust community, there’s a lot of concern that antitrust is being co-opted by national security. I’m concerned that it’s been—that if some have their way, it would be compromised by national security. 

If you think about DeepSeek, for example, this is a huge invention, despite all the dangers that we might have from China, and having a model that spends much less on energy would normally be, first of all, very useful to users around the world, including in the U.S., and American firms, big tech, and AI firms would be trying to provide the same thing.

And antitrust is, of course, you can say, a technology to control market power. But if you tell firms that they have to compete in the interest of the nation rather than in the interest of profit maximizing, you get the whole world balkanized again on competition. So, there’s something very important in the cosmopolitan idea of competition but, obviously, also something very important in the whole national security problem.

Do you find that there is this tension and that there is more to the argument than has come out yet about the importance of keeping competition and competition policy in the interest of profit maximizing separate from national security interests, except to the extent that national security interests are specifically and narrowly proved to be an important constraint? Thanks. 

SEGAL: Yeah. I think those are all great—(inaudible, technical difficulties)—I think that’s why there’s a real risk of anti-monopoly regulations and arguments being totally held hostage to national security competition concerns. And I think you pointed out both the kind of impact on U.S. innovation and our own standing, and also the thing that often gets dropped by the wayside, which is, you know, societal benefit or transnational benefit, right? You can imagine that, as you said, an AI system that’s cheaper to run and uses less energy could spread a lot of good, especially in countries that don’t have those capacities. And so I think there is a real risk that those things get shunted aside.

I suspect you know better than I do—I have looked into some of this in the past, and I think these arguments were made before. I think, for example, when AT&T was being broken up, some people tried to mobilize a national security argument about why we needed to keep it together, and I think there have been other instances in those cases where, you know, the competition argument eventually won.

So, yes, I think it’s really important that we have all those things still in our mind and that we try to figure out which of those things we want to promote and prioritize and how do we make sure that all of those things are placed in front of policymakers when they make those decisions.

OPERATOR: We have a written question from Laeed Zaghlami, who is professor at Algiers University. 

He asks: Will there be a war of AI regulation between the USA, China, and the EU?

SEGAL: I probably wouldn’t use the word “war,” but we certainly have competing models. If you look, for example, at the work of Anu Bradford at Columbia University and her book about how China and the EU and the United States compete on how they’re going to regulate digital technology—(audio break)—sorry, and AI, we’re going to continue to see that. I think they’re trying to influence each other—they see that there are benefits if companies adopt their standards—and they’re going to continue to do that, and, you know, part of that is going to be demonstrating that their system provides both innovation and some safety.

And so I think we’re still at the very, very early stages of that. You know, the Chinese have lots of regulation, but the model is not attractive to people who are worried about free speech and human rights and other things like that. The EU has lots of regulation but not that many firms, and, you know, the U.S. has lots of firms and not that much regulation.

OPERATOR: We have a raised hand from Chinedu Ezeife, who is a graduate student at Brooklyn College. 

Q: Yes. Good afternoon, Dr. Segal. I appreciate your insights on all this. 

My question is regarding sub-Saharan Africa. Given China’s growing influence in Africa, I want your thoughts on how the U.S. can integrate AI in Africa to basically combat China’s growing influence over there.

SEGAL: Yeah. I think, you know, quite honestly, the U.S. has to be present, right; the arguments about, for example, cybersecurity or data security or why you wouldn’t want to use Chinese technology are not very convincing in lots of emerging economies.

If I’m a leader in an emerging economy my concern is, you know, bridging the digital divide and so I need the equipment to be there, and I’m probably thinking that I’m going to get spied on by the U.S. and China no matter what I do, so I might as well at least have the equipment. 

If you look at what China is doing in Southeast Asia and parts of Africa and Latin America, it’s, you know, building lots of infrastructure and digital infrastructure and doing lots of training, right? Training tens of thousands of people in Indonesia, for example, at cloud centers.

And so I think really what the U.S. is going to need to do is be on the ground, that U.S. firms play a role in the training and helping build ecosystems and demonstrating that we can provide models that, you know, both provide countries in that area some control and also address their development goals. 

OPERATOR: We have a written question from Jane Kani Edward, who is an associate professor and chair at Fordham University. She asks: Given the role of technology in information dissemination, how can tech firms curb the spread of misinformation that has the potential of destabilizing governments and fueling conflicts, especially in some African countries?

SEGAL: So, I think there’s been a lot of research on what the firms could do or can do, right? So you need to hire many, many more local language moderators so you don’t get surprised or ignore, you know, events like what happened with the Rohingya in Myanmar. 

I think there are certain types of language used on social media that people have mapped onto possible violence or genocidal types of actions. Technology and AI help a little bit, but it’s hard to keep up with those contexts. But mainly it has to do with investing resources.

But, you know, as I think we’ve discussed earlier, we’re just at a point now where the tech platforms have backed away from that and have become less transparent—you know, they give less access to their APIs (application programming interfaces) and data.

So, you know, it’s a combination of these political and technological changes, but right now the firms seem, you know, unwilling to do that any longer. 

OPERATOR: We will take the last question from Rita Kiki Edozie, who has her hand raised. She’s the professor and associate dean at the University of Massachusetts, Boston. 

Q: Well, thank you. I just have a quick question, returning to global governance.

A few months ago, the poor Democratic Republic of the Congo, although mineral-rich, sued Apple, and I noticed that they sued Apple in the national courts of France and Belgium and not the World Trade Organization. So just wondering if you could speak to the discrepancy and what is the role of the World Trade Organization in regulating big tech. Thank you. 

SEGAL: That’s very interesting, and I’m not sure I’m going to have a great answer for you because I was not aware of that case. But I think the WTO has not been particularly effective, as I’m sure you know, in kind of regulating big tech.

You know, certainly, a lot of these issues are on the services side and not the manufacturing or trade side. National and supranational regulation through the EU seems to be much more effective. You know, we did see some digital provisions in regional trade agreements that, you know, big tech has been pretty supportive of—pushing back against data localization, requiring the free flow of data except in some specific scenarios, and things like that.

But the U.S. is not engaged in those discussions. And, you know, USTR last year or the year before basically kind of questioned why the U.S. government would promote some of those views, although it then moderated and said, you know, we support the free flow of data but under specific conditions, with the ability to regulate for national security and other concerns.

So, I don’t see a lot of this happening through trade agreements. I think it’s going to, you know, happen through national and supranational legislation.

ROBBINS: The DRC case is a conflict minerals case. 

So, thank you for that question, and, Adam, thank you—really smart as ever, a great conversation—and thanks to everybody else for their great questions and comments.

The next Global Affairs Expert Webinar will take place on Wednesday, February 19, at 1:00 p.m. Eastern Time, and Miles Kahler, senior fellow for global governance at CFR, will lead the conversation on foreign influence and democratic governance. 

And in the meantime, we encourage you to learn about CFR paid internships for students and fellowships for professors at CFR.org/careers. Follow @CFR_Education on X and visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for research and analysis on global issues. 

Thank you again for joining us today. We look forward to your participation in our next webinar on February 19, and, Adam, this has just been wonderful. Thank you so much. It’s been a great conversation.

SEGAL: Thanks, everybody, for listening today and for all your great questions. 

(END)