What’s Next for Chinese Cyber Strategy?
- Speaker: Adam Segal (Council on Foreign Relations)
- Date & Time: Tuesday, March 24 at 3pm (Mugar 200)
- Abstract: How will internal and external factors (international actions, domestic political and economic pressures, and the global race for AI supremacy) influence the development of Beijing’s cyber statecraft over the next three to five years?
- Bio: Adam Segal is the Ira A. Lipman Chair in Emerging Technologies and National Security and director of the Digital and Cyberspace Policy program at the Council on Foreign Relations (CFR). An expert on security issues, technology development, and Chinese domestic and foreign policy, Segal was the project director for the CFR-sponsored Independent Task Force reports "Confronting Reality in Cyberspace," "Innovation and National Security," "Defending an Open, Global, Secure, and Resilient Internet," and "Chinese Military Power." His book "The Hacked World Order: How Nations Fight, Trade, Maneuver, and Manipulate in the Digital Age" (PublicAffairs, 2016) describes the increasingly contentious geopolitics of cyberspace. His work has appeared in the Financial Times, the New York Times, Foreign Policy, the Wall Street Journal, and Foreign Affairs, among others. He currently writes for the blog "Net Politics." From April 2023 to June 2024, Segal was a senior advisor in the State Department’s Bureau of Cyberspace and Digital Policy, where he led the development of the United States International Cyberspace and Digital Policy Strategy.
Black Box Warfare: Experimental Evidence on Military Decision-Making with AI Systems
- Speaker: Ryan Shandler (Georgia Institute of Technology)
- Date & Time: Tuesday, April 7 at 3pm (Mugar 200)
- Abstract: How does the integration of complex military AI systems shape life-and-death decisions during active wars? Public debate often assumes that military personnel will defer to AI systems and allow algorithms to shape battlefield decisions. But how do human decision-makers respond when these tools are embedded in realistic operational contexts? To answer this question, we reconstructed a high-fidelity replica of an AI decision-support system currently used in military operations and tested its effects in two large-scale experiments with over 2,000 military personnel. Contrary to widespread fears of automation bias, we find consistent evidence of algorithmic aversion, especially in high-risk scenarios involving potential civilian harm. At the same time, incorporating "explainable AI" features significantly improves users’ ability to critically evaluate, and, when necessary, override, complex algorithmic recommendations. The results further reveal systematic differences in how decision-makers process algorithmic information in military contexts. Discrete psychological traits and political identities, including partisanship, shape how individuals interpret, trust, and respond to AI-generated outputs. Trust in military AI, we show, is neither blind nor fixed, but dynamic, context-dependent, and shaped by individual-level characteristics. More broadly, the study demonstrates the importance of using immersive, authentic, and emotionally vivid experimental treatments to accurately capture the pressures of wartime decision-making. By grounding ethical and policy debates in systematic evidence, this research sheds new light on the evolving relationship between artificial intelligence and human judgment in modern warfare.
- Bio: Ryan Shandler is an Assistant Professor at the Georgia Institute of Technology’s School of Cybersecurity and Privacy (SCP). His research examines the cognitive and behavioral dynamics that shape cyber conflict and AI decision-making. He has pioneered widely used experimental approaches that immerse participants in high-fidelity cyber and AI decision environments, enabling causal measurement of judgment, trust, and behavior. Central to his work is identifying the psychological mechanisms that underpin these behaviors and using those insights to design targeted interventions that enhance decision-making resilience. Dr. Shandler has designed and led large-scale behavioral studies across more than ten countries, collectively involving over 50,000 participants, and has developed longitudinal experimental methods to track downstream cognitive and societal effects over time. His work has been published in leading computer science, political science, and behavioral science journals. He has secured more than $1 million in competitive research funding from DARPA, the U.S. Department of Homeland Security, Google, and other organizations focused on the intersection of behavioral science, technology, and security.