Global Risks in 2040: Q&A with Andrew Parasiliti
The RAND Center for Global Risk and Security exists to watch the horizon.
In recent years, the center’s research has warned of a growing link between film piracy and global terrorism; of how the changing nature of work might reshape society; and of the potential for U.S. financial problems to send shockwaves through world markets. Its most recent study looked at how artificial intelligence is reshaping our everyday lives, unnoticed and too often unquestioned.
The center’s founding director, Gregory Treverton, once described these as “threats without threateners,” the big-picture problems that cross industries and borders.
The center recently undertook an effort to envision the world in 2040, and the security challenges that will shape it: artificial intelligence, 3-D printing, the ascendance of the millennial generation, and the sheer speed with which our society moves and makes decisions. The lead investigators are all early-career researchers, drawn from fields as diverse as nuclear strategy, anthropology, and microeconomics.
Andrew Parasiliti directs the center and is overseeing its Security 2040 project. He came to RAND in 2014 from Al-Monitor.com, an online newspaper that describes itself as “the pulse of the Middle East” and that won the International Press Institute’s Free Media Pioneer Award. He has also served as executive director of the International Institute for Strategic Studies–U.S., director of programs at the Middle East Institute, and foreign policy adviser to former U.S. Senator Chuck Hagel—prior to Hagel’s stints as U.S. Secretary of Defense and RAND trustee.
How do you define “risk”?
We use the broadest definition of risk—a threat to the security of something. That could mean a threat to personal security, such as crime; a threat to state security, such as adversarial states or terrorism; or a threat to human security, such as climate change, natural disasters, or pandemics.
Which recent projects exemplify what you’re trying to accomplish?
Bill Welser, a senior management scientist and head of RAND’s engineering and applied sciences department, and Osonde Osoba, an engineer and specialist in machine learning, recently explored the risks of bias and errors in artificial intelligence. Bill and Osonde explain how algorithms give the illusion of being unbiased, when that’s not always the case. They document some of the problems that can result, including in criminal sentencing and other legal matters. Unless there is greater awareness of algorithmic bias risks, and of how to mitigate them, these problems will only grow.
What emerging global challenges concern you most?
There are many. We have more projects underway on artificial intelligence. We’re working on a study about how the growth in communications technology, the Internet of Things, and big data are all redefining and compromising privacy, and what that means for security. We are, in general, interested in the changing nature of power and governance in the international system, and, increasingly, how that links up with the challenge of what Michael Rich, RAND’s president and CEO, has been calling “truth decay.”
What do you see as the role of the RAND Center for Global Risk and Security?
We focus on cross-cutting, multidisciplinary research on future security trends, especially the impact of disruptive technologies. That means artificial intelligence, but also additive manufacturing and the trade-offs involved in privacy and security. We engage donors and the business community to support research and analysis in these areas, to complement RAND projects for government clients.
How has the center’s mission evolved?
The center was founded in 2007, when former defense secretary Harold Brown—a trustee emeritus at RAND—advised us to address systemic risks to global security, to look beyond the demands of the national security inbox toward what we might call more long-lead security threats. The first center director was Greg Treverton, who went on to serve as chair of the National Intelligence Council from 2014 to 2016. His appointment to the council created the opening that allowed me to come to RAND. In many ways, the focus remains what it was under Harold and Greg: We look at trends rather than headlines.
What big questions are you hoping to answer with Security 2040?
This was Michael Rich’s brainchild: to seek new approaches to identify and assess the impact of several trends over the coming decades—political, technological, social, demographic—and to generate some useful guidance for policymakers. Things like, What might be the impact of artificial intelligence on nuclear security? How disruptive will additive manufacturing—3-D printing—be to our military supply chain and economy? How do millennials perceive security? What are the drivers and disruptors of “health security”? Does speed, meaning a faster society, influence our notions of security? We are just getting started.
Why the desire to have all the principal investigators be early-career researchers?
All of the projects are deeply collaborative, involving senior advisers and workshops drawing on expertise from across the organization. But we aim to develop a community of 2040 researchers—the next generation of thought leaders.