International Relations – International Communication

Deepfakes and international conflict (Brookings)


[Image: 3D human face structure made of a grid]

Daniel L. Byman, Chongyang Gao, Chris Meserole, and V.S. Subrahmanian

Deceit and media manipulation have always been a part of wartime communications, but never before has it been possible for nearly any actor in a conflict to generate realistic audio, video, and text of their opponent’s political officials and military leaders. As artificial intelligence (AI) grows more sophisticated and the cost of computing continues to drop, the challenge deepfakes pose to online information environments during armed conflict will only grow.

To navigate that challenge, security officials and policymakers need a far greater understanding of how the technology works and the myriad ways it can be used in international armed conflict. Deepfakes can be leveraged for a wide range of purposes, including falsifying orders from military leaders, sowing confusion among the public and armed forces, and lending legitimacy to wars and uprisings. While these tactics can and often will fail, their potential to impact an adversary’s communications and messaging means that security and intelligence officials will inevitably use them in a wide range of operations.

For policymakers and officials in democratic states, deepfakes pose a particularly difficult challenge. Given the importance of a trusted information environment to democratic societies, democratic governments should generally be wary of deepfakes, which threaten to undermine that trust. Yet security and intelligence officials in the United States and other democracies will nonetheless have strong incentives to deploy deepfakes against their adversaries, particularly in the context of armed conflict. As a result, the U.S. and its democratic allies should consider developing a code of conduct for deepfake use by governments, drawing on existing international norms and precedents.

Further, the U.S. should also consider establishing something like a “Deepfakes Equities Process,” loosely modeled on similar processes for cybersecurity, to determine when the benefits of leveraging deepfake technology against high-profile targets outweigh the risks. By incorporating the viewpoints of stakeholders across a wide range of government offices and agencies, such an inclusive, deliberative process is the best way to ensure deepfakes are used responsibly.

…MORE

 
