
Review: Artificial Intelligence, Decision-Making, and the Future of Conflict, Adam McCauley (6/11/23)

The CCW Emerging Threats Group convened on Monday, 6th of November, for a Q&A session with Canadian defence analyst Adam McCauley to discuss artificial intelligence, decision-making, and the future of conflict ahead of his book publication. The book, "The Price of Certainty", explores the role of artificial intelligence across a range of sectors, including law enforcement, finance, judicial systems, and military applications such as lethal autonomous weapons.

Emerging AI Risks

McCauley highlighted key concerns about AI acting as an epistemic force. As people become more dependent on artificial intelligence, decision environments are likely to become more constrained and creativity curbed. Actors who trust the technology are likely to grow more reliant on its outputs and less likely to use their own initiative or challenge ideas. He also raised major concerns about potential biases in AI, which demonstrate the risks of deferring to computational logic in decision-making.

Deterrence and Misperception

Before deploying AI in epistemic roles, companies and decision-makers should evaluate the risks it poses, including its impact on signalling, deterrence mechanisms, institutional architecture, and the sourcing of information. McCauley highlighted how difficult such consequences are to observe, and thus the need for experts to engage with one another so that regulatory concerns can be raised and explicitly addressed.

Military Applications

The group discussion focused on AI in military capacities. The potential for an arms race, together with the presence of private players such as big tech companies in the AI space, raised concerns about whether such technology can be regulated quickly enough. The group further discussed the risk that AI, given its bias towards probabilities, may overlook civilian and moral considerations. Ultimately, decision-makers must weigh whether the benefits of precision and the rapid analysis of vast quantities of intelligence outweigh the potential ethical and defensive challenges.

With the right regulation and adaptations (such as deliberately building corrective biases into AI or encouraging more humanistic traits), artificial intelligence presents exciting capabilities, but we should remain cautious and consider all of its epistemic risks.

Looking Ahead

Our next session is with Joshua Stewart (National Security Research Fellow at the Airey Neave Trust and a postgraduate at the University of Oxford) on the 13th of November, discussing "Terrorism, Prediction, and Emerging Technologies". We are also looking to fill positions within our team, so if you are interested in helping as either a group lead or research associate, please check out the opportunities section on our website.


Emerging Threats Working Group © 2024. All Rights Reserved.