A 2024 study found that 48% of companies had paused or cancelled an AI project already in progress, and nearly a third of those cases stemmed from internal resistance. Many leaders assume that if a system works technically, the project is a success. In reality, most AI initiatives don’t fail dramatically—they quietly go unused. People don’t trust them, don’t adopt them, or simply bypass them.
AI denial is when people refuse to believe that AI is capable, important, or relevant. They may say things like:
“AI is just hype.”
“It can’t really do what people claim.”
“This won’t affect our industry.”
This often comes from fear of change or lack of understanding. But ignoring AI doesn’t make it go away—it only puts individuals or companies at risk of falling behind.
AI mistrust is different. It’s when people believe AI is powerful—but don’t trust it. They worry about things like:
Who is in control of the technology
Whether AI is biased or unfair
How their data is used
Losing jobs or autonomy
This is a trust issue, not a tech issue. People want to know that AI is being used ethically, transparently, and with accountability.