Game theory often falls short when it is applied to complex situations like international relations or parliamentary balance of power. However, in some situations, game theory can be useful in the scientific, prescriptive sense. For example, game theory is useful for, well, playing games. Modern software agents that play games like poker do in fact use rather advanced game theory, augmented with clever equilibrium-computation algorithms. Game theory actually works better when the players are computer programs, because these are completely rational, unlike human players, who can be unpredictable.
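To give a flavor of those equilibrium-computation algorithms, here is a minimal sketch of regret matching, the building block of the counterfactual-regret methods behind modern poker bots, run on rock-paper-scissors (the matrix and parameters are my own toy choices, not from any actual poker agent):

```python
import numpy as np

# Row player's payoffs in rock-paper-scissors; the column player gets -A.
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

def regret_matching(A, iterations=50000, seed=0):
    """Self-play regret matching; the time-averaged strategy approaches
    the equilibrium of this symmetric zero-sum game."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    regret = np.zeros(n)
    strategy_sum = np.zeros(n)
    for _ in range(iterations):
        positive = np.maximum(regret, 0.0)
        total = positive.sum()
        strategy = positive / total if total > 0 else np.full(n, 1.0 / n)
        strategy_sum += strategy
        # Both players sample from the same strategy (the game is symmetric).
        action = rng.choice(n, p=strategy)
        opponent = rng.choice(n, p=strategy)
        # Cumulative regret: how much better each pure action would have done.
        regret += A[:, opponent] - A[action, opponent]
    return strategy_sum / iterations

print(regret_matching(A))  # approaches the uniform equilibrium [1/3, 1/3, 1/3]
```

In two-player zero-sum games the time-averaged strategies of such no-regret learners converge to an equilibrium, which is why the output here is roughly uniform play.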
In fact, one of the first instances of noncooperative war games was a Cold War one, devised - I think - by the extraordinary mathematician John M. Danskin. His 1967 book "The Theory of Max-Min and its Application to Weapons Allocation Problems" presents a series of war games in which weapons are allocated to targets and defences on either side, and the max-min problem reflects simultaneously making the most devastating attack and warding off the enemy's missiles. A ghastly problem, but those were the days ...
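In stylized form (my own generic formulation, meant only to convey the structure, not Danskin's actual models), the attacker picks an allocation $x$ of weapons over targets and the defender an allocation $y$ of defences, and one solves

$$\max_{x \ge 0,\ \sum_j x_j \le X} \ \min_{y \ge 0,\ \sum_j y_j \le Y} \ \sum_j v_j\, D(x_j, y_j),$$

where $v_j$ is the value of target $j$ and the damage function $D$ increases in the attacking weapons $x_j$ and decreases in the defences $y_j$.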
I have conducted Ultimatum, Dictator, Third-Party Punishment, and Public Goods games in tribal communities. I do get some players who make offers that would be considered hurtful or anti-social, but that is part of the variation of offers and/or rejections in the games. Furthermore, these offers are often seen as the logical selfish move, and rejection of hyper-generous offers may be culturally relevant (avoiding indebtedness in Highland New Guinea, for instance; see David Tracer's chapter in "Foundations of Human Sociality" [Henrich et al.]). For those reasons, such offers should not be excluded from the data set; they are part of the variation.
"A destructive player" is not altogether a clear concept.
The payoffs in the game have to represent the motivations of the players, and money or other objective payoffs may not do that. If one player is motivated to do as much damage to the other as possible, then the first player's payoffs are just the negative of the second player's, and we have a zero-sum game -- the max-min solution then applies. If both (or all) are motivated that way, though, the payoffs may be undefined. We might assume that a "destructive player" wants to reduce the objective payoffs of the other; in that case, the definition of each player's payoffs is straightforward, and you have a zero-sum or nonconstant-sum game as the case may be. But that would be odd, since (consistently with common knowledge of rationality) each would know that the other's subjective satisfaction is imperfectly correlated with her objective payoffs, and if really hostile, would want to make the other feel as bad as possible -- and we are back to undefined payoffs. As is often the case with "interdependent utilities," deep difficulties arise.
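When the zero-sum reading does apply, the max-min solution is straightforward to compute. A minimal sketch by linear programming (the 2x2 payoff matrix is an arbitrary illustration of mine, not taken from anywhere):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[ 2.0, -1.0],
              [-1.0,  1.0]])   # row player's payoffs; the column player gets -A
m, n = A.shape

# Variables: x (row player's mixed strategy, length m) and v (game value).
# Maximize v  s.t.  (A^T x)_j >= v for every column j,  sum(x) = 1,  x >= 0.
c = np.concatenate([np.zeros(m), [-1.0]])            # linprog minimizes, so minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])            # v - (A^T x)_j <= 0
b_ub = np.zeros(n)
A_eq = np.concatenate([np.ones(m), [0.0]])[None, :]  # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]            # x >= 0, v free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[m]
print("max-min strategy:", x, "guaranteed value:", v)
# for this matrix: x ~ [0.4, 0.6], value ~ 0.2
```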
In a wartime situation, the motive is to do a specific kind of damage -- to destroy the enemy's ability to project power -- and the issues raised by that kind of objective are pretty well understood. See "war of attrition."
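For reference, the textbook symmetric war of attrition (standard material I am adding, not part of the original comment): two players contest a prize worth $V$ while persisting costs each of them $c$ per unit of time. In the symmetric mixed equilibrium each player concedes at the constant hazard rate

$$\lambda = \frac{c}{V},$$

found by equating the marginal cost of holding on one more instant, $c\,dt$, with the marginal gain, $\lambda V\,dt$. The fight dissipates the whole prize, so expected equilibrium payoffs are zero.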
Your case belongs to non-self-regarding preferences. Usually economic agents value only their own payoff (utility) and do not care about others'. If they care positively, they are partly altruistic; if they care negatively, that is your case.
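A standard way to formalize this (the linear specification below is a common textbook form, added here as an illustration) is to give player $i$ the utility

$$u_i = \pi_i + \alpha\,\pi_j,$$

where $\pi_i$ and $\pi_j$ are the players' material payoffs: $\alpha = 0$ is the usual self-regarding agent, $\alpha > 0$ a partly altruistic one, and $\alpha < 0$ the spiteful, destructive case asked about here.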
In one of my papers I consider this type of game and its applications. Please go to Section 4 of this article; you do not need to read the earlier sections to understand it.
Just to add to what Michael said about Cold War games: Thomas Saaty ( https://en.wikipedia.org/wiki/Thomas_L._Saaty ) wrote a book, "Mathematical Models of Arms Control and Disarmament" (1968, Wiley), in which he applied game-theoretic analysis to the mutual threat of nuclear weapons. In those games all parties lose if the game is actually played (so it is a destructive game). To prevent it from being played, however, each side had to keep weapons above some minimal threshold so that a nuclear war could not start: a rational equilibrium exists only if the loss from retaliation is unacceptable to the initiator of the war. He also discussed disarmament and the conditions under which this equilibrium survives it. If after disarmament one side has insufficient weapons for retaliation, the situation is again out of equilibrium, because that side might have an incentive for a first strike (a choice of bad versus very bad). There may also be asymmetric information about which losses are unacceptable. I guess this study is still relevant today.
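A toy rendering of that deterrence logic in code (my own simplified numbers and names, not Saaty's actual model):

```python
# Deterrence toy: a first strike yields a gain but triggers retaliation,
# provided the victim still has enough weapons to retaliate.
# All names and numbers are illustrative assumptions.

def first_strike_is_deterred(gain_from_strike, loss_from_retaliation,
                             victim_arsenal, retaliation_threshold):
    """Deterrence holds iff the victim can retaliate and the loss from
    retaliation outweighs the initiator's gain from striking first."""
    can_retaliate = victim_arsenal >= retaliation_threshold
    return can_retaliate and loss_from_retaliation > gain_from_strike

# Mutual deterrence: both sides keep arsenals above the threshold.
print(first_strike_is_deterred(gain_from_strike=10, loss_from_retaliation=100,
                               victim_arsenal=50, retaliation_threshold=20))  # True
# Over-disarmament: the victim can no longer retaliate, so deterrence fails.
print(first_strike_is_deterred(gain_from_strike=10, loss_from_retaliation=100,
                               victim_arsenal=5, retaliation_threshold=20))   # False
```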