Soon after its release, some players of the online first-person shooter Modern Warfare 2 discovered what became known as “the javelin glitch.” Someone, somewhere, somehow figured out that through a bizarre sequence of button presses you could glitch the game so that when you died in multiplayer you would self-destruct and kill everyone within 30 feet, often resulting in a net gain in points. It wasn’t long, though, before the method for creating this glitch spread through the Internet and servers were filled with exploding nincompoops. In fact, it quickly got bad enough that developer Infinity Ward had to rush out a patch to fix it.
The javelin glitch presented players in the know with an interesting dilemma: they could either abuse the glitch to boost their own rankings and unlock new perks, or they could abstain and preserve the game’s fair play. Of course, the problem is that if they abstain, someone else may abuse the glitch and dominate the match. The middle ground is when everyone glitches, but the resulting pandemonium isn’t as much fun as fair play for most normal people.
Let’s simplify the discussion by assuming a two-player deathmatch game in Modern Warfare 2. Look, I’ve created a table to summarize the dilemma for you! It’s suitable for framing.

| | They abstain | They glitch |
| --- | --- | --- |
| **You abstain** | Fair, fun match for both | They dominate you |
| **You glitch** | You dominate them | Explosion-filled pandemonium |
So what do you do? Psychologists and economists who study this kind of decision-making call it a “social dilemma.” In these situations, intentional griefing notwithstanding, each person has what’s called a “dominating” alternative, one that yields a better (or at least no worse) payoff no matter what the other person chooses (in this example, abusing the glitch), but most people REALLY want the “nondominating” outcome produced when everyone chooses to abstain from it. Especially once the novelty factor wears off.
Back in the 1960s, research on these kinds of dilemmas exploded, much of it centered on what’s known as “the prisoner’s dilemma,” based on an anecdote about getting confessions from two prisoners held under suspicion for a bank robbery. In his book Rational Choice in an Uncertain World, ((1. Dawes, R. (1988). Rational Choice in an Uncertain World. Fort Worth: Harcourt Brace Publishers.)) Robyn Dawes summarizes the classic scenario thusly:
Two men rob a bank. They are apprehended, but in order to obtain a conviction the district attorney needs confessions. He succeeds by proposing to each robber separately that if he confesses and his accomplice does not, he will go free and his accomplice will be sent to jail for ten years; if both confess, both will be sent to jail for five years, and if neither confesses, both will be sent to jail for one year on charges of carrying a concealed weapon. Further, the district attorney informs each man that he is proposing the same deal to his accomplice.
Here are those choices in table form:

| | Prisoner B stays quiet | Prisoner B confesses |
| --- | --- | --- |
| **Prisoner A stays quiet** | A: 1 year, B: 1 year | A: 10 years, B: goes free |
| **Prisoner A confesses** | A: goes free, B: 10 years | A: 5 years, B: 5 years |
In this case, both prisoners will probably confess if they’re rational about it. Why? Because each prisoner gets a better (or no worse) payoff by confessing no matter what the other guy does. Prisoner A thinks, “I don’t know what B is going to do, so if I confess it’s the best way to keep myself from getting screwed. If he keeps quiet, I go free. If he also confesses, I get 5 years instead of 10.” In other words, confessing is the only way to keep the other guy from being able to screw you over. Notice how this mirrors the javelin glitch dilemma, only with fewer explosions.
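If you want to see the “dominating alternative” logic made concrete, here’s a quick sketch in Python. The jail terms are just the numbers from the district attorney’s offer above; the helper function name is my own invention, not anything from the research literature:

```python
# Payoffs as years in jail for (A's choice, B's choice); lower is better.
# Values from the DA's offer: go free = 0, concealed-weapon charge = 1,
# both confess = 5, stay quiet while the other confesses = 10.
YEARS = {
    ("quiet", "quiet"):     (1, 1),
    ("quiet", "confess"):   (10, 0),
    ("confess", "quiet"):   (0, 10),
    ("confess", "confess"): (5, 5),
}

def a_best_response(b_choice):
    """Return A's choice that minimizes A's own jail time, given B's choice."""
    return min(["quiet", "confess"], key=lambda a: YEARS[(a, b_choice)][0])

# Whatever B does, A's best response is the same: confessing dominates.
for b in ["quiet", "confess"]:
    print("If B chooses", b, "-> A should", a_best_response(b))
```

By symmetry the same reasoning holds for Prisoner B, which is why two rational prisoners end up with five years each instead of one.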
Now let’s take another example from the golden years of PC gaming. In the early days of Starcraft, a strategy called “Zerg rushing” emerged where at the beginning of the match players would quickly build lots of cheap Zerg units to overwhelm opponents before defenses could be constructed. Counter-strategies developed, ((as well as a game-balancing patch or two, I believe)) but for a good chunk of the player base Starcraft became a game of seeing who could Zerg rush faster, which wasn’t nearly as much fun as choosing from any number of other play styles or even races. So the dilemma was:

| | They play normally | They Zerg rush |
| --- | --- | --- |
| **You play normally** | Varied, fun match for both | You get overwhelmed and lose |
| **You Zerg rush** | You overwhelm them and win | A dull race to rush fastest |
Again, the dominating strategy was to Zerg rush, because if you didn’t and the other guy did, you lost, which was worse than any of the alternatives. This despite the fact that what you really both want is a varied, fun game. It’s a design issue that still plagues strategy game developers today.
Prisoner’s dilemmas, and social dilemmas in general, can similarly be used to illustrate the reasons for “ninja looting” in World of Warcraft, where one player exploits the “need/greed” loot distribution system by rolling Need on a piece of equipment he doesn’t actually need, snatching it from the players who roll honestly.
Or you could apply it to “tick throwing” and “fireball trapping” techniques in fighting games. I could go on, but I think you get the idea. My 2×2 table-making machine burnt out, anyway.
What’s really more interesting and useful, though, is to look at what psychology has to show us about when people DON’T choose the purely rational option of abusing a glitch or a winning but boring strategy. Generally, people are more likely to do this when:
- They know they will be playing against their opponents in the future and face retribution
- They expect to interact with their opponents outside the game
- They don’t expect to remain anonymous
- They don’t know how many games will be played with the same person
Under these conditions, many players will adopt a strategy where they cooperate at first (for example, they don’t glitch or rush), then if the other player abuses that trust they retaliate in kind. This is known as the “tit for tat” strategy. Some researchers with lots of time on their hands even organized tournaments where people were invited to write computer programs to play iterated prisoner’s dilemma games, and the programs that adhered to the “tit for tat” strategy tended to do the best.
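A tournament entry like that can be sketched in just a few lines of Python. The point values here are the conventional ones used in that research (3 each for mutual cooperation, 1 each for mutual defection, 5 for a lone defector, 0 for a lone cooperator); they’re an assumption for illustration, not numbers from any of the games above, and the function names are mine:

```python
# Points per round for (A's move, B's move); higher is better.
# C = cooperate (play fair), D = defect (glitch/rush).
POINTS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """The habitual glitcher: defect every round."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated prisoner's dilemma and return both totals."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pts_a, pts_b = POINTS[(move_a, move_b)]
        score_a, score_b = score_a + pts_a, score_b + pts_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two tit-for-tat players settle into steady cooperation...
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
# ...while tit for tat only gets suckered once by a habitual defector.
print(play(tit_for_tat, always_defect))  # (9, 14)
```

The defector wins any single pairing, but notice that mutual cooperation earns far more total points over ten rounds, which is why tit-for-tat programs came out ahead across a whole tournament.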
This is why things like playing with people on your friends list, Steam community group, guild/clan, or a favorite dedicated server are good. And it’s one reason why random matches between strangers or pickup groups can be infuriating. Making it easy to submit ratings to the profiles of people you just played also helps resolve these dilemmas to everyone’s benefit. It’s also the reason that I love the way that Halo 3 lets you remain in a lobby with the people you just played and go straight into another round with them. ((Ringing a bell? You may be thinking about my article on how deindividuation fosters antisocial behavior and how to similarly deal with that))
People being the complicated beings they are, it’s not a perfect system, though. Some people are just griefers out to disrupt the game no matter what. Some people won’t abuse a glitch out of a sense of honor. Some will value their ranking on a leaderboard more than a sense of fair play for any individual match. But even if none of the suggestions above is a silver bullet, they help across large numbers of games.