Okay folks, I’m going to nerd out a bit but bear with me. There was this show that my wife used to like watching called Star Trek: The Next Generation. In one episode Captain Picard is being held captive by the Evil Alien of the Week. Said Evil Alien twirls his space mustache, gestures to a bank of four lights, and asks Picard how many lights he sees. When Picard says “Four” Evil Alien is all like “No way, dude, there are FIVE lights,” but Picard is like “F you, buddy. There are only four lights.” Also there are painful electric shocks involved, but Picard refuses to see five lights.
Turns out that most of us are no Jean-Luc Picard ((Thank God, because that guy is SUCH a nerd.)) because we’re apt to disbelieve evidence obvious to our own eyes when the conditions are right. And we don’t even need a big scary alien dude looming over us; all we need are a few strangers in the room with us saying that they totally see five lights.
In the 1950s psychologist Solomon Asch conducted a series of experiments ((Asch, S. E. (1956). Studies of independence and conformity: A minority of one against a unanimous majority. Psychological Monographs, 70.)) where he gave members of a group an index card with a line drawn on it. Asch then projected a set of three different lines onto a screen and asked subjects to identify which one matched the one on their cards. All three lines on the screen were different lengths, so it was a task so simple that anyone with two eyes and a brain behind them could get it right every time. Heck, in a pinch one eye would do. It looked kind of like this:
And so subjects performed admirably for the first three rounds or so. But eventually one or two subjects in the group started immediately giving answers that were obviously wrong, like saying Line A was the longest when it was clearly the shortest. Very quickly, more and more subjects started repeating the obvious mistake, saying things that would clearly look wrong to any starship Captain.
WTF? What was going on? Well, what was going on was that only one of the subjects in the experiments was actually a subject. The rest were actors in the employ of the experimenter ((What we call “confederates” in the biz)) and were purposely jumping in with obviously wrong answers just to see what the real subject would do. Turns out that three quarters of the subjects in these experiments let their choice be influenced by the others, even when it should have been obvious that this was bananas. What’s more, in post-experiment debriefing interviews, subjects rationalized their choices by saying that their initial observations must have been wrong if everyone else was saying the opposite. They weren’t just PRETENDING to see things differently, they REALLY DID.
Turns out that when the tasks become more difficult or have less clearly defined “correct” answers, the phenomenon becomes even more acute. Asch did some follow-up studies where he asked subjects questions about politics (such as what were the most critical political issues of the day) and found that he could influence people’s answers by inserting confederates into the group who asserted certain answers. Other studies have shown that bartenders or baristas can get you to tip more if they prime their tip jars with their own cash, simply because it makes you think that everyone else is tipping generously. ((Cialdini, R. (2009). Influence: Science and Practice. Boston, MA: Pearson.)) These studies tie in with a lot of other things we know about human nature when it comes to conformity, submission to authority, and peer pressure. We’re often very willing to look to our peers –or even complete strangers– to define reality for us.
So what does this have to do with video games? Glad you asked. I’m sure you’ve noticed that you can’t shop on many online stores these days without being shown the ratings given to each product by other shoppers. Go shop for a new release on Amazon.com or GameStop.com and you’ll see user ratings quite prominently. Most websites that feature game reviews also have user reviews alongside their “official” ones, and file download sites like FilePlanet.com list not only download counts, but star ratings as well. See where I’m going with this? Well, keep reading anyway.
In their book Nudge: Improving Decisions About Health, Wealth, and Happiness ((Thaler, R. & Sunstein, C. (2008). Nudge: Improving Decisions About Health, Wealth, and Happiness. New York, NY: Penguin Books.)) authors Richard Thaler and Cass Sunstein describe a study by sociologist Matthew Salganik and his colleagues at Princeton ((Salganik, M., Dodds, P., & Watts, D. (2006). Experimental study of inequality and unpredictability in an artificial cultural market. Science, 311, 854-856.)) where the researchers had over 14,000 people visit a faux music download site and browse through music by previously unknown bands before deciding which songs to download. Half the subjects were asked to pick songs based just on song name, band name, and a sample. ((This would be the “control group” that your Psychology 101 professor is always talking about.)) The other half had all that info, but could also see how many times the song had been downloaded. Psychologists are always pulling crap like this.
What do you think happened based on what I’ve written so far above? Well, turns out that subjects exposed to the download counts were WAY more likely to download songs that they thought others had downloaded lots, and were WAY LESS likely to try music that they thought nobody else was choosing. The quality of the song still mattered, but so did how often subjects thought the song had been downloaded by their peers. Songs that did so-so in the control group were turned into smash hits among those in the experimental group simply by displaying their download counts.
Now, I’m not accusing Amazon.com of inflating its ratings to sell more books ((Though others do in fact accuse them of exactly that)). And one could argue that in the absence of such malfeasance, download counts and star ratings are real, useful pieces of information that shed some light on the true quality of a product. But nonetheless this is something to be aware of, especially with new files/games/books that haven’t yet amassed ratings or download counts. It’s also worth noting that advertisers can indirectly purchase this kind of influence by buying front-page placement or using ads to drive consumers to that content, thus increasing its popularity –or at least the number of times it was bought or downloaded. And it can work in reverse. Remember a while back when the backlash against Spore’s digital rights management measures caused a bunch of people to flood Amazon with one-star ratings? As of this writing it still has barely one star out of five. The point to remember is that what you see other people doing shouldn’t unduly influence your own actions.
That point made, though, it’s interesting to think about how game designers could use this kind of bias for the player’s benefit –at least potentially. I’m certainly not advocating that they inflate star ratings or player counts, but less sacrosanct data could be used to nudge players in certain directions that they might enjoy. For example, what if in a few months’ time you were sitting down to play through some more of the single-player campaign for Halo Reach when at the main menu there appears the message “Nine people on your friends list have tried the Halo Reach multiplayer modes within the last week. Select ‘Multiplayer’ from the main menu to join them.” Or maybe “1,943 people checked out the leaderboards in the last 5 minutes; press ‘Y’ to do the same.”
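For the curious, here’s a minimal sketch of how such a nudge might be generated. Everything here is made up for illustration: `FriendActivity`, `nudge_message`, and the data are hypothetical names, not any real Xbox Live or Steam API, which would supply this activity data through their own services.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class FriendActivity:
    """One friend's most recent session in a given game mode (hypothetical)."""
    gamertag: str
    mode: str              # e.g. "multiplayer", "campaign"
    last_played: datetime

def nudge_message(activities, mode, window_days=7, now=None):
    """Build a social-proof prompt, or return None if there's nothing to show."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=window_days)
    # Count friends who played this mode within the window -- the "everyone
    # else is doing it" signal that the Asch and Salganik studies exploit.
    recent = [a for a in activities
              if a.mode == mode and a.last_played >= cutoff]
    if not recent:
        return None  # no social proof available; stay quiet rather than lie
    return (f"{len(recent)} people on your friends list have tried the "
            f"{mode} modes within the last week. "
            f"Select '{mode.capitalize()}' from the main menu to join them.")
```

The design point is in the `return None` branch: the honest version of this nudge only shows real counts, rather than inflating them the way Asch’s confederates did.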
I know that the administrators of technologies like Steam, Xbox Live, and GameSpy Technology are awash in data like this and to my uneducated monkey brain it seems like it should be relatively easy to do this kind of stuff on the fly with real data. So somebody go do that and get back to me. In the meantime, I’m gonna go out and start telling strangers that it looks like rain, even though there’s not a cloud in the sky. You know, just to see what they do.