Why Video Game Research is Flawed
Studies that spread the idea that video games are harmful to children are conducted by researchers whose knowledge about video games is embarrassingly poor.
WHAT DO 23 MARTIAL-ARTS FIGHTERS have in common with a talking Australian marsupial? According to one team of video game researchers, they’re identical.
Last year, the journal Aggressive Behavior published a study by a group of Dutch psychologists examining gaming and violence in children. As in most video game research, a lack of fundamental video game knowledge led to a study no gamer would consider credible.
Other psychologists, social scientists, court judges and game industry spokespeople have voiced disapproval of research that purportedly links video games with violence or aggression. Among many criticisms, they say results found in labs are not related to what occurs in the real world, measures of violence and aggression are inconsistent and unreliable, researchers ignore contradictory evidence, and journals are more likely to publish articles that cast gaming in a bad light.
There is also a much more basic criticism of video game research that needs to be made. The researchers, quite simply, don’t understand video games.
In a common video game research setup, the Dutch psychologists divided their participants into groups: some would play a violent game, others a non-violent one, and aggression levels were measured afterward. (The Dutch study also included a group that only watched the “violent” game being played.) The psychologists chose the PlayStation games Tekken 3 and Crash Bandicoot 2 as their contrasting games, assuring readers that the former “differed only on violent content” from the latter.
Anyone suggesting that the presence of violence is the only difference between Tekken 3 and Crash Bandicoot 2 cannot be taken seriously. Tekken 3 is a one-on-one fighting game in which the player controls one of 23 combatants (most human, some cyborgs and humanoid creatures, plus one trained bear) in a mixed martial arts tournament. The character models are as lifelike as 1998’s polygonal graphics technology would allow, but many of their fighting techniques are unrealistic. One fighter can teleport, for example.
Crash Bandicoot 2, on the other hand, is an action/adventure game with 3D, cartoon-style graphics. Players control Crash, an anthropomorphic bandicoot, who runs and leaps through Australia-inspired environments, dodging traps and defeating humanoid animal enemies by jumping on them, spinning into them or body slamming them.
Tekken 3 involves playing a series of timed fights, whereas Crash Bandicoot 2 lets players explore environments and progress through levels. The gameplay is different. The graphic styles are not the same. The characters are mostly human in one, mostly animals in the other. The control schemes don’t match. Plus, Crash Bandicoot 2 isn’t even non-violent: players kill Crash’s enemies by attacking them.
Saying the games differ “only on violent content” is false, but the assertion is typical of the kinds of mistakes researchers make when they’re studying video games. Researchers often pair up completely unrelated games and treat them as equivalents. One experiment contrasts sci-fi first-person horror game Doom 3 with falling-bricks puzzle game Tetris. Another pairs dark and suspenseful stealth game Manhunt with a colourful, fast-paced game based on the Animaniacs cartoon. Modern blockbuster titles with lifelike graphics and complex gameplay are compared with shareware versions of Pac-Man. You get the feeling that if video game researchers studied fruit, they’d see no difference between an apple and an orange.
Most researchers assume that video games are completely interchangeable with one another, a concept any gamer would find as ludicrous as the idea that all books are the same or all movies are basically identical. One study by two American media researchers acknowledged this limitation. In an article published in the Journal of Communication in 2007, James Ivory and Sriram Kalyanaraman carefully chose to contrast violent and non-violent games with very similar gameplay styles and presentations. Probably not coincidentally, their study found no significant differences in aggression levels between the players of the different games.
Many of the errors that researchers make would be obvious to video game players. One study using gore-filled fighting game Mortal Kombat: Deadly Alliance measured aggressive acts within the game only when players used weapons, but not when they used their fighters’ hands and feet. Mortal Kombat fans would point out that doing the most damage in the game has nothing to do with whether weapons are used. Another study found players of zombie-shooting game House of the Dead 2 were faster at identifying angry faces than players of a kayaking game. The study’s authors considered this evidence that violent games produce aggressive thinking. Gamers would point out that House of the Dead 2 is a reflex-oriented shooting game: success specifically relies on being able to quickly identify angry faces. Surely, that would have affected the study’s results.
While individual experiments suffer from many design flaws like these, the most common problems come from how researchers define which games are violent or aggressive. People familiar with gaming know there is more than one kind of violence. Researchers, though, are shockingly imprecise. They don’t distinguish between violence done to innocents and violence done to “bad guys,” to humans or animals, to living beings or machines, to monsters or supernatural creatures, to vehicles or property, to soldiers or terrorists, or any of the many other contexts that affect how players perceive violence. We’d never lump movies together that way. We all know there’s a difference between the violence in Saving Private Ryan, The Dark Knight and Kung Fu Panda.
The same differences exist in video games. The Mortal Kombat series is notoriously gory, but gamers see the blood and guts as ridiculous, not realistic. The Super Mario games are considered non-violent, but gaming icon Mario actually kills hundreds of creatures by crushing them with his body weight, hurling fireballs at them or feeding them bombs. Second World War games like Call of Duty are violent shooters, but players re-enact real historical battles in them and nobly liberate the virtual world from Axis forces. All are violent, but in radically diverse ways.
Researchers also don’t say whether a game’s level of violence reflects the quantity of violent acts or their brutality. Does a single violent act make a game violent? Is a game with no blood less violent than a game that shows blood, even if the actions are the same? Does a game still count as violent if the player witnesses violence rather than acting it out?
In some studies, researchers use ratings to evaluate how much violence a person experiences in his or her regular game-playing. The results look scientific and mathematical, but they’re useless. For example, two German psychologists, Ingrid Möller and Barbara Krahé, used ratings from six university students to assign subjective numerical values to games. A rating of five represented a vague “high level of violent content.” Adolescents listed the games they played, and the psychologists then classified each child according to how high the total violence ratings of his or her games were.
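To see just how blunt an instrument this is, here is a rough sketch of the kind of scoring procedure the study describes; the game titles and rating values below are invented for illustration, not taken from the study itself:

```python
# A rough sketch of the exposure-score procedure described above.
# All game titles and rating values here are hypothetical.

# Step 1: student raters assign each game a one-to-five "violence" value,
# averaged into a single rating per title.
violence_ratings = {
    "Fantasy Quest": 2.5,    # hypothetical title and rating
    "War Shooter": 4.75,     # hypothetical title and rating
    "Puzzle Blocks": 1.0,    # hypothetical title and rating
}

# Step 2: each adolescent lists the games he or she plays, and those
# games' ratings are totalled into a single "violence exposure" score.
def exposure_score(games_played):
    return sum(violence_ratings.get(game, 0.0) for game in games_played)

# Step 3: children are classified by how high the total is, even though
# the scale never says what a one-point difference in violence means.
print(exposure_score(["Fantasy Quest", "War Shooter"]))  # 7.25
```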
The study, and others that use a similar system, has two main flaws. Without a precise definition of violence, it’s hard to determine why certain games were rated higher than others, and many of the ratings make little sense. The fantasy strategy game WarCraft’s low rating of 2.67 is strange given that it’s a war simulator, while the Second World War game Medal of Honor received a 4.75. Were the differences due to WarCraft’s fantasy setting (humans fighting orcs), Medal of Honor’s use of guns, or the fact that WarCraft’s bird’s-eye view distances players from the killing whereas Medal of Honor uses a first-person perspective? It’s impossible to tell, which calls the accuracy of the ratings into question.
The other flaw is that assigning numbers to the games doesn’t give us any useful information. What is the difference between a game with a 2.00 rating and a 4.00 rating? Is the latter twice as violent? If so, what does “twice as violent” mean? More weapons? More instances of violence? More blood? When measuring distance, we can tell the difference between 2.00 cm and 4.00 cm because we know precisely what the numbers represent. Rating games with numerical values gives the illusion of precision without really meaning anything.
Some researchers point out that the results of their experiments are limited to the specific game or genre they used. Most don’t. They generalize their results as if they applied to all “violent video games,” a leap that’s all the more ridiculous since they never define what a “violent video game” is. Many also pick questionable games for their research, choosing titles with extreme levels of violence that were never particularly popular with gamers and contrasting them with amateurish, low-quality free games that no one has ever heard of. And then these are supposed to represent all video games.

These leaps of illogic make reading video game research like peering into a parallel universe, where everything may seem internally consistent but nothing matches up with the real world. Researchers cite flawed studies as proven fact, base their assumptions on those mistakes and carry the same errors into their own experiments. Their research is then uncritically reported by the media, used as evidence by politicians and lawyers to restrict game sales, and scares panic-prone parents across the world. A growing number of academics more familiar with gaming, such as Andrew Przybylski and Christopher Ferguson, are calling for the research to improve. If it does, it will be a change misrepresented gamers have deserved for a long time.