Tuesday, October 16, 2012

Content for Later

I see people stumble on this page occasionally, so I want to try to put at least a little something on here every so often.

I came across this awesome video on Penny Arcade TV that I wanted to discuss in the future.  I'm saving my full thoughts on it for later when I'm not defending my dissertation proposal and programming experiments, but I'm excited to share.

http://penny-arcade.com/patv/episode/beyond-fun

The long and short of it is that video games are being bogged down by this idea that they have to be "fun," when really we need to be thinking about how they can be engaging.  Fun is part of that, but it isn't a necessary component of a game.

I couldn't agree more, but at the same time question the whole "treating video games like other art forms" premise.  Yes, video games can vary in genres like books and movies, but they also require a degree of interaction you don't see in those media.  Video games are kind of their own monster that doesn't follow a lot of the rules and conventions you can get away with in a purely aesthetic entertainment experience.

Anyway, food for thought.

Friday, September 28, 2012

Going off the Deep End

I need to get my writing mojo going, so I thought I'd pop in here for a long-delayed spell.

I'm teaching introductory psychology and working on my prelims (dissertation proposal), so I honestly shouldn't even be wasting time on this, but I need to get my fingers moving and some ideas flowing.  So here's one I've been chewing on for a while (even though I know I have a cliffhanger on my decision-making post).

Video game design faces a fascinating problem that you simply don't see in human-computer interaction or human factors: the issue of challenge.  (Perhaps outside the realm of error prevention, where you want to make screwing something up difficult to do or easy to undo.  But even then, it's a much more discrete and categorical issue.)

Normally, good design in software and technology means that the user doesn't have to think.  A good design anticipates the user's needs and expectations and hands them what they want on a silver platter.  Unfortunately, that's almost exactly the opposite of what the core gaming audience wants.  (A metaphor I'll admit was inspired by this rather apt comic floating around the web:)

I did my damnedest to find out where this came from to give credit where credit's due, but couldn't find a source.  If you know who made it, leave a note in the comments.

There's one small change I would make to the cartoon, though.  Gamers - hardcore gamers especially - demand that obstacles be thrown in their way, but not to keep them from the fun.  When a game is well designed, fighting it IS the fun part.  Back in the NES days, we didn't complete games - we beat them.  The game was a malevolent entity bent on your destruction, and you were a champion determined to destroy it.  And that's how we liked it, dammit.

A greater distinction between the hardcore and casual markets, I believe, is the time investment required to appreciate the core experience of what makes the game fun.  Angry Birds is a casual game because it takes very little time to appreciate the fun in chucking stuff at other stuff to make it fall down.  Whether you're good or bad at that is another issue entirely.  Compare that to Skyrim, where simply learning all the options available to you for creating your character is at minimum a half-hour investment in and of itself.  You can play either game in short bursts or in multiple-hour marathon runs, and even log the same number of hours on both; but the initial cost of entry is dramatically different.

As a side note to my side note, I thought about how games like Super Meat Boy (which Steam brazenly calls "casual") and Smash Bros. fit into this scheme, as they're easy-to-learn but difficult-to-master games.  Like chess, there's a low cost of admission, but there are depths to them that most people will never see.  You can jump into those sorts of games and have fun immediately, but there is still a question of how fully you've experienced the game after a limited time.  But that's another discussion for another day.

Anyway, I digress.  The issue I wanted to talk about here is that applying human-computer interaction principles to video games is tricky.  On the one hand, following the principles of good design is necessary for a comfortable and intuitive experience.  Yet it's possible to over-design a game to the point of stripping it of exactly what makes the experience fun.

I'm going to use two super simplified examples to illustrate my point.

One obvious place you want to remove obstacles for the player is in control input.  Your avatar should do what you want it to, when you want it to, and the way you expect it to.  Many an NES controller has been flung across the room over games where a game avatar's reflexes (at the very least, seemingly) lagged behind the player's.  (Or maybe that's just a lie we all tell ourselves.)  A slightly more recent example is the original Tomb Raider franchise (back when Lara Croft's boobs were unrealistically proportioned pyramids).  Lara had a bit of inertia to her movement, never quite starting, stopping, or jumping immediately when you hit a button, but rather with a reliable delay.  You learned to compensate, but that kind of control limits the sort of action you can introduce to the environment, as it's impossible to react to anything sudden.  Bad control is not only frustrating, it limits the experiences your game can provide to players.

Underlying this principle is the fact that humans generally like feedback.  We like to know our actions have had some effect on the system we're interacting with, and we want to know that immediately.  When your avatar isn't responding to your input in real time, or you don't know if you're getting damaged or doing damage, or you're otherwise unsure of what your actions are accomplishing, that's just poor feedback.  This is a basic, foundational concept in HCI and human factors, and it has a huge impact on how well a game plays.  Just look at Superman 64 or the classic Atari E.T. that's currently filling a patch of Earth in the New Mexico desert.  People complained endlessly about how unresponsive the controls felt and how the games simply did not properly convey what you were supposed to do or how to do it.

Ironically, the game appeared to be all about getting out of holes in the ground (though no one is entirely sure - we think that's what's going on).
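To bring it back to the feedback point: here's a minimal sketch of what "immediate feedback" looks like at the code level - a toy game-loop step that acknowledges a button press on the very frame it arrives, even though the actual consequence (the hit landing) resolves a few frames later.  The frame counts and function names are made up for illustration; this isn't any particular engine's loop.

def process_frame(frame, button_pressed, pending_attacks):
    """Toy game-loop step: acknowledge input instantly, resolve consequences later."""
    events = []
    if button_pressed:
        # Immediate, same-frame feedback: the player sees and hears something right away,
        # even though the hit itself won't resolve for a few frames (animation wind-up).
        events.append(f"frame {frame}: play swing sound + start attack animation")
        pending_attacks.append(frame + 3)  # the damage lands 3 frames from now
    if pending_attacks and pending_attacks[0] == frame:
        pending_attacks.pop(0)
        events.append(f"frame {frame}: apply damage + flash enemy red")  # feedback on the outcome, too
    return events

if __name__ == "__main__":
    pending = []
    for frame in range(8):
        pressed = (frame == 2)  # the player presses attack on frame 2
        for event in process_frame(frame, pressed, pending):
            print(event)

The wind-up delay isn't the problem - plenty of great games have deliberate wind-ups.  The problem is when nothing at all happens on the frame you press the button.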

The trickiness comes in when trying to distinguish what's a frustrating obstacle to actually enjoying the game and what's an obstacle that makes the game enjoyable.  You want to present the core experience of the game to the player and maximize their access to it, but at the same time manufacture a sense of accomplishment.  It's an incredibly difficult balance to strike, as good games ("hardcore" games especially) stand on the precipice of "frustrating."  You want to remove obstacles to what makes the game an enjoyable experience, but you also risk removing the obstacles that create the enjoyable experience.

I believe no one is more guilty of this right now than Blizzard.  It's been immensely successful for them, tapping into the human susceptibility to variable ratio reinforcement schedules, but the core gaming crowd doesn't talk about Blizzard's games these days with affection.

Blizzard has successfully extended a bridge from what were hardcore gaming franchises into Casual-land.  Pretty much anyone can pick up Diablo III or World of Warcraft and experience the majority of what makes the game fun: killing stuff for loot.  But if you listen to the talk about these games, you find they're regarded as soulless and empty experiences.  So what went wrong?

Blizzard's current-gen games follow a core principle of design, which is to find the core functional requirements of your product and design everything around gently guiding your users toward them without requiring effort or thought.  The one thing that keeps people coming back to Blizzard games is their behavioral addiction to the variable-ratio reinforcement of loot drops for killing things.
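To make the "variable ratio" part concrete, here's a minimal sketch (with made-up drop rates - nothing here is Blizzard's actual loot math) comparing a fixed-ratio reward schedule to a variable-ratio one.  The long-run payout is the same; the difference is that you can never predict which kill pays off, and that unpredictability is exactly what keeps you pulling the lever.

import random

random.seed(42)  # reproducible demo

def fixed_ratio_drops(kills, every_n=10):
    """Reward on exactly every Nth kill (fixed-ratio schedule)."""
    return [1 if (k + 1) % every_n == 0 else 0 for k in range(kills)]

def variable_ratio_drops(kills, p=0.1):
    """Reward each kill independently with probability p (variable-ratio schedule).
    Same average rate as the fixed schedule above, but unpredictable."""
    return [1 if random.random() < p else 0 for k in range(kills)]

if __name__ == "__main__":
    kills = 100
    fixed = fixed_ratio_drops(kills)
    variable = variable_ratio_drops(kills)
    print("fixed-ratio drops:   ", sum(fixed), "in", kills, "kills")
    print("variable-ratio drops:", sum(variable), "in", kills, "kills")
    # The unpredictable gaps are the hook: a fixed schedule teaches you when to stop,
    # a variable one never does.
    gaps, last = [], -1
    for i, hit in enumerate(variable):
        if hit:
            gaps.append(i - last)
            last = i
    print("gaps between variable-ratio drops:", gaps)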

Blizzard wants you to keep coming back, and they clearly took steps to optimize that experience and minimize obstacles to accessing it.  Your inventory in Diablo III is bigger than ever, and returning to town incurs absolutely no cost - two things that interrupted the flow of action in previous incarnations of the franchise.  Having to use resources to return to town just to make room for more stuff is an incredibly tedious chore that just keeps you from doing what you'd rather be doing in the game: killing demons in satisfyingly gory ways.  Hell, one of my favorite features of Torchlight, Runic Games' Diablo clone, is that you can even have your pet run back to town to do all those tedious housekeeping duties that normally pull you out of the demon-slaying action.

Torchlight II even lets your pet fetch potions for you from town.

If you look at these games, everything is geared toward keeping you killing monsters for loot, much like a casino is designed to keep players at the slots.  Sure, there are a lot of accessories and side attractions, but they're all part of or related to getting you into their Skinner box.  The ultimate side effect is that it makes the game feel like a one-trick pony.  You practically habituate to the killing, and soon what used to be fun simply becomes tedious.  And yet the periodic squirts of dopamine you get from blowing up a crowd of monsters or picking up a legendary piece of equipment keep you doing it past the point where you really feel like you're enjoying yourself.

Granted, this was made before any major updates.

Like their behaviorist heroes, the Blizzard (Activision?) business team doesn't seem to care about your internal experience; they just care about your overt behavior: buying and playing their game.  I personally don't think this is a sustainable approach for the video game industry as a whole; it would be akin to food manufacturers just putting crack cocaine in everything they make so that people keep buying their product.  Your addicted consumers will continue giving you their money, but they'll hate you for it and tear themselves up over it in the process.  And that's fine if you don't have a soul, but I think that's hardly what anyone wants to see.

Anyhow, the point is, these games are not creating the obstacles that the hardcore crowd expects.  A good game rewards you for mastering it with a sense of accomplishment, no matter whether it's hardcore or casual.  A major problem is that the hardcore crowd requires so much more challenge in order to feel the same sense of accomplishment.  Like junkies, they've desensitized themselves to (and perhaps developed the skills necessary to overcome) typical game challenges.  Just to complicate things more, it's not as simple a matter as making enemies deadlier - say, monsters that randomly turn invincible or become exponentially stronger after a time limit, as in Diablo III's Inferno mode.  Blizzard operationalized challenge as "how often your character dies," and they had to overhaul Inferno mode because everyone hated it - it was a cheap means of manufacturing an artificial challenge.

Oh, fuck you.

Challenge in video games is a hugely difficult problem.  A good challenge is one in which the solution is foreseeable but not obvious, difficult but attainable, and furthermore provides a sense of agency to the player.  I believe these are features all great games - that is, the ones you return to and replay for hundreds of hours instead of shelving immediately after completion or (gasp!) boredom - have in common.  When the player overcomes the challenge, a good game leaves you with the feeling that you did it by mastering some new skill or arriving at some insight you didn't have when you started - not because you grinded (is that correct syntax?) your way to some better stats or more powerful equipment.

Not a simple problem to solve.

One potential solution I believe already exists is flexibility (providing an optimal experience for multiple tiers of players), but even this is only a step toward the answer.  It traditionally took the form of adjustable difficulty levels, but that seems like a clunky approach.  A player may not know what tier of difficulty is best for them, and the means of modulating the difficulty can very easily feel cheap (like making monsters randomly turn invincible).  That's often where "rubber banding" (a sudden spike in difficulty) comes from - a developer introduces some new obstacle to crank up the difficulty without having any sense of scale or calibration for the player.  Another reason why aggressive user testing is necessary and important.
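For what it's worth, here's one toy way to picture calibrated difficulty versus the uncalibrated spikes I'm complaining about: a generic dynamic-difficulty loop that nudges an enemy-strength multiplier toward a target player success rate, with a hard cap on how fast it can change.  The target rate, step size, and bounds are numbers I pulled out of thin air for illustration - this is a sketch of the general idea, not any shipped game's system.

def adjust_difficulty(current, player_won, target_rate=0.7, step=0.05,
                      lo=0.5, hi=2.0, history=None):
    """Nudge a difficulty multiplier toward a target player success rate.

    current     - current difficulty multiplier applied to enemies
    player_won  - did the player clear the last encounter?
    target_rate - fraction of encounters we'd like the player to win
    step        - maximum change per encounter (prevents sudden spikes)
    lo, hi      - hard bounds on the multiplier
    """
    history = history if history is not None else []
    history.append(1 if player_won else 0)
    recent = history[-10:]                      # look at the last 10 encounters
    success = sum(recent) / len(recent)
    if success > target_rate:
        current = min(hi, current + step)       # player cruising: ramp up gently
    elif success < target_rate:
        current = max(lo, current - step)       # player struggling: back off gently
    return current, history

if __name__ == "__main__":
    difficulty, hist = 1.0, []
    outcomes = [True]*8 + [False]*3 + [True]*5  # a fake run of encounter results
    for won in outcomes:
        difficulty, hist = adjust_difficulty(difficulty, won, history=hist)
    print("final difficulty multiplier:", round(difficulty, 2))

The cap on the step size is the point: the player never runs face-first into a wall that appeared out of nowhere, which is precisely the sort of thing user testing is there to calibrate.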

I'm not gonna lie, I have some opinions on methods for overcoming this problem.  But like the Joker says, if you're good at something, never do it for free.

Anyway, I should get back to my prelims.

Thursday, August 16, 2012

That troublesome question "why"

Greetings from Sitka, Alaska! It’s been a busy two weeks for me, hence the delay in updating.  In that time, I wrapped up my internship with Motorola, drove back home from Florida to Illinois, then proceeded to fly off to Alaska to visit my wife.  I teased a conclusion to my last post, but this one just flowed out of me on the plane to Alaska, so tough nuggets if you were eagerly awaiting that other one.  I’ll post it when I get a chance to work on it some more.  For now, enjoy this (unintentionally general interest) post I wrote.



======================================

This one starts with a story.

I have a baby cousin-in-law who’s the sweetest little kid. Three years old, cute as a button, shock of golden hair.  If you ever visit her, after some coy hiding behind her mom’s leg, she’ll eventually show you her ballet moves and invite you to a tea party by holding up an empty plastic tea cup that you’ll have no choice but to appreciatively sip from.

Their family has a dog who’s the sweetest dog you’ll ever meet.  A chocolate lab, loyal to a fault, and loves to play.  Pet him on the head, rub his belly, and he’ll follow you to the ends of the earth.  My wife and I accidentally lost him for the most harrowing 10 hours of our lives, but that’s neither here nor there.

One day, my sweet little cousin was walking around the house with her sweet little dog.  Seemingly unprovoked, she slammed his tail in the door and broke it.  There was a lot of frantic yelping and screaming and running around; the dog went to the vet, and my cousin was banished to her room.  They’re both fine now, but this is all to set up the pivotal moment for the purposes of my post.  When her parents took her aside after the incident, they sternly demanded, “Why did you do that?!”  As the story goes, she looked her parents dead in the eye and simply replied:

“Because I can.”

Chilling, isn’t it?  Maybe even a little disturbing.  “Oh, my god,” you’re thinking, “That child is a psychopath!”  Maybe she is; it’s a bit early to tell.  But for the sake of argument, I will counter: quite the opposite!  In fact, I must say that she was, if nothing else, honest - and in a way that adults simply cannot be.  And by the end of this post, I hope to have convinced you of the same.

So why do I tell you this story?  Ironically enough, it’s to illustrate how difficult it is for humans to answer the question, “Why?”  Why did you do that?  Why do you feel that way?  Why do you want that?  The ease with which we usually come up with answers to that question - as encountered in our daily lives - belies the difficulty of getting a truly valid answer.

This has profound effects on human factors and psychology.  Every human instinct tells us that answering the question, “Why?” should be easy and straightforward, but psychology has definitively demonstrated that a lot of the time, we’re just making stuff up that has almost nothing to do with what originally drove us.  If you’re a human factors researcher trying to figure out how people feel about your product and how to fix it, that leaves you at a loss for solid data.  My goals for this post are two-fold: first, to clear up some misconceptions I find many people - even professionals in industry - have about humans’ access to their thoughts and emotions.  Second, I hope to completely undermine your faith in your own intuitions.  

Because I can.

I’ll start with emotion.  We like to believe we know how we feel or how we feel about something (and why), but try this little experiment (if you haven’t already had it inadvertently carried out on you by a significant other or friend).  Just have someone you know stop you at random points during your day and ask, “How do you feel?”  Your first instinct will be to shrug and say, “fine,” and the fact of the matter is that you really aren’t experiencing much past that.  You’ll find it’s actually pretty difficult to think about how you’re feeling at any given moment.  You start thinking about how you feel physiologically (i.e., am I tired? Hungry? Thirsty?), what’s happened to you recently, what’s happening around you now, and so on.  Then, only after considering these data do you have any more of an answer.

Don’t believe me?  Well, you shouldn’t - I just told you that you can’t trust introspection, and then proceeded to tell you to introspect on your behavior.  But there are some solid experimental findings that back me up here.

First is the study that formed the foundation of what is currently the consensus on how emotion works in psychology.  We often think we feel the feelings we do because they’re intrinsically triggered by certain thoughts or circumstances.  But it’s not nearly that straightforward.

In 1962, Stanley Schachter and Jerome Singer tested the theory that emotion is made up of two components: your physiological arousal (which determines the intensity of your emotion) and a cognitive interpretation of that arousal (which determines which emotion you experience).  The first part, arousal, is pretty clear and measurable, but that cognitive interpretation is a hornet’s nest of trouble.  You start running into questions of what people pay attention to, how they weigh evidence, how they translate that evidence into a decision, bla bla bla.  It’s awful.  In all honesty, I would rather take on a hornet’s nest with a shovel than try to isolate and understand all those variables.

So Schachter and Singer kept things simple.  They wanted to show they could experimentally induce different levels of happiness and anger through some simple manipulations.  They brought some subjects into the lab and gave them what they said was a vitamin shot.  In actuality, they were either shooting subjects up with epinephrine (i.e., adrenaline, to raise arousal) or giving them a placebo saline shot as a control condition (which would probably raise arousal a little bit, but not as much as a shot of adrenaline).  Then they had these subjects hang out in a waiting room with a confederate (someone who was working with the experimenter, but posing as a subject).

Now comes the key manipulation.  In one condition, the confederate acted like a happy-go-lucky dude who danced around, acted really giddy, and at one point started hula-hooping in the room.  I don’t know why that’s so crucial, but every summary of this experiment seems to make special mention of that.  In the other condition, the confederate was a pissed-off asshole who very vocally pointed out how shitty and invasive the experiment was - things we normally hope our subjects don’t notice or point out to each other.


This gif makes me interpret my physiological state as happiness.

After hanging out with the confederate (either the happy one or the angry one) for a while, the experimenters took the subjects aside and asked them to rate their emotional states.

So, what happened?  

People who were in the room with a happy person said they were happy, and the people with the pissed-off person said they were pissed off.  On top of that, the degree to which a given subject said they were happy or pissed off depended on the arousal condition.  People with the epinephrine shot were really happy with the happy confederate and really pissed with the angry confederate, whereas the subjects in the control condition were only moderately so.

The data support what’s now known as the two-factor theory of emotion.  Even though we don’t feel this way, there’s no intrinsic trigger for any given emotion.  What we have is an arousal level that we don’t necessarily understand, and a situation we need to attribute it to.  If you’re in a shitty situation, you’re likely to attribute your arousal level to the amount of anger you’re experiencing; if things are good, you’re likely to attribute your arousal to your happiness.  Either way, the category of emotion you experience is determined by what you think you should be experiencing.

What a lot of people in the human factors world sometimes overlook is just how volatile subjective reports can be.  Now, I’m not saying all human factors researchers are blind or ignorant to this; the good ones know what they’re doing and know how to get around these issues (but that’s another post).  But we definitely put too much stock in those subjective reports.  Think about it - if you ask someone how they feel about something, you’re not prompting them to turn inward.  In order to know how you feel about something, you start examining the evidence around you for how you should be feeling - you’re actually having people turn their attention outward.  The result: these subjective reports are contaminated by the environment - the experimenter, other subjects, the testing room itself, the current focus of attention, the list goes on and on.  Now, these data aren’t useless; but they definitely have to be filtered and translated further before you can start drawing any conclusions from them, and that can be extremely tricky indeed.

For instance, people can latch onto the wrong reasons and interpretations of their emotions.  There’s a classic study by Dutton and Aron (1974) where they had a female experimenter randomly stop men for a psych survey in a park in North Vancouver, BC. (The infamous Capilano Bridge study, for those in the know.)  The key manipulation here was that the men had either just crossed a stable concrete bridge across a gorge (low arousal) or a rickety rope bridge that swung in the wind (high arousal).  The female experimenter asked the men about imagery and natural environments (or some such bullshit - that stuff isn’t the important part of the study), then gave them her business card.  She told them that they could reach her at that phone number directly if they wanted to talk to her about the study or whatever.


I kind of wanna piss myself just looking at this photo

Now, here’s the fun part: the men who talked to the experimenter after staring the grim specter of death in the eye were significantly more likely to call her up than the men who crossed the stable (weaksauce) bridge, presumably because they found her more attractive than the men on the wuss bridge did.  The men misattributed their heightened arousal to the female experimenter rather than the bridge they had just crossed. "Wait a minute," you might say, "maybe it was selection bias; the men who would cross the rickety bridge are more daring to begin with, and therefore more likely to ask out an attractive experimenter." Well guess what, smart-ass, the experimenters thought of that. When they replicated this experiment but stopped the same men ten minutes after they crossed the bridge (once their arousal had returned to baseline), the effect went away.

By the way, this is also why dating sites and magazines recommend going on dates that can raise your arousal level, like doing something active (hello, gym bunnies), having coffee, or seeing a thrilling movie.  Your date is likely to misattribute their arousal to your sexy charm and wit rather than the situation.  But be forewarned - just like with the confederate in Schachter and Singer’s experiment, if you provide your date with a situation that leads them to believe they should be upset with you, that added arousal is just going to make them dislike you even more.  Better living through psychology, folks.

Another favorite study of mine generated the same phenomenon experimentally.  Parkinson and Manstead (1986) hooked people into what appeared to be a biofeedback system; subjects, they were told, would hear their own heartbeat while doing the experiment.  The experiment consisted of looking at Playboy centerfolds and rating the models’ attractiveness.  The trick here was that the heartbeat subjects heard was not actually their own, but a fake one the experimenters generated that they could speed up and slow down.

The cool finding here was that the attractiveness rating subjects gave the models was tied to the heartbeat - for any given model, you would find her more attractive if you heard an accelerated heartbeat while rating her than if you heard a slower one.  Subjects were being biased in their attractiveness ratings by what they believed to be their heart rate: “that girl got my heart pumping, so she must’ve been hawt.”  A similar effect held for ratings of disgust toward aversive stimuli.  So there’s another level of contamination that we might not otherwise notice.

No one likes to believe they don’t know where their feelings and opinions come from, or that they’re being influenced in ways we don’t expect or understand.  The uncertainty is troubling at a scientific - if not personal - level.  And guess what?  It gets worse: we will bullshit an answer to where our feelings come from if we don’t have an obvious thing to attribute it to.

If memory serves (because I'm too lazy to re-read this article), Nisbett & Wilson (1977) set up a table in front of a supermarket with a bunch of nylon stockings laid out in a row. They stopped women at the table and told them they were doing some market research and wanted to know which of these stockings they liked the best.  The catch: these stockings were all identical.

The women overwhelmingly chose pairs to the right.  Then the experimenter asked: why did you go with that pair?  The “correct” answer here is something along the lines of “it was on the right,” but no one even mentioned position.  The women made up all sorts of stories for why they chose that pair: it felt more durable; the material was softer; the stitching was better; etc., etc.  All.  Bullshit.

Steve Levitt, of Freakonomics fame, is said to have carried out a similar experiment on some hoity-toity wine snobs of the Harvard intellectual society he belonged to.  He wanted to see if expensive wines actually do taste better than cheap wines, and if all those pretentious flavor and nose descriptions people give wines have any validity to them.  As someone who once enjoyed a whiskey that “expert tasters” described as having “notes of horse saddle leather,” I have to say I’m inclined to call that stuff pretentious bullshit, as well.

So he had three decanters: the first had an expensive wine in it, the second had some 8 buck chuck, and the third had the same wine as the first decanter.  As you probably expect, people showed no reliable preference for the more expensive wine over the cheap wine.  What’s even better is that people gave significantly different ratings and tasting notes to the first and third decanters, which had the same goddamn wine in them.  We can be amazing at lying to ourselves.

My favorite study (relevant to this post, anyway) comes from Johansson et al. (2005) involving what they called "choice blindness."  It happens to involve honest-to-goodness tabletop magic and change blindness, which are basically two of my favorite things.  In this study, the experimenter held up two cards, each with a person’s photo on them.  He asked subjects to say which one they found more attractive, then slid that card face-down towards the subject - or, at least, he appeared to.  What the subjects didn’t know was that the experimenter actually used a sort of sleight of hand (a black tabletop and black-backed cards, if you’re familiar with this kind of thing) to give them the card that was the opposite of what they actually chose.  Then the experimenter asked why they thought the person in that photo was more attractive than the other.


Check out the smug look on his face when he listens to people's answers

Two amazing things happened at this point: first, people didn’t notice that they were given the opposite of their real choice, despite having just chosen the photo seconds before.  (That’s the change blindness component at work.)

Second, people actually came up with reasons why they thought this person (the photo they decided was less attractive, remember) was the more attractive of the two choices.  They actually made up reasons in direct contradiction to their original choice.

The moral?  People are full of shit.  And we have no idea that we are.

These subjects didn’t have any conscious access to why they made the choices they made - hell, they didn’t even remember what choice they made to begin with!  And probing them after the fact only made matters worse.  Instead of the correct answer, “I dunno, I just kinda chose that one arbitrarily, and also, you switched the cards on me, you sneaky son of a bitch,” subjects reflected post hoc on what they thought was their decision, and came up with reasons for it after the fact.  The “why” came after the decision - not before it!

When you try to probe people on what they’re thinking, you’re not necessarily getting what they’re thinking - in fact, you’re much more likely to be getting what they think they should be thinking, given the circumstances.  Sorta shakes your trust in interviews or questionnaires where people’s responses line up with what you expected them to be.  Is it because you masterfully predicted your subjects’ responses based on an expert knowledge of human behavior, or is it just that you’ve created a circumstance that would lead any reasonable person to the same conclusion about how they should feel or behave?

This rarely crosses people’s minds - lay or otherwise - but it has a profound effect on how we interpret our own actions and those of others.

Taking this evidence into account, it seems like understanding why humans think or feel the way they do is impossible - and this is really where psychology makes its great contribution to science.  Psychology is all about clever experimenters finding ways around direct introspection to get at the Truth and understand human behavior.  It takes what seems to be an intractable problem and breaks it down into empirically testable questions.  But that’s for another post.

So let’s return to my baby cousin, and her disturbing lack of empathy for the family dog.

“Why did you do that?”
“Because I can.”

If you think about it - that’s the most correct possible answer you can ever really give to that question.  So often, when we answer that very question as adults, the answer is contaminated.  It’s not what we were really thinking or what was really driving us, but a reflection of what we expect the answer to that question to be.  And that’s not going to come so much from inside us as from outside us - our culture, the situation, and our understanding of how those two things interact.  There’s a huge demand characteristic to this question, despite the fact that it feels like we should have direct access to its answer in a way no one else can.  And it contaminates us to the point that we misattribute those outside factors to ourselves.

If you think about it that way, there’s a beautiful simplicity to my cousin’s answer that most of us are incapable of after years of socialization.  Why did you major in that?  Why do you want to do that with your life?  Why did you start a blog about human factors and video games only to not talk all that much about video games?

Because I can.

Thursday, August 2, 2012

Choices, choices...

I was doing a little research at work on UI design recommendations, and I came across this Apple developer guideline that basically says custom settings are bad (or, at least, should be de-emphasized).  There are some technical reasons in there for why, but I want to highlight something they float by nonchalantly:
When you design your application to function the way most of your users expect, you decrease the need for settings.
Give the people what they want, and they won't want to change it.  Or, to put it another way, the people will take what you give them and like it.  The latter is basically the Cult of Apple's M.O., but there's actually some behavioral support for the idea (so let's calm down, shall we?).  This raises an interesting question: is choice actually good or bad for design?

Given the participatory nature of video games, it can be easy to assume choice is good in gaming.  In fact, we generally tend to see choice as a good thing in all aspects of life.  The freedom to choose is a God-given American right.  Americans don't like being told what to do. Why not?  Fuck you, that's why not.  'MERR-CA!

But what role does choice play in game design?

One of the biggest complaints you'll hear from Diablo fanboys about Diablo III is the lack of choice over stat changes at each level-up.

Nerds raged for weeks over the Mass Effect 3 ending because the game seemed to disregard the choices they had been making up to that point throughout the past two games.



People get mad when you don't give them choices.  And they get madder when they get choices that don't culminate in consequences.  People just don't like the feeling that they've been deprived of some sort of control.

But the issue of choice and its impact on how you enjoy something is much more complicated than you might initially expect.  Choice isn't always a good thing.  Choice can be crippling.  Choice can be overwhelming.  In fact, choice can even make you unhappier in the long-term.  But the thrust of western game design is steadfast in the opinion that the more choices you have, the better.  I'll discuss here a little bit on what we know about choice and its impact on our happiness.


How Choice is Good

When you hear psychologists sing the praises of choice, they usually cite studies of freedom or control. You'll hear about experiments in which old people in retirement homes live longer on average when you give them some semblance of control - even when it's something as small as having a plant to water or having a choice of recreational activities.  Depending on who you ask, you'll hear different theories on the mechanism behind exactly why this happens, but regardless, the commonality is that having choices leads to an increase in people's lifespans.

Recent work by Simona Buetti and Alejandro Lleras at my university (and other research that led to it) suggests that even the illusion of choice makes people happier.  When presented with aversive stimuli, people feel less anxious if they believe they have some control over the situation, even if they didn't and were only led to believe that through clever experimental design.

Interestingly enough, this is accomplished through the same psychological mechanism that causes some people to swear to this day that pressing down+A+B when you throw a pokéball increases your odds of it succeeding.  If the capture attempt succeeds, it's because you did it correctly; if it fails, it's because your timing was just off.  It was never just random coincidence.  At least, that's how our young brains rationalized it.  Of course we had no control over the probability of a pokéball's success - but we convinced ourselves that we did.

But did he time it right?!  (VGcats)

You can do something similar in an experiment.  You have a bunch of trials where an aversive stimulus comes on for a random period of time - sometimes short, sometimes long.  Then you tell your subjects that if they hit a keyboard key with just the right timing, they'll end the trial early; if it goes on past the keypress, it's because they didn't get the timing right.  As a control condition, you give people the same set of random trials but don't let them make any keypresses.  Now you have one group with absolutely no control over the situation and another group with absolutely no control over the same situation - but that thinks it does.

What you'll find is that the group that believes it had control over the trials comes out of the experiment less anxious and miserable than the group that definitely had no control.
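If the setup sounds abstract, here's a sketch of the trial logic in code.  This is my paraphrase of that kind of design - not Buetti and Lleras's actual code, stimuli, or numbers - and the point it illustrates is that the keypress group and the no-keypress group sit through literally identical distributions of trial durations; only the story they're told about those durations differs.

import random

random.seed(0)

def run_trial(allow_keypress):
    """One trial: an aversive stimulus lasts a random duration no matter what.

    allow_keypress - if True, the subject presses a key and is told that good
                     timing ends the trial early; the press has zero actual effect.
    """
    duration = random.uniform(1.0, 8.0)        # seconds of aversive stimulus
    if not allow_keypress:
        return duration, False
    press_time = random.uniform(1.0, 8.0)      # whenever the subject happens to press
    # Subjects credit themselves whenever the stimulus happens to stop shortly
    # after their press - pure coincidence, but it feels like control.
    felt_in_control = 0 <= (duration - press_time) <= 1.0
    return duration, felt_in_control

if __name__ == "__main__":
    no_control = [run_trial(allow_keypress=False) for _ in range(50)]
    illusion = [run_trial(allow_keypress=True) for _ in range(50)]
    # Identical suffering on average...
    print("mean duration, no keypress:", round(sum(d for d, _ in no_control) / 50, 2))
    print("mean duration, keypress:   ", round(sum(d for d, _ in illusion) / 50, 2))
    # ...but only one group walks away with "I ended some of those trials myself" stories.
    print("trials credited to 'good timing':", sum(felt for _, felt in illusion))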

Even when we have no control or choice, we'll latch onto anything we can to convince ourselves we do.  That's how committed we are to having choice - even the illusion of choice is enough to make us feel better.

Of course, humans also show a similar commitment to heroin.  Is it necessarily a good thing?

How Choice is Bad

This is the more counterintuitive of the two theses here, so I'll flesh it out significantly more.

This thesis has been pushed forward popularly by a psychologist named Barry Schwartz (who's just as old and Jewish as his name suggests).  If you'd like to see a TED talk about what he calls "the paradox of choice," you can see it here:


I'll hit some of the highlights.

Choice is paralyzing.  As Schwartz mentions, having multiple attractive options for something can make someone freeze up and put off making a decision, even if it is to that person's detriment.  Not having to make a tough decision can actually psychologically offset the cost of procrastinating on it.  Full disclosure time: I have not finished a single Bioware game because of this effect.  I know, I know, it seriously hurts my nerd cred.  I was probably one of the last people in this world who had the big plot twist in KOTOR spoiled for him in 2010.

Attractive alternatives make us think about what we could've had instead of what we chose.  We just can't let ourselves be happy sometimes.  When presented with multiple attractive options, all we can do is think, "Did I make the right choice?"  And we spend all our time obsessing over what's wrong with what we did choose and how the alternative could be better.  The grass is perpetually greener on the other side.  And it gets worse the more easily we can imagine or access the alternative.  I have an example to illustrate this point below.

Making a choice affects how we perceive the decision and ourselves.  I have an entire post about this very topic brewing right now, so I'm not going to say too much on this just yet.  For now, I'll just say that it's much more often that our actions direct our opinions rather than the other way around - we just aren't aware of it.  You actually do a little bit of cognitive acrobatics whenever a decision is made that all happens without you realizing it.

First, you convince yourself that your decision was the right one - but only if you can't change your mind!  One study (I have the citation somewhere, but I can't for the life of me find it right now) had participants take a photography class, and at the end, they got their two favorite photos developed and framed.  The twist: the experimenter said they could only take one home, and the class would keep the other for its own display.  The manipulation here was that one half of the participants could change their mind and bring the photo back, whereas the other half had to make a final decision right there.

Just because something is in black and white doesn't mean it's good

When the experimenters later asked the students how much they liked the photo they ended up keeping, the students who were allowed to change their mind were actually less happy with their choice than those who had to stick with their first decision.

What happened?

The people who were stuck with one photo didn't have an alternative available to them, so they spent all their time convincing themselves how awesome the picture they chose was.  The people who could change their minds were preoccupied with whether they should've gone with the other one, and so spent their time thinking about everything wrong with what they chose.  Freedom made them unhappier.

(An important subtlety I should point out is that, on average, people are happier with their choices when they're stuck with them than if they can change their minds.  This is a relative statement.  You can still be stuck with a decision and dislike it, but - assuming your options were equally good or bad to begin with - you'd be even angrier if you could've changed your mind.)

If you want to see an example of this in the gaming wild, do a google search for Diablo II builds and compare the discussions you see there to discussions over Diablo III builds.  The major difference between the games: you can change your character's build in D3 any time you want, but it's (relatively) fixed in D2.  People still wax poetic about their awesome D2 character builds to this day and will engage with you in lively debate on how theirs was best.  With Diablo III, there's mostly a lot of bitching and moaning. People seem to love their D2 characters and are indifferent or even negative towards their characters in D3.  Why?  The permanence of their decisions and the availability of attractive alternatives.

All these skills and I haven't a thing to wear

The second thing that happens after you make a decision is you attribute its consequences to someone or something.  Depending on how the decision pans out, you may start looking for someone to blame for it.  Things can go in all sorts of directions here.  Schwartz opines that depression is, in part, on the rise because when people get stuck with something that sucks in a world of so many options, they feel they have no one to blame but themselves.

I'd disagree with that claim - first, because the idea about justifying your decisions, which I described above, would suggest that you'd eventually come around and make peace with your decision because, hell, you made it and you're stuck with it.  Second, I don't believe humans - unless the predisposition towards depression was there already - would dwell on blaming themselves.  People tend to have a self-serving bias: they believe they're responsible for good things that happen, and bad things are other people's fault.

No, I think we still pass the buck to whomever else we can when a decision pans out poorly.  In the case of video games, if you made a choice and are unhappy with it, you blame the game developer.  The player doesn't suddenly think, "I've made a huge mistake"; they think, "Why would the developer make the game suck when I chose this option?"  And sometimes, they have no way of knowing whether the other option that's now unavailable to them would've been better or worse.  The game dev just ends up looking bad, and people hate your game.  (Again, look at Diablo III among the hardcore nerd crowd.)

I've been thinking a lot about how these factors have historically impacted choice in video games and how I think they made various games I've played either awesome or shitty.  Based on that, I think there are a few simple rules devs could follow to make choice enhance their games rather than hurt them.

But that's for the next post.

Sunday, July 29, 2012

I like my women like I like my keyboard shortcuts...

I keep starting posts that turn into grand, sweeping philosophizing, and I'm trying to get away from that.  (Hence the delay in updating.)  So to make things a little more tractable, I'm going to take a small-scale approach and discuss a common gaming pet peeve: The unskippable cut scene.

There are all sorts of reasons to hate the unskippable cut scene.  For one, cutscenes in general violate the inherent participatory nature of the video game.  The Half-Life games were praised as revolutionary in their time because they were the first games among their contemporaries to tell the story through the game.  You experienced the entire story through Gordon Freeman's eyes, and it was an incredibly immersive experience as a result.  There wasn't a fade out and cut to 3rd person view every time the game designers decided it was exposition time.  Hell, if you wanted, you could just walk away from the people talking to you and start hitting stuff (including the people talking) with your crowbar.  Gordon Freeman is a mute MIT physicist - he's clearly on the autism spectrum, and people will understand if he just does that kind of thing from time to time.

Gordon Freeman's teeth grinding and hand flapping proved just too unsettling in preliminary game testing, but if you look closely, NPCs' reactions to it are still coded into the game.

Taken from a human factors perspective, though, I can think of two major reasons why cutscenes can be so annoying.  

Requirements Analysis

First, good design involves a strong understanding of your creation's requirements.  There's quite a bit of formal theory built around requirements analysis, but I'll present one popular take.  There are basically four major requirements you have to work through when designing something (I'll sketch a toy example in code right after the list):

Functional requirements: things your thing absolutely has to be able to do - if it didn't do them, people would think your thing is broken.  For instance, a calculator has to be able to perform arithmetic (correctly) on numbers that you enter into it.

Indirect requirements: things that have to be present to make the functional requirements possible.  To keep going with the calculator example, you need to have a power source of some kind.  (Don't start with me, abacus nerds.)

User requirements: what's your audience?  What do they know and what do they expect to be able to do with your thing?  Are they men or women?  Big or small?  Young or old?

Environmental requirements: where is your thing being used?  A solar-powered calculator is useless to an astronomer at night.  But then she's probably using a scientific calculator or a computer anyway.
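Here's the toy example I promised above: a hypothetical run of a cutscene system through those four buckets.  The buckets come straight from the list; the specific entries are my own guesses for an imaginary game, not a formal methodology or anybody's real requirements doc.

from dataclasses import dataclass, field

@dataclass
class RequirementsAnalysis:
    """A feature run through the four requirement buckets described above."""
    feature: str
    functional: list = field(default_factory=list)     # broken without these
    indirect: list = field(default_factory=list)       # needed to make the functional ones possible
    user: list = field(default_factory=list)           # who is this for, what do they expect
    environmental: list = field(default_factory=list)  # where and how it's actually used

cutscenes = RequirementsAnalysis(
    feature="cutscenes",
    functional=["deliver plot the player actually cares about",
                "tell the player what to do next"],
    indirect=["video/audio playback", "localized dialogue"],
    user=["first-time players want to see all of it",
          "replaying completionists want to skip all of it"],
    environmental=["living-room sessions get interrupted (pause!)",
                   "controllers get bumped (confirm before skipping!)"],
)

if __name__ == "__main__":
    for bucket in ("functional", "indirect", "user", "environmental"):
        print(bucket.upper())
        for item in getattr(cutscenes, bucket):
            print("  -", item)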

Gameplay and cutscenes have a very tense relationship under this sort of framework.  The core gameplay mechanics serve as a functional requirement (or perhaps an indirect requirement to behavioral reinforcement...), but what is a cutscene?  

In the games that use them, they can serve a range of purposes.  They provide motivation to the player.  They inform the player of the next thing they have to do.  And in story-centric games, they may even be the very thing the player is primarily interested in.  Depending on their implementation, they can be considered a functional or indirect requirement.  So then why do they get so annoying?  For one, they're annoying if they don't accomplish any of the things I just listed above.  If they're uninformative, provide superfluous narrative, or are just plain insipid, you're gonna piss people off.

Oh, my god, I don't care.  I just want to sneak around and kill terrorists.

Then there are people who have absolutely no interest in the cutscenes, and just skip them because they're standing between them and playing the game.  This can indicate that the cutscenes aren't serving a purpose in the game to begin with, and their presence becomes questionable (sorry if you just wasted hundreds of man-hours making them).  For one, they're failing to meet their requirements, but second, they're actually interfering with the main functional requirement(s) of the game.  As with so many things, an ounce of prevention is worth a pound of cure.  This is why game devs need to sit down and figure out what they're trying to accomplish with their game, and how they want to do it.  Is this going to be gameplay driven?  What purpose are the cutscenes serving?  Do we need them?  What constitutes a justifiable reason to have a cutscene?

But then there's another, even simpler reason why the unskippable cutscene is so annoying, and it's highly related to a concept from user-interface design: flexibility.  

Flexibility

In computing, flexibility refers to the degree to which a computer program can provide an optimal experience to users of different expertise levels.  Keyboard shortcuts are the prototypical example.  In a well-designed program, the functions to which keyboard shortcuts are mapped will be organized in some kind of menu or toolbox structure that allows users to find or stumble upon them.  However, as a user starts using a function more and more often, navigating through menus or toolboxes becomes cumbersome.  Fortunately, through that frequent use, the user notices and becomes familiar with the keyboard shortcut noted on the function, and can use that to access it quickly without having to go through the UI infrastructure.  And so, keyboard shortcuts provide an optimal experience for the novice and expert users alike, while providing a support structure to aid in the transition between those two levels of expertise - all without getting in the way of the different users.  (You'll find many games ignore that last part; the Sequelitis video I put up in the first post speaks to that already.)
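Here's a minimal sketch of what that flexibility looks like structurally: a toy command registry where every action is reachable through a discoverable menu path (the novice's route) and a keyboard shortcut (the expert's route), with the menu label itself advertising the shortcut.  All the names here are made up; this isn't any real toolkit's API.

# A toy command registry: every command is reachable through a menu (novice path)
# and a keyboard shortcut (expert path), and the menu label advertises the shortcut.
COMMANDS = {
    "copy":  {"menu": ["Edit", "Copy"],  "shortcut": "Ctrl+C", "action": lambda: print("copied")},
    "paste": {"menu": ["Edit", "Paste"], "shortcut": "Ctrl+V", "action": lambda: print("pasted")},
}

def menu_label(name):
    """What the novice sees in the menu - note the shortcut printed alongside it."""
    cmd = COMMANDS[name]
    return f"{' > '.join(cmd['menu'])}    ({cmd['shortcut']})"

def invoke_by_menu(name):
    COMMANDS[name]["action"]()          # slow but discoverable

def invoke_by_shortcut(keys):
    for cmd in COMMANDS.values():       # fast, once you've learned it from the menu
        if cmd["shortcut"] == keys:
            cmd["action"]()
            return
    print(f"no command bound to {keys}")

if __name__ == "__main__":
    print(menu_label("copy"))     # Edit > Copy    (Ctrl+C)
    invoke_by_menu("copy")        # the novice's route
    invoke_by_shortcut("Ctrl+V")  # the expert's route to the same function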

Can you imagine how annoying it would be if, every single time you wanted to copy and paste, you had to go to the Edit menu to copy, then again to paste?  It seems so small, but just try doing that for a day.  You won't last an hour.  For an expert user, having to go through that process, as simple and short as it is, feels incredibly slow and frustrating.

Sound familiar?

As a hardcore completionist when it comes to games, I can't even relate to why someone would skip a cutscene on the first pass through.  But as a hardcore completionist, I have found myself on a fifth pass through a game wanting to stab the game dev's eyes out for making me sit through this same cutscene again.  Unskippable cutscenes are a pain because, at the very least, they ignore the principle of flexibility.  They ignore the fact that the player might already be an "expert," intimately familiar with the goals and story of the game, and just wants to get back to the action.  

Unskippable cutscenes are just bad design.

But wait!  There's another side to this coin.  Unskippable cutscenes are becoming more and more rare these days, I'll admit, but this is also giving rise to another problem.  The too-easily-skippable cutscene. I've probably had this problem even more often than the unskippable cutscene, to be honest.

I've lost count of how many times I've been in this situation: You're sitting through a cutscene, no idea how long it is because this is your first pass through the game, and you've really gotta pee.  You don't want to miss anything, but holy crap, your kidneys are gonna explode!  You never had to leave the game mid-cutscene before.  Can you pause them?  I don't know!  This thing just keeps going...  But I can't hold it any longer...  Let's just hit start and maybe that'll--FUCK!

(Or, even simpler, you accidentally bumped a button, and poof!  Cutscene goes bye-bye.)

Now what am I supposed to do?  Granted, a well-designed game will provide you with some kind of redundancy to indicate where you're going next or what your next goal is if the cutscene served to tell you that, but what about the content itself?  That's content you paid for, gone.  If you're lucky, the game lets you re-watch cutscenes in some kind of gallery, but how often do you see that?  Now, that portion of the game is lost to you until you play through to that point again.  Well, ain't that a kick in the balls.  

(I started playing Saints Row The Third recently, and I've already managed to lose a cutscene.  It seemed to involve an energy mascot beating up a gangbanger, though, so I'm guessing I'm not missing anything, but the fact that I don't care speaks to the justifiability of your cutscene in the first place.)

What I'd look like if I had a sweet scar, a more chiseled face, and a better haircut.

Game devs wised up and provided an option to cater to the expert user, but now the first-time player is left hanging in the wind, which is just as bad (if not worse).  So how do we deal with that?  Once again, a simple principle from general UI design: error prevention.

Error prevention is what it sounds like: a means of keeping the user from doing something stupid.  

It's why your word processor pops up a window when you try to close a document that says something like, "Are you sure you'd like to close?  You haven't saved any of the additions you've made to your dissertation in the past five hours, you lunatic."  Or whatever.  No one reads text strings longer than about 5 words anyway.  

Error prevention is also why there's that little plastic door on buttons that do awesome things in airplane cockpits and tricked out cars that you have to flip up like a bad-ass first before you do something that usually corresponds to the phrase, "punch it."

The ideal solution, as you can probably guess at this point, is pausable cutscenes that give you the option to skip them when you pause.  Thankfully, Square-Enix, responsible for 20% of the world's cutscene content despite producing .05% of all games annually, has implemented this in a lot of their recent games.  It's probably one of the few good game design decisions of Final Fantasy XIII.  

 There's that and the Shiva sisters summon who form a motorcycle by scissoring.

Still, confirmation that you're about to do something stupid, at the very least, is a step in the right direction.  Diablo III is the most recent example of this that leaps to mind - you can't pause a cutscene, but at least it asks you if you're sure you want to skip it.  And you get a cutscene gallery in the main menu, so you don't have to worry about missing anything.
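Pulling the pause-then-confirm idea together, here's a minimal sketch of the input handling I'm describing for cutscenes: a stray button press can only pause, skipping requires an explicit confirmation, and resuming costs nothing.  It's a generic state machine with hypothetical button names that I made up for illustration, not code from any actual engine.

class CutscenePlayer:
    """Toy cutscene state machine: pausing is cheap, skipping requires confirmation."""

    def __init__(self):
        self.state = "playing"   # playing -> paused -> (confirm skip?) -> skipped or back to playing

    def on_button(self, button):
        if self.state == "playing" and button == "start":
            self.state = "paused"            # a stray button press can only pause, never skip
        elif self.state == "paused":
            if button == "skip":
                self.state = "confirm_skip"  # error prevention: ask before throwing content away
            elif button == "start":
                self.state = "playing"       # resume where you left off (bathroom break handled)
        elif self.state == "confirm_skip":
            self.state = "skipped" if button == "yes" else "paused"
        return self.state

if __name__ == "__main__":
    player = CutscenePlayer()
    # Accidental bump mid-scene: the cutscene pauses instead of vanishing.
    print(player.on_button("start"))   # paused
    print(player.on_button("start"))   # playing again, nothing lost
    # Fifth playthrough: pause, ask to skip, confirm.
    print(player.on_button("start"))   # paused
    print(player.on_button("skip"))    # confirm_skip
    print(player.on_button("yes"))     # skipped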

Flexibility is a particularly interesting principle with a multitude of implications in game design, so look forward to discussions of that in future posts.

Tuesday, July 17, 2012

Justifying my Existence, part 3

Ok, time for the glorious conclusion of my blogging trilogy.  Wherein I tie everything from parts 1 and 2 together, then celebrate blowing up the Death Star II with a big Ewok party in the woods.

(Everything else that's wrong with Jedi aside, how could blowing up the Death Star and killing the Emperor definitively end the Empire?  I mean, did the entire enterprise really hinge on one functional but partially unconstructed starship and two top leaders?  They can't possibly be outnumbered and overpowered at that point by the Rebel Alliance - characterized as the ragtag underdog throughout the series - after a single battle, which itself resulted in great losses for the Alliance's already dwindling force.  I'm not willing to believe that the sociopolitical infrastructure that is both a requirement and product of an intergalactic empire can be undone in one stroke like that.  But, whatever.  I bet the Expanded Universe answers these questions, but frankly, it just makes me feel too nerdy to have any knowledge from the SWEU.  Anyhoo...)

So let's recap where we've been:

Human factors was born out of the military wanting to know why their best-trained pilots were still crashing the most precision-engineered planes.  The answer: the technology wasn't designed to fit the limitations of the human operator.  A bunch of people came together to figure out how to engineer the technology around the human being, and bam, we have human factors.

Video games are at an interesting point in their history.  Graphical improvements have defined the step up from one console generation to the next for the past 30 years.  But we're getting to a point where graphics are not significantly adding anything meaningful to the video game experience.  Recent trends have moved towards employing motion controls, but no one is really crazy about them except for idiots (and even then, for not very long).

So what do these things have in common?

I'm going to answer this question with a graph.  'Cause that's how we do in academics.

I know there's an official figure for this somewhere, but I can't seem to find the one I have in my mind's eye.  That makes me wonder if I made the whole thing up, but it all sounds probable enough to be real.

Anyway, discussions of human factors usually involve a graph that looks something like this:

FUCK YEAH, GRAPHS!

What this graph tells us is that aviation incidents have decreased over time thanks to advances in engineering.  More reliable hardware means planes aren't spontaneously combusting and sputtering to a halt in midair so much, so of course you see a drop in aviation incidents.  That's super, but engineering will only get you so far.  You eventually hit an asymptote at the wall of human error.  Even with the best engineering possible, you're still going to have some percentage of people mistaking one dial or switch for another and getting caught in a death spiral.  Codified safety regulations also help to bring the numbers down, but even they only do so much.  You never really reach zero.

To break through that wall, you need human factors.  You need a method to systematically study what mistakes people still make, understand why they make them so you can develop solutions, and then systematically test the solutions to see what works.  When you start engineering the human-machine (or human-human or human-human-machine) system as a whole, then you suddenly find avenues for improvement that weren't there before.

Now consider this graph which I most definitely just made up:

Human Factors is driven largely by Nyan Cat

Like aviation engineering, we're getting to a point in video games where our hardware is starting to asymptote.  Technical advances are adding less and less to the experience, even if they continue to advance monotonically.

(Sometimes, they even make the experience worse.  Remember how awesome it was when Dead or Alive introduced boob jiggle physics?  Then remember how horrifying it was when Team Ninja made the boobs jiggle independently from each other in wildly different directions on the Xbox 360?  WHAT HATH OUR SCIENCE WROUGHT?!)

OH GOD, WHY IS ONE UP WHEN THE OTHER IS DOWN?!  STOP RUNNING, FOR GOD'S SAKE.  You're hurting MY tits.

Back to the point, video games - in my opinion - are hitting a similar technical wall to the one I described for aviation.  While the technology evolves, things get better, but the rate at which they're getting better is slowing down.  (Before you jump up to argue with me on this, hear me out.)

Think about the Mario series.  Break it down into its core components (I'll discuss more on this in future posts), and you have this underlying thread - running and jumping through an obstacle course - that just evolves over time.  But think about how that evolution has played out.

We saw incremental improvements going through the 2D Mario games on NES, and those were awesome.  But then the SNES came out, and the graphical leap allowed game designers to introduce enemies and environments they never could before.  Enemies could be huge or tiny, they could change form, they could respond differently to different attacks.  Your environment could shift and move around you in new ways, creating situations and challenges that people had never seen before.

Then the Nintendo 64 came out, and it was an even bigger leap.  There was not only a leap technically, but also in the experience afforded by this open, three-dimensional world.  Challenges and puzzles could come at you from all new directions, and it fundamentally altered the way you approach and move through the game.  Once again, gamers were faced with a brand new experience they had never seen before.

Then the GameCube came out, and it was cool because it was prettier, and the worlds were a little bigger, and for some reason you had a water jetpack?  I'm not normally one to denigrate a jetpack, but honestly, can anyone say that's as big an evolutionary step as the addition of a spatial dimension?

Oh, I almost forgot: Mario got short sleeves.  Progress.

And then the Wii came out with Super Mario Galaxy, and it was like Super Mario Sunshine without the jetpack, plus some funky gravity physics.  And the revolutionary new control scheme the Wii was supposed to give us?  Wiggling your wand at the screen picked up stars.  (Oh, if only that were true in other facets of life.)

And now we have "New Super Mario Bros." which is basically Super Mario Bros. with pretty graphics and co-op play.  The evolution has become a cycle that's eating itself.

When an iconic figure in the industry like Mario is running out of places to go, you know something's up. The point is, video games are running up against this wall I talked about before; it looks like there's nothing else to improve on, but there's still this sense that we could be doing better if we knew what was holding us back.  And that's what human factors affords us: a tool set for making bad ideas good, good ideas great, and great ideas amazing.

Motion controls were one attempt to break through the wall by taking things in another direction.  While that was a valiant approach (one that some say has ruined a glut of modern video games), it's become a joke among the core audience of gamers.  Understanding why it failed (and how to fix it) is one major application of human factors.  One reason that I've mentioned already (though there are many, in my opinion) sits at the core of human factors - developers pushing motion control didn't think about the users.  They designed a blender for someone who wanted a toaster; sure, we have this cool new gadget, and they made some money off some protein-shake-drinking douchebags, but a lot of us are still waiting here with cold, pallid bread in hand.

And now, I've officially spent way too much time pontificating.  Enough with the mission statements.  Next post will be more like what I had originally intended for this blog.

Tuesday, July 10, 2012

Justifying My Existence, part 2

Ok, in retrospect I don't know why I teased the ending of the history lesson.  The big reveal is that the human factors engineers went back home to their universities, and continued their research.  The result was the beginnings of human factors as a field.  There were now scientists interested in the systematic study of human beings in the context of a technological system.

Although I use the term "technological," it's important to note that human factors covers more than what we would normally consider technology.  For instance, the way the members of a restaurant kitchen staff interact with one another to deliver food properly and efficiently can be considered a technological system to the human factors researcher.  The real interest is in understanding humans as part of a greater system rather than the human per se (cf. psychology's study of the individual's mind).  It's a multi-disciplinary field in that regard, as human factors is interested in the cognitive psychology of a person's mental processes, the biomechanics of how they move and act on the environment, the sociology of how their broader cultural contexts influence their behavior, and so on.  As a cognitive psychologist, though, you'll find I'm primarily interested in what's going on between people's ears.

So, I'm not talking about video games nearly as much as I want to be.  Let's rein it back in: why should video games care about human factors?

Back in 2005(ish), Nintendo's head honcho, Satoru Iwata, explained the company's philosophy behind the Wii.  (I wish I could find the original interview, but it's getting swamped by Wii U stories.)  In essence, he explained that the defining change from one console generation to the next up to that point had been a boost in graphics, but that we were reaching a plateau.  It may have been a while since you've seen it, but check out the original Toy Story:

Back in 1995, this blew our freaking minds.

It took 300 computer processors 2-15 hours to render a single frame of the movie.  Perhaps with the exception of the self-shadow effects, the entire movie looks like a current-gen, in-engine cutscene.  For instance, check out what the PS3 pumps out in real time for the Toy Story 3 movie tie-in game:

Woody facing off with his infamous nemesis, skinny bandana man
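To put that render-time gap in perspective, here's some back-of-the-envelope arithmetic using the 2-15 hour figure above and assuming a 30 frames-per-second target for the real-time version (an assumption on my part, not a spec for the Toy Story 3 game):

# Back-of-the-envelope only: the per-frame figure comes from the post above,
# and the 30 fps real-time target is an assumption for illustration.
avg_offline_hours = (2 + 15) / 2                 # midpoint of the quoted 2-15 hour range
offline_seconds = avg_offline_hours * 3600       # seconds to render one movie frame
realtime_seconds = 1 / 30                        # one frame every ~33 ms at 30 fps

speedup = offline_seconds / realtime_seconds
print(f"~{speedup:,.0f}x faster per frame")      # ~918,000x under these assumptions

Roughly six orders of magnitude faster, for results that look close enough that my eyes have mostly stopped caring.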

Graphics are responsible for innovation in games insofar as they can more veridically represent a developer's vision, and the gap is rapidly closing.  We've hit a point where games are sufficiently pretty.  You're unlikely, for instance, to mistake someone's headband for their face in current-gen graphics.  

Agahnim from The Legend of Zelda: A Link to the Past (source)

And as any retro gaming nerd will tell you, graphics don't even affect how the game plays.  Whether you're reflecting energy balls back at Agahnim with your bug net on an SNES or Ganondorf's magic blasts with a bottle on your 3DS, it still feels awesome.  And it's about as much fun in just about every Zelda game since LttP, nearly all of which have reused that trick, too.  (Nerd burn!)

In fact, the only example I could think of off the top of my head where a step up in graphics had a direct impact on gameplay was when Nintendo (ironically) first used Link's eye gaze as a means of providing puzzle hints in Wind Waker.

Fun fact: You automatically direct your attention toward the target of other people's (especially cel-shaded people's) eye gaze (Friesen & Kingstone, 1998).

Getting back to my point, Iwata explained that Nintendo would not significantly upgrade the Wii's graphics capability over the GameCube, and instead would focus on upgrading the means of interacting with the game.  That, he argued, would lead to the next great innovations in gaming.  Granted, it eventually became a lot of mindless waggling, but Nintendo accomplished their goal of shaking up (har har!  PUNS!) the video game industry.  The new gimmick proved profitable and now every major console company is trying to cash in with their own make-an-ass-of-yourself-by-wiggling-some-part-of-your-body-around-but-oh-god-not-that-part-of-your-body-except-maybe-for-that-one-time-you-wanted-to-see-if-your-dong-was-big-enough-to-get-detected-by-the-Kinect method of game control.

(By the way, because I know you're curious now: the answer is yes.)

But here's the thing: the tactic was profitable because Nintendo drew in a whole lot of new, (*shudder*) casual gamers.  The assumption was that the Wii would be a gateway drug of sorts, luring in upstanding members of society with a few minutes of tennis here and a few minutes of bowling there.  They could never figure out how to do that on an Xbox with its multiple joysticks, 11 visible buttons, 2 invisible buttons (L3 and R3), and its XTREME 'TUDE.  But this?  You're just wiggling a little white stick!  How cute.

Then in order to justify spending hundreds of dollars on what amounted to an afternoon diversion, those same people would try out something a little headier - like a Metroid or an Okami.  Then BAM, you have a new generation of gamers pre-ordering the next iteration of your system.

Or do you?

Of course you don't.  That question always means you don't.  You tapped into a completely separate market that behaves in a totally different way.  You tapped into a market that tries out that new "Dance Dance Resolution or whatever" game at the movie theater one time on a date but then never again (OMG! He thought I was such a nerd!  Lolololol).

You tapped into a market that can justify buying a $150 Netflix box or a $200 smartphone and sees games as fun, two-minute time-killers that they won't pay more than $2 for.  (Try selling those guys a 50-hour RPG for $50.)

You tapped into a market that would never understand why you'd pay $65 for a 16-bit game cartridge in 1994 no matter how awesome it was or how many times you've replayed it since then, and what does that even mean, the battery died and you can't save your game anymore?

Point is, those profits are punctate.  Nintendo didn't create repeat customers.  It's not a coincidence that console game sales are down and PC game sales are way up (though Steam offering cross-platform games probably also has something to do with that - how am I supposed to resist the complete Assassin's Creed series at 67% off?  DAMN YOU, GABEN!).  It made Nintendo a buttload of money, but it left a lot of gamers with a bad taste in their mouths.

So what role can human factors (and psychology) play in all this?  That's for the next post to explain.