Dear fellow Human Science types,
I read Predictably Irrational (Dan Ariely) while on vacation, and for anyone who’s interested, I did a little write-up.
The premise is that the author, burned on roughly 70% of his body by an exploding magnesium flare, has to endure a prolonged stay in the burn ward. Every day his nurses rip his bandages off quickly instead of peeling them off slowly. When he asks why they do it that way, the nurses say it is what they have always done, the logic being that it actually reduces the patient’s cumulative pain: a more intense pain, but over a shorter period.
Long story short, he becomes a behavioral economics researcher and discovers that this old theory of “tearing off the band-aid” is in fact a fallacy. It was always practiced, and continues to be practiced, for a few understandable but mistaken reasons. First, it was never tested empirically; it just seemed like “common sense.” Second, the people actually performing the procedure, subconsciously or otherwise, factored in their own emotional anguish at hurting the patient: tearing off the bandage quickly shortened their own exposure to the distress of hurting someone they strive to help.
The third reason, and what continues to fascinate the author, is that even though the nurses have done the “wrong” thing thousands of times, they have not learned from it. That is, the mechanism that normally allows humans to learn from their mistakes doesn’t seem to work in this case. This discovery prompts him to study situations in which people act irrationally (that is, against their own best interest), fail to learn and correct the behavior, and repeat the mistake, thereby being “Predictably Irrational.”
As a read, it’s enjoyable. It has a Freakonomics or Malcolm Gladwell-esque feel, with a bit more humor injected. It’s compact and readable in a couple of days, with some cleverly designed studies and interesting ideas. In my opinion, the author is better at identifying problems with the world than at coming up with solutions, but it’s definitely worth reading to get some giggles at the expense of human nature.
But beyond that, several of the behavioral economics studies showed interesting results with implications for the user research and usability testing we do at MAYA. I come from a scientific background, so one of my recurring frustrations is the difficulty of getting objective, unbiased results in these types of studies. This book offered some insight into how we can leverage behavioral economics to take some of the fuzziness out of our work. In the interest of being brief (well…somewhat brief), I’ll describe two ideas that I thought were particularly applicable to our work.
The book opens with a chapter about how human decision-making is heavily biased toward things that are easily comparable. He includes, among others, a tale of how The Economist once offered three subscription options:
A – Internet-only subscription – $59
B – Print-only subscription – $125
C – Print and Internet subscription – $125
In the study, the vast majority chose option C, the print-and-internet subscription, because it was objectively better than the print-only subscription. However, when the middle option was removed or altered so as not to be easily comparable, the results scattered. The reason the author provides for this phenomenon (which is substantiated by a number of manipulations of the same type of problem) is that humans find it easy to decide between two alternatives that are easy to compare (like B and C), but have tremendous difficulty deciding between two options that are hard to compare (like A and B).
This brought to mind situations where we have test participants verbalize preferences between variations of an interface or prototype. If we have several ideas, and one clearly dominates another, that could end up being the “favorite” just because it is easy to compare it to something else. That is, a user might select something as the best option not because it actually is, but because it is easily better than at least one other option.
Second, there’s a chapter about how we can’t, in a cold emotional state, predict how we’ll react in a state of physiological arousal. There are numerous clever studies (and a couple that are slightly uncomfortable to read) that illustrate this point. That got me thinking about some of the projects we work on for the military: if the product is going to be used by soldiers in life-or-death combat situations, when people are at their peak state of physiological arousal, can we really expect to get a good idea of its ease of use in the lab with a calm test subject? Even if the test subject has been in combat before, the findings in Predictably Irrational indicate that people cannot accurately predict how they will act in an aroused state.
I would imagine there are certain ethical issues that prevent us from firing live rounds at people while they test a piece of software, but maybe there are ways to get some adrenaline pumping in the lab — safely — that would help us design better systems for people whose lives depend on the usability of the software they bring into battle with them. Putting test subjects into a particular emotional state before a test is common practice in psychology, but to my knowledge it is uncharted territory in the usability field.
Another very realistic application would be air traffic control, often cited as one of the most stressful civilian jobs, and one with little if any room for error. When we design air traffic control systems, it might not be a bad idea to stress out our test subjects in the lab to mimic the hectic environment they work in!
Whether or not it’s always feasible to consider these things, the book is a fun read. The biggest thing I got out of it was looking at Ariely’s experiments, which were set up to intentionally nudge people toward a certain decision, and asking whether I could remove similar (albeit accidental) biases from my own user tests and user research.
Something to think about.