Wyrwane z kontekstu – In defense of doing it the hard way

The March-April 2012 issue of interactions offers several interesting articles. What particularly caught my attention was In defense of doing it the hard way by Leanna Gingras, a call to avoid shortcuts when conducting professional research. I recommend the whole article (access restricted), but the most interesting part is the passage on eye tracking:

Eye tracking, in all of its popular glory, is a notoriously misapplied methodology. Eye-tracking technology monitors where and for how long people’s eyes fixate on a target. The original idea back in the day was to learn how people read and to correlate eye fixation with cognition. It was long the exclusive tool of labs with very deep pockets, but times have changed, and at UX conferences these days you can’t throw a rock without hitting an eye-tracking vendor. These vendors claim to deliver the power of the eye-tracking lab at dirt-cheap prices. Eye-tracking presentations and seminars (often given by said vendors) spring up like weeds, offering “eye-tracking 101” and “eye-tracking boot camp.” It’s not so expensive, they promise, and not so hard. Anybody can do it.

Great! What’s the catch? Well, eye tracking in UX is based on the premise that the resulting heat maps will reveal thoughts that users don’t verbalize, because they are not conscious of their attention processes. Unfortunately, the heat-map data does not actually represent the user’s mental processes. Like chocolate cake, you have to bake it before you eat it. Cognitive scientists understand this. When they use eye-tracking studies to learn how we process information, they actively take account of all relevant work, no matter the methodology or the discipline. When vendors promote eye tracking as easy and accessible, they gloss over that work, and because the heat maps look scientific, we fall for it.

It’s easy to understand why eye-tracking maps are so easily mistaken for findings. Humans intuit that data is messy, so if it looks nice, it must be analysis-ready. Unfortunately, because eye-tracking is so deceptively easy, it enables enormous fallacies in user research. It’s marvelous at proving other people wrong (“See, I told you green wouldn’t work”), proving our own points (“If the button were red, people would see it”), drawing shaky conclusions (“It’s not that people don’t want to use it, it’s that they don’t see it”) and discrediting our profession (“This isn’t so hard. Remind me again why we’re paying an expert to do this?”).

Like the other shortcuts I’ve mentioned here, eye tracking gives a dangerous amount of latitude for anybody to make their own guesses and draw their own conclusions. Eye-tracking data seems very approachable, and it looks fun to play with. However, its data is stripped of all meaning and context, and when we take it at face value, we run the risk of drawing unsubstantiated conclusions. Unfortunately, our clients may also mistake eye-tracking data for insights, and it’s our responsibility to ensure they don’t draw unsubstantiated conclusions either. Our clients (who are not trained in the fine art of considering data in a holistic context) need solid information to make solid business decisions. In supporting that need, we must ensure that our insights are rich and that they provide information our clients can trust.

Source: In defense of doing it the hard way, Leanna Gingras, Interactions, Volume 19, Number 2 (2012), pp. 74-77, ACM, USA
