As developers we keep on asking people to tell us their requirements. And then when we give them something that meets those requirements, surprise, surprise, it turns out that something got lost in translation.
And yet we persist in interviewing key users and running workshops to find out how to build our systems, even though we know that the resulting documentation is often very wide of the mark.
An article in the 17th Jan edition of The Economist, called "The Price Of Prejudice", makes a strong argument that not only is what people say they do often different from what they really do, but what people think they would do is often different from what they would do in reality.
From the 2nd paragraph:
[T]he implicit association test measures how quickly people associate words describing facial characteristics with different types of faces that display these characteristics. When such characteristics are favourable – “laughter” or “joy”, for example – it often takes someone longer to match them with faces that they may, unconsciously, view unfavourably (old, if the participant is young, or non-white if he is white). This procedure thus picks up biases that the participants say they are not aware of having.
In the first experiment, students were asked to pick team mates for a hypothetical trivia game. Potential team mates differed in their education level, IQ, previous experience with the game and their weight. When asked to rate the importance of the different characteristics, students put weight last …
However, their actual decisions revealed that no other attributes counted more heavily. In fact, they were willing to sacrifice quite a bit to have a thin team-mate. They would trade 11 IQ points – about 50% of the range of IQs available – for a colleague who was suitably slender.
In the second experiment, students were asked to consider hypothetical job opportunities that varied in starting salary, location, holiday time and the sex of the potential boss.
When it came to salary, location and holiday, the students’ decisions matched their stated preferences. However, the boss’s sex turned out to be far more important than they said it was (this was true whether a student was male or female). In effect, they were willing to pay a 22% tax on their starting salary to have a male boss.
The last example looks at attitudes to race. In this experiment a non-black student enters a waiting room in which there is a white “student” and a black “student” (these last two are in on the experiment). The black “student” leaves the room and gently bumps the white “student” on the way out. The white “student” either ignores the bump or says something racist about black people. The real student’s emotional state is then measured, and the student is asked which of the two “students” they would pick as a partner for a subsequent test.
A second group of non-black students, rather than going into the waiting room, either read a description of the proceedings or were shown a video recording of the scenario, and were asked to imagine how they would react.
Both those who read what had happened and those who witnessed it on television thought they would be much more upset in the cases involving racist comment than the one involving no comment at all. However, those who had actually been in the waiting room showed little distress in any of the three cases.
In addition, a majority of those imagining the encounter predicted that they would not pick the racist student as their partner. However, those who were actually present in the room showed no tendency to shun the white student, even when he had been rude about the black one.
More grist to the mill for an ethnographic approach to software development: one in which we build around what people actually do, rather than what they say they do or think they might like to do; one in which the software developer spends significant time doing participant observation with the end users to really understand what she is going to build.