Management and product development lessons from the 1950’s

In 1955, Elihu Katz and Paul Lazarsfeld published “Personal Influence”, a study of how small-group dynamics moderate or influence mass media messaging: for example, how people decide who to vote for, which brand of lipstick to use, or which movie to go and watch.

Reading this in 2018 it’s striking to see how much is still valid. I’m not posting this to provide tremendous new insights. Any insights here are over 60 years old. Apparently, human behaviour doesn’t change very much over the generations.

How people choose their leaders

In order to become a leader, one must share prevailing opinions and attitudes. (p52)

They cite a 1952 study on children in a day nursery in which kids with “leadership qualities” were separated from the other children who were then placed into groups of 3-6. These new groups created their own “traditions” (e.g. who sits where, group jargon, division of who plays with what objects). The original leaders were then re-introduced:

In every case, when an old leader attempted to assert authority which went contrary to a newly established “tradition” of the group, the group did not respond. Some of the leaders, as a matter of fact, never returned to power. Others, who were successful, did achieve leadership once more but only after they had completely identified with the new “tradition” and participated in it themselves. (p52)

Or another 1952 study amongst a religious interest group, a political group, a medical fraternity and a medical sorority:

[T]hose who had been chosen as leaders were much more accurate in judging group opinion … But this was so only on the matters which were relevant to the group’s interest – medicine for the medical group, politics for the political group, etc. It seems reasonable to conclude … that leaders of groups like this are chosen, in part at least, because of recognized qualities of ‘sensitivity’ to other members of the group. (p102)

A succinct argument as to why people who want to become leaders need to first spend time listening.

Group participation improves take-up

Here are some more 1952 studies that the authors cite:

  1. A study in a maternity hospital in which “some mothers were given individual instruction … and others were formed into groups of six and guided in a discussion which culminated in a [group] ‘decision’ [to follow the instruction]”. The participants in the group discussion adhered “much more closely” to the child-care programme. (pp74-75)
  2. A study comparing a lecture approach vs a group discussion on “the nutritional and patriotic justifications for the wartime food campaign to buy and serve ‘unpopular’ cuts of meat”. 3% of those involved in the lecture followed the desired course of action, vs 32% of those in the group discussion.

Worth bearing in mind in the next meeting you host, or the next corporate communication you send out.

How small groups construct their reality

So many things in the world are inaccessible to direct empirical observation that individuals must continually rely on each other for making sense out of things. (p55)

Apparently 1952 was a bumper year for social sciences. Here is another 1952 study in which individuals were asked to decide how far and in which direction a point of light was moving. The catch was that the point of light was static. The study found that:

  1. When people were shown the light individually, they would make their own judgment of how it was moving. When they were later put into small groups of 2 or 3, “[e]ach of the subjects based his first few estimates on his previously established standard, but confronted, this time, with the dissenting judgments of the others each gave way somewhat until a new, group standard became established.”
  2. If a group session came first, the group would achieve a consensus of how the light was moving, and each individual would adopt the group’s consensus as their own position.

The way reality is generated by social groups is something to bear in mind during user research activities.

How the make-up of a group affects quality of communication

You guessed it, it’s another 1952 study that found that:

  1. Rank in the group affects how people communicate. Specifically: “[P]erson-to-person messages are directed at the more popular group members and thus may be said to move upward in the hierarchy, while communication from one person to several others tends to flow down” (p89).
  2. As groups get larger (from 3 to 8) “more and more communication is directed to one member of the group, thus reducing the relative amount of interchange among all members with each other. At the same time the recipient of this increased attention begins to direct more and more of his remarks to the group as a whole, and proportionately less to specific individuals.” (pp89-90)

I’m sure these two findings will ring very true of many meetings you’ve been in. I suspect that the person who becomes the centralising leader in these communications might not even realise the role they are playing. Reading this makes me more keen to try out the kind of silent meetings approach they use at Square.


How long will this feature take?

Be clear on someone’s definition of done when trying to communicate what it will take to build a feature.

Planning software development timescales is hard. As an industry we have moved away from the detailed Gantt charts and their illusion of total clarity and control. Basecamp have recently been talking about the joys of using Hill Charts to better communicate project statuses. The folk at ProdPad have been championing, for a long time, the idea of a flexible, transparent roadmap instead of committing to timelines.

That’s all well and good if you are preaching to the choir. But if you are working in a startup and the CEO needs to make a build or buy decision for a piece of software, you need to make sure that you have some way of weighing up the likely costs and efforts of any new feature you commit to build. It’s not good enough to just prioritise requests and drop them in the backlog.

The excellent Programmer Time Translation Table is a surprisingly accurate way of interpreting developer time estimates. My own rule of thumb is similar to Anders’ project manager. I usually triple anything a developer tells me because you build everything at least 3 times: once based on the developer’s interpretation of the requirements; once to convert that into what the product owner wanted; and once into what the end users will use. But even these approaches only look at things from the developer’s point of view, based on a developer’s “definition of done”. The overall puzzle can be much bigger than that.
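The tripling rule of thumb is simple enough to express in code. This is a hypothetical sketch of my own (the function name and the phase labels are mine, not taken from the Programmer Time Translation Table):

```python
def adjusted_estimate(dev_estimate_days: float) -> float:
    """Rule of thumb: you build everything at least three times,
    so a raw developer estimate is effectively tripled."""
    builds = {
        "developer's interpretation of the requirements": dev_estimate_days,
        "converting that into what the product owner wanted": dev_estimate_days,
        "reshaping it into what the end users will actually use": dev_estimate_days,
    }
    return sum(builds.values())

print(adjusted_estimate(5))  # a "5 day" estimate becomes 15 days
```

Naming the three builds, rather than just multiplying by 3, is the point: each line item is a conversation you will have to have.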

For example, the startup CEO who is trying to figure out whether to invest in Feature X probably has a much longer-range “definition of done” than “when can we release a beta version”. For example: “when will this make an impact on my revenues” or “when will this improve my user churn rates”. Part of the CTO's job is to help make that decision from the business point of view, in addition to what seems interesting from a tech angle.

For example, consider these two answers to the same question “When will the feature be done?”.

  1. The dev team is working on it in the current iteration, assuming testing goes well it will be released in 2 weeks.
  2. The current set of requirements is on target to release in 2 weeks. We will then need to do some monitoring over the next month or two so that we can iron out any issues we spot in production and build any high-priority enhancements that the users need. After that we will need to keep working on enhancements, customer feedback and scalability/security improvements, so we should probably expect to dedicate X effort on an ongoing basis over the next year.

Two examples from my experience:

A B2B system used primarily by internal staff. It took us about 6 weeks from initial brainstorming to release the first version. Then it took about another two months to get the first person to use it live. Within 2 years it was contributing 20% of our revenues, and people couldn’t live without it.

An end user feature that we felt would differentiate us from the competition. This was pretty technically involved, so the backend work kept someone busy for a couple of months. After some user testing we realised that the UI was going to need some imaginative work to get right. Eventually it got released. Two months after release the take-up was pretty unimpressive. But 5 years later that feature was fully embedded and it seems that everyone is using it.

What is the right “definition of done” for both of these projects? Depends on who is asking. It’s as well to be clear on what definition they are using before you answer. The right answer might be in the range of months or years, not hours or weeks.

How to make 3000 look like 8000, or #marketing

See below how CarGiant, a UK-based retailer of used cars, does it.

First off, the organic search results:

[Image: Car Giant organic search results]

If you search for Car Giant, they have already anchored in your mind that they have over 8000 vehicles.

[Image: CarGiant home page]

So now you land on the home page, and note how the “over 8000” has changed to “up to 8000”. That could be a much smaller number. So let’s do a search for all vehicles up to the maximum price available.

[Image: CarGiant search results]

And there you go, just under 3000 cars. Which is definitely “under 8000” as promised, but by this time you’ve still got in your head that Car Giant is a truly giant organisation with 8000 or more vehicles to sell you.

See also https://en.wikipedia.org/wiki/Anchoring#In_negotiations


On the limits of automation

From The Economist Special Report on the future of finance, Jan 24th:

Mr Rajan of the University of Chicago says academic research suggests mortgage originators, keen to automate their procedures, stopped giving potential borrowers lengthy interviews because they could not easily quantify the firmness of someone’s handshake or the fixity of their gaze. Such things turned out to be better predictors of default than credit scores or loan-to-value-ratios …

In other words: if the devs found something difficult to deliver, they descoped it. Pretended it didn’t exist. A viewpoint that is more common than you might think: to develop software you need certainties (if x occurs then do y).

Not that this viewpoint is limited to devs. In my work with buyers I’ve seen a marked reticence to even attempt to quantify the non-price elements of the bids they are being faced with, and certainly a reticence about weighing up the non-price elements of bids against the price (e.g. is a 3-year warranty worth an extra £x per unit).

It’s almost as if there is a tendency to ignore the subjective when what we should be doing is incorporating the subjective, but accepting it as such.

Build what I do, not what I say

As developers we keep on asking people to tell us their requirements. And then when we give them something that meets those requirements, surprise, surprise, it turns out that something got lost in translation.

And yet we persist in interviewing key users and running workshops to find out how to build our systems. Even though we know that the resulting documentation is often very wide of the mark.

An article in the 17th Jan edition of The Economist, called The Price Of Prejudice, makes a strong argument that not only is what people say they do often different from what they really do, but what people think they would do is often different from what they would do in reality.

From the 2nd paragraph:

[T]he implicit association test measures how quickly people associate words describing facial characteristics with different types of faces that display these characteristics. When such characteristics are favourable – “laughter” or “joy”, for example – it often takes someone longer to match them with faces that they may, unconsciously, view unfavourably (old, if the participant is young, or non-white if he is white). This procedure thus picks up biases that the participants say they are not aware of having.

They cite three other fascinating experiments. The first two are conjoint-analysis experiments by Dr Eugene Caruso and the third is by Kerry Kawakami.

In the first, students were asked to pick team mates for a hypothetical trivia game. Potential team mates differed in their education level, IQ, previous experience with the game and their weight. When asked to rate the importance of the different characteristics, students put weight last …

However, their actual decisions revealed that no other attributes counted more heavily. In fact, they were willing to sacrifice quite a bit to have a thin team-mate. They would trade 11 IQ points – about 50% of the range of IQs available – for a colleague who was suitably slender.

In the second, students were asked to consider hypothetical job opportunities that varied in starting salary, location, holiday time and the sex of the potential boss.

When it came to salary, location and holiday, the students’ decisions matched their stated preferences. However, the boss’s sex turned out to be far more important than they said it was (this was true whether a student was male or female). In effect, they were willing to pay a 22% tax on their starting salary to have a male boss.

The last example looks at attitudes to race. In this experiment a non-black student enters a waiting room in which there is a white “student” and a black “student” (these last two are in on the experiment). The black “student” leaves the room and gently bumps the white “student” on the way out. This white “student” either ignores the bump or might say something racist about black people. The real student’s emotional state is then measured and the student is asked which of the two “students” they would pick as a partner for a subsequent test.

A second group of non-black students, rather than going into the waiting room either read a description of the proceedings or are shown a video recording of the scenario and asked to imagine how they would react.

Both those who read what had happened and those who witnessed it on television thought they would be much more upset in the cases involving racist comment than the one involving no comment at all. However, those who had actually been in the waiting room showed little distress in any of the three cases.

In addition, a majority of those imagining the encounter predicted that they would not pick the racist student as their partner. However, those who were actually present in the room showed no tendency to shun the white student, even when he had been rude about the black one.

More grist to the mill for an ethnographic approach to software development: one in which we build what people do rather than what people say they do, or say they think they might like to do; one in which the software developer spends significant time doing participant observation with the end users to really understand what she is going to build.

Inspiring Apps: Deadline

Today I love Deadline (http://www.deadlineapp.com/)

They’ve taken on one feature and implemented it really, really well.

You simply type in your calendar item using one box (e.g. lunch with john tomorrow 1pm). It then parses out the date and time and sends you email reminders.
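Deadline’s actual parser isn’t public, but the idea of pulling a date and time out of free text can be sketched with nothing more than the standard library. The function below and its rules (“tomorrow”, an am/pm time, everything else is the title) are my own illustration, not Deadline’s code:

```python
import re
from datetime import datetime, timedelta

def parse_item(text: str, now: datetime) -> tuple[str, datetime]:
    """Tiny natural-language calendar sketch: recognise 'tomorrow' and an
    'H[:MM]am/pm' time, and treat whatever is left as the item's title."""
    when = now
    if re.search(r"\btomorrow\b", text, re.I):
        when += timedelta(days=1)
        text = re.sub(r"\btomorrow\b", "", text, flags=re.I)
    hour, minute = 9, 0  # default to a 9am reminder if no time is given
    m = re.search(r"\b(\d{1,2})(?::(\d{2}))?\s*(am|pm)\b", text, re.I)
    if m:
        hour = int(m.group(1)) % 12       # '12am' -> 0, '12pm' -> 12
        minute = int(m.group(2) or 0)
        if m.group(3).lower() == "pm":
            hour += 12
        text = text[:m.start()] + text[m.end():]
    when = when.replace(hour=hour, minute=minute, second=0, microsecond=0)
    return text.strip(), when

title, when = parse_item("lunch with john tomorrow 1pm",
                         datetime(2009, 2, 1, 8, 0))
# title == "lunch with john", when == 2009-02-02 13:00
```

A real parser handles far more (“next tuesday”, “in 3 weeks”, time zones), but the core trick is the same: strip out the fragments you can interpret and keep the remainder as the human-readable label.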

The web user interface is brilliant. It really does invite you to enter your calendar items. Not sure what it is about the UI, but I think it’s something to do with the big typeface and the little flash you get as the screen updates with your new entry.

But even better – you don’t need to use the web UI at all. You can send your invites in by IM and receive updates by email. I am a big fan of apps that don’t need you to log into a website every time you want to do something. And I am a big fan of leveraging email more in apps.

Thank you, Alex Young.

Ethnography in enterprise software development

We need more ethnographers in the (enterprise) software industry. 

We would produce better software (by which I mean software that more fully achieves its intended benefits) if we started our projects off with an understanding of how people really work in their day to day lives.

Instead we start off with interviews and workshops in which we gather a view of what managers say their staff do. Or rather, what they say they think their staff do. Which is often several steps removed from what people really do. 

The only way to really understand what people do is to spend time with those people. If you were to take this kind of approach in enterprise software development you would spend a year or even 18 months figuring out how people really work, and only then would you start designing new software. But your software would be better.

On one cynical level I wonder whether this happens anyway. The first version of an enterprise system is implemented based on what key individuals have said they think the organisation needs to do. It fails to deliver its benefits. Then it is reworked and reworked over the next 18 months.

If so it would definitely be better to do some ethnography first, before implementing your new system.