How long will this feature take?

Be clear on the asker’s definition of done before trying to communicate what it will take to build a feature.

Planning software development timescales is hard. As an industry we have moved away from detailed Gantt charts and their illusion of total clarity and control. Basecamp have recently been talking about the joys of using Hill Charts to better communicate project status. The folk at ProdPad have long championed a flexible, transparent roadmap instead of committing to timelines.

That’s all well and good if you are preaching to the choir. But if you are working in a startup and the CEO needs to make a build-or-buy decision for a piece of software, you need some way of weighing up the likely costs and effort of any new feature you commit to build. It’s not good enough just to prioritise requests and drop them in the backlog.

The excellent Programmer Time Translation Table is a surprisingly accurate way of interpreting developer time estimates. My own rule of thumb is similar to that of Anders’ project manager: I usually triple anything a developer tells me, because you build everything at least three times: once based on the developer’s interpretation of the requirements; once to convert that into what the product owner wanted; and once more into what the end users will actually use. But even these approaches only look at things from the developer’s point of view, based on a developer’s “definition of done”. The overall puzzle can be much bigger than that.
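
To make the arithmetic concrete, here is a toy sketch of that tripling rule. It is purely illustrative; the function name and the two-week figure are mine, not from any real estimation tool.

```python
# Toy sketch of the tripling rule of thumb; purely illustrative.
# A developer's estimate covers one build, but a feature is effectively
# built three times before it is really done.

BUILDS = [
    "the developer's interpretation of the requirements",
    "conversion into what the product owner wanted",
    "conversion into what the end users will actually use",
]

def plan_from_dev_estimate(dev_weeks: float) -> float:
    """Triple the developer's estimate: one pass per build listed above."""
    for i, build in enumerate(BUILDS, start=1):
        print(f"Build {i} (~{dev_weeks} weeks): {build}")
    return dev_weeks * len(BUILDS)

print(f"Plan for about {plan_from_dev_estimate(2)} weeks")  # dev said 2 weeks
```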

The startup CEO who is trying to figure out whether to invest in Feature X probably has a much longer-range “definition of done” than “when can we release a beta version”: “when will this make an impact on my revenues?”, say, or “when will this improve my user churn rates?”. Part of the CTO’s job is to help make that decision from the business point of view, not just from what seems interesting from a tech angle.

For example, consider these two answers to the same question: “When will the feature be done?”

  1. The dev team is working on it in the current iteration, assuming testing goes well it will be released in 2 weeks.
  2. The current set of requirements is on target to release in 2 weeks. We will then need to do some monitoring over the next month or two so that we can iron out any issues we spot in production and build any high-priority enhancements the users need. After that we will need to keep working on enhancements, customer feedback and scalability/security improvements, so you should probably expect to dedicate X effort on an ongoing basis over the next year.

Two examples from my experience:

A B2B system used primarily by internal staff. It took us about 6 weeks from initial brainstorming to releasing the first version, and about another two months to get the first person using it live. Within 2 years it was contributing 20% of our revenues, and people couldn’t live without it.

An end-user feature that we felt would differentiate us from the competition. It was technically involved, so the backend work kept someone busy for a couple of months, and after some user testing we realised that the UI was going to need some imaginative work to get right. Eventually it was released. Two months after release the take-up was pretty unimpressive, but 5 years later the feature was fully embedded and it seemed that everyone was using it.

What is the right “definition of done” for these two projects? It depends on who is asking, so it’s as well to be clear on what definition they are using before you answer. The right answer might be in the range of months or years, not hours or weeks.

How to make 3000 look like 8000, or #marketing

See below how CarGiant, a UK-based retailer of used cars, does it.

First off, the organic search results:

[Screenshot: Car Giant organic search results]

If you search for Car Giant, they have already anchored in your mind that they have over 8000 vehicles.

[Screenshot: CarGiant home page]

So now you land on the home page. Note how the “over 8000” has become “up to 8000”, which could be a much smaller number. So let’s do a search for all vehicles up to the maximum price available.

[Screenshot: CarGiant search results]

And there you go: just under 3000 cars. That is certainly “up to 8000” as promised, but by this time you’ve still got it in your head that Car Giant is a truly giant organisation with 8000 or more vehicles to sell you.

See also https://en.wikipedia.org/wiki/Anchoring#In_negotiations


On the limits of automation

From The Economist Special Report on the future of finance, Jan 24th:

Mr Rajan of the University of Chicago says academic research suggests mortgage originators, keen to automate their procedures, stopped giving potential borrowers lengthy interviews because they could not easily quantify the firmness of someone’s handshake or the fixity of their gaze. Such things turned out to be better predictors of default than credit scores or loan-to-value ratios …

In other words: if the devs found something difficult to deliver, they descoped it. Pretended it didn’t exist. That viewpoint is more common than you might think: to develop software you need certainties (if x occurs then do y).

Not that this viewpoint is limited to devs. In my work with buyers I’ve seen a marked reluctance even to attempt to quantify the non-price elements of the bids in front of them, and certainly a reluctance to weigh those non-price elements up against the price (e.g. is a 3-year warranty worth an extra £x per unit?).

It’s almost as if there is a tendency to ignore the subjective when what we should be doing is incorporating the subjective, but accepting it as such.
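
As a thought experiment, here is a minimal sketch of what incorporating the subjective might look like for the bid-evaluation case above. Every supplier name, figure and conversion rate is an assumption invented for illustration; the point is simply that writing the value judgements down makes them visible and debatable rather than descoped.

```python
# Sketch of bid scoring that incorporates subjective, non-price elements
# explicitly. All figures below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Bid:
    supplier: str
    unit_price: float       # GBP per unit (objective)
    warranty_years: int     # objective, but needs a value judgement in GBP
    service_quality: float  # subjective evaluator score, 0-10

# Explicit, debatable conversion rates: what is a warranty year or a point
# of service quality worth per unit? These are assumptions, and saying so
# is the point: the subjectivity is accepted as such, not ignored.
WARRANTY_VALUE_PER_YEAR = 15.0   # assumed GBP per unit per warranty year
SERVICE_VALUE_PER_POINT = 8.0    # assumed GBP per unit per quality point

def effective_cost(bid: Bid) -> float:
    """Unit price adjusted by the stated value of non-price elements."""
    return (bid.unit_price
            - bid.warranty_years * WARRANTY_VALUE_PER_YEAR
            - bid.service_quality * SERVICE_VALUE_PER_POINT)

bids = [
    Bid("Supplier A", unit_price=500.0, warranty_years=1, service_quality=6.0),
    Bid("Supplier B", unit_price=540.0, warranty_years=3, service_quality=7.5),
]
for bid in sorted(bids, key=effective_cost):
    print(f"{bid.supplier}: effective cost £{effective_cost(bid):.2f}")
```

Supplier B wins here despite the higher sticker price, but only because of the two conversion rates; change them and the ranking may flip, which is exactly the conversation those numbers are meant to provoke.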

Build what I do, not what I say

As developers we keep on asking people to tell us their requirements. And then when we give them something that meets those requirements, surprise, surprise, it turns out that something got lost in translation.

And yet we persist in interviewing key users and running workshops to find out how to build our systems. Even though we know that the resulting documentation is often very wide of the mark.

An article in the 17th Jan edition of The Economist, called The Price of Prejudice, makes some strong arguments that not only is what people say they do often different from what they really do, but what people think they would do is often different from what they would actually do.

From the 2nd paragraph:

[T]he implicit association test measures how quickly people associate words describing facial characteristics with different types of faces that display these characteristics. When such characteristics are favourable – “laughter” or “joy”, for example – it often takes someone longer to match them with faces that they may, unconsciously, view unfavourably (old, if the participant is young, or non-white if he is white). This procedure thus picks up biases that the participants say they are not aware of having.

They cite three other fascinating experiments. The first two are conjoint analysis experiments by Dr Eugene Caruso and the third is by Kerry Kawakami.

In the first, students were asked to pick team mates for a hypothetical trivia game. Potential team mates differed in their education level, IQ, previous experience with the game and their weight. When asked to rate the importance of the different characteristics, students put weight last …

However, their actual decisions revealed that no other attributes counted more heavily. In fact, they were willing to sacrifice quite a bit to have a thin team-mate. They would trade 11 IQ points – about 50% of the range of IQs available – for a colleague who was suitably slender.

In the second, students were asked to consider hypothetical job opportunities that varied in starting salary, location, holiday time and the sex of the potential boss.

When it came to salary, location and holiday, the students’ decisions matched their stated preferences. However, the boss’s sex turned out to be far more important than they said it was (this was true whether a student was male or female). In effect, they were willing to pay a 22% tax on their starting salary to have a male boss.
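
For the curious, here is a toy sketch of the arithmetic behind findings like these. Conjoint analysis fits a utility weight to each attribute from participants’ actual choices; the ratio of two weights then gives an implied trade-off. The weights below are invented to reproduce the 11-IQ-point figure, not taken from Caruso’s data.

```python
# Toy conjoint-analysis arithmetic. The part-worth utilities are invented
# for illustration; in a real study they would be fitted (e.g. by
# regression) to participants' observed choices.

utility_per_iq_point = 0.04          # preference gained per IQ point
utility_per_weight_category = -0.44  # preference lost per weight category

# Implied trade-off: how many IQ points a chooser gives up to move a
# team mate one category slimmer is the ratio of the two weights.
iq_points_traded = abs(utility_per_weight_category) / utility_per_iq_point
print(f"Willing to trade about {iq_points_traded:.0f} IQ points")  # ~11
```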

The last example looks at attitudes to race. In this experiment a non-black student enters a waiting room in which there is a white “student” and a black “student” (these last two are in on the experiment). The black “student” leaves the room and gently bumps the white “student” on the way out. This white “student” either ignores the bump or might say something racist about black people. The real student’s emotional state is then measured and the student is asked which of the two “students” they would pick as a partner for a subsequent test.

A second group of non-black students, rather than going into the waiting room either read a description of the proceedings or are shown a video recording of the scenario and asked to imagine how they would react.

Both those who read what had happened and those who witnessed it on television thought they would be much more upset in the cases involving racist comment than the one involving no comment at all. However, those who had actually been in the waiting room showed little distress in any of the three cases.

In addition, a majority of those imagining the encounter predicted that they would not pick the racist student as their partner. However, those who were actually present in the room showed no tendency to shun the white student, even when he had been rude about the black one.

More grist to the mill for an ethnographic approach to software development: one in which we build what people do rather than what people say they do, or say they think they might like to do; one in which the software developer spends significant time doing participant observation with the end users to really understand what she is going to build.

Inspiring Apps: Deadline

Today I love Deadline (http://www.deadlineapp.com/).

They’ve taken on one feature and implemented it really, really well.

You simply type in your calendar item using one box (e.g. lunch with john tomorrow 1pm). It then parses out the date and time and sends you email reminders.
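
Deadline’s actual implementation isn’t public, but here is a guess at the basic mechanics: pull the date/time expression out of the free text and keep whatever remains as the event title. This sketch only understands “tomorrow” plus a simple clock time; a real parser would handle far more.

```python
# Minimal sketch of natural-language calendar parsing, in the spirit of
# "lunch with john tomorrow 1pm". Handles only "tomorrow" + a clock time.

import re
from datetime import datetime, timedelta

def parse_entry(text: str, now: datetime | None = None):
    now = now or datetime.now()
    when = now
    # "tomorrow" bumps the date by one day.
    if re.search(r"\btomorrow\b", text, re.IGNORECASE):
        when = when + timedelta(days=1)
        text = re.sub(r"\btomorrow\b", "", text, flags=re.IGNORECASE)
    # A clock time such as "1pm" or "10.30am".
    m = re.search(r"\b(\d{1,2})(?:[:.](\d{2}))?\s*(am|pm)\b", text, re.IGNORECASE)
    if m:
        hour = int(m.group(1)) % 12
        if m.group(3).lower() == "pm":
            hour += 12
        when = when.replace(hour=hour, minute=int(m.group(2) or 0),
                            second=0, microsecond=0)
        text = text[:m.start()] + text[m.end():]
    return text.strip(), when

title, when = parse_entry("lunch with john tomorrow 1pm")
print(title, "->", when)  # "lunch with john" -> tomorrow at 13:00
```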

The web user interface is brilliant. It really does invite you to enter your calendar items. Not sure what it is about the UI, but I think it’s something to do with the big typeface and the little flash you get as the screen updates with your new entry.

But even better – you don’t need to use the web UI at all. You can send your invites in by IM and receive updates by email. I am a big fan of apps that don’t need you to log into a website every time you want to do something. And I am a big fan of leveraging email more in apps.

Thank you, Alex Young.

Ethnography in enterprise software development

We need more ethnographers in the (enterprise) software industry. 

We would produce better software (by which I mean software that better achieves its intended benefits) if we started our projects off with an understanding of how people really work in their day-to-day lives.

Instead we start off with interviews and workshops in which we gather a view of what managers say their staff do. Or rather, what they say they think their staff do. Which is often several steps removed from what people really do. 

The only way to really understand what people do is to spend time with those people. If you were to take this kind of approach in enterprise software development you would spend a year or even 18 months figuring out how people really work, and only then would you start designing new software. But your software would be better.

On one cynical level I wonder whether this happens anyway. The first version of an enterprise system is implemented based on what key individuals have said they think the organisation needs to do. It fails to deliver its benefits. Then it is reworked and reworked over the next 18 months.

If so, it would definitely be better to do some ethnography first, before implementing your new system.

Collaboration tools I’m considering

We’ve been looking at various Web 2.0-ish collaboration tools that might make our life easier when working with clients on projects. I introduced my team to some of the poster children of the collaboration space:

www.dreamfactory.com

www.huddle.net

www.basecamphq.com

The feedback was interesting. DreamFactory was too sophisticated: the view was that if you wanted to go as far as DreamFactory would let you, you might as well stick with MS Project. Basecamp, on the other hand, was too simple, little better than just using basic Outlook email and Google Docs. Huddle seemed just right. In particular, its support for online editing of Word and Excel docs (using Zoho, for extra Web 2.0 brownie points) in addition to offline editing was singled out for praise.

This feedback is based on one particular way of working, so it won’t hold for everyone. However, it was surprising to me (as a Ruby on Rails believer) that Basecamp got such short shrift from people with no axe to grind. I’d welcome any views on the pros and cons of the various modern collaboration tools.