On Nick Carr, Stupid, Google, Motor Cars and Cowpaths in the Brain

Nick Carr’s article “Is Google Making Us Stupid?” certainly provoked some brouhaha across the global interwebs. I read a lot of the commentary before the article itself, so I was surprised, on finally reading it, to find that I really enjoyed it – even though the question in the title was obviously intended to rile people.

How about asking: “Did the motor car make us lazy?”

You know what – the motor car probably did make some people lazy. And at the same time, for others, it opened up more possibilities than ever before.

Overall it’s not a very useful question. Nor is “Is Google making us stupid?” It may be a good soundbite, but it’s not a useful question.

The question is really asking two different things:
(a) Can use of Google cause a fundamental change in the brain?
(b) If so, is this a change for the worse?

There seems to be a lot of agreement that (a) is possible. Nick Carr references some examples to demonstrate that the mind re-wires itself according to how it is used: “… readers of ideograms, such as Chinese, develop a mental circuitry for reading that is very different from the circuitry found in those of us whose written language employs an alphabet”.

Here’s another example that suggests that the brain not only re-wires itself, but changes physical shape according to how it is used: Apparently the brains of London taxi drivers have been seen to physically change in response to their work.  

A study of London cabbies used brain scanners to show that a part of the brain linked with navigational skills is bigger in taxi drivers than in other members of the public. The scientists also found that the size of the hippocampus – which lies deep within the temporal lobes of the brain just behind the eyes – gets bigger in proportion to a taxi driver’s length of service.

Sounds strange when you first hear it, but it starts making sense in a “paving the cowpaths in your brain” sense. (If you have no idea what I’m talking about then check http://www.classy.dk/log/archive/001522.html, and especially the link from there to http://www.peterme.com/archives/000073.html.)

So I buy part (a) – using Google (i.e. the internet) can change the way my brain operates.

But (b) is far less clear. Assuming this kind of brain re-wiring has happened, is it “bad” or “good”? Or “neither”? Or “both”? Look back at the motor car again. Was it able to effect changes in how people physically interact with the world? Yes. They were able to drive rather than walk. Was this change “bad” or “good”? How bad is it to be a bit “lazier” because you can drive rather than walk?

Nick seems to assume that “stupid” equates to skim reading rather than deep reading. I’m not so sure that intelligence is as simple as this. I have known plenty of people far smarter than me who spent their whole university lives skim reading books and yet apparently understood them.

Does Google encourage us to skim read? Possibly yes. But possibly, just possibly, faster and easier access to information might even help people reach a better understanding than was possible before. Just like the motor car enabled us to get from A to B more easily than ever before.

Common sense procurement tools: Pro Purchaser

Worth checking out for procurement professionals is Pro Purchaser, a Canadian provider of market information. What I like about the service these guys provide is that it is all about common sense.

At first sight the site seems incredibly basic. It only has monthly updates of pricing information, for example. But the real value is in the combination of pricing information and “Negotiating Nuggets” – reports that Pro Purchaser produce on various aspects of negotiating.

For example, if you are wondering how useful monthly prices are going to be to you as a buyer, here’s a great Negotiating Nugget from Pro Purchaser called Direction matters more than accuracy.

Thinking out loud on Free and Open Source

I was having a barbecue at a neighbour’s over the weekend and he showed me his new iPhone 3G. In particular he really enjoyed showing the Phone Saber application he had installed. In his view, the Phone Saber – a free application – was written by people with too much time on their hands, and he was delighted to have such a gadget for free. Obviously he would have thought twice if he had needed to part with ££ for it.

Which then got me thinking about the motivations behind the Phone Saber, the because effect, Whimsley’s great post on Linux growing up and getting a job, the sadly lamented Fake Steve’s tirades on open source, open source hardware, and how much quality you should expect from free stuff.

[Aside: I know there’s a distinction between Free and Open Source. But from the point of view of someone using an application it’s the Free-ness or not that matters more than semantics about who can see the source code. In this regard my friend and his Phone Saber are the same as me and Linux/MySQL. Will I ever look into the source code of either of these? No. So for the purposes of this post it’s the free-ness that I’m most interested in, not the open-ness.]

The Because Effect: Make money because of something rather than with it. Google gives search away for free and makes its money from advertising. Google makes money because of search rather than with it.

Whimsley on Linux Growing Up: I’m a sucker for a good analogy and Whimsley compares Open Source to Making Music:

To invoke another parallel, open source software is a creative venture like music. Many people create music for all kinds of reasons. Most people create music for an audience of one when they hum in the kitchen or sing in the shower. A smaller (but still huge) number of people get together to form groups or participate in orchestras or bands. They don’t earn a living from it, but they love doing it and enjoy their performances. Some might dream of hitting the big time, others are happy being part of their community. Then a much smaller set of people take it a step further. Maybe they are paid to be in an orchestra; maybe they are in a band with a manager and bar gigs around the country. And a lucky few, of course, hit the big time. They get a record deal, find a big audience, and make some real money.

Fake Steve on Open Source:

Also worth noting: While open-source is great in many ways, remember that the single biggest tech phenomenon of the past decade has been an entirely closed and proprietary system which was launched in 2001 (two years after Red Hat had already gone public) and which last quarter produced $4.8 billion in sales. It’s called iTunes and iPod. Have you heard of it?

Open Source Hardware: In case you were thinking that open source only works for software (and so the iPod would have to be closed source), think again: Here is MFGx.com on Open Source Machine Tools.

How much quality should you expect from free? A lot, it seems, in 2008. Here are some links from earlier this year I collected when everyone seemed to be complaining about the quality of free services:

http://jeffnolan.com/wp/2008/01/17/fixed-gmail/ (Gmail)

http://www.techcrunch.com/2008/01/17/blogger-suffers-major-outage-bloggers-not-happy/ (Blogger)

http://www.techcrunch.com/2008/01/15/twitter-fails-macworld-keynote-test/ (Twitter)

So back to my friend and his Phone Saber. This is made by TheMacBox, two students who don’t charge for their software: “We don’t believe in charging for software (we’d rather everyone was able to use it) but it does cost money to run this site, and uses a lot of our time. If you’d like to help us out, there’s a donate button in the bar at the right side of the page.”

Last time I was at PizzaOnRails I had a good chat with some developers about open source. The upshot was that developers will do open source if they are out of a job and need to demonstrate their skills to potential employers. Obviously there are also those who do it because their company sponsors them to do open source work, and there are those who have an insane drive to spend all their waking hours producing code. But I wouldn’t discount the keen amateurs (Whimsley’s bedroom musicians). I’d put TheMacBox guys into this camp – even if they would appear to be on the brink of “making it big”.

So, to attempt to pull all these strands together:

The because effect is important in understanding commercial behaviour in the world of free. But the because effect is more subtle when it comes to developing free or open source software. Rather than generating revenues with or because of the code, bedroom-musician developers are producing code in order to make money in the future, off the back of the coding skills they have demonstrated in the free software. Arguably that is “because of” their free software, but someone producing a portfolio of work in the hope of landing a job in the future feels like a different proposition entirely to a going concern that gives away one thing in order to make money because of it.

And back to Fake Steve, who so often hit the nail right on the head: don’t discount the importance of closed-source, paid-for technology – even when it’s not an SAP-level complex beast but the sort of thing that 37 Signals are famous for.

And while Free might be great for “me-too” applications, Fake Steve has a point about new stuff. You can make plenty of open-sourced hardware, but I doubt whether the first car could have been made on an open source basis. (Perhaps it was… I’d love to be corrected on this.)

Like I said, this is just Thinking Out Loud on Free and Open Source. Trying to figure out where they fit in the grand scheme of things.

So for now I’ll leave the last word to Mashable’s recent post on “This Entitlement of Free Needs To Go Away”.

Gartner Magic Quadrant for Sourcing Application Suites – A Reaction

See this post on Spend Matters for the story on Gartner’s 2008 Magic Quadrant for sourcing software: http://www.spendmatters.com/index.cfm/2008/7/15/A-Free-Look-at-a-Gartners-Sourcing-Magic-Quadrant. My reaction was a bit too lengthy for a comment on Jason’s post, so here it is, below:

First a disclosure: I lead product development for TradingPartners. TradingPartners provides eAuction services, very similar to what FreeMarkets pioneered all those years ago with their “Full Source” offering. We don’t sell software licenses, let alone software suites, so we wouldn’t fall into Gartner’s analysis, but we are considered competitors with a number of the companies mentioned in the Gartner Sourcing Magic Quadrant report. You can make up your own mind as to what degree the comments below are self-serving or not.

I’m not keen on the name of the report. Despite all disclaimers, the report is titled “Magic Quadrant”, which implies that there is one “magic quadrant” that buyers should look at, i.e. the top right. And when I look at the vendors in the graphic, the relative ranking seems fairly arbitrary. Certainly it’s not clear from the report why Quadrem should be better able to execute than BravoSolution.

The best part was the overall 10,000ft view of what is happening in the market. In particular:

  1. The distinction between strategic sourcing and tactical sourcing. “Organizations should expect to eventually deploy two separate sourcing solutions or two configurations of a single solution: one for tactical sourcing (for example, querying a contract fuel vendor for this week’s price per liter) and one for strategic sourcing (such as simultaneously negotiating rental car contracts across multiple vendors for service for the next three years and in 10 countries)”.
  2. The summary of the consolidation in the sourcing software space (gone are Freemarkets, B2eMarkets, Frictionless, Mindflow, Procuri, Verticalnet).
  3. The recognition that wrap-around services are of paramount importance in strategic sourcing initiatives. “[E]ffectively leveraging different auction/event types for the best results requires a knowledge that can be gained only by using strategic sourcing applications. Furthermore, enabling suppliers to register online and providing customer service to troubleshoot their issues requires a significant effort that a procurement group will not be able to support without advanced planning and incremental staffing”.
  4. The recognition that strategic sourcing tools don’t require ERP integration. “They function nicely as standalone tools, because the trigger to commence a strategic sourcing event is the initiation of a project, and prospective vendors do not need to be in the vendor master unless they win the bid. The output of an event tends to be a contract. The unstructured nature of strategic sourcing lends itself to solutions that are architected as project management and document repository tools”. Here Gartner calls strategic sourcing unstructured. I would prefer to call it BRP (in contrast to ERP).
  5. The recognition that, in reality, buyers are still sticking to Excel rather than fully automating the sourcing process. “Requirements should be specified in the sourcing tool at the line-item level to fully evaluate and document the resulting bids using the application; however, in practice, many companies simply attach the specifications and record the resulting proposals at the header level, and analyze the results offline.”

Some parts of the report I would take with a pinch of salt:

  1. Including forward auctions in the debate. They are a red herring. Sure, from a technology point of view they are similar to reverse auctions, but in practical business terms they are of little relevance to most buyers.
  2. Cautioning that some vendors’ software is buggy. Without any meaty supporting arguments, I’d assume all the software is equally buggy in this regard.
  3. Come to think of it, a lot of the “strengths/weaknesses” seem cursory. E.g. Ariba is praised because it “offers varying scales of its sourcing product so customers can consume functionality as gradually as desired”. And Ariba is criticised because its “customers tend to use sourcing to solicit bids from local vendors”. 

Seeing the similarities rather than the differences

I love learning from juxtapositions of things you wouldn’t normally think have much in common. So I enjoyed reading this post on PM Hut entitled: Manage your project like attorneys manage matters. Who’d a thunk there was anything similar between developing a new software product and preparing a legal case? But it turns out that there are plenty of similarities between the two, if you approach the subject with an open mind, in terms of:

  • Planning
  • Setting expectations
  • Clarifying communications
  • Identifying costs and risks
  • Preparing effective documents

I have worked with lawyers who were good at the above, and some who were bad. Same as with software teams.

Enrich, Simplify

The Gaping Void cartoons are often really spot on, and here’s one of my favourites.

Enrich, Simplify (c) Hugh MacLeod

It neatly sums up something I’ve struggled with in my life developing software products over the past 10+ years. Programmers will prefer to continue building software (development) rather than slowing down and seeing how people use that software in the real world (support). So you often see a tendency to continue adding new features one on top of the other. Something that was once good gradually becomes more complex and brittle over time.

Where I am now I try to develop products along the lines of the Hugh cartoon. Deliver software in small chunks. Speed up and slow down delivery so that you have time to see how people use your latest code before you race off down the next avenue. Focus on the pieces that people are interested in. Spend as much time stripping stuff out as piling new stuff on, depending on what the user base really uses. Something that is only really practical in the On Demand/SaaS/ASP/whatever world rather than the behind-the-firewall, expensive-customised-software world.

Not a million miles away from the approach Mitch Free talks about in this interview with Jason Busch today (and what prompted this post).

Local Maxima and Technology Adoption

I generally have faith in the innate smarts of my fellow human beings. And so I tend to subscribe to the view that software can make things better – from recording accounting transactions more accurately all the way through to bringing people closer together. And that people are not slow to adopt a new technology if it makes their lives easier, or restores a childlike wonder to their worlds. I don’t believe in “change management”: if you set out on a software project in the full knowledge that you are going to need a lot of change management then you are setting yourself up to fail (*).

But I also like the idea of “local maxima”, which I have seen referenced in Genetic Algorithms. From the Genetic Algorithms page on Wikipedia:

[A Genetic Algorithm] may have a tendency to converge towards local optima or even arbitrary points rather than the global optimum of the problem. This means that it does not “know how” to sacrifice short-term fitness to gain longer-term fitness. The likelihood of this occurring depends on the shape of the fitness landscape: certain problems may provide an easy ascent towards a global optimum, others may make it easier for the function to find the local optima. This problem may be alleviated by using a different fitness function, increasing the rate of mutation, or by using selection techniques that maintain a diverse population of solutions, although the No Free Lunch theorem proves that there is no general solution to this problem.
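
To make that concrete, here is a toy sketch of a genetic algorithm getting stuck on a local maximum. It’s my own illustration, not from the Wikipedia article: the fitness landscape, the population size and the mutation settings are all invented for the purpose. The population starts near a small peak, and only a high mutation rate gives it a realistic chance of jumping across the valley to the bigger peak.

  import math
  import random

  # Invented fitness landscape: a small peak of height ~4 near x = 2
  # and the global peak of height ~9 near x = 8, with a deep valley
  # in between.
  def fitness(x):
      return 4 * math.exp(-(x - 2) ** 2) + 9 * math.exp(-((x - 8) ** 2) / 2)

  def evolve(pop_size=30, generations=200, mutation_rate=0.05):
      # Start the whole population on the slopes of the small peak.
      population = [random.uniform(0, 4) for _ in range(pop_size)]
      for _ in range(generations):
          # Selection: keep the fitter half as parents.
          population.sort(key=fitness, reverse=True)
          parents = population[: pop_size // 2]
          # Reproduction: each parent yields one child which, with
          # probability mutation_rate, takes a random jump. With few
          # jumps the population only ever hill-climbs the nearest
          # peak; with many, a child eventually lands across the valley.
          children = []
          for p in parents:
              child = p + random.gauss(0, 2) if random.random() < mutation_rate else p
              children.append(min(max(child, 0.0), 10.0))
          population = parents + children
      return max(population, key=fitness)

  print(evolve(mutation_rate=0.01))  # usually stuck near x = 2 (the local maximum)
  print(evolve(mutation_rate=0.5))   # far more likely to end up near x = 8

The low-mutation run tends to converge near x = 2: the algorithm never “knows how” to sacrifice short-term fitness to gain longer-term fitness, exactly as the quote says.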


Here is Guns, Germs and Steel on QWERTY keyboards – a great example of a suboptimal technology that only the churlish would seriously challenge these days.

Unbelievable as it may now sound, [the QWERTY keyboard] was designed in 1873 as a feat of anti-engineering. It employs a whole series of perverse tricks designed to force typists to type as slowly as possible, such as scattering the commonest letters over all the keyboard rows and concentrating them on the left side (where right-handed people have to use their weaker hand). The reason behind all of those seemingly counterproductive features is that the typewriters of 1873 jammed if adjacent keys were struck in quick succession, so that manufacturers had to slow down typists. When improvements in typewriters eliminated the problem of jamming, trials in 1932 with an efficiently laid-out keyboard showed that it would let us double our typing speed and reduce our typing effort by 95 percent. But QWERTY keyboards were solidly entrenched by then.

What a great example of a local maximum in technology adoption.


(*) If this sounds naive, it’s because my approach is a bit different: you need to do all your change management before you start your project, not when you are about to implement it.