Luck: What it means in sport, life...and UX

By Rob Gillham

In an entertaining and thought-provoking talk at the London School of Economics on Monday night, writer and former England cricketer Ed Smith made a compelling case for embracing luck in our lives.

The title of the talk – one of a series of public events at LSE – was ‘Luck: what it means and why it matters – in sport and life’. Admittedly, I attended this lecture primarily as a cricket lover, yet I found myself musing on the lessons for user experience (UX) researchers in Ed’s stories about the ever more strenuous – and ultimately futile – attempts of sports coaches to eliminate chance from sport.

His argument is that people engaged in reducing risk eventually find themselves at the boundary of what we know to be controllable. At that point, despite all available evidence, human beings continue to try to take control of events that are effectively random.

Whilst he made the case for greater acceptance of the influence of randomness in our lives, I was struck by how his examples of attempts to control sporting situations paralleled mistakes I have often seen researchers make when drawing inferences from customer data.

Randomness and UX research

Confirmation bias – a tendency to favour information that confirms things we already believe to be true.

  • In sports, pundits and coaches often single out examples of behaviour that they already ‘know’ to be the strengths or weaknesses of a player – to the exclusion of all other information.
  • In user research, observers who have strong opinions about the audience or the design will often latch on to isolated incidents in one research interview, or consistently rate the opinions of one person above others in a group research format.

Fundamental Attribution Error – the tendency to over-rate personal agency and personality when explaining the behaviour of others, while under-valuing situational explanations.

  • Sports analysts often assign too much significance, after the fact, to events such as the manager’s half-time talk when explaining a turnaround in a team’s second-half performance, when – if the team is the superior side – regression toward the mean suggests it would most likely have prevailed regardless.
  • When designing for a certain audience, inexperienced project teams tend to overestimate their ability to produce particular behaviours by changing the location of buttons and making other superficial tweaks – without considering more fundamental influences on a person’s likelihood of using a product. For example, if I am on a train, a mobile app presenting the most useful tasks for that context is preferable to a full desktop version of the same tool, however attractively presented.

Looking for patterns that don’t exist – trying to get your analytical data to address a question which it is not suited to answer, or simply asking the wrong question.

  • In professional sports, video analysis is used to break down every aspect of a player’s performance. Yet that analysis commonly focuses on the wrong parts of it. A batsman in cricket who plays and misses at a delivery is still in, yet if the edge of their bat ‘nicks’ the ball – and it is caught by a fielder – they are out. Batsmen will spend hours reviewing video footage of their dismissals, observing their technique as they nicked the ball. Paradoxically, since the point of batting is to hit the ball, these ‘nicks’ are actually somewhat better shots than the plays-and-misses, which are often completely ignored in analysis.
  • In user experience design, many powerful tools exist to help analyse user behaviour and identify problems – remote testing tools, eyetracking and web analytics among them. Business stakeholders often prefer the quantitative output of these tools to qualitative user research because it offers a superficial veneer of statistical validity. Yet the output is only as good as the person operating the tool. You cannot simply switch on an eyetracker during research and automatically expect insights into user behaviour – it is far too sensitive to tiny fluctuations in the research environment. The whole research session, and its objectives, must be designed exclusively to optimise the use of eyetracking technology; otherwise the data produced is at best useless, and at worst misleading.
  • (Harry Brignull gave a memorable presentation a couple of years back comparing poor analysis of eyetracking data to spotting animal shapes in cloud formations – that is, to imposing meaningful patterns on essentially random data.)

Ed Smith was promoting his recently published book ‘Luck: What It Means And Why It Matters’. There’s no UX in there, but it’s provocative, well researched and intelligently written. I’d recommend it to anyone with an interest in risk, complexity and human beings’ often misguided attempts to control them (plus it contains several good anecdotes for the cricket fan).

What do you think?