It’s a well-worn truism of business systems that there’s no point in a report if it doesn’t change what you do. If you’re to control something, you must measure it; and if you don’t need to control something, then there’s no point in measuring it.
Even so, for nerds like me, there might be some pleasure in measurement. Analysis for its own sake.
I love statistics. I am an out-of-the-closet nerd. At school, although I was no genius at mathematics, I wasn’t bad at it either, and I took particular pleasure in statistics, the art of deriving meaning from apparent chaos (yes, and lies too, as the saying goes). Whilst I’ve forgotten most of what I knew (nowadays I don’t know my mean from my median), I still love the idea of distilling something interesting, if not useful, from vast reservoirs of data (I think the fashionable term is BIG DATA).
Over a decade ago my company LLP Group sponsored a music competition in the Czech and Slovak Republics. We traipsed around both countries with a jury of four distinguished teacher/musicians, one from each category (strings, woodwind, brass and keyboards) and we listened to nearly two hundred young musicians. We had a relatively simple scoring system that delivered a single number between zero and ten for each player, and at the end of the tour, we promoted the top four from each category to the semi-finals.
I ran the scoring system using an Access database of my own devising, and every morning, after I’d crunched the previous day’s numbers, I’d produce a report for each juror, laying bare the unconscious stirrings that influenced his or her preferences. I was particularly keen to analyse these biases – a preference for Slovaks or Czechs (nationalism is always a danger in this part of the world), a preference for their own category of instrument, and a preference for men or women (numbers I didn’t make public) – together with each juror’s average score and standard deviation. In the case of average score I had great difficulty convincing them that the lower the average, the less influence they might have on the final result. As for standard deviations, they didn’t like the sound of that at all.
Nationalism was a clear bias, unsurprisingly, but I found that each juror was biased not for, but against his or her category of instrument, as if he or she knew exactly where to find fault. As for bias towards men or women, this was fascinating, but I was careful to keep the sometimes surprising results to myself.
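If you’re curious what such a report boils down to, here is a minimal sketch in Python of the kind of per-juror summary I mean; the data layout, names and figures are invented for illustration, not the original Access database.

```python
from statistics import mean, stdev

# Hypothetical score records: (juror, competitor nationality, category, sex, score 0-10)
scores = [
    ("Juror A", "Czech",  "strings",   "F", 7.5),
    ("Juror A", "Slovak", "brass",     "M", 6.0),
    ("Juror A", "Czech",  "keyboards", "M", 8.0),
    # ... one row per juror per performance
]

def juror_report(juror, rows):
    """Summarise one juror: overall average, spread, and average by nationality.
    The same grouping works for instrument category or sex."""
    own = [r for r in rows if r[0] == juror]
    values = [score for *_, score in own]
    by_nationality = {}
    for _, nationality, _, _, score in own:
        by_nationality.setdefault(nationality, []).append(score)
    return {
        "average": mean(values),
        "std_dev": stdev(values) if len(values) > 1 else 0.0,
        "by_nationality": {n: mean(v) for n, v in by_nationality.items()},
    }

print(juror_report("Juror A", scores))
```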
The systems@work part of LLP Group produces software for professional services management, which I design, and we’ve implemented the system across the group. In idle moments I speculate on questions such as ‘Is utilisation different on different days of the week?’ and ‘Is realisation related to the length of a client engagement?’ Utilisation is a measure of how much of a consultant’s available time is spent on client-directed work. Realisation is a measure of how much client-directed work is finally billable. I’m not sure that one can do anything with this knowledge, but it’s fun, and when I looked through all the reports in our system the other day I came across some I’d idly written a year or two ago to answer these two questions (it must have been a rainy winter weekend).
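Roughly speaking, the two ratios amount to this – a toy sketch with made-up numbers, not the actual report definitions:

```python
def utilisation(client_hours: float, available_hours: float) -> float:
    """Share of a consultant's available time spent on client-directed work."""
    return client_hours / available_hours if available_hours else 0.0

def realisation(billable_hours: float, client_hours: float) -> float:
    """Share of client-directed work that is finally billable."""
    return billable_hours / client_hours if client_hours else 0.0

# e.g. 30 client-directed hours in a 40-hour week, 24 of them finally billable
print(utilisation(30, 40))   # 0.75
print(realisation(24, 30))   # 0.8
```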
Here, for example, you can see that utilisation is slightly lower on Mondays and Fridays. No surprise there, I suppose. What could one do about it? Well, maybe make Monday and Friday six-hour rather than eight-hour days, and make the midweek days longer?
And here you can see that average realisation is not significantly a function of engagement length.
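For the nerdily inclined, the day-of-the-week question reduces to a grouped average over timesheet entries, something like the sketch below (invented entries, not our real timesheets); the engagement-length question is the same idea with engagement length as the grouping key.

```python
from collections import defaultdict
from datetime import date

# Hypothetical timesheet entries: (work date, available hours, client-directed hours)
entries = [
    (date(2014, 1, 6), 8.0, 5.5),    # a Monday
    (date(2014, 1, 7), 8.0, 7.0),    # a Tuesday
    (date(2014, 1, 10), 8.0, 5.0),   # a Friday
    # ... one row per consultant per working day
]

# Average utilisation by day of the week
totals = defaultdict(lambda: [0.0, 0.0])   # weekday -> [client hours, available hours]
for work_date, available, client in entries:
    weekday = work_date.strftime("%A")
    totals[weekday][0] += client
    totals[weekday][1] += available

for weekday, (client, available) in totals.items():
    print(f"{weekday}: {client / available:.0%} utilisation")
```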
time@work can do all of this (and less!). And no doubt more. Utilisation by nationality, by sex, by age, by client sector? Average engagement length by season? Hmmm…work to do!
For me this is fascinating, but perhaps it is I who should undergo analysis.