Solutions, Ideas and Creativity

I watched The Man Who Knew Infinity the other day, in which the fabulous Dev Patel (Slumdog Millionaire, The Best Exotic Marigold Hotel) didn’t play the usual fortunate dimwit, but rather the great Indian mathematician Srinivasa Ramanujan, who not only knew infinity, but could perceive complex mathematical theories and the properties of numbers almost instantly, as if he were simply looking out over a landscape and seeing objects that were just there.


There’s a moment in the film when his sponsor, the mathematician G H Hardy (played rather splendidly by Jeremy Irons) asks him where his ideas come from. Ramanujan looks puzzled. It’s a question he cannot answer. He simply ‘sees’ theories, or solutions, or patterns in mathematical or logical space. Where he has trouble is in developing painstaking proofs of his theorems so that others can be persuaded of their truth, others who lack his capacity instantly and ‘simply’ to see the correctness of a theorem.

Mozart, it is said, could simply ‘see’ a whole symphony, in all its complexity, all at once.

In fact the question ‘where do your ideas come from?’ is a misleading one. We tend to ask it when someone’s gift of solving something, or creating something, is so extreme as to seem unimaginable to us more ordinary mortals. But we’re all performing the trick of seeing solutions and ideas every minute of every day.

I was thinking about this when I was designing a system for a client in Peterborough two weeks ago. My job was to work out how time@work, our software for Professional Services Management, could meet the requirements of an engineering firm that designs control systems for factory floors. The challenge was that they already had a very sophisticated system which, for one reason or another, they needed to replace. I spent the first two days trying to understand their current methods and procedures, thinking all the time, ‘How on earth are we going to do all that?’

But time@work is a flexible system, a set of tools that you can put together in many millions of different ways, almost in the way that you might construct a tune, or a symphony, from notes. And suddenly I ‘saw’ how to fit our components together to do what they needed.

‘Where did the idea come from?’

Well, that’s a pointless question. It didn’t come from anywhere. There is no ‘process’, no algorithm you can apply to derive a solution from the facts, the requirements, from the bare bones of a problem. True, you can sometimes attempt a ‘brute force’ approach to problem solving, trying every permutation of components until, hey presto, the pieces snap together, but the ability to recognise a ‘good snap’ when you see it just moves the mystery to another place.
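
That brute-force idea can at least be sketched in code. Here’s a toy illustration (the component names and the ‘good snap’ test are entirely invented): a solver tries every permutation of components and keeps the arrangements that pass a fitness check. The check itself is the giveaway: someone still has to write it, which is where the mystery was all along.

```python
from itertools import permutations

# Hypothetical components of a configurable system -- invented for
# illustration; real systems design is nothing like this tidy.
components = ["timesheet", "approval", "billing", "reporting"]

def snaps_together(arrangement):
    # Stand-in for the mysterious act of recognising a good fit:
    # here we just demand that approval comes before billing.
    return arrangement.index("approval") < arrangement.index("billing")

solutions = [p for p in permutations(components) if snaps_together(p)]
print(len(solutions))  # 12: exactly half of the 24 orderings pass this test
```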

Finding solutions to problems in business procedures and systems isn’t easy. It doesn’t rank with ‘knowing infinity’, but it shares the same characteristics. Imagination of any kind isn’t a ‘process’, or a method. All of us, all the time, simply do things and never think to ask ‘where the idea came from’. We think up sentences, and put our ideas into words. We don’t ask ourselves ‘Where did that sentence just come from?’ Where does what we want to say ever come from? Where would we look for the process that generates sentences? And, come to that, where do recipes come from, or bicycle routes?

The great gifts that geniuses such as Ramanujan and Mozart possessed are the gifts that we all possess, with the volume turned up very high indeed. Perhaps if we could all do what they could, the supper would never get cooked, and man might soon become extinct.

I am with the great philosopher Wittgenstein on the issue of so-called ‘philosophical problems.’ We invent them, and keep our philosophers in (ill-paid) work, by asking illegitimate questions. ‘Where do ideas come from?’ Nowhere. Somewhere. Everywhere. They just happen. We simply ‘see’ solutions. Most of us.



Philosophers for Brexit

I read on the news this morning that Britain’s military establishment (or, rather, Britain’s former military establishment) has come out in favour of Brexit. Dozens of former generals have signed a letter arguing that what matters when it comes to defence of the realm is NATO not the EU.

Historians have come out for In. Actors, artists and other luvvies have come out for In. Economists have come out for both, of course, but what should we expect? It is the fashion for groups of all kinds to hold hands and write to The Times in favour of either In or Out. Where do campanologists stand? Ornithologists? Kleptomaniacs? Nymphomaniacs? Meteorologists? Numismatists? Philatelists? Dog lovers?


But another very important group has also nailed its colours to the mast today. Less well reported, but surely of greater import, is a letter in today’s edition of Mind, the journal of the British philosophical establishment, signed by members of Britain’s philosophical community (note that there is no such thing as a former philosopher, unless you mean a dead one). They have come out, albeit quietly, for Out. Entitled ‘But it doesn’t mean anything’, the letter decries the philosophical assumptions on which the EU is built.

Brian Goodlittle, Reader in Philosophical Energetics at Bradford University says, ‘I was approached by the editor of Mind and asked to sign this letter. I did so enthusiastically. I am fundamentally opposed to the continental drift of modern European philosophy. It favours meaningless nonsense so it’s not actually philosophy at all. I favour the bracing style of British Empiricism. It admits no blather, no metaphysical indecency. During the Second World War British Empiricism was one of the fiercest weapons in our intellectual arsenal. It had few uses on the front line, admittedly, but it helped us to break the Enigma code and, with the help of the Yanks, to build the Bomb, whilst the Nazis were literally dreaming up nonsense. It would be a disgrace if we gave in now to continental so-called philosophies such as phenomenology, existentialism, structuralism and other forms of poppycock. French philosophy, in particular, is a load of merde, in my opinion. It reeks of garlic and doesn’t make a single iota of sense. Let’s face it, mate, what does ‘European Union’ mean anyway?’

Another eminent philosopher, Fiona Fruitington, Professor of Radical Empiricism at Northampton University, has calculated that works of continental philosophy weigh on average four times as much as works by British philosophers. ‘As for Being and Nothingness,’ she says, ‘I would rather read a DIY manual on shelving. EU law is just the same. Voluminous, meaningless and impractical.’

British philosophy has for centuries been tethered to good old common sense. You can only understand a statement if it can be verified, Alfred Ayer told us (though he could never quite explain how this claim could itself be verified). The Austrian-born British philosopher Karl Popper turned the same idea on its head and said that something only makes sense if it can be falsified (science proceeds that way, he pointed out, rather than by verification, but he never clarified exactly how his own claim could be falsified).

The greatest of them all, my hero Ludwig Wittgenstein, said we must look at how we come to understand language and the meaning of the terms it contains. We must examine language ‘games’ in real human communities (though I don’t think he had the EU in mind). Continental nonsense, and most of what the EU has to say, he would describe as ‘language gone on holiday’, ordinary words extrapolated way beyond their safe and practical usage. The role of the philosopher, he believed, is to show the fly the way out of the bottle, the fly being the ordinary man or woman befuddled by EU terminology.

‘Ever closer union.’ What does ‘ever’ actually mean, Wittgenstein might ask. How have we come to agree, as a community of minds, on its deployment? And how could we begin to understand the many meanings of ‘sovereignty’ and ‘subsidiarity’?

I have great sympathy for philosophers, but in the end I’m not with them on this. When it comes down to it the vast majority of them don’t know how to boil an egg.


In Support of Apple


Rights are often in conflict and there’s no reliable calculus that determines which of them should prevail. That’s because rights aren’t about utilitarian calculation. They’re a fly in the ointment of the collective, an essential counterbalance to the crude maximisation of human happiness and the crude minimisation of human suffering, which, if you’re just subtracting one from the other, can justify appalling cruelty to an individual as long as the sum of happiness is great enough.

Rights are fundamental to any moral theory; without them there wouldn’t be anything to get started with. It’s rights that establish human inclusion in whatever utilitarian calculation might be begun, and limit the applicability of its result. Animals, too, have the right to moral consideration.

Where do rights come from? Legal rights come from the law, whether from a founding document such as a Bill of Rights or a Constitution, or in some legal systems, from preceding judgements. But legal rights depend in turn on moral rights, which the religious take from additional founding documents such as the Bible or the Koran, but which atheists like me, though each of us perhaps differently, take from a concept of what is essential to human life.

Human life, and language, are built on the assumption that others are like us. Descartes’ idea of the solitary consciousness doesn’t make sense because consciousness requires articulation, and a private language isn’t possible. Language and our knowledge of each other presume on our ability to know what others see and feel, and morality on our vivid understanding of others’ misfortune. We grant rights to each other on that basis, though we don’t all agree on their weight.

The battle between Apple and the FBI is about rights and it’s not a simple one. Tim Cook, I think, is unconcerned with the rights of the San Bernardino gunman (who is, in any case, dead), and I suppose he’d have no issue with assisting the FBI to obtain data from that particular phone. The issue, as I understand it, is that Apple is being asked to provide a ‘general tool’ for the hacking of iPhones anywhere and everywhere, which, in his view, infringes the rights of millions of ‘innocent’ iPhone users. It’s not about the particular, where a murderer’s rights have been forfeited, but about the general, and the potential infringement of global rights to privacy. In Apple’s judgement this right prevails over the right of the general population to the protection that might ensue from information hacked from the phone.

I support Apple in their resistance to the court’s ruling. And I support their efforts to make it impossible, even for their own engineers, to hack the iPhone. I presume that it would require some rather complex and bludgeoning law to make it a requirement that a device be always hackable. It would certainly give Government too much power.

Rights are these days under serious attack in the nation of the free. Trump wants to turn up the volume when it comes to torture – waterboarding being far too gentle for his taste. (If you want to know more about waterboarding and how awful it really is, read the late Christopher Hitchens on the subject. He tried it.)

Mexicans, apparently, will soon be building their own wall to prevent themselves from scurrying across the border (I love the spiteful twist of ‘making them pay for it’, rather as if you might insist that a murderer pay for his own electric chair). And all Muslims are to be denied entry to the country.

Hardly the land of the free.

The right to privacy ranks high in my list, but there is, I suppose, no way you can establish that it must outrank other rights. It’s a matter of opinion and needs defence, by repeated passionate assertion, rather than philosophical justification, though logic can help with the analysis of how one right might conflict with another. But there are many who rank the right to bear arms, for example, far higher than the general population’s right to safety. What seems obvious to them is anathema to me.

But, from my point of view, three cheers for Tim Cook. I hope he doesn’t end up in jail (and have to pay for the bars).




It’s more than a few years since I graduated with a degree in Philosophy and Psychology, the two subjects I chose from the ghastly PPP trio of Philosophy, Psychology and Physiology (how I wish, now, that I had chosen Economics and PPE instead). Quite why the two subjects I chose were ever put together I’ll never understand. The one was entirely theoretical, the other a hotch-potch of social observation, dubious experiment, linguistic analysis and tedious stories about rats and pigeons.

From the first subject I learnt, gloriously, to know nothing at all, and from the second subject I derived merely boredom, confusion and frustration. As it happens the two faculties were more or less at war with each other (in the particularly vicious way academics go to war), the philosophers claiming, correctly, that the psychologists didn’t know what they were talking about, the psychologists babbling on regardless.


Psychology was at its worst when it strove to be the science of human behaviour. On safer ground, such as the neutral description of animal behaviour, it was merely dull, in the way that taxonomy is dull. Of the twelve subjects on which I was examined in the summer of 1979 (a hateful two weeks that still gives me nightmares) social psychology was the most ridiculous, and by some cruel irony was the paper for which I got the highest marks.

When it confines itself to gentle observation social psychology is more or less tolerable, though English literature does the same with greater insight and wisdom. But when it attempts ‘theory’ it becomes absurd, not least when it approaches the human mind as if it is a machine. Machines are to humans what sound is to speech. We are both mind and machine, depending on how you look at us, but when we are talking ‘about’ rather than emitting sound, we are inescapably in the human world.

I remember (with some inevitable inaccuracy) a particular theory called ‘affect theory’ (it’s even dignified with an article on Wikipedia). One of the ideas of affect theory was that if you went around smiling all day, you would make yourself happier. ‘Happy’ and ‘smile’ go together. Willing undergraduates (paid?) reported mild mood swings after a day of parading a rictus grin. You might as well take an umbrella out in the hope of making rain.

I was thinking these thoughts when I read about a new film based on the famous experiments of social psychologist Stanley Milgram. Milgram began his experiments in the early 1960s during the trial of the infamous Nazi Adolf Eichmann, partly as an examination of the defence that Eichmann and millions of other Nazis ‘were simply obeying orders.’ Milgram was curious about how far ordinary volunteers (generally male, recruited from the New Haven area around Yale) would go if instructed by an ‘authority figure’ to administer electric shocks of increasing voltage and pain.

His approach was ingenious. A ‘learner’ (in reality an actor) was strapped to an electric shock machine and the naïve subject, the ‘teacher’, was required to teach him word pairs. Every time the learner made a mistake the teacher was instructed by the experimenter to administer a shock. As these grew in voltage with each mistake the learner cried out in pain, even begged the teacher to stop. Some stopped, but the majority went on.

The experiment was repeated all over the world with various minor variations, with both men and women as the unknowing subjects, with the learner sometimes hidden from view, sometimes not, with the experimenter present or instructing by telephone. By and large, the results were the same. The majority of unwitting subjects (the teachers) were obedient, even if the learner appeared to be suffering.

We studied these experiments as part of my psychology course, but fortunately we didn’t repeat them. I remember thinking, with horror and revulsion, how shameful it would be to be found to be obedient. We often ask ourselves ‘How would I have behaved?’ This experiment is a way of finding out. Indeed the experiment would now be considered unethical because of the extent to which it puts an unwitting subject through the stress of the moral mill.

But apart from imposing an unexpectedly awful experience on the real subject of the experiment, what does it tell us? Can comparisons really be drawn with the obedience of Nazi functionaries? I doubt it. The ‘teacher’ in the lab, where he may subliminally be aware that no real harm is being done, isn’t the Nazi in the concentration camp, where harm and death are obvious and permanent.

And what does it tell us about legal or moral responsibility? Does it bolster or undermine the defence of ‘just obeying orders’? In my view, it is irrelevant to the issue of responsibility.

The experiment, however, was brilliant and disturbing. I won’t see the new film about Milgram – Experimenter – but I hope that it’s honest about the huge deficiencies of social psychology.

To illustrate everything that, to my mind at least, is facile and pointless in social psychology I need only quote a paragraph from the Wikipedia article on one of Milgram’s later experiments (which, without any foundation, I take to be a fair description):

Milgram developed a technique, called the “lost letter” experiment, for measuring how helpful people are to strangers who are not present, and their attitudes toward various groups. Several sealed and stamped letters are planted in public places, addressed to various entities, such as individuals, favorable organizations like medical research institutes, and stigmatized organizations such as “Friends of the Nazi Party”. Milgram found most of the letters addressed to individuals and favorable organizations were mailed, while most of those addressed to stigmatized organizations were not.

No surprise there, I think.

Experience – Personal Perspectives

Precocious teenage Cartesians, of a nerdy philosophical bent, often suppose they can ask:

‘It’s possible, isn’t it, that what I experience as blue, you experience as red. I could never know, because I can’t get into your head and see colours the way you do, but it’s possible, isn’t it, that if I did, the world would look completely different to me than to you?’

It’s a question that makes no sense, at least to precocious teenage Cartesians who have gone on to study philosophy. We can’t get at the idea at all using the language and concepts that we learn in the world, pointing at things together and agreeing on the application of words. Even if you’re in some mysterious sense ‘experiencing’ blue as my red, we’ll point at the same things and use the words ‘red’ and ‘blue’ in complete agreement.

It’s only in abnormal cases such as colour-blindness that we can agree that things look different, and that’s because there are ways to establish that someone sees colours unusually – normal sighted and colour-blind people disagree on the application of colour words, systematically.

Words and thought won’t stretch to the idea that ‘privately’ our experiences might be different. You might as well suggest that a square might ‘privately’ look like a circle to me. What can we usefully do with such an idea? I’ll never be able to say, ‘Oh, he’s one of those people who sees blue as red, or circles as triangles.’ How could I know, and what would we do with this ‘knowledge’?

Not that private experience of all kinds is unreachable. Private experience is reachable if we can agree on it publicly. For example, we can make sense of the idea that we can see a single image ‘as’ one thing and then another, as long as we can point and explain. ‘Private experience’ makes no sense when we can’t point at anything or explain in any way.

I can see this cartoon image either as a rabbit or a duck.


‘Look, this is its bill,’ you might say, or ‘Look, they could be ears instead.’

And I’ll know when you get the point. Though I’ll never be able to ‘catch’ you seeing it one way or the other. That part is private.

I was thinking these thoughts at the recently reopened Picasso Museum in Paris on Saturday, whilst looking at some of Picasso’s early Cubist works. Through Cubism Picasso and others were trying to convey how an object is ‘really’ perceived and understood by a spectator rather than simply to ‘capture’ how it looks front-on from a particular perspective. It’s obvious, of course, that we don’t perceive objects as a camera does, all at once, with a quick snap of the shutter. Our eyes travel, we shift our point of view, and our mind constructs an understanding of an object as it might be seen from multiple dimensions. Construction going wrong is what happens when you try out LSD, I suppose (though I never have!).

Here’s how Picasso captures the reality of a man with a guitar.


Construction is perceptual, emotional, and intellectual, all at the same time. Good painters add attitude, anger, lust or love to the line and colour mix.


So, Cubism supposedly shows us how we really perceive an object, perspectives all mixed up, a consciousness of two eyes, face-on and in profile simultaneously. Perhaps the fragmented, multi-dimensional prose of Joyce’s Finnegans Wake aims to capture the ‘real’ world in the same kind of way, its prose assaulting the reader from multiple angles and levels of consciousness, all at once, as the world does. We get paintings that are hard to look at, some might say, and prose that’s impossible to read. Perhaps they both leave us with too little to do ourselves, or, when they become impenetrable, too much.

But for those still of a philosophical bent there’s a logical problem with Cubism. The painter supposedly shows us how we really perceive an object, through the medium of a painting. But a painting is an object too, which we also perceive and know in complex ways (perhaps we imagine the blank hessian at the back even while we’re gazing in rapt attention at the front). So, it’s a complicated experience. A painter conveys the experience of an object through the experience of a painting. His, and our, experience isn’t of a flat and neutral object hanging on a wall. It’s more complicated. Ultimately another’s experience is elusive, and perhaps Cubism, the more it strives, takes things too far. It can’t ever really succeed.

But, there’s truth in the idea, too, and success. Look at Picasso’s evocation of the real Cannes, the whole Cannes.


David Hockney does something similar, conveying everything, all at once, about a regular car journey he made from home to studio, combining knowledge and image into a single complete and personal experience, and finding a way to share it.


We’re condemned, as individuals, to see things from a single point of view, and must use whatever means we find to share our personal perspective. Paint, music, words, and the rest, they work up to a point.

Marvellous Nonsense – Davros, Creator of the Daleks, is Back!

‘I try not to understand,’ says the Doctor. ‘It’s called an open mind.’

This is the kind of seductive and playful nonsense that makes Doctor Who such compelling drama. It’s a clever remark, has the ring of truth about it, but doesn’t bear examination. After all, understanding is what makes us unique. If we hadn’t tried to understand we’d still be living in caves.

But that’s Doctor Who. It’s not actually proper science or serious drama, even if it feels like it. The fun lies in how plausible it’s made to sound and how seriously it’s played.

‘It’s a psychic projection, or something,’ says an underling at UNIT, as the face of Missy looms out of a TV screen. UNIT’s commander is clearly irritated by ‘something’, but entirely at ease with ‘psychic projection’.

The first episode of Doctor Who Series 9 on BBC 1 on Saturday night was as good as it’s ever been. Davros and Missy (the re-gendered, regenerated Master) are back from the dead (‘Death is for other people, dear,’ says Missy), and I’m even beginning to enjoy Peter Capaldi in the role of the Doctor. What more might we ask for?

Certainly not logic. Doctor Who plays a game with us. For the nerds who know the twist, turn and dialogue of every episode it weaves an apparently consistent fabric out of everything that’s ever happened and everything the Time Lords have ever said about themselves. It’s a complete and coherent account of their planet, their history and their nature. Whatever happens there’s always an apparently plausible reason that makes sense within the structures and assumptions of the show. After all, we want it to make sense. We want it to be real. That’s drama.


A mixture of solemn moralising, fanciful sci-fi, heroism, pathos and horror, in fact Doctor Who has it all. What it lacks, of course, is real logic. The logic of time travel is one of Steven Moffat’s specialities (remember the brilliant and prize-winning Blink from 2007), and he manages it with incomparably greater brilliance and humour than Back to the Future ever did. But whilst, again in this episode, he’s having fun with the future causing the past, he’s also having fun ignoring the more basic and inconvenient logical problems of time travel (can you imagine, I actually studied some of the logical issues of time travel during my philosophy course at Oxford?).

The central fact of Saturday’s episode is that Davros is dying, but if you’re a time traveller with no particular commitment to any particular time Davros is always dying, and always being born. Why the call from the future or the past should come ‘now’ as opposed to ‘then’ is anyone’s guess. But who cares? The fun is in half believing it makes sense.

Another problem. ‘Where is the Doctor?’ doesn’t make sense, either. The Doctor belongs nowhere at any particular time, or may be in more than one place at once if he’s wherever he is ‘at this moment’ more than once in his own timeline. Indeed, we’ve seen him in one place more than once.

But it’s churlish to find fault. Half-logic, half-plausibility, half-science, half-seriousness is the point of it, and I’ve suspended my disbelief every Saturday since the 1960s when the Doctor first appeared.

There’s half-morality too. Hackneyed though they are, the moral dilemmas these stories raise seem real enough at the time. The troubled figure of the Doctor (is he really a good Time Lord?) wrestles with moral choice as any realistic hero might, and in Doctor Who there isn’t always a good choice. (Star Trek‘s Captain Kirk, by contrast, possesses a moral compass (an American one, obviously) that never fails to point him in the right direction.)

‘If someone who knew the future pointed out a child to you and told you that that child would grow up totally evil, to be a ruthless dictator who would destroy millions of lives, could you then kill that child?’

A flashback to Tom Baker’s Doctor of the 1970s illustrates the dilemma the current Doctor faces in this first episode. Should he rescue Davros the child, trapped in a ‘hand mine’ field, knowing, as he does, what Davros, inventor of the Daleks, will become? And does he rescue him, or abandon him? This first episode leaves us guessing.

I suspect we’re in for a morally subtle logical twist. It will be because the Doctor abandons him that Davros becomes the monster he becomes.

Such (fanciful) moral dilemmas (morality on holiday, as Wittgenstein might say) remind me of a 1975 science-fiction story Let’s Go to Golgotha, which describes a group of tourists time-travelling back to that moment when Pontius Pilate asks the crowd to choose between Christ and Barabbas. Cautioned about changing history and against standing out from the crowd, they chant ‘Barabbas’, only to realise that everyone in the crowd is a time tourist like them.

This first episode of the new series also shows Peter Capaldi growing into the role, or is it that we’re getting used to him? Anguish is what he does best, and in this new series they’re piling it on. I can’t wait for next Saturday.

What connects the philosopher Ludwig Wittgenstein to the Sydney Harbour Bridge?

His mother (nominally).

I’m grateful to my colleague and business partner Jiri, who saw my reference to the philosopher Ludwig Wittgenstein last week in a post on Science and the Mind. Ludwig, at various times an engineer, philosopher, clarinettist, soldier, architect, and, during the Second World War, medical orderly, was the son of one of the richest steel magnates of Central and Eastern Europe.

Karl Wittgenstein’s Vienna-based empire extended even to Kladno, just outside Prague, where, in 1889, he set up a world-famous steel mill, naming it the Poldi Works after his wife, Leopoldine.

It was at this mill that crucial components were manufactured in the late 1920s for the Sydney Harbour Bridge (which I can see from where I am writing this).

Ludwig inherited billions, but gave all of it away to his sister Margaret (who was painted by Klimt) and to his brother Paul (who lost a hand in the First World War and for whom Ravel wrote the Piano Concerto for the Left Hand), preferring a solitary, thoughtful existence in a cottage in Ireland and a hut in Norway. He was famously difficult company.

Never mind, he was the greatest philosopher of them all.

Sydney Harbour Bridge


The Poldi Steelworks in Kladno, near Prague.


Poldi Steelworks logo


Ludwig Wittgenstein


Can Science Help You to Find Good Sales Staff?

Ever since I studied Psychology (and Philosophy) at university I have loathed and distrusted the scientific study of human behaviour. My course stretched all the way from animal behaviour to social psychology.

The behaviour of animals (at least some basic antics of rats and pigeons) can be fairly accurately observed and described, sometimes even usefully predicted, but it wasn’t remotely interesting. Social psychology, on the other hand, amounted, in my opinion, to nothing more than common sense written down. Human behaviour, to my mind, is more expertly and interestingly covered in literature (which someone once described as simply ‘gossip written down’).

My favourite philosophers, of the Wittgenstein school, taught that ordinary linguistic descriptions of human behaviour and the scientific approach are mutually incompatible, and I still believe that.


So I have an immediate distrust of ‘objective’ ways of arriving at judgements about people. And for that reason, for many years, I resisted ‘objective’ ways of discovering if someone is, say, a good salesman, or a good administrator, or a good consultant. In other words, I hated aptitude tests. True, I have sometimes relied on ‘IQ’ tests to determine if someone might make a good programmer. These are narrow logical tests and I would hesitate to say they measure ‘intelligence’. They measure IQ, and IQ, narrow as it is, is useful for a programmer.

In the early days of the company, I often had to find programmers and consultants, and, relying on personal judgement and IQ tests, I wasn’t so bad at it. I was a programmer myself, after all. But finding a good salesman was hard. And it’s hard anywhere and everywhere. Sales skills are broad and complex. You interview, you take up references, you choose and then very often they fail. You feel a fool, especially when everyone else tells you their failure was obvious from the start.

Frustrated by failure, I was finally persuaded, in South Africa, by someone who runs a company very much like LLP, to use aptitude tests to find good sales staff. And so I tried. In fact I tried the tests produced by the very same company they recommended. They are global and charge surprisingly high rates for their methods and services, so lots of people must believe in them.

Their tests, as far as I can see, come at the issue from all sorts of angles. No ‘logical’ questions such as in the IQ tests, but rather, questions about personal preferences and attitudes. Fifty questions and you’ve pinned your man or woman down – salesperson or not.

So I tried the method on the next set of candidates who presented themselves as ‘sales people’. And, surprisingly, I found the results encouraging. Those who were obviously unsuited did poorly, and of those who looked promising, some did well and some did not. The tests seemed to find out which of these apparently promising candidates was really suitable.

So, I nearly signed up for the service, accepting that it would be expensive (but less expensive than failing salespeople).

And then I thought, hang on a moment, why don’t I try the test on the salespeople and general managers we already have? I know which of these is exceptional and which are merely good, or not good at all, at sales. So I did. Our best salesman scored poorly, our mediocre salespeople scored well.
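
That sanity check amounts to validating a predictor against known outcomes, and it is easy to sketch (the names, scores and rankings below are all invented for illustration): score the staff whose real performance you already know, and see whether the test’s ranking agrees with reality.

```python
# A minimal sketch of the sanity check described above: run the aptitude
# test on people whose real performance is already known, then compare
# the test's ranking with the true one. All data here is invented.
known_staff = {
    "best salesman":   {"actual_rank": 1, "test_score": 42},
    "mediocre seller": {"actual_rank": 2, "test_score": 78},
    "weak seller":     {"actual_rank": 3, "test_score": 71},
}

by_test = sorted(known_staff, key=lambda n: -known_staff[n]["test_score"])
by_fact = sorted(known_staff, key=lambda n: known_staff[n]["actual_rank"])

# Fraction of positions where the two rankings agree.
agreement = sum(a == b for a, b in zip(by_test, by_fact)) / len(known_staff)
print(agreement)  # 0.0 -- the test places nobody where reality does
```

If a test can’t reproduce a ranking you already know to be true, there’s little reason to trust it on candidates you know nothing about.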

Aptitude tests are hopeless when it comes to something important. There is no ‘science’ you can substitute for good judgement. Employing a salesperson is almost as difficult as choosing a spouse, but you get a little better at it as you gain in experience. Don’t be fooled by scientific nonsense.