Go – Winning at Last?

I’ve always been sceptical about Artificial Intelligence (AI). It seems to me that ‘artificial’ and ‘intelligence’ don’t really go together, unless intelligence is rather narrowly defined as, for example, the ability to consider a vast number of logical alternatives very rapidly indeed. Machines can do that well, and over the last few decades sheer number-crunching (with, I acknowledge, some bells and whistles) has caught up with, and overtaken, human ability at games such as chess (where the ability to plan many moves ahead is of crucial importance). Processor speed is what counts there, and processors have got faster and faster. The human brain, though, is not merely a processor, and number-crunching isn’t the name of every game.


It’s interesting, therefore, to read that Google’s DeepMind division has come up with a strategy and an algorithm for the ancient Chinese game of Go that can now defeat the best human players. The problem is that Go, though a simpler game than chess in terms of its rules, has many more branching possibilities. There are more arrangements of little black and white counters on the board than there are atoms in the universe (I wonder how they know that!). Even today’s fastest computer processors simply can’t consider the branching possibilities fast enough to plot the right move, the one that brings the highest chance of winning (perhaps quantum computers will eventually be capable of winning with this crude approach). Apparently, Go is a game in which it’s hard to know who’s winning – the tables can turn at the last possible moment in a cascade of black to white or vice versa.
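
For the curious, the arithmetic behind that sort of claim is easy to reproduce. Here’s a rough back-of-the-envelope sketch in Python, using the usual ballpark figures for branching factors and game lengths rather than exact counts:

```python
# Back-of-the-envelope arithmetic, using commonly quoted approximations
# (roughly 35 moves x 80 plies for chess, 250 moves x 150 plies for Go,
# and ~10^80 atoms in the observable universe). Ballpark only.
board_arrangements = 3 ** 361       # each of 361 points: empty, black or white
chess_game_tree = 35 ** 80
go_game_tree = 250 ** 150

def order(n):
    """Order of magnitude of a positive integer."""
    return len(str(n)) - 1

print(f"Go board arrangements ~ 10^{order(board_arrangements)}")   # ~10^172
print(f"chess game tree       ~ 10^{order(chess_game_tree)}")      # ~10^123
print(f"Go game tree          ~ 10^{order(go_game_tree)}")         # ~10^359
print("atoms in the observable universe ~ 10^80")
```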

DeepMind’s strategies involve an interestingly empirical approach, combined with the more traditional number-crunching one. The algorithm looks at the overall pattern on the board and compares it with a catalogue of patterns in other games and the resulting win or loss. And the more it plays (and, of course, it can play itself in virtual space millions of times a day) the more it learns about which overall patterns are successful and which are not. At a certain point, the algorithm switches to, or also uses, the classic ‘let’s consider all possible outcomes’ approach.
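
That, at least, is the shape of it as I understand it. DeepMind’s real system (AlphaGo) uses deep neural networks and Monte Carlo tree search, so what follows is only a toy sketch of the general idea – an empirically learned judgement of whole-board patterns, consulted by an otherwise classic lookahead. The game object and its pattern, legal_moves, play and is_over methods are invented scaffolding for illustration, not anything from DeepMind’s code.

```python
from collections import defaultdict

class PatternBook:
    """Empirical memory: for each whole-board pattern seen in (self-)play,
    how often did it eventually lead to a win?"""
    def __init__(self):
        self.wins = defaultdict(int)
        self.visits = defaultdict(int)

    def record(self, pattern, won):
        self.visits[pattern] += 1
        if won:
            self.wins[pattern] += 1

    def win_rate(self, pattern):
        if self.visits[pattern] == 0:
            return 0.5                      # never seen: assume a coin toss
        return self.wins[pattern] / self.visits[pattern]

def best_move(game, state, book, depth=2):
    """Classic lookahead to a small depth, but the leaves are judged by the
    empirical win rate of their pattern rather than by searching to the end."""
    def value(s, d, our_turn):
        if game.is_over(s) or d == 0:
            return book.win_rate(game.pattern(s))   # win rate from our point of view
        children = (value(game.play(s, m), d - 1, not our_turn)
                    for m in game.legal_moves(s))
        return max(children) if our_turn else min(children)

    return max(game.legal_moves(state),
               key=lambda m: value(game.play(state, m), depth - 1, our_turn=False))
```

The more finished self-play games you feed back through record, the better the leaf judgements become – which is the ‘learning’ half of the story.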

It’s a clever idea: a broader-brush ‘empirical’ start and then a fine-tuned logical attack. And it’s worked.

But how ‘intelligent’ is that? Certainly it sounds more like the way humans think and solve problems. We don’t have brains that work like computers, capable of simple logical planning at lightning speed. And it’s interesting to note that the best Go players in the world talk of using ‘instinct’ to decide on their moves. This could be something like the ‘pattern comparison’ approach. Is this the beginning of the creation of real intelligence in a machine?

Certainly, there’s a suggestion that this approach to artificial intelligence will be more fruitful, and might be applied to the diagnosis of illness or to business problems. I can see that that might be true – look at millions of combinations of symptoms and track who dies and who lives, and thereby ‘learn’ which patterns are the more promising. But there is one essential difference, which is why DeepMind’s computer can play itself – the options are constrained, however numerous they may be, and they are known. A given counter is either black or white. The problem is a digital, binary problem, rather than an analogue one. The possible ‘positions’ in business or sickness are unknown in advance.
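
Strip away the scale and the ‘learning’ is just counting outcomes per observed pattern – something like the toy sketch below, with entirely invented records. The last line is the point: unlike a Go board, the space of possible patterns isn’t fixed in advance, so most queries fall outside anything you’ve ever counted.

```python
from collections import Counter, defaultdict

# Entirely invented records: each is a set of observed symptoms plus an outcome.
records = [
    (frozenset({"fever", "cough"}), "recovered"),
    (frozenset({"fever", "cough"}), "recovered"),
    (frozenset({"fever", "rash"}), "died"),
]

outcomes = defaultdict(Counter)
for symptoms, outcome in records:
    outcomes[symptoms][outcome] += 1

def survival_rate(symptoms):
    seen = outcomes[frozenset(symptoms)]
    total = sum(seen.values())
    return seen["recovered"] / total if total else None   # None: pattern never seen

print(survival_rate({"fever", "cough"}))   # 1.0
print(survival_rate({"headache"}))         # None - the options aren't known in advance
```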

I am a pessimist about artificial intelligence. We will never create a human mind by building a machine. And I find that consoling. I’ve worked in IT for more than 35 years and have read about one AI breakthrough after another. And yet, the most that’s been achieved has been to win at Go.

I have more faith in that other retreating dream – fusion energy. It will deliver sooner and more usefully than AI, but, even so, not next year.

Robots and Aliens – Would They Be Like Us?

There’s been a flurry of articles in the UK press about a scientist’s suggestion that intelligent aliens would resemble us physically and psychologically. This means, I suppose, that we might have met a few without knowing it.

[Image: ‘Improbable Alien’ – probably not a clever creature… (‘Alien Life Would Look Like Us’)]

This makes me consider robots (aliens of a kind, I suppose), and what it would take for us to accept that a robot is alive and intelligent, with an interior life, and therefore demanding of moral consideration. The Turing Test sets a very low bar in this respect since we demand more of life (unless we are extremely limited emotionally) than that an intelligent creature should type some textual responses from behind a screen.

As for intelligent alien life, I think the argument would go something like this:

The Implications of Language

  • Intelligent life requires language and language goes with thought
  • Language and thought can develop only in a community of creatures (where meanings and criteria for the use of words, especially mental words, can be implicitly agreed)
  • Communities can only exist when language permits the ascription of thought by one individual to another, and this recognition of the other thereby involves compassion, the instinctive understanding (and feeling) of another’s thoughts, beliefs, pains, joys and feelings
  • A moral, perhaps even legal, code based on such compassion would underpin any community of intelligent life
  • Language and communities must evolve together
  • A language can only exist if a life form possesses a face capable of highly complex (mostly involuntary) physical configurations, since language, thoughts, feelings and beliefs are ascribed and learned through facial expression (note that a blind person generally grows up in a community of sighted people)

Head, Brain, Eyes and Face – All Go Together

  • Electromagnetic radiation (in our case the visual spectrum) offers the most powerful means of knowing things from a distance. Therefore aliens would have eyes.
  • Eyes would be found near the highest point of an alien’s body so that the field of vision would be as large as possible
  • An alien would possess at least two eyes (or some such similar mechanism) to enable the detection of distance
  • Eyes would evolve in a face since the focus of the eyes is a vital component of understanding facial expressions which underpin the mental life and it wouldn’t make sense to look in two directions at once
  • The eyes would evolve in proximity to the brain for rapid signal transmission
  • The eyes would be heavily protected (sunk into something like a skull)
  • The eyes would need to change their field of focus rapidly, therefore would need the ability to swivel. Protuberant eyes would be vulnerable, and it would be inefficient to swivel too much of the body, therefore intelligent creatures would have heads on necks, probably containing brains

[Image: eyes in feet – unlikely…]

Ears Also Useful

  • Language would probably use a medium other than electromagnetic radiation, because of the many advantages in being able to use eyes and ears separately, especially when the sun isn’t shining. You might imagine that a creature with more than two eyes would reserve at least one for language and communication, but eyes require focus, an interlocutor within the field of vision and light to see by, so there would be advantages in using a separate medium such as distortions of the atmosphere (sound).

Hands are Special

  • An intelligent creature would need precise control of as much of its environment as possible. Therefore it would have hands, probably at the end of highly mobile sticks (arms), since this would enable rapid movement near the body. Prehensility is another requirement (the opposing motion, for example, of thumb and first finger) since this enables grasping. The ability to write, draw and conduct orchestras might follow.

Mobility

  • Wheels are unlikely, since, although they are efficient, they are less adaptable than equipment such as legs for jumping, reaching and stepping over things. In any case, roads and flat surfaces rarely occur in nature. Legs and/or fins of some kind would be the obvious choice, though I wonder whether intelligent life could evolve underwater.

[Image: a cat on wheels – unlikely to occur naturally…]

What else might we assume? I am sure that many other aspects of biology and psychology might be derived from first principles, or am I being too anthropocentric in my thinking?

So, how different might an alien be? I think it’s logical to assume at least:

  • Eyes, ears, face, brain, and at least one head
  • Language, probably based on sound (in the atmosphere or in a liquid)
  • Morality and community
  • Arms, digits and prehensility, but no reason to stop at two and five of these
  • At least two legs

We often wonder if we could create intelligent life. A lot depends on what you mean by intelligence, but setting the bar high, at the point where we must take moral account of such a creature, we would require the ability to use (even invent) language, to suffer convincingly, to express an inner life, to care, to learn and so on. I would think that the best materials for the manufacture of such a creature would probably not be metal and wire, but would be of biological origin – such as skin, muscle, etc. – so a convincing ‘robot’ would probably have to be human or look like an intelligent alien. Dr Frankenstein had the right idea.

[Image: Bender – almost human… but made of the wrong stuff]

HAL, the intelligent computer in Stanley Kubrick’s 2001: A Space Odyssey, is a robot/computer that’s almost human. HAL lacks skin and facial expression (though he has a rather alarming red all-seeing ‘eye’), but his (or her?) beautifully modulated voice, and his ability to plan, to deceive, even to ‘die’ in some distress at the end of the film, almost convince us that he is, or was, alive. That he could never have evolved independently of human life is irrelevant to the issue of whether he lives alongside human life. I think it’s his dying that tips the balance, since he evokes our reluctant compassion, but in reality this is the least plausible aspect of his behaviour, since it’s hard to see how or why his gradual regression to childhood makes any sense in a programmed device that did not grow up (as far as we know) in an organic way.

In fact, computers and robots are useful precisely because they do some things (simple, repetitive, number-crunching things) better than we do, and the more human they become – vulnerable, fallible, moral – the less useful they will be.

Are programmers an endangered species? I saw an amazing thing in Australia.

I am an old dog when it comes to programming. I began nearly 35 years ago, coding in COBOL (an acronym for Common Business-Oriented Language), a language designed in 1959 by the CODASYL committee under the sponsorship of the US Department of Defense, drawing heavily on Grace Hopper’s earlier work. It’s a verbose language – lots of words to do only a little. Indeed, there really isn’t much that COBOL does to make your life as a programmer easier.

Of course, in ancient times, there were languages that were even closer to machine code, such as FORTRAN, but even with COBOL you still had to ‘lay out’ your use of memory and take care to prevent your variables from interfering with each other.


Over the years I’ve seen programming languages become more powerful. I wouldn’t say it’s made the job any easier. Whilst you no longer have to ‘lay out’ your use of memory, complications of another kind have developed, as the possibilities and demands have multiplied.

Every few years, someone’s suggested a new ‘generation’ of programming environment with more intelligence built into the language. But usually these ideas have disappointed. Either the language has to be so constrained as to be useless, or it simply doesn’t work. It’s either ‘you can have any colour as long as it’s black’ or ‘syntactical error at line 403’.

So, sceptic that I am, I was surprised to find myself responding enthusiastically to something a former colleague, Tamas, now living in Australia, showed me last week when I was in Sydney. He’s been taking some evening courses at the University of Erehwon in Sydney and he got drawn into a research project run by a Professor Kempinski. They’d heard about Tamas’s experience with business systems and were getting bored with the Professor’s obsessive experiments generating manufacturing systems for nylon stockings.  Tamas showed me how it works on his own PC and if it’s really what I think it is, then perhaps the programmer productivity revolution is finally about to happen.

I’m used to the idea of agile development. It’s a justifiably fashionable way of building systems, since it brings developers and consultants closer to the minds of those who will use the resulting system. The idea is that you involve real users in frequent prototyping sessions so that you can tease out what they ‘really’ mean when they say what they want, and they can see what you’ve ‘really’ understood when you show them what you’re building. But you still need consultants, and you still need programmers, so there are still the same two sides to the process – the users and the developers.

So, what if the users can simply ‘say’ what they need to a PC (or a Mac), without programmers or consultants present (and no Lilliputian programmers inside the computer)?

The machine starts by asking you:

‘What system can I build for you today?’

You can’t exactly babble. There’s a kind of lexicon you must use. For example, you’ve got to say ‘location’ rather than ‘warehouse’ or ‘store’, and abide by many other similar rules.
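
I can only guess at what enforcing that lexicon looks like under the hood – the sketch below is my invention, not anything Tamas showed me – but presumably the front end normalises your words to its canonical terms and ignores (or queries) anything it doesn’t know:

```python
import re

# Invented synonym table and canonical vocabulary, for illustration only.
CANONICAL = {
    "warehouse": "location",
    "store": "location",
    "depot": "location",
    "item": "product",
    "goods": "product",
}
ALLOWED = {"location", "product", "supplier", "purchase", "storage", "sale"}

def normalise(request: str) -> list[str]:
    """Map each word to its canonical term and keep only words the lexicon knows."""
    words = re.findall(r"[a-z]+", request.lower())
    return [CANONICAL.get(w, w) for w in words if CANONICAL.get(w, w) in ALLOWED]

print(normalise("I need a system to manage the purchase, storage and sale of plastic bath mats"))
# ['purchase', 'storage', 'sale']
```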

So, I said, casting around for an application:

‘I need a system to manage the purchase, storage and sale of plastic bath mats.’

‘In how many locations can this bath mat be found?’ it came back at me, after only a very brief pause for ‘thought’. (One thing I should mention is that it only works if you adopt an accent somewhat like Professor Kempinski’s (Australian with a touch of Belorussian).)

‘Five locations,’ I answered. (I had to say ‘fife’ before it recognised the number, with a bit of a drawl on the ‘i’ to make it more Australian.)

‘Where exactly are these locations?’ it asked (the computer’s voice is modelled on that of Sheila, his wife, who is Australian, so actually there are risks of misunderstanding on both sides).

Once we’d got over locations, Sheila asked me how they might be assembled, what colour packaging they would come in (she rejected my suggestion of pink), who the suppliers are, whether they’re perishable (she obviously doesn’t ‘know’ what a bath mat is), what the pricing structure should be, and so on.

Finally we got to a very tricky area.

‘What do you imagine in the area of costing, mate?’ she asked.

‘I was thinking of actual costing, working out the true cost of products. Can you do that?’

‘True cost is an illusion, mate. There’s no such thing. Better use standard cost with monthly allocation of variances,’ she said. I have a strong feeling that Tamas was involved in this part of the algorithm.
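
For anyone who hasn’t met the distinction: under standard costing you book everything at a fixed standard cost during the month, and at month end you spread the difference against what you actually paid across the units sold and the units still in stock. A toy illustration, with invented figures:

```python
# Toy illustration of standard costing with a month-end variance allocation.
# All figures invented.
standard_cost = 4.00                   # what we book each bath mat at during the month
units_purchased = 1_000
actual_purchase_cost = 4_300.00        # what the supplier actually charged in total

units_sold = 600
units_in_stock = units_purchased - units_sold

# variance = actual spend minus what the standard says we should have spent
variance = actual_purchase_cost - standard_cost * units_purchased        # 300.00

# at month end, spread the variance over sold and unsold units
variance_to_cost_of_sales = variance * units_sold / units_purchased      # 180.00
variance_to_inventory = variance * units_in_stock / units_purchased      # 120.00

cost_of_sales = standard_cost * units_sold + variance_to_cost_of_sales       # 2580.00
inventory_value = standard_cost * units_in_stock + variance_to_inventory     # 1720.00
print(cost_of_sales, inventory_value)
```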

After about an hour, I had a credible distribution system in front of me. In fact it looked like a system based on Microsoft Dynamics NAV, but with more vibrant colours. Such systems usually involve months of programming work, not just an hour more or less alone with a PC.

I don’t know if the system would work in areas other than manufacturing or distribution (I see difficulties with simulation or planning), but it was an impressive demonstration, and for the first time ever I believe that we may not need programmers any more.

The software is still not ready. XI-Mera Version One will be launched exactly a year from now, on April 1st 2016. But when it comes, what will we do with our programmers?