I’ve always been sceptical about Artificial Intelligence (AI). It seems to me that ‘artificial’ and ‘intelligence’ don’t really go together, unless intelligence is rather narrowly defined as, for example, the ability to consider a vast number of logical alternatives very rapidly indeed. Machines can do that well, and over the last few decades sheer number-crunching (with, I acknowledge, some bells and whistles) has caught up with, and overtaken, human ability at games such as chess, where the ability to plan many moves ahead is decisive. Processor speed is crucial, and processors have got faster and faster. The human brain, though, is not merely a processor, and number-crunching isn’t the name of every game.
It’s interesting, therefore, to read that Google’s DeepMind division has come up with a strategy and an algorithm for the ancient Chinese game of Go that can now defeat the best human players. The problem is that Go, though a simpler game than chess in terms of its rules, has many more branching possibilities. There are more arrangements of little black and white counters on the board than there are atoms in the universe (I wonder how they know that!). Even today’s fastest computer processors simply can’t work through the branching possibilities fast enough to pick the right move, the one that brings the highest chance of winning (perhaps quantum computers will eventually be capable of winning with this crude approach). Apparently, Go is a game in which it’s hard to know who’s winning – the tables can turn at the last possible moment in a cascade of black to white or vice versa.
DeepMind’s strategy involves an interestingly empirical approach, combined with the more traditional number-crunching one. The algorithm looks at the overall pattern on the board and compares it with a catalogue of patterns from other games and the wins or losses that followed them. And the more it plays (and, of course, it can play against itself in virtual space millions of times a day), the more it learns about which overall patterns are successful and which are not. At a certain point, the algorithm switches to, or also uses, the classic ‘let’s consider all possible outcomes’ approach.
It’s a clever idea: a broader-brush ‘empirical’ start and then a fine-tuned logical attack. And it’s worked.
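To make that two-stage idea concrete, here is a deliberately toy sketch in Python. It is not DeepMind’s system (AlphaGo actually uses deep neural networks and Monte Carlo tree search); it only shows the shape of the argument: learn rough win-rates for whole-board patterns from self-play, then switch to exhaustive search when few enough moves remain. Tic-tac-toe stands in for Go purely so the sketch stays small and runnable, and the `switch_at` threshold is an invented parameter.

```python
# A toy illustration of the two-stage idea, NOT DeepMind's actual system:
# stage 1 learns rough win-rates for whole-board patterns from self-play;
# stage 2 switches to exhaustive search once few enough moves remain.
# Tic-tac-toe stands in for Go purely to keep the sketch small and runnable.

import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == '.']

def play(board, i, player):
    return board[:i] + player + board[i + 1:]

def other(player):
    return 'O' if player == 'X' else 'X'

# --- Stage 1: 'empirical' pattern statistics gathered from random self-play ---
stats = defaultdict(lambda: [0, 0])       # board pattern -> [wins for X, games seen]

def self_play_once():
    board, player, seen = '.' * 9, 'X', []
    while moves(board) and not winner(board):
        board = play(board, random.choice(moves(board)), player)
        seen.append(board)
        player = other(player)
    result = winner(board)
    for pattern in seen:                  # credit every pattern seen in this game
        stats[pattern][1] += 1
        if result == 'X':
            stats[pattern][0] += 1

for _ in range(50_000):
    self_play_once()

def pattern_value(board):
    """Rough value of a position for X, learned purely from past outcomes."""
    wins, games = stats.get(board, (0, 0))
    return wins / games if games else 0.5     # never-seen pattern: call it a coin toss

# --- Stage 2: the classic 'consider all possible outcomes' search ---
def minimax(board, player):
    """Exact value for X (1 win, 0.5 draw, 0 loss), by exhaustive search."""
    w = winner(board)
    if w:
        return 1.0 if w == 'X' else 0.0
    if not moves(board):
        return 0.5
    values = [minimax(play(board, m, player), other(player)) for m in moves(board)]
    return max(values) if player == 'X' else min(values)

def choose_move(board, player, switch_at=5):
    """Broad-brush pattern value early in the game, exhaustive search at the end."""
    best_move, best_value = None, None
    exact = len(moves(board)) <= switch_at    # few moves left: search is affordable
    for m in moves(board):
        nxt = play(board, m, player)
        value = minimax(nxt, other(player)) if exact else pattern_value(nxt)
        if player == 'O':
            value = 1.0 - value               # O prefers positions that are bad for X
        if best_value is None or value > best_value:
            best_move, best_value = m, value
    return best_move

print(choose_move('.' * 9, 'X'))              # X's opening move under the toy policy
```

Even in this toy, the same trade-off appears: the pattern table is cheap but vague, while the exhaustive search is exact but only affordable near the end of the game.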
But how ‘intelligent’ is that? Certainly it sounds more like the way humans think and solve problems. We don’t have brains that work like computers, capable of simple logical planning at lightning speed. And it’s interesting to note that the best Go players in the world talk of using ‘instinct’ to decide on their moves. This could be something like the ‘pattern comparison’ approach. Is this the beginning of the creation of real intelligence in a machine?
Certainly, there’s a suggestion that this approach to artificial intelligence will be more fruitful, and might be applied to the diagnosis of illness or to business problems. I can see that it might be true – look at millions of combinations of symptoms, track who dies and who lives, and thereby ‘learn’ which patterns are the more promising. But there is one essential difference, and it is the reason DeepMind’s computer can play against itself: the options, however numerous they may be, are constrained and known. A given counter is either black or white. The problem is a digital, binary one rather than an analogue one. The possible ‘positions’ in business or sickness are unknown in advance.
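By way of illustration only, here is roughly what ‘tracking which symptom patterns lead to which outcomes’ might look like in a few lines of Python. Every record and symptom name below is invented; and notice that the sketch can say nothing at all about a pattern it has never seen, which is exactly the open-endedness the paragraph above is pointing at.

```python
# A minimal sketch of 'look at symptom combinations and track outcomes'.
# Every record and symptom name below is invented purely for illustration.

from collections import defaultdict

# hypothetical historical records: (set of observed symptoms, patient survived?)
records = [
    ({'fever', 'cough'}, True),
    ({'fever', 'cough', 'breathlessness'}, False),
    ({'fever'}, True),
    ({'cough', 'breathlessness'}, False),
]

outcomes = defaultdict(lambda: [0, 0])        # symptom pattern -> [survived, total]
for symptoms, survived in records:
    key = frozenset(symptoms)
    outcomes[key][1] += 1
    if survived:
        outcomes[key][0] += 1

def survival_rate(symptoms):
    """Learned survival rate for an exact symptom pattern, if we have seen it before."""
    survived, total = outcomes.get(frozenset(symptoms), (0, 0))
    return survived / total if total else None    # unseen pattern: no opinion at all

print(survival_rate({'fever', 'cough'}))      # 1.0 in this tiny invented dataset
print(survival_rate({'rash'}))                # None: the open-ended case a board game never has
```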
I am a pessimist about artificial intelligence. We will never create a human mind by building a machine. And I find that consoling. I’ve worked in IT for more than 35 years and have read about one AI breakthrough after another. And yet, the most that’s been achieved has been to win at Go.
I have more faith in that other retreating dream – fusion energy. It will deliver sooner and more usefully than AI, but, even so, not next year.
This time I strongly disagree with you, at least with your opinion on the future of AI. Simulating the human mind with current hardware was almost impossible, but new techniques that use current hardware are coming (you can see them at Google every day), and new approaches to processor design (non-von Neumann) too.
Unfortunately.
I do see the point, but I can’t see how any hardware-based approach can result in something human. If it were possible to design some hardware that had the capacity to evolve (something warm and wet and cell-like) then perhaps, if evolution could be accelerated, we would end up with something human, but not by designing the synapses at the detailed level. ‘Human’ would emerge from the mess.