Intelligent machines? Think again
By Alan Cane
Published: October 7 2008 09:50 | Last updated: October 7 2008 09:50
The first computers were popularly known as “electronic brains” – so what could be more natural than for these apparently sentient machines to display artificial intelligence?
AI, however, has never quite lived up to the promises made for it. Even the term sounds old-fashioned.
Practitioners these days prefer the expression “machine learning” to describe the design of “intelligent agents”: systems that have some awareness of their environment and the capacity to respond appropriately to changed circumstances.
Here is an example. At Microsoft’s Cambridge laboratory, researchers have been building a video camera, designed for teleconferencing, which has intelligence built in so that participants do not have to think about whether they are “on camera” and in view.
The camera acts as if it has its own cameraman inside and tracks the participants as they move around: “The video communication is natural and transparent,” says Andrew Blake, principal research scientist.
The project is called “Eye-to-Eye” and is a subset of a larger one called C-Slate, which involves a tablet personal computer that can be used as a common workspace when users are collaborating remotely.
“This was to be a concept demonstrator for how it would be if we could develop a system so good that you would not bother about travelling to meetings,” says Prof Blake.
Eye-to-Eye and C-Slate demonstrate a number of points about the modern approach to AI. First, by improving the quality and usefulness of teleconferencing, the work fulfils a genuine business need.
Second, it is low key and essentially invisible to the user – no humanoid machines or glitzy systems are involved.
Prof Blake says: “The aim is to build intelligence into machines which are familiar – the simplest example is predictive text. We don’t really want systems that do things on their own.”
Third, the tasks it is asked to perform fall within the limits of existing technology.
“Expert systems”, once touted as the silver bullet that would transform the use of AI in business, never fulfilled their early promise because the task was simply too complex.
The idea was that expert knowledge would be poured into computer systems and used to solve problems. But, as Kishore Swaminathan, chief scientist for the consultancy Accenture, points out, it never worked well enough: “The biggest obstacle in the quest for AI was getting the knowledge out of the heads of the experts and into the machines.
“At one time there were even employees with titles such as ‘knowledge acquisition specialist’ whose role was to capture the information as algorithms and ‘code’ it. But, despite small successes, the technology never scaled up.”
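In code, that approach amounted to a hand-written rule base and a simple inference loop, along the lines of the sketch below. The rules and facts here are invented for illustration; real systems held thousands of such rules, each of which had to be elicited from a human expert and coded by hand.

    # A minimal sketch of the hand-coded "expert system" approach described above.
    # The rules and facts are invented for illustration only.

    RULES = [
        # (conditions that must all hold, conclusion to add)
        ({"engine_cranks", "no_spark"}, "check_ignition_coil"),
        ({"engine_cranks", "no_fuel_at_injector"}, "check_fuel_pump"),
        ({"check_ignition_coil", "coil_resistance_out_of_range"}, "replace_ignition_coil"),
    ]

    def forward_chain(facts):
        """Apply the rules repeatedly until no new conclusions can be drawn."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"engine_cranks", "no_spark", "coil_resistance_out_of_range"}))

Every piece of expertise the system was to use had to be written down as another such rule, which is precisely the acquisition bottleneck Mr Swaminathan describes.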
What has changed since those days is the availability, chiefly through the web, of vast volumes of information about people and things.
Prof Blake makes three points: “First, the breakthrough in AI in the past 15 years has been in machine learning with probabilities. This recognises that intelligence necessarily involves dealing in uncertainty.
“Second, it is the hallmark of the flexibility of human thinking that it accommodates ambiguity, so there is a need for AI to use probabilities.
“Third, the real contributions to business from AI are coming not from autonomous systems, such as the robots found in traditional science fiction, but from intelligent apprentices and helpers.”
Most experts agree that probability is the key to modern AI.
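What that means in practice can be shown with a toy calculation, using invented numbers: rather than deciding outright whether a message is spam, a probabilistic system holds a prior belief and updates it with Bayes’ rule as evidence arrives.

    # A toy illustration (invented numbers) of reasoning under uncertainty with
    # Bayes' rule: the prior belief that a message is spam is updated by the
    # evidence that it contains the word "prize".

    p_spam = 0.2                 # prior: 20% of mail is spam
    p_word_given_spam = 0.6      # "prize" appears in 60% of spam
    p_word_given_ham = 0.05      # ...and in 5% of legitimate mail

    p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
    p_spam_given_word = p_word_given_spam * p_spam / p_word

    print(f"P(spam | 'prize') = {p_spam_given_word:.2f}")   # 0.75

The system never claims certainty; it simply revises the odds, which is the flexibility in the face of ambiguity that Prof Blake describes.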
Mr Swaminathan says: “Much of today’s search and business intelligence is predicated on the statistical paradigm which works by extracting the intelligence that already exists in the world as patterns.”
An example is provided by the customer relationship management company Rightnow Technologies, whose customers include British Airways and BT. The company develops software which makes it simpler for people to interrogate websites and find information.
It does this by “learning” from each customer’s visit to the site and using the knowledge to improve the experience for subsequent visitors. The more visitors to the site, the better it works.
Doug Warner, who heads the company’s research effort, says: “There is a large volume of web traffic and for me, as a researcher, that is a wonderful thing because in AI you are dealing with statistical methods, looking for patterns in a mass of data.”
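Rightnow does not disclose its algorithms, but the general idea Mr Warner describes – mining patterns from the behaviour of earlier visitors to help the next one – can be sketched with a simple, hypothetical example: rank help articles by how often previous visitors with the same query found them useful. The search terms and article names below are invented.

    # A hypothetical sketch of learning from visitor behaviour: articles that
    # earlier visitors found useful for a query are ranked higher for the next
    # visitor. An illustration of the general idea, not the company's software.

    from collections import defaultdict

    helpful_clicks = defaultdict(lambda: defaultdict(int))  # term -> article -> count

    def record_visit(search_term, article, was_helpful):
        """Called after each visit; the site 'learns' from what helped."""
        if was_helpful:
            helpful_clicks[search_term][article] += 1

    def rank_articles(search_term):
        """Rank articles by how often they helped previous visitors."""
        counts = helpful_clicks[search_term]
        return sorted(counts, key=counts.get, reverse=True)

    record_visit("lost booking reference", "faq_retrieve_booking", True)
    record_visit("lost booking reference", "faq_retrieve_booking", True)
    record_visit("lost booking reference", "faq_contact_us", True)
    print(rank_articles("lost booking reference"))  # best answer first

The more traffic the site handles, the more reliable such counts become – which is why Mr Warner welcomes the volume of data.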
The company’s software is based on biomimicry, using the lessons of nature – in this case, ant colony optimisation and swarm intelligence – to develop effective algorithms.
Comparing earlier AI methods – the top-down approach – with his research today, Mr Warner says: “In my opinion, we have had the most success with the bottom-up statistical approach in biomimicry.
“We might know a lot as humans, but the evolutionary processes that have driven the world for millions of years have a lot of inherent knowledge in them and we are probably going to make the quickest strides by emulating what has already been refined through biological processes.”
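Ant colony optimisation itself is well documented, even if Rightnow’s implementation is not: artificial “ants” build candidate solutions at random, and good solutions deposit “pheromone” that biases the ants that follow. Below is a minimal sketch on a toy routing problem; the distances are invented and the code illustrates the general technique rather than the company’s software.

    # A minimal ant colony optimisation sketch on a toy routing problem.

    import random

    dist = [                         # symmetric distances between five points
        [0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0],
    ]
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]

    def build_tour():
        """One 'ant' builds a tour, preferring short, heavily scented edges."""
        tour = [random.randrange(n)]
        while len(tour) < n:
            i = tour[-1]
            choices = [j for j in range(n) if j not in tour]
            weights = [pheromone[i][j] * (1.0 / dist[i][j]) for j in choices]
            tour.append(random.choices(choices, weights)[0])
        return tour

    def tour_length(tour):
        return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

    best = None
    for _ in range(200):                      # 200 generations of 10 ants
        tours = [build_tour() for _ in range(10)]
        for i in range(n):                    # pheromone evaporates...
            for j in range(n):
                pheromone[i][j] *= 0.9
        for t in tours:                       # ...and good tours reinforce their edges
            deposit = 1.0 / tour_length(t)
            for k in range(n):
                a, b = t[k], t[(k + 1) % n]
                pheromone[a][b] += deposit
                pheromone[b][a] += deposit
        shortest = min(tours, key=tour_length)
        if best is None or tour_length(shortest) < tour_length(best):
            best = shortest

    print(best, tour_length(best))

No single ant is intelligent; the useful behaviour emerges from the colony as a whole, which is the point of the bottom-up approach Mr Warner favours.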
Some researchers, moreover, are looking at basic biology in their quest to improve machine learning.
IBM was a pioneer in the field and today continues to invest heavily in AI research. Dharmendra Modha, a scientist in the company’s California research laboratory, is working on cognitive computing, which he defines as a computer model that simultaneously exhibits characteristics seated in the human brain, including perception and emotion.
His aim is to discover how the brain works, not how the mind works, he is quick to emphasise.
Last year, his group achieved a milestone by managing to simulate the operation of a mouse brain on an IBM Blue Gene supercomputer.
He notes: “We deployed the simulator on a 4,096-processor Blue Gene/L supercomputer with 256 megabytes of memory per processor. We were able to represent 8m neurons and 6,300 synapses (connections) per neuron in the one terabyte of main memory of the system.”
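Taken together, those figures imply roughly 50bn simulated connections, with a little over 20 bytes of memory available for each. A back-of-the-envelope check (our arithmetic, not IBM’s):

    # Back-of-the-envelope check of the figures Mr Modha quotes (our arithmetic,
    # not IBM's): total memory, total synapses and the bytes per synapse.

    processors = 4096
    memory_per_processor = 256 * 2**20            # 256 megabytes, in bytes
    neurons = 8_000_000
    synapses_per_neuron = 6_300

    total_memory = processors * memory_per_processor   # one terabyte
    total_synapses = neurons * synapses_per_neuron     # about 50bn connections

    print(total_memory / 2**40, "TB")                  # 1.0
    print(total_synapses / 1e9, "bn synapses")         # 50.4
    print(round(total_memory / total_synapses), "bytes per synapse")  # about 22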
There will be, of course, a considerable time lag before the benefits of this research are seen in actual products.
Mr Modha thinks it could be 10 years before cognitive computing of the kind he is working on makes its debut in productivity and security systems. It is, however, a giant leap from 1956, when an IBM supercomputer of the day simulated the firing of a mere 512 neurons.
For some, however, the concept of intelligent robots retains its fascination. Honda continues to work on humanoid robots capable of dancing, climbing stairs and carrying drinks.
Clive Longbottom, analyst with the consultancy Quocirca, says that even if these developments are still at an early stage, they indicate that scientists are getting close to functional robots that will be able to replace humans for some tasks.
“It is little more than half a century since (Isaac) Asimov wrote I, Robot and already we have much of what he talked about in place. Today’s games consoles contain much in the way of AI to calculate, on the fly, what an explosion, crash or interaction should look like.
“The power within an Xbox 360 or PlayStation 3 is greater than that of the mainframe computers used by large organisations as little as 10 years ago,” he notes.
The capacity of the internet to gather and sift vast amounts of data and the power of modern supercomputers to analyse and model hugely complex systems have brought AI back into the spotlight.
As Mr Modha of IBM says of his work in cognitive computing, the technology will manifest itself in ways which today we cannot even begin to imagine.
Copyright The Financial Times Limited 2008