Anyone following me on Twitter, LinkedIn, or Facebook over the past few weeks knows my excitement about the Go match between Google DeepMind’s AlphaGo program and South Korean professional Go player Lee Se-dol.
AlphaGo clinched the charity best-of-five match handily by winning the first three games; Lee Se-dol took the fourth game before AlphaGo won the fifth, for a 4-1 final result.
The fourth game is of the greatest interest to me because that’s the one Se-dol won. Two key factors in this game seem to have contributed to AlphaGo’s loss (factors not present in the other four games):
- AlphaGo has a more difficult time playing with the black stones, and
- when Lee Se-dol made an unexpected move that confused AlphaGo, the program responded as if it had a software defect.
Artificial intelligence, a subject that has long fascinated me, seeks to solve several key problems: reasoning, object manipulation, planning, learning, knowledge, communication, and perception, among others. Longer-term goals include social intelligence, general intelligence, and creativity.
It’s these longer-term goals that are likely impossible to achieve. While the Kismet robot at MIT can recognize and replicate emotions, and can perform very basic visual tasks, its low-level accomplishments hardly qualify it for social intelligence.
(Not to crap on the work done by the brilliant Dr. Cynthia Breazeal, but let’s be real – this is not true social intelligence).
As cool as it was to see a piece of software defeat a champion in such a complex game, it’s clear we are still quite far from real artificial intelligence. AlphaGo won by learning the game, not solving it. It was able to out-learn humans, and in doing so it addressed only one of the many problems AI seeks to solve. It did not demonstrate planning, knowledge, communication, reasoning, etc.
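The distinction between learning a game and solving it is worth making concrete. A game is "solved" when exhaustive search can determine the outcome of every position, as in the minimal sketch below for a toy Nim variant (my own illustrative example, not anything AlphaGo does). Go's astronomically large state space rules this approach out, which is why AlphaGo had to rely on learned evaluation instead.

```python
# A toy illustration of "solving" a game outright via exhaustive minimax.
# This works for tiny games like Nim; Go's enormous state space makes the
# same approach infeasible, forcing systems like AlphaGo to learn instead.
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(stones, max_take=3):
    """Nim variant: players alternately take 1..max_take stones; whoever
    takes the last stone wins. Returns True if the player to move can
    force a win -- a complete, exact solution of the game."""
    if stones == 0:
        return False  # previous player took the last stone and won
    # A position is winning if any move leaves the opponent in a losing one.
    return any(not first_player_wins(stones - take, max_take)
               for take in range(1, min(max_take, stones) + 1))
```

With 1-3 stones removable per turn, the solver recovers the classic result that any pile size divisible by 4 is a loss for the player to move; every position's value is known with certainty, which is precisely what "solved" means and what no program can yet do for Go.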
I will be impressed (and concerned?) only when AlphaGo spontaneously, without any pre-programming, expresses joy over winning a match and expresses a desire to continue playing.