Google's DeepMind AlphaGo program beat South Korea's Lee Se-dol in the first of a series of games in Seoul.
Machines scored worrying victories over humanity on Wednesday and Thursday, when Google's self-learning algorithm AlphaGo triumphed over world-champion Go master Lee Se-dol.
Lee, who has racked up an extraordinary 18 world-championship wins since becoming a professional Go player at age 12, can still redeem himself, as this week's contests are part of a five-game series between him and the computer.
But whether Lee ultimately wins is irrelevant; the mere fact that AlphaGo exists is a testament to its power and the future of artificial intelligence.
The ancient Chinese board game Go, played on a 19-by-19 grid with black and white stones, was long thought impossible for machines to master.
Unlike the Western game of chess, where each turn typically offers a player roughly 40 options, Go can offer as many as 200. The number of possible games quickly compounds to a figure larger than the total number of atoms in the observable universe.
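The "more games than atoms" comparison can be checked with back-of-the-envelope arithmetic. The figures below are illustrative assumptions, not from the article: a typical Go game lasting about 150 moves with roughly 200 options each, against a commonly cited estimate of 10^80 atoms in the observable universe.

```python
import math

branching = 200       # rough number of legal options per Go move
game_length = 150     # rough number of moves in a typical game
atoms = 10 ** 80      # commonly cited atom count for the observable universe

game_tree = branching ** game_length   # crude size of the game tree
print(int(math.log10(game_tree)))      # order of magnitude of the game tree
print(game_tree > atoms)               # dwarfs the atom count
```

Even this crude estimate lands around 10^345, hundreds of orders of magnitude beyond the atom count, which is why brute-force search alone cannot crack Go.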
Some may see AlphaGo as an incremental step in the march of technological progress, perhaps along the lines of IBM's Deep Blue, which defeated chess grandmaster Garry Kasparov in 1997.
But what stands out this time around is AlphaGo's ability to autonomously improve performance, simulating what cognitive psychologists describe as intuition.
Before AlphaGo ever faced a human Go player, DeepMind had developed a related general-purpose program that taught itself to play video games — "Space Invaders," "Breakout," "Pong," and others.
Without any game-specific programming, the general-purpose algorithm was able to master each game by trial and error — pressing different buttons randomly at first, then adjusting to maximize rewards. Game after game, the software proved cunningly versatile at figuring out an appropriate strategy and then applying it with hardly a mistake.
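The trial-and-error loop described above can be sketched as a toy "epsilon-greedy" agent: it presses buttons at random at first, then leans toward whichever one has paid off best. This is a minimal illustration of reward-driven learning, not DeepMind's actual algorithm; the button names and payoff odds are invented for the example.

```python
import random

random.seed(0)
payoffs = {"left": 0.2, "right": 0.5, "fire": 0.8}  # hidden reward odds (invented)
buttons = list(payoffs)
totals = {b: 0.0 for b in buttons}   # accumulated reward per button
counts = {b: 0 for b in buttons}     # times each button was pressed

def choose(epsilon=0.1):
    # Explore a random button with probability epsilon; otherwise
    # exploit the button with the best observed average reward.
    if random.random() < epsilon or not any(counts.values()):
        return random.choice(buttons)
    return max(buttons, key=lambda b: totals[b] / max(counts[b], 1))

for _ in range(5000):
    b = choose()
    reward = 1.0 if random.random() < payoffs[b] else 0.0
    totals[b] += reward
    counts[b] += 1

# After enough presses, the agent overwhelmingly favors the
# highest-paying button, despite never being told the odds.
print(max(buttons, key=lambda b: counts[b]))
```

The real systems replace this simple average with deep neural networks, but the core loop — act, observe reward, adjust — is the same idea.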