AlphaGo races ahead 2–0 against Lee Sedol

Lee Sedol 9p began game two of the Google DeepMind Challenge Match on Thursday with a newfound sense of determination and respect for his silicon opponent.

After losing the first game of the match to the computer Go player AlphaGo the previous day — something that the majority of the world’s top players had thought practically impossible just months ago — Lee’s play exhibited a calm patience and steadiness that had not been evident the day before.

Lee Sedol 9 dan, still struggling to defeat AlphaGo after two games.

But somehow it wasn’t enough.

AlphaGo met Lee’s solid, prudent play with a creativity and flexibility that surprised professional commentators, eventually consolidating its advantage in the endgame.

Finally, after a tightly fought contest which left spectators on the edge of their seats, Lee was left with no better move than to resign.

This leaves him trailing 0–2 against AlphaGo in the best-of-five match.

A completely different game

While it could be said that yesterday Lee underestimated his opponent and played too aggressively in the early stages of the game, today was quite the opposite.

If anything, Lee’s play in the second game was somewhat too cautious, but that is not to say that Lee played badly.

Such play from a Go master of Lee’s caliber would usually strike fear into the hearts of opponents, because it leaves behind few weaknesses and quietly accumulates power, waiting for the opportune moment to strike — that is, when the opponent makes a mistake.

Aja Huang plays the first move for AlphaGo, against Lee Sedol 9 dan in game two.

Victory cannot be forced

As Sun Zi explained in The Art of War, “Invulnerability depends on one’s own efforts, whereas victory over the enemy depends on the latter’s negligence.”

“. . . Therefore it is said that victory can be anticipated, but it cannot be forced.”

This seemed to be Lee’s strategy for game two. As it turned out, there were only two problems with this plan:

Firstly, the machine wasn’t at all intimidated by Lee’s quiet confidence.

And, secondly, despite AlphaGo playing some very unusual moves, Lee never seemed to find an opportunity to land a knockout punch.

Brief analysis of game two

The following are An Younggil 8p’s preliminary comments for game two…

An unusual opening

Lee Sedol 9 dan (right) plays his second game against AlphaGo.

AlphaGo, playing Black today, placed its first move on the star point, and Black 3 was the first 3-4 point opening move we’ve seen from AlphaGo so far.

White met AlphaGo’s opening formation with one star point corner and a 3-4 point corner of his own.

Black 13 was creative, and White 14 should have been at K4.

Black 37 was a rare and intriguing shoulder hit, but Black 43 and 45 were a little heavy.

White’s counter from 46 to 56 was exquisite, and White took the lead.

AlphaGo redresses the balance

White 70 was slack. It should have been in the center around F10, to pressure Black’s group.

Black 73 and 81 were calm and solid plays, and the game became even again.

AlphaGo’s continuation up to 99 was patient and effective, and the game seemed to be slightly better for Black.

However, White 102 to Black 113 was a sophisticated sequence, and the game became even again.

Chris Garlock (left) and Michael Redmond 9 dan delivered another nail-biting commentary.

Lee Sedol misses a chance

White 114 was questionable. Entering the corner at R17 would have been bigger.

White 120 to 122 was a sharp combination, but Black’s responses through to 127 were patient once again.

White 144 was a mistake. It should have been at N18 or at O17, in order to make the game more complicated.

AlphaGo closes out the game

Black 151, 157 and 159 were brilliant moves, and the game was practically decided by 165.

However, Black 167 was incomprehensible and White had one last chance. Perhaps AlphaGo believed that it could still win with this move?

Kim Yeowon 5 dan (amateur) and former world champion Yu Changhyeok 9 dan commented game two in Korean.

White 172 was the losing move. Rather than capturing Black’s top right stones in a snapback, he should have pushed at 173. Black P17 and White M18 can be expected to follow, leading to ko.

Since White missed his chance, the game was over. AlphaGo’s endgame after White 172 was accurate once again and there were no more chances.

Summary of the game

Lee Sedol 9 dan, still smiling.

AlphaGo’s style of play in the opening seems creative!

Black 13, 15, 29 and 37 were very unusual moves, but the game was still well balanced up to White 40.

The result from Black 43 to White 56 seemed at first to be a disaster for Black, but surprisingly, AlphaGo wasn’t that far behind.

AlphaGo played thick and solid moves in the middle game (such as 73, 81 and 85), which were reminiscent of Lee Changho 9p’s play in his heyday.

After Black took the lead, Black 139 and 167 appeared to be mistakes, but since Lee didn’t try to punish the moves it’s very difficult to say for sure without playing against AlphaGo.

Lee did his best in the second game, and didn’t try to test the strength of AlphaGo’s opening like he did in game one.

He spent more time in this game, and was in byo-yomi after about 140 moves.

However, Lee’s overall play seemed a little passive in the end. He didn’t play severe moves or seek to complicate the game, even though he was already behind.

It seems that he struggled to count accurately under the time pressure, and he missed a good chance near the end (White 172).

Unlike game one, AlphaGo didn’t make any serious mistakes, and Lee didn’t get any clear chances to win after he gave up his advantage around Black 73.

Post-game press conference

At a post-game press conference, as Lee Sedol and DeepMind CEO Demis Hassabis were bombarded by an epilepsy-inducing quantity of flashing cameras, Hassabis described the game as “unbelievably exciting and incredibly tense.”

He expressed Team AlphaGo’s excitement that “AlphaGo played some quite surprising and quite beautiful moves (according to commentators), which was pretty amazing to see.”

Lee Sedol was full of praise for AlphaGo, remarking that, “Yesterday I was surprised, but today, more than that, I’m quite speechless.”

He added, “. . . there was not a moment in time where I felt that I was leading the game.”

The press room at the Four Seasons Hotel in Seoul.

Further praise came from the official professional commentators, Yu Changhyeok 9p and Michael Redmond 9p. Both remarked on AlphaGo’s impressive play in the endgame.

When a journalist asked Lee what he thought AlphaGo’s weaknesses were, he quipped “I guess I lost the game because I wasn’t able to find any weaknesses.”

In response to the same question, Hassabis explained that while DeepMind can estimate AlphaGo’s strength internally, “we need somebody of the incredible skill of Lee Sedol to creatively explore and see what weaknesses AlphaGo maybe has, so we can see them.”

“That’s why we’re having this match, to find out,” said Hassabis.

Lee Sedol and his daughter at the venue for game two.


Follow the match

AlphaGo is now leading the match 2–0, which means Lee Sedol needs to win the next three games to defeat it.

Game three of the match will be played on Saturday March 12.

Check our match schedule for details and visit the DeepMind AlphaGo vs Lee Sedol page for regular updates.

Subscribe to our free Go newsletter for weekly updates, including news and detailed commentary of the AlphaGo match.

Game record

AlphaGo vs Lee Sedol – Game 2

 

Download SGF File (Go Game Record)

 


About David Ormerod

David is a Go enthusiast who’s played the game for more than a decade. He likes learning, teaching, playing and writing about the game of Go. He’s taught thousands of people to play Go, both online and in person, at schools, public Go demonstrations and Go clubs. David is a 5 dan amateur Go player who competed in the World Amateur Go Championships prior to starting Go Game Guru. He’s also the editor of Go Game Guru.

You can follow Go Game Guru on Facebook, Twitter, Google+ and Youtube.

Comments

  1. Wow. I hope that Alpha Go will challenge other top pros and that this will not be just a single publicity stunt. In any case it’s clear that the Deep Mind team have created something amazing that will change many things in the future… To me this is far more impressive than Deep Blue or Watson…

  2. Daniel Rich says:

    seem to be missing the sgf record

  3. I am not sure DeepMind will continue working on Go after this. They will probably take on other challenges that will require new creative algorithmic developments. But I hope they will make their code available.
    As far as I can see, their approach (Monte-Carlo tree search combined with deep neural network) should also work for any game with simple rules. No Limit Poker, a game still dominated (albeit slightly) by humans, is an obvious candidate.
    After that, I do not think there is any board game left.

    • I hope top matches, self-learning and the improvement of algorithms and hardware will continue for a while, as with Go it is in a way possible to measure your performance and find out where the achievable top lies. Making the resources available on easily accessible devices may be next. That AlphaGo may beat top players even with a handicap at a certain point in time is not that important; how to reach the top, and measure it, may be of huge importance for the AI world.

      As far as I found out, a different game can be taken on to become virtually invincible: contract bridge. The challenge is different because of the nature of the game (not fully zero-sum, and without full information available), which is closer to the reality of many of the world’s problems. And it is kind of attractive to do so; the world of contract bridge is huge too!

      Kind regards,
      Paul

  4. Will games 4 and 5 be played even if AlphaGo gets a 3rd win?

    • Anonymous says:

      Yes they will play all 5 games.

    • Yes, it is not best of five, it is a gobango, so the 5-0 score is possible.

      Kind regards,
      Paul

      • When you say gobango, I associate that with an increasing handicap after consecutive losses. Although I would love to see that happen (can you imagine the humiliation? it would be universal), I don’t think they will increase the handicap after 3 losses.

        • I wish Lee would accept a handicap if he loses the next game. I want to see how much stronger the computer is than the human. If not this match, I hope the next one with Ke Jie will be played with kadoban, assuming Lee Sedol is shut out in this one.

          • It is unlikely that AlphaGo will be good at handicap games. After all, it’s trained with a dataset of handicap-free pro games.

            • Pre-trained with KGS dataset and trained with games played with itself. Alternatively to the stone handicap, no-komi, reverse komi is also possible, which should be a simple parameter to AlphaGo.

    • Derelkstrak says:

      Yes, it’s not a BO5; all games will be played. Still hoping for Lee to make a comeback, but hey, even if not, they’ll hopefully be exciting matches!

    • Anonymous says:

      yes

  5. Derelkstrak says:

    “Chris Garlock (left) and Michael Redmond 9 dan delivered another nail-biting commentary.”

    Oh wow, this is very true indeed lol

  6. Lee Sedol is very gracious under difficult, very public circumstances. I applaud his aplomb.

    • Pierre Maurier says:

      How could he not be smiling?
      He will either make a come back and win a lot of money and great publicity as being (probably) the last human to beat a computer,
      or (more likely) he will be remembered as the top pro that have been beaten by a machine like Kasparov was for Chess.
      I think most people in the world have heard about Kasparov, which is not the case of all other chess players.

    • Warren Dew says:

      Is Lee Sedol really playing at a top professional level in this game? The analysis mentions 3-4 outright errors, which seems like a lot at that level.

      • Ke Jie, the world’s strongest player, said he couldn’t make out mistakes from either side. (I know, that’s ominous for a challenge by Ke Jie himself)

      • The consensus seems to be that Lee Sedol played surprisingly poorly in Game 1, but accurately (according to his strength) in Game 2.

      • Since Go is not a solved game, whenever a player wins the game it must be due to a mistake played by his opponent. Two players cannot play perfectly and both lose. Even the best chess programs in the world, which have nearly solved the game, still lose to one another, and so you could point out moves which led to their losses and were therefore mistakes. Essentially it is impossible to play a game of Go without mistakes, unless you have solved the game.

        • Warren Dew says:

          I was thinking of these as “mistakes as identified by An Younggil’s analysis”. Based on his analyses, the absolute top players usually make 1-2 mistakes per game of that magnitude.

          Granted that may not be a uniform assessment of the magnitude of mistakes.

  7. Doesn’t Black 13 just wait to make a low chinese after the corner joseki are played out and seen to be favourable for a centre-oriented framework? Still creative, but surely the resulting formation is not unusual?

    • Uberdude says:

      The position at move 10 has appeared in 141 pro games in the ps.waltheri.net database. Of those black played the hanging connection 44 times and the Chinese directly 37 times (the 2 most popular continuations). After the hanging connection white made the one space jump 35 times (pincer, approach top left, and peep were the few other moves) and in all 35 of those games black continued by extending on the lower side, not making the Chinese, so the position at move 13 is unique. The reason is the hanging connection exchange for the jump commits black to saving that group, making it heavier, so it is logical to settle it. I imagine Lee didn’t make the k4 pincer as Younggil says he should because he feared black would tenuki again (or maybe make some sente exchanges to help the group a bit first) to play d10 on the left side and white ends up feeling overconcentrated on just the lower side of the board. But black has a weak/dying group and there are still many holes in black’s wide-ranging moyo. I have noticed that AlphaGo seems to rarely have a weak group (without an opponent’s weak group adjacent as in the first game). Maybe forcing it to make weak groups could be the way to beat it, but its sense of judgement in trades and sacrifices appears excellent.

      • What if we make tewari after w k4 and b around d10? 3 b stones at the top against 3 w stones at the bottom, then bp4 – wr6, bc6 – wf3 – bd10, wp3 – b03 – wq3 – bn4. First exchange looks normal, second a bit passive for w, but by the third one w seems to gain quite a lot (b is blocked from r3 in and gain a heavy group instead). So on balance should be satisfactory for white although there is a feeling that white has become too concentrated at the bottom.

  8. The game info bubble gives Lee Sedol a 9d rank.
    I know he has lost, but is it not a little too soon to take away the 9p rank he deserves?
    ^^

  9. Anonymous says:

    It’d be nice to add the post-game pro analysis in the SGF on the critical moves and mistakes.

  10. How long does it take to learn a strong Go player’s strategy? 5 games? 10?

    • Anonymous says:

      Are you asking how many games it takes to become as strong as someone like Lee Sedol? The answer is that most players never become that strong, and those that do only do so after thousands of games.

      • I think he’s asking how long it takes for a good player to develop a sense for another player’s style, and presumably to play to its weaknesses.

        In this circumstance, I’m not sure how much that applies: every place where AlphaGo has been speculated to be weak has turned out to be a strength as soon as someone challenges her on it.

    • 5 years? 10?

  11. D. Fischer says:

    Guess he means: how long does it take for Lee to find a weakness to exploit in AlphaGo’s playing style (if there is a style at all)?

  12. The article was full of anthropomorphic phrases, which are quite inappropriate. The only thing we can say is that its algorithm for playing Go on that hardware is quite advanced.

  13. Like earlier posters, I think that pros will soon find ways to make AlphaGo look dumb.

  14. The much talked about 37 is actually quite logical. Anything less is gote. Still, it’s hard to come up with …
    To me, w was never ahead. But b getting 73 felt disappointing. I wonder if 70 is to blame. Should w have played 73 himself?
    For the next game, the only hope I see is lots of kos. After all, bots can’t even figure out that you can’t win 2 kos. At least no other bot can …

  15. On reviewing the game, I was really impressed by Alpha Go’s play. It’s wonderful (and a little scary) that the program is capable not just of playing at a high level by mimicking top human play, but can synthesize creative and unorthodox moves while remaining extremely strong.

    However, I am sad that Lee is clearly not in top form. I was watching the game live for a bit, and as a mere mortal amateur player, I was quite shocked by move 70. Sure it settled the W group but it was just so slow that I was muttering angrily at the screen! The failure to make a Ko at the very end isn’t something that I saw myself, but it seems weird that a player of Lee’s calibre would miss it given he was clearly losing the game. He may have still lost (as is likely the case given the way Alpha Go selects its moves), but it seems to indicate that Lee isn’t in great shape in this series.

    Nonetheless Alpha Go is clearly a “top pro” in strength. I just hope we see it play against Ke Jie in the next 6 months… That would be just so cool and interesting…

    • Big Steve says:

      By the time that ko at the end showed up he was getting really low on time, which could have something to do with it. Also it’s possible that he simply didn’t see it as valuable enough. Eventually nothing valuable was available and he resigned 🙂

  16. What can you say about Black’s move 33 (R15)? It’s very rarely seen in pro games, which surprised me a little…

  17. astonished says:

    In the AGA video commentary, Andrew Jackson told us that Fan Hui, who is now working with AlphaGo, had said to him that she is now “free” – which doesn’t mean she doesn’t cost money – but that she has grown up, come of age, and transcended her earlier dependence on the initial training set of human games.

    Fan Hui hit the nail on the head; even a lowly kyu like me can see a marked difference in her playing “style” from that she manifested in her match with Fan Hui, which Myungwan Kim remarked reminded him of Japanese strategy, a “softer” style than the semeai-loving fighting style that is currently popular among younger players, for which Lee Sedol himself has to bear some responsibility!

    But now, we see she has developed a style all of her own, a style which has evidently emerged from her self-play experiments between then and now. And even managing to beat Lee at his own game of finding severe moves. Hassabis had said she was playing several hundred games a day against herself, and although his Nature paper seemed to imply that her learning curve would flatten out over time, it is obvious that the AlphaGo of today is more grown-up and independent-‘minded’ than that of October.

    I would guess that she is using a much bigger cloud than in October, and the DeepMind team may have tweaked some of her mechanisms. There is also the significant factor that the October match was 1hr+30s byo-yomi whereas this match is 2hrs+3x60s, giving its Monte-Carlo lookahead twice as much time to explore variations.

    Some people have wishfully hoped that a complicated ko fight might be her weakness, but I see it as the exact opposite: ko fights require reading, and with a cloud of thousands of tree searchers all working together, Alpha is very likely to be much stronger at ko fights than even the best human. We will see.

    Despite what shows every sign of becoming a triumph over the world’s best, AlphaGo still lacks what we call commonsense – or does she? One feature of commonsense is – apart from behaving like other people – an ability to handle novel situations smoothly, without the brittleness exhibited by earlier generations of hand-crafted “Expert Systems”. AlphaGo has demonstrated that she is more than capable of handling and indeed creating novel situations on the Go board.

    The Nature paper shows how it is a trivial matter for Alpha to spit out what she figures to be the most probable sequence variation, but the next giant leap in AI will be when someone figures out how to map convolution configurations into symbolic expressions that relate to a hierarchical description of the position in terms of things like group safety, influence, and potential territory, so that Alpha’s daughters will be able to explain what they are thinking about in something resembling English.

    • Why would she need to “explain”? The concept of explanation only makes sense for humans because our communication tools (speech/text/charts) are so low level (technically, low bandwidth). AlphaGo can just be copied, unlike our brains. If we could copy our brain to another one, we would not need to explain or teach.

      • Freeman Ng says:

        Mathieu, it’s not so much that she needs to explain, as that we’d love for her to be able to explain – to us!

  18. Later machines can play against each other and enjoy their time.

  19. The time of AlphaGo has begun; where are the avengers?

  20. I think an article on “why AlphaGo is NOT the end of human Go” would be very interesting. I mean, it is true that these losses have a bitter aftertaste, but there are a lot of advantages too. AI can help us get even closer to the “perfect game”, which I think is exciting by itself. I think people are overreacting and exaggerating when they say that this is pretty much the end of creative and unique play. Sure, the better AI gets, the closer we get to standardized sequences, which happens to every mind sport, but I’m sure there is still room for self-expression on the board as well. Humans are still humans, not machines.

    • Freeman Ng says:

      That article could draw on the example of chess, where computers surpassed human players long ago and are now the rough equivalent of 3 stones stronger than the best human players. Yet, they haven’t ruined chess at all, and in fact, have enhanced it. Grandmasters have used computers as a tool to get better, chess programs have inspired advances in chess theory, and the computer evaluation is now a standard part of the broadcasting of major chess events, enriching the experience of us viewers.

    • I’ve written something along those lines here:

      http://soapbox.manywords.press/2016/03/10/alphago-lee-sedol-2-0/

      Skip the game commentary at the beginning, because ‘amateurish’ doesn’t begin to cover it, but the remarks on chess and its life post-AI are much more in my wheelhouse.

      The short version is this: go is likely to survive AI mastery even better than chess, because chess has some structural problems (namely draws) that go does not. Since go is scored by points and komi is already a thing, if top-level AI play reveals that there are balance issues, komi can be tweaked to address them. Working out how to handle ‘draw death’ in chess (correspondence tournaments over the last few years have featured 80-90% draw rates, which is not good) is a trickier problem.

  21. Have people tried playing mirror go against computers? Perhaps Lee Sedol can give it a try…

  22. The komi question can be settled once and for all if we get AlphaGo to play against itself.

    It doesn’t matter if AlphaGo is optimized for win probability rather than winning margin, because its opponent (AlphaGo) is also optimized for percentage win, and both start with the same search tree, neural networks and computing power. So we have a self-controlled experiment where the only test variable is order of play, i.e. black or white.

    Start with a komi of say, 4.5 points, and have AlphaGo play one million games against itself. Note win/loss ratio. Then increase the komi stepwise to maybe 8.5 points. At some point the win ratio for black will drop below 50%, and we’ll have the answer.
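The stepwise sweep described above can be sketched in a few lines of Python. This is only a toy simulation, not AlphaGo: the hypothetical `selfplay_game` function stands in for a real engine-vs-engine game, drawing Black’s pre-komi score margin from an assumed distribution (mean 7, a made-up value) purely to make the sketch runnable.

```python
import random

def selfplay_game(komi):
    # Toy stand-in for an AlphaGo-vs-AlphaGo game: Black's raw score
    # margin (before komi) is drawn from an assumed distribution whose
    # mean (7.0) and spread (10.0) are illustrative guesses only.
    black_margin = random.gauss(7.0, 10.0)
    return black_margin > komi  # True if Black still wins after komi

def komi_sweep(start=4.5, stop=8.5, step=1.0, games=20000):
    """Increase komi stepwise and record Black's win rate at each value."""
    results = {}
    komi = start
    while komi <= stop:
        wins = sum(selfplay_game(komi) for _ in range(games))
        results[komi] = wins / games
        komi += step
    return results

for komi, rate in komi_sweep().items():
    print(f"komi {komi}: Black wins {rate:.1%}")
```

With a real engine behind `selfplay_game`, the komi at which Black’s win rate crosses 50% would be the empirically fair value the comment is after.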

    • AlphaGo can probably tell you whether the current komi (7.5) is balanced.

      But the reverse is not correct: If you change the komi, I see no way to directly tell AlphaGo’s networks that the komi has changed. If you use the current AlphaGo naively, she will play as if the komi was still the old value – and she would miscalculate whether she has won or lost a game. You would need to retrain the value network (and probably more), building a new AlphaGo for that particular komi.

      The main issue is that strategies may change with different values of komi. (If it did not, then one could analyze komi without AlphaGo, by simply playing random moves.)

      There is probably a way to home in on a correct value of komi with AlphaGo, but it is not so simple.

      • Yes, the value net would have to be retrained. Strategies remain the same insofar as the policy network is concerned, but opening sequences would be dictated by the probability percentage for each move based on the new komi. We may see black playing more conservatively and white more aggressively for lower komis and vice versa as the komi increases.

        The program doesn’t have to be re-written, but Alphago would have to use reinforcement learning (playing a few million games against itself) to re-establish values based on each new komi.

        Playing random moves is impractical for the same reason that it’s hitherto been so difficult to write a strong go program: just too many move possibilities to consider without some intelligent narrowing of the search field.

      • I wouldn’t necessarily agree. The komi value would normally be part of the board-state data structure that is the input to the neural network.

        For the sake of comparison, consider that neural network based backgammon bots can play best-of-n matches, and the best move can vary considerably for different match standings.

        The human KGS games used to initially train the networks would not all have the same komi either. In handicap games it’s mostly 0.5, for instance.

  23. Anonymous says:

    How likely is AlphaGo vs Ke Jie?

    • Likely to happen after it wins against Lee Sedol. But Ke will lose 5-0 as well.

    • Hassabis has said that Ke Jie would be next. If that’s the case, it had better be soon, since AlphaGo is improving at an incredible pace, or Ke wouldn’t stand a chance.

      • Anonymous says:

        It seems very unlikely that Hassabis said that. What source are you relying on for this information?

        • This 9th March interview with Hassabis: http://www.cnbeta.com/articles/482067.htm

          He also gave the rationale for choosing Chinese over Japanese rules.

          • Anonymous says:

            Thank you, I will be happy to witness this game. I hope that Ke Jie will be humble enough to accept playing with Black without komi throughout the series.

          • Despite a misleading headline (newspaper editors like to write misleading headlines that attract more attention), this article does not actually say anything about a planned match with Ke Jie. It just vaguely mentions the possibility of working with go professionals in China and Japan.

  24. Hassabis has already said that Ke Jie will be next. Of course the incentive to keep testing the AI against stronger opponents would be stronger for DeepMind if AlphaGo loses to Lee eventually. Given the rate at which AlphaGo is improving, Ke Jie should get into act fast if he wants to be the last and maybe only ever (assuming Lee gets whitewashed 5-0) human to beat AlphaGo.


  25. Derek Ip says:

    I believe in Lee Sedol – win the next 3!!

  26. William Chang says:

    To be fair, Lee was under time pressure, 2hr speed-Go not enough to look ahead tactically. AI has no such real constraint — simply add hardware and electrical current to buy more time. Either limit AI to equal power consumption as the human (see below), or better yet, let humans collaborate to more evenly match the AI’s (purported) 48CPU+8GPU array processors.

    On energy fairness. For kicks I found various bits of info: Average brain uses 20% of caloric intake; Chess players during a tournament burn 125Cal/hr mainly due to stress (Lee probably burned more); Soccer players burn much more during a game; 1KCal=0.00116KWH; each processor under load ~ 200W (double if over-clocked); so it’s very roughly 0.000145KW vs 11KW (48CPU+8GPU) per hour or 1:80000. The networked version of AlphaGo using 1200CPU+170GPU would be 25 times higher at 1:2-million.

    • Fruitcake says:

      Yeah, but machines use a far more convenient and scalable energy supply. Imagine how much energy I would save if I didn’t have to make dinner and stuff for myself every day, but instead could just plug myself into a wall socket while I’m asleep? Not to mention the energy of growing plants, raising and killing animals, and transporting it all again. Put the DeepMind tech into designing cleverer fusion energy chambers, and nobody will care anymore about a few megawatts more or less.

    • Your numbers are off. A person sitting and thinking will (with their whole body) burn on the order of 100 kilocalories (not calories) per hour, or roughly 100 watts.
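Redoing the arithmetic with the corrected units (all figures are the commenters' rough estimates, not measurements) gives a ratio closer to 1:80 than 1:80,000:

```python
# Back-of-the-envelope check of the correction above.
KCAL_TO_J = 4184.0  # joules per kilocalorie

human_kcal_per_hour = 125  # ~125 kcal/h for a player under tournament stress
human_watts = human_kcal_per_hour * KCAL_TO_J / 3600  # ≈ 145 W

# 48 CPUs + 8 GPUs at roughly 200 W each under load, per the estimate above.
machine_watts = 48 * 200 + 8 * 200  # = 11,200 W ≈ 11 kW

ratio = machine_watts / human_watts
print(round(human_watts), machine_watts, round(ratio))  # → 145 11200 77
```

The original 1:80,000 figure came from treating kilocalories as calories, a factor-of-1000 slip; with consistent units the single-machine gap is roughly 1:80.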

  27. [email protected] says:

    If there is an AlphaGo 2, will it make the same moves as AlphaGo 1?

  28. Not sure if 172 was a mistake – against a computer.

    I tried playing out the ko, and I think Black may be able to handle it, as she has some good ko threats. This is in spite of the fact that if White wins the ko, she has a good follow-up at L14 and M15, which would capture the tail of Black’s group (the Q16 stones). The problem is that Black has some huge ko threats, and a number of White’s likely threats look just a little too small. At any rate, Lee was in serious time trouble; I’m not sure he had time to read out the ko properly…

  29. Steve McKay says:

    I wonder if Lee Sedol should have got the conditions that the chess masters got back in their day when playing programmes, such as having a copy of the software to run against, and limitations on the hardware used and the “opening book”.
    An example is Kramnik – Deep Fritz (2002). Of course, that followed the Kasparov defeats to unlimited programmes, and was about PC-based software. At present, the computer knows all about Lee Sedol’s games, but (given the software changes) he has seen none of the computer’s games.

    • Diffusion says:

      I think a trial match one week before the tournament would have been a good idea, so that Lee would not underestimate AlphaGo. But I think the intention (on both sides) was to make this as thrilling as possible, to generate interest in Go. Frankly, my interest always peaked when I read about computer advances in that game. Any program from the past ten years is of course capable against lowly amateurs. Additionally, a risk might have been that Lee would bow out if he realized he could not win, although as others have pointed out, losing to an almost flawless player still carries plenty of honor.

  30. JazzBruce says:

    I was very disappointed with the commentary provided during game two. Chris Garlock was as close to useless as is humanly possible, providing only weak “… yeah I was wondering about that …” type comments throughout. Michael Redmond on the other hand was too far in the other direction and was constantly showing off his expertise with extended excursions of “would of/should of/could of” on the display board while ignoring the actual play for several moves at a time. Also, although it was funny for about 30 min., he dropped more Go stones during the five hours than I’ve dropped in my entire life.

    • Sorry, Jazz, I very much enjoyed Michael’s commentary. It did contain a number of errors in analysis and score estimation, but that is par for most pros standing in front of a whiteboard and having to comment rather than concentrate on the game. If you’ve ever tried it, you would understand how tough it can be.

    • SoySauce says:

      The part that is really frustrating about Garlock’s “commentary” is his constant insistence on counting the score, trying to figure out who is winning, and emphasizing how nervous he is about the whole thing. That was somewhat bearable during game 1 (even if I’ve seen a number of laypeople mention that they hated it), but in game 2 he was straight-up unbearable. The way he communicated with Redmond reminded me of journalists interviewing scientists and constantly fishing for exciting-sounding headlines (to the point of trying to put words into the interviewees’ mouths).

      • JazzBruce says:

        Exactly, and the way he would forget and leave whole groups of example stones from prior explanations on the board until two or three moves later. Yikes, it sure seemed less than good to me.

    • Anonymous says:

      Have to agree about Garlock; he is supposed to be the sidekick, yet he constantly tries to steal the limelight, even interrupting Redmond’s thoughts with sidetracks while he was counting. Redmond did a praiseworthy job of keeping his cool, but there were one or two moments when you could see him keeping himself under control. I loved his riposte when Garlock said, “I trust your count.” The cheek of it! Redmond replied testily, “Thank you” 🙂

      I wish I spoke Korean, so I could watch the Korean commentators, as the sidekick there is so much prettier (and stronger at Go) than the intrusive and noisome Garlock.

      However, we must remember that the job of a sidekick is not as easy as it looks; he/she has to occasionally remind the star commentator to come back to earth when they fly off into their own world of complexity, as all expert enthusiasts are prone to do.

      But if Garlock is a pain, the DeepMind channel director is a disaster, frequently showing talking heads or the playing board when Redmond is demonstrating a sequence on the commentary board which the viewer cannot see. Nevertheless, the production on game 2 was a big improvement on game 1, so there’s hope for him/her yet.

  31. Anonymous says:

    Hippo, what a nice, thoughtful reply to my inexperienced comment. I also “enjoyed” his commentary to the extent that I could understand it, but I was really frustrated when he would fall behind on the actual moves.

  32. There’s still hope for Lee (or humans). There is ongoing research into whether the human brain can perform quantum computation, a capacity that could outsmart conventional computers like the ones AlphaGo runs on.

    https://www.newscientist.com/article/mg22830500-300-is-quantum-physics-behind-your-brains-ability-to-think/

  33. “Did the lord say machines oughtta take the place of livin’, and what’s a substitute for bread and beans? I ain’t seen one. Do engines get rewarded for their steam?” -The Ballad of John Henry

    Fascinating to be able to witness an historical moment. Computers will definitely help improve Go, but what’s to come in other fields outside gaming? Once a program is good enough to produce a decent result, people can stop thinking and making judgments about that activity. Are you going to double-check the route your self-driving car takes to see if it’s the fastest? Or learn the stock market to see if your auto-broker is properly indexed to the market?

    • Anonymous says:

      “What’s to come in other fields besides gaming?” Indeed, that’s the real question, but as it happens, it’s a question that has been answered many times, long before AlphaGo came along. AlphaGo uses a novel combination of two basic mathematical techniques: pattern recognition and simulation. Both fields have been around for at least 50 years (CNNs are a recent development in pattern recognition), and useful applications in every domain imaginable are emerging. To give one small example (hard to find with a Google search unless you know all the words in the title, as it never attracted much interest): a pattern-learning technique that learned both to diagnose liver disease as well as clinicians and to bid at Contract Bridge as well as beginner players. And it could describe its patterns in English. It was programmed in Algol68-R and LISP 1.5 over three man-years, part-time.
      A Task-Free Concept Learning System Employing Generalisation and Abstraction Techniques.
      International Journal of Cybernetics, 9, 315-358, 1979.
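A toy illustration of that combination, with entirely made-up numbers and no pretence of being AlphaGo's actual method: a "pattern recognition" prior scores each candidate move up front, and "simulation" refines the estimate by averaging many noisy rollouts.

```python
import random

random.seed(0)  # deterministic for the example

def rollout_value(move, true_values, noise=0.3):
    """One noisy simulation of a move's outcome (toy stand-in for a rollout)."""
    return true_values[move] + random.uniform(-noise, noise)

def choose_move(candidates, prior, true_values, n_rollouts=200):
    """Score each move as prior (pattern recognition) + mean rollout (simulation)."""
    best, best_score = None, float("-inf")
    for move in candidates:
        sims = sum(rollout_value(move, true_values) for _ in range(n_rollouts))
        score = prior[move] + sims / n_rollouts
        if score > best_score:
            best, best_score = move, score
    return best

# The prior favours 'a', the simulated outcomes favour 'b';
# the combined score decides between them.
prior = {"a": 0.6, "b": 0.3, "c": 0.1}
true_values = {"a": 0.5, "b": 0.7, "c": 0.2}
print(choose_move(["a", "b", "c"], prior, true_values))
```

Neither component alone is reliable here: the prior can be wrong about outcomes, and individual rollouts are noisy, but averaging many simulations on top of a learned prior gives a stable estimate, which is the essence of the combination the comment describes.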

  34. On a 21×21 or 23×23 board, the complexity of the game would surpass the power of comprehension of the human mind. If DeepMind were set to improve itself on such boards, and humans then tried to learn from its games there, that would be progress. One could also split its algorithm into multiple pieces, make Go-like diagrams of each piece, have DeepMind find the best moves within those diagrams and so create a better diagram (a self-referencing algorithm), or use 3D cues to create the illusion of depth (texture gradient, linear perspective, superposition, shadowing, aerial perspective) and sum them into a new frame and a kind of DNA. The next step would be to transfer it to a quantum computer, with a GPU unit for something like vision, and some of the algorithms obtained could be taught in schools to our children, for human progress as a species.

  35. Quite a few comments here have been questioning certain moves Lee Sedol made, and that suggests an idea. Now that AlphaGo has demonstrated it can beat top individual Go players, how about pitting it against a ‘Human Cloud’? That is, a group of humans analysing the game and selecting moves, not just one. It could be an invited group of the highest-ranked pros, or even a real-time internet challenge against ‘the world’: any online Go player anywhere on the globe could be part of the Human Cloud team.

    The human moves would be selected in much the same manner as Alphago does it, by a ranking system. The move that gets the most votes is chosen. There should be some time element so the choice is made before a deadline, but the time period would be flexible depending on the progress of the game (obviously it would be much shorter under time pressure in the endgame).

    This would be a kind of Human AI! One person is fallible, having a ‘style’, certain propensities, and prone to errors that could be stress related, but having the Human Cloud make the choice of moves by consensus should eliminate obvious mistakes, throw up audacious brilliant moves and generally make a far more formidable foe for Alphago than any single individual. It would advance the game exponentially. Plus it would be an enthralling and inclusive spectacle that could turn into a series, especially if the Human Cloud turns out to be equal or even superior to Alphago.

    To me it seems the obvious next challenge after this. If Alphago still beats all humans working together then we know we really are f****d!
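The selection rule described above could be sketched roughly like this (a hypothetical illustration; the move names are arbitrary): collect one ballot per participant before the deadline, then play the plurality winner.

```python
from collections import Counter

def consensus_move(votes):
    """Return the most-voted move; ties are broken alphabetically."""
    if not votes:
        raise ValueError("no votes received before the deadline")
    counts = Counter(votes)
    top = max(counts.values())
    return min(m for m, c in counts.items() if c == top)

# Six hypothetical ballots submitted before the deadline.
ballots = ["Q16", "D4", "Q16", "K10", "Q16", "D4"]
print(consensus_move(ballots))  # → Q16
```

A real event would need more than this, of course: vote weighting by player rank, a flexible deadline, and some defence against ballot stuffing, but the core mechanic is just a plurality count.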

  36. This is a completely different match from Deep Blue vs. Kasparov. Now the machine plays a new Go. Black’s move 37 reminds me of what the Qijing Shisanpian says about the control of territory:

    All these things were debated by the ancients, and the rules were then
    studied by their successors. Therefore, those who do not wish to accept but who wish to change their methods, cannot know what the results may be

    The machine accepted the challenge of playing a new Go: we’ll see the results! This opens new perspectives for the game. Pros will learn the lesson with wisdom.

    Congratulations to the fans of the beautiful game infinite as the stars!
