AlphaGo defeats Lee Sedol 4–1 in Google DeepMind Challenge Match

DeepMind’s groundbreaking artificial intelligence, AlphaGo, defeated Lee Sedol 9p in the final game of the Google DeepMind Challenge Match on March 15, 2016, winning the five-game match with a 4–1 score.

Demis Hassabis and the AlphaGo team receive the signed match Go board from Lee Sedol.

After Lee pulled off a surprise win in the fourth game, hopes of a repeat performance in the fifth were high amongst Lee’s fans, but it was not to be.

Although Lee got off to a good start in game five, and AlphaGo even made a miscalculation around move 50, the computer’s superior judgment and efficiency of play eventually won the day.

AlphaGo now goes down in history as the first computer Go program to defeat a top professional player, and has been awarded an honorary professional 9 dan rank by the Korean Baduk Association (baduk means Go in Korean).

David Silver received an honorary 9 dan certificate on behalf of AlphaGo.

When you’re onto a good thing…

As in game one, Lee (Black) began with territorial moves on the 3-4 points, with Black 1 and 3.

These were met by AlphaGo’s more flexible and center-oriented moves on the star points, with White 2 and 4.

Lee enclosed his top right corner with Black 5, claiming the corner territory for himself, and took the bottom right corner territory too (from 7 to 11) after AlphaGo approached at White 6.

It seemed that Lee was following a similar game plan to the one that had worked for him in game four — namely, to take territory early on and confront AlphaGo’s influence later in the game.

Lee Sedol (right) plays the first move of game five against AlphaGo.

No territory for you!

When AlphaGo played to develop potential territory on the right side, with White 12 to 16, Lee resisted with Black 17 instead of extending to P15 and allowing AlphaGo to play around P10.

This move once again focused on taking territory and denying AlphaGo territory on the right side, at the expense of ceding center influence to White with 18.

Lee pressed his advantage in the bottom right with Black 19 and Black 21, attacking White’s three stones, but AlphaGo played flexibly and sacrificed them with 22 and 24.

Though the game was still very close, Lee’s plan seemed to be going well so far.

Shifting gears

As negotiations shifted to the top left corner, Black changed tack slightly with Black 31.

The standard local tactic after White 30 would have been to continue moving into the corner at C17, but then White would have played at D14 and practically connected its pincer stone (White 30) to its top left corner.

In the process of doing so, White would have built a wall which greatly expanded its influence over the center.

Black 31 made contact with White’s pincer stone instead, which indicated that Lee wanted to prevent White from closing off the center.

Up to Black 39, Black stabilized his group in the top left and established a beachhead in the center, but allowed White to play on both sides and retain the initiative.

The overall position was still well balanced, and generally speaking this was a sensible strategy for Black.

Nevertheless, Lee appeared to be taking the game in a new direction, instead of repeating his all-or-nothing strategy from game four.

David Silver talks to Chris Garlock and Michael Redmond before the game.

AlphaGo hallucinates

AlphaGo continued to develop the center from 40 to 46, and then embarked on a complicated tactic to resurrect its bottom right corner stones, from 48 to 58.

Though this sequence appeared to be very sharp, it encountered the crushing resistance of the tombstone squeeze — a powerful tesuji which involves sacrificing two stones, and then one more, in order to collapse the opponent’s group upon itself and finally capture it.

This was a strange and revealing moment in the game.

Like riding a bike

Even intermediate level Go players would recognize the tombstone squeeze, partly because it appears often in Go problems (these are puzzles which Go players like to solve for fun and improvement).

AlphaGo, however, appeared to be seeing it for the first time and working everything out from first principles (though surely it was in its training data).

No matter where AlphaGo played in the corner, Lee was always one move ahead, and would win any race to capture. And he barely had to think about it.

Kim Jimyeong (left) and the one and only Kim Seongryong 9 dan commentated on game five.

It’s a bit like when you learn to drive a car, ride a bike, or tie your shoelaces. To begin with, you really need to focus on what you’re doing and it’s all very hard.

Remember to look at the road, oops, I’m going too fast, check the mirror, remember to look at the road, change gears… oops, too fast again.

But given time, it gradually becomes second nature, to the point that you barely need to think about what you’re doing at all. You just do it.

For Lee, and all other experienced human players, the moves from Black 49 to 59 were like that. Strangely, and perhaps inexplicably, they were not for AlphaGo.

Demis Hassabis tweeted about AlphaGo’s error during the game.

Lee Sedol takes the lead

Lee Sedol 9 dan.

Because of AlphaGo’s mistake, Lee Sedol took the lead.

However, it wasn’t a commanding lead. Winning a game of Go is like running a marathon, and Lee was only a few points ahead.

Fortunately for AlphaGo, the damage from its mistake was mostly confined to the bottom right corner, involving stones which it had already decided to sacrifice.

Though the machine had stumbled, it still had plenty of chances to catch up and Lee needed to stay focused.

Don’t celebrate too early

There’s a saying we Go players have, about not celebrating too early.

It’s not uncommon for one player to win a small battle early in the game and take the lead, only to have their focus waver and lose the war because they were too pleased about their earlier success.

This has happened to most players at some point, including me…

For a moment I recalled the wonderfully entertaining and varied conspiracy theories we have seen about this match and joked about a new one with a friend.

Maybe AlphaGo deliberately allowed its stones to be captured, to soften Lee’s resolve?

No, I don’t actually believe that’s what happened at all, but despite Lee Sedol’s incredible mental toughness he’s still susceptible to the vicissitudes of the human psyche, just like everyone else.

The bears ambush Goldilocks

White 60 to 68 flowed smoothly and developed the top. It was time for Black to do something about White’s sphere of influence at the top and in the center.

Lee chose the shoulder hit of Black 69, which was a kind of ‘Goldilocks’ move — not too shallow, but not too deep.

Such moves are difficult to deal with, because they stretch just far enough to be annoying, but not so far that a counter-attack is easy.

In other words, they are just right.

Go legend Nie Weiping 9 dan got in on the action with his own live commentary.

Regardless, AlphaGo counter-attacked with White 70, rather than defending at G17 and allowing Black to skip lightly to J16.

An experienced human player would think twice about this attack. It feels good to play, but there’s also a fear that the strategy will backfire if Black can withstand the initial attack.

That’s because trying to capture Black’s stone at G16 could leave White’s potential territory in the center in ruins if White fails to capture after raising the stakes.

In fact, this kind of situation is usually scary for both players. Except, in this case, one of the players felt no fear.

Achilles’ heel

The position was still wide open, so there were all sorts of flexible tactics which Black could have tried.

For example, the elephant’s step to J14 is one idea for contesting the center.

Lee, however, chose the safe and simple strategy of making a living group at the top, from 71 to 79. He clearly believed that this would be good enough.

Lee Sedol at the post-game interview.

The broader situation in the match was important here.

After losing the first three games, Lee had lost the match and had nothing to lose. Most observers believed at that point that Lee would lose 5–0.

Given this freedom from expectations, Lee was free to pursue the aggressive and somewhat reckless strategy of gambling the whole game on a single battle in game four.

But now the situation was different. He had won a game, proving that the machine was fallible, and scores of people wanted him to win again.

The difference between 4–1 and 3–2 is significant, because the latter result would have given ‘team human’ room to argue about the result and raised the question: what if it had been a 10-game match?

Lee had something to lose, and this was where the Achilles’ heel of humanity was exposed. Creeping fear set in.

The danger of being cautious

Lee had already played cautiously up to 79, and when White played at 80 he ensured that he was safe with 81.

But this was too careful.

White harassed Black’s stones at the top starting with 82, and although Black managed to make two eyes (thus ensuring the life of his stones), his efforts at the top only earned him three points in the end.

Meanwhile, far from White’s potential territory being ruined, the computer was able to wall off a large area around the right center, up to White 90.

A moment of caution had allowed AlphaGo to bully Black and catch up. The game was even again.

God mode enabled

As the game continued, AlphaGo played a series of unusual (even boring looking) but subtly impressive moves which did not bode well for Lee.

White 100 initially looked too tight, but interestingly avoided approaching Black’s lower right power too closely and exposed a weakness at K7.

For Black’s part, 101 to 105 comprised an impressive combination which reduced White’s center potential and indirectly protected the cutting point at K7.

White 106 didn’t completely secure the lower left corner for White (C5 or C4 instead would), but it helped three stones on the left side after Black 105.

And when Lee invaded White’s bottom left area from 107 to 111, White 112 was an unintuitive but effective way to attack Black (Black 113 was also very nice).

For the rest of the game, there was an Agent Smith-esque feeling of “inevitability” to White’s play, despite Lee’s valiant efforts to catch up.

Once Lee lost his advantage at the top, the pressure on AlphaGo was relieved and it was able to do what it does best — maintain a very small lead, minimize risk and maximize its probability of winning.
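
As a rough illustration of the difference between playing for points and playing to win, here is a minimal sketch in Python. It is not AlphaGo’s actual algorithm, and the candidate moves and numbers are invented; it only contrasts a player who maximizes the expected point margin with one who maximizes the estimated probability of winning.

```python
# Illustrative sketch only -- not AlphaGo's code. The candidate moves and
# their evaluations below are invented for the sake of the comparison.

# Each hypothetical candidate move is paired with
# (expected point margin, estimated probability of winning the game).
candidates = {
    "aggressive invasion": (8.5, 0.62),  # could win big, but might backfire
    "solid connection": (1.5, 0.91),     # keeps a tiny lead, very safe
    "large endgame move": (4.0, 0.78),
}

# A margin maximizer plays the move with the largest expected score.
by_margin = max(candidates, key=lambda move: candidates[move][0])

# A win-probability maximizer (the behavior described above) prefers the move
# most likely to win, even if the final margin is only half a point.
by_win_probability = max(candidates, key=lambda move: candidates[move][1])

print("Margin maximizer chooses:", by_margin)                    # aggressive invasion
print("Win-probability maximizer chooses:", by_win_probability)  # solid connection
```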

To this imprecise human author, it simply felt like god mode had been enabled.

Demis Hassabis (left) shakes hands with Lee Sedol with David Silver in background.

Brief analysis of game five

Here is An Younggil 8p’s preliminary analysis of game five. Further game commentary will be posted over the coming week.

To ensure that you don’t miss it, click here to subscribe to our weekly Go newsletter.

A territorial opening

Black started with a territorial opening, and White 12 was the start of a cutting edge opening strategy.

Lee had planned his opening with other pros, and his strategy in this game was to take territory first and invade later, which was similar to game four.

Black 17 was a sharp attacking move, and White took influence while sacrificing its three stones from White 20 to 24. The result was well balanced.

Black 31 was the right choice to reduce White’s influence over the center, and the position was still even up to Black 39.

Aja Huang plays the first move for White, on behalf of AlphaGo.

Unnecessary moves

White 40 and 42 were powerful, but Black 43 and 45 were firm responses.

White’s moves from 48 to 59 were unnecessary, and White lost ko threats and aji by playing this way. Black took the lead.

Black 69 was an excellent move to reduce White’s influence, but White’s cap at 70 and the continuation up to 80 were flexible and gentle, maintaining the balance.

White reverses the game

Black 81 was too cautious. Black should have played at White 84 to make a better shape with larger eye potential.

White 82 to 84 were sharp, and the game was reversed in White’s favor.

White 90 and 100 were calm and solid, and it appeared that AlphaGo thought it was ahead.

Black 101 was ingenious, and Black caught up by erasing White’s thickness in the center up to 105.

Michael Redmond 9 dan takes a break.

Still favorable for White

Black 107 and 109 formed a good combination for invading, and Black 113 was a good tesuji, but White’s responses were still patient and solid.

The result up to Black 127 was satisfactory for Black, but White took sente, and the game was still favorable for White.

However, White 136 was questionable, and Black managed to catch up a little up to 147.

The game appeared to be even again, but actually White 154 was big, and White was still slightly ahead.

Hail Mary

Lee had been aiming for the Hail Mary pass of 167 and 169, but White’s responses were precise and Black lost a few points up to White 180.

White 186 and 194 were big endgame moves, and the game was practically decided.

Black tried to catch up, but White’s endgame from here on was perfect, and Black didn’t get any chances.

Nie Weiping 9 dan waxes lyrical about Go.

AlphaGo wins the game

In the end, White was winning by 2.5 points, so Lee Sedol resigned.

AlphaGo showed us another impressive game, and eventually it seemed that there wasn’t any clear way for Black to win.

I think that AlphaGo calculated that it was ahead by at least half a point, so it played calmly and solidly in the second half of the game.

Lee Sedol receives a commemorative photo from the AlphaGo team.

Game record

Lee Sedol vs AlphaGo – Game 5

 

Download SGF File (Go Game Record)

 

What now?

Regardless of how you feel about it, the world of Go is going to be very different after this historic match.

As a result of this spectacular contest, we welcome a contingent of new players who have only just heard about the game (hello there!), and public awareness of Go is higher than it has ever been.

My hope is that a reasonable number of new players will stick around and we’ll have a stronger base for promoting Go in the West from now on.

Perhaps we can organize Learn Go Week again later this year?

The future of Go

Already some are concerned that people won’t want to play Go anymore once computers surpass humans, but others point to the continuing popularity of chess after Deep Blue defeated Garry Kasparov, arguing that the level of play is higher than ever.

Personally, I will continue to play and enjoy Go. How about you?

I play Go because it’s beautiful, and fun. It’s a game which never ceases to surprise and fascinate me. There’s nothing that compares to that feeling of being ‘in the zone’.

I’ve made so many friends through Go, and learned so much about myself. I’m really looking forward to playing against something like AlphaGo.

For most of us, the fact that there’s always someone out there who’s better at the game than us has never really mattered before, so why would it now?

The majority of professional players seem to be excited about the arrival of AlphaGo too.

The preliminary rounds of the Bailing Cup were held in China over recent days, and we’ve already heard from friends that a number of pros are experimenting with AlphaGo’s opening from game two.

The AlphaGo team.

The future of AI

And in the longer term, human society will have to become accustomed to this new technology as it is gradually applied to fields beyond Go. Maybe now is a good time to start talking about that?

As DeepMind CEO Demis Hassabis freely admits, there are difficult conversations which continued advances in artificial intelligence will bring to the fore.

As funny (or scary) as they might be, jokes about Skynet do not do this issue justice.

In the press conference after the match, Hassabis said, “As with all powerful technologies, they bring opportunities and challenges.”

“We think of AI at DeepMind as a powerful tool. A tool to help human experts to achieve more.”

“In general, we believe in open and collaborative research, and we believe in the power of AI to benefit the many, not just the few.”

What’s your take on this? Join the conversation below.

Lee Sedol and Demis Hassabis answer questions after the final game.

About David Ormerod

David is a Go enthusiast who’s played the game for more than a decade. He likes learning, teaching, playing and writing about the game Go. He's taught thousands of people to play Go, both online and in person at schools, public Go demonstrations and Go clubs. David is a 5 dan amateur Go player who competed in the World Amateur Go Championships prior to starting Go Game Guru. He's also the editor of Go Game Guru.

You can follow Go Game Guru on Facebook, Twitter, Google+ and Youtube.

Comments

  1. Great article summarizing all that has happened. I believe the futures of Go and AI will both be brilliant. As for professional Go, the exploration by AlphaGo is going to open the door to unknown territory in the game and thus improve people’s understanding of strategies and tactics to a new level. New openings and moves supported by good reading and global calculation will appear more frequently. For AI, improvements to avoid the kind of collapse seen in game 4 should be on the way, though they might take longer than people expect, and applications in areas outside of game playing should be on the horizon.

  2. Jorik Mandemaker says:

    Many thanks to the gogameguru staff. I’ve had a lot of fun watching the games and the coverage of gogameguru and the discussions contributed a lot to that.

  3. I will continue to play Go because of the value that it has brought to my life. It is a game that expands the mind. You have to learn to think more flexibly. Moves that seemed inconceivable before gradually become more intuitive as you get better – and then you have a new challenge to get even a little better as time goes on. It is such a great feeling to be able to understand a position or solve a problem that you know you could not have solved a year before. Go also has taught me to be calmer and to care less about “winning” and more about having a good game. I’d much rather lose a game in which I played my best than to win a game because my opponent made a dumb blunder. I believe that Go players are better, more decent, human beings than the average because of the appreciation for open-mindedness and nuance that the game necessitates.

    I don’t think AI will hurt amateur players – in fact it ought to make it easier to get better. However, I do wonder about the pro world. I wonder if parents will regard it as less valuable to encourage their talented children to follow the professional’s path given that the AI will always be superior. It’s how I would feel about it. It’s one thing to have Go as an enjoyable hobby, but it’s a different thing to dedicate one’s whole life to it. Maybe with the help of computer study we will see more and more very strong amateurs, ones who today would be seen as top pros, but perhaps at the expense of a decline in the professional world.

    As far as more general applications of AI are concerned, I hope that it will be a net benefit to humanity. But it is something that will surely be very difficult to adapt to. Soon many jobs will be lost to AIs. Thanks to this kind of technology robot labourers will be able to harvest crops in a way they cannot today. AIs will interpret xrays and mris, and so on. Losing such jobs both at the labor/blue collar as well as white collar levels will mean an unemployment problem such as we have not seen before. That means it will be absolutely necessary to introduce a welfare system that makes it possible for people to live without working. It also means we must work even harder to reduce the human population or else we will have too many people trying to eke out fewer and fewer resources in a world where they just won’t have the skills to find jobs that sustain them and their families…

    • Anonymous says:

      Sober thoughts, Vlad. However, to me the endgame picture concerning AI seems even darker: total control over humanity for a group or, more probably, a single person. As for me, there isn’t anything worth living for that AI is better at than humans.

      • Thanks @anonymous 5:16. Yes, it does raise the spectre of drone armies/security forces as well as surveillance technologies…

  4. I went to bed just after move 89, was disappointed when I heard the results, both because I was pulling for a 3-2 split AND because that meant I missed AlphaGo’s comeback.

    I’m pretty much a novice, but even I could tell it was an exciting match. I think this is probably the coolest time for AI Go where a program is playing just at/just above champion level play. Before now, 5-0s were a given, and I suspect in the near future, they will become a given again (although now in the other direction). But the time where we can get a 4-1 is rather neat.

  5. So maybe Lee lost because of move 169 and the continuation, if the final margin was only 2.5 points for AlphaGo? Probably Lee, playing in byo-yomi, couldn’t count exactly there.

    • Younggil An says:

      @Derek: I think AlphaGo was already leading by at least half a point at the time.

      It appears that Black lost a point from the trade up to 180, but Lee’s endgame was also practically perfect afterwards, given the byo-yomi situation.

    • Of course 🙂 Lee is the best! Maybe with more time, Lee would win! 🙂

  6. Difficult conversations? Only for those who are anti human intelligence and technology. I welcome the strides in AI and can’t wait for those days humanity completely reengineers its DNA from scratch and merges its intelligence with AI. Let the floodgates open.

  7. Was it just me, or did Lee seem to be fed up with the event and bail on the press conference quite abruptly?

    • Crystal Shepard says:

      It has to be quite frustrating for him. Still, his post game response compares quite favorably to Kasparov’s response after his loss in the comparable chess matchup.

    • Younggil An says:

      I think Lee was just too tired from the match, and the last game was a source of regret for him as well.

      He said he was excited and happy during the match, and I think that’s true, since he had a very special experience which he’d never felt before.

      • How does the effort that Lee Sedol put into this match compare with the effort for a championship final match? Do you think he put in much more effort for this match with AlphaGo than he would for a championship final match?

  8. gonewbie says:

    Lee could have stayed for the post-award press conference. But it’s totally understandable that he didn’t. He already gave a press conference right after the game. And he probably had nothing new to say. After the game and all that happened afterwards, it would be unreasonable to expect him to stay for the second press conference.

    • Younggil An says:

      Lee said that the winners were AlphaGo and the Google DeepMind team, so he thought he’d better leave after a short comment of congratulations, so that the AlphaGo team would get the full attention of the media.

  9. Lafcadio says:

    I think Lee’s win rate would increase the more games he played against Alphago.

    Thanks David and GoGuru for the commentary, forum and game records!

    • Younggil An says:

      @Lafcadio, thanks for your kind comment.

      Yes, Lee also said so, and he doesn’t think AlphaGo is perfect. However, after some upgrades, it will become more difficult for a human champion to defeat, I guess.

  10. To me the main lesson is that even for AI researchers, it is hard to predict when this or that will happen (a machine reaching top professional level at Go had been assumed to be a decade away until recently).

    This means that, if there is any substantial danger with developing a strong AI, it is going to be horribly difficult to navigate around (because we are not going to see it coming).
    Of course, we do not know if it is going to be dangerous, but even a chance of 1/10 should make everyone extremely cautious.

    We should compare this with climate change (another species-wide risk). There is a substantial risk of catastrophic climate modification. But we are 30 years after the first warnings backed by solid evidence, and not much has been achieved (yet, hopefully!), even if the issue is pretty obvious to everyone.

    The problem with the possible dangers of AI is that they are immaterial and much less visible, and the bad consequences will probably develop much faster.

    I believe humanity is going to gamble huge on a single game pretty soon, and I truly hope that everything will turn out OK.

  11. John Harding says:

    I’m glad I found your site (because of the AlphaGo vs. Sedol coverage) – I’ve often wanted to learn to play Go.

    But it looks like you might need to update your “What is Go?” page – specifically the Go and Computers section 😉

    Great site.

  12. The smart minds at Google will figure out what went wrong with AlphaGo in game 4. I can’t wait to work at Google!

  13. Definitely the game is losing something with this match. An exclusivity, a magic.
    Dominated by computers. Ok, we have to live with it.
    I hope we get something in return.

    • What is lost? Computers are a purely human achievement. Nothing they do, thus far, is anything but a reflection of our abilities.
      Pros may go away but, as in “The Player of Games” a class of dedicated “amateurs” will take their place.
      It’s also possible, as in the same book, that games which incorporate an element of chance (thus guaranteeing the impossibility of AI domination) will arise.

    • dichotomy says:

      I wonder what exactly is lost. If something is lost, perhaps it is human fallibility? Consider the music we listen to on the radio. No singer in the world can produce such exceptionally excellent performances live, because the recordings are combinations of several takes, chosen to get the cleanest, clearest, best parts of each song. What is created is, ultimately, the best of humanity.

      As for the magic being lost: I understand exactly how a rainbow is created. Knowing that it comes from the reflection and refraction of light through droplets of water does not diminish its wonder, but rather enhances it. Every time something new is learned, a broader, arguably more magical, perspective on the world opens up.

      • Michael Brownell says:

        There are LOTS of people who prefer one take with flaws to edited together takes. There’s a certain energy and spirit there. Glenn Gould thought recording would lead to the end of live performance, but it hasn’t been so.

  14. I wonder if Black 125 at K6 was a better solution to the cutting problem. It seems to protect against White K2 as well.

    • Younggil An says:

      @Lake, that’s a good idea.

      However, White will extend at J7, and Black’s two weaknesses will be exposed at J2 and K7.

      That’s why Lee just chose to connect quietly with 125.

  15. Having achieved that 1 win, and depending on how AlphaGo is improved before the next time it plays, Lee stands a chance at being remembered, not as the first 9 dan professional to lose to a computer, but as the last human to win against one.

  16. Thank you for this. I downloaded all the SGF files and played through ’em while reading your commentary. Awesome coverage of a historic event. And honestly, speaking as a long term Go player and computer programmer, I never thought I’d see this happen in my lifetime.

  17. David Tan says:

    For those who claim or fear the machine’s superiority to humans, even at the game of Go, please consider the following mitigating factors:

    1. This was not a one-to-one match. It was one human against many computers (I heard there were over 1000 CPUs and hundreds of GPUs). As a result, the computers consumed so much power doing the task that they required air-conditioning (I am sure Lee did not need a special cooling tool for his brain). Computers also had lots of memory and tools to aid their calculation, whereas Lee had to sit there alone (without even a pencil and paper to help).

    2. Before the games, Lee knew very little about AlphaGo. The only records were the outdated (or even misleading) ones from October, since AlphaGo had improved vastly since then. On the other hand, AlphaGo had plenty of training data from Lee and other top players. This was clearly extremely disadvantageous to Lee – given Sun Tzu’s “know yourself and know your enemy” motto.

    3. Further, notice how much Lee had improved since game 1. I would contend that Lee’s best games were games 4 and 5. AlphaGo, however, was not able to adapt its strategy and playing style to counter Lee’s changes.

    Cheer up, humans are still much better at playing Go than computers.

    • It is inevitable that as hardware improves, Computer Go will be unassailable. But I agree that AlphaGo v18 is not yet at that level.

      According to legendary player Nie Weiping:
      AlphaGo is a 6-7 dan pro in the opening, 13 dan in the middle game, and 15 dan in the endgame.

      Obviously there is no such thing as above 9d but if there were, the above analysis is more or less correct IMHO.

      AlphaGo v18 can be defeated by pros, probably consistently once people have figured out the opening, which is where AlphaGo is weakest. But it would take players with a strong opening to early middle game. Players who are stronger in the later game will not do well, because AlphaGo will simply be able to more or less brute force the optimum way to play and leave the human no room to catch up if they fall behind even slightly.

    • 1. The computer also had much less compute power than Lee Sedol. It wasn’t able to incorporate new knowledge as fast as us. It’s also trying to emulate an old-fashioned model of how we thought the brain worked.
      It has so many disadvantages compared to us that this achievement by the team is all the more amazing.

      2. It’s been said, numerous times, that it DIDN’T train using the games of top professionals (especially Lee Sedol). It was trained almost entirely on amateur internet matches.

      3. Ignore game 1. That was a one-time event.
      Looking at the rest of the games, it seems like there isn’t any one strategy that you can employ that will reliably give you an advantage against AG.
      For all we know, that fourth game was a 1 in 10,000 occurrence. We won’t really know until it’s played more games against the top players.

      • Crystal Shepard says:

        “For all we know, that fourth game was a 1 in 10,000 occurrence. We won’t really know until it’s played more games against the top players.”

        There’s a sense in which we’ll never know. The program will undoubtedly be improved yet more before it plays the next professional. I wouldn’t be surprised if Lee Sedol was the last human to ever beat AlphaGo.

  18. AI improvement will transform our society. But just like our current economic inequality, the benefit will be enjoyed by only a small number of people. They will have access to it and use it for their own advantage.

    • You don’t know that. Also, you’re not considering the advances to technology/society that it can bring.
      Even now, the poorest among us (in the First World countries) have access to a level of medical care that the wealthiest humans on earth didn’t have 50 years ago.

    • Tom Hancock says:

      Agree. Those who think IT is going to usher in a libertarian paradise for everyone are either mistaken or part of the 0.0001% who benefit from technocracy.

  19. I have to say this game was the most impressive to me, because Lee didn’t seem to make any obvious mistakes. Both players proceeded very firmly and cautiously. It didn’t feel to me very much like the modern style of Go, in particular Korean style.

    I could only say it felt more like a classic Japanese style, but with a faster and more modern opening (almost as if Shusaku had learned modern joseki ;). I would compare its play to the classic games against the various go houses in Japan (Honinbo, Meijin etc.), where the master had an entire team of eager students to pore over the game and help him to find the best move. And of course we know that its training reflects that: it processed millions of online games played by strong amateurs… not pro games, because there simply weren’t enough of them.

    It lacks the personality of a human player. Beyond even a go house, it felt more like an army of students, in the thousands, had written down their best choice for every move, and submitted them. Then, an algorithm had processed and selected the best one, devoid of emotion or personality, uninfluenced by teaching, or rules of seniority.

  20. Much of what people think of the future is based on the inevitability of hardware becoming increasingly available, as it has in the past, and on humanity accepting this new technology. Is it inevitable that hardware continues to improve, though? Much of the hardware is stagnating because the cost involved with making processors smaller and more powerful is starting to become too large for the potential payoff. This is a very impressive feat and it is quite possible that I am wrong, but everything I know about computer hardware suggests we are fast approaching a limit to what we can achieve with current technology.

    Secondly, we only have to look at something like self driven cars to expose what people think of technology. It’s easy to fool ourselves into thinking that people will welcome these new technologies with open arms, but people are more conservative than we like to believe.

    As for Go, I’ll continue to play the game because I love playing it. I don’t care if Google makes some super AI Go bot because it has no effect on me. There are robots that can lift cars, but that doesn’t stop people from lifting weights. There is more to life than just trying to be the best at something for the sake of being the best.

    I congratulate the AlphaGo team on their achievement and I look forward to being proved wrong again on my pessimism about the potential usage and benefits of this technology.

  21. AnthonyC says:

    I hope to see Kie Jie vs AG next. 🙂

  22. AnthonyC says:

    * Ke Jie

  23. David Britt says:

    Thanks for your wonderful commentary throughout the series. I look forward to hearing what you have to say about a future Ke Jie – AlphaGo match!

    As for the question of “what now,” I totally agree with your opinion that humans will continue to do what they do. As someone who will never be more than an amateur at Go, I feel lucky to live in a time when I can expect to see the highest level Go games ever played. My (surely vain) wish is to see the theoretically perfect game of Go in my lifetime. If AlphaGo gets us closer to that, then I’m all for it!

  24. Next step 21×21 board?

    • Crystal Shepard says:

      As I understand the current situation, Lee Sedol would have probably won 3-0 (and 5-0 with the additional matches) on a 21×21 board. AlphaGo hasn’t trained on it at all*, whereas Lee Sedol is operating, in large part, by principles that would apply fine on a larger board.

      Not to diminish what AlphaGo has done (a few years ago I thought it wouldn’t happen in my lifetime), but it lacks even that much flexibility. Part of what is impressive about Lee Sedol’s go (relative to AlphaGo’s) is that it comes from a general purpose mind, one that can also write poetry, tie shoes, and give press conferences after games.

      *If it has trained on a 21×21 board, it would be from a very small database.

    • Agreed. I think AG would struggle at that (relatively; that is, it would be perhaps one stone weaker overall, though still indomitable in the end game).

  25. I will continue to play Go and enjoy it, but maybe some things will change:

    1. If the Google engine/AI can be combined with a good interface, then amateurs can analyse their own games and understand where they went wrong. So maybe this will have an effect on the teaching income of pros.
    2. Watching pro games online will become annoying, because a whole lot of kibitzers will be analysing the game using AlphaGo, and people will stop thinking about the game and will at every move wonder “what does AlphaGo think about this move?”.
    3. Two-day matches in Japan will probably be abolished, as the players would be able to use AlphaGo to analyse the position.
    4. I really hope that this will not lead to “cheating” allegations in tournaments, both pro and amateur.

    • AlphaGo runs on some 2000 CPUs and 200 GPUs. It’s not like everybody can run it on their cell phones. So most of this shouldn’t happen yet. But this is possible in the future.
      Also, some cell phones can beat chess pros now; however, chess has not been abolished, and the current chess champion, Magnus Carlsen, has the highest Elo rating ever achieved by any chess player.

  26. Playing Ke Jie soon is a good idea, not because his level of play is different, but because he could prepare. At least games 1 and 5 were highly influenced by the lack of preparation; even a rematch would mean a better score for Lee.

  27. We can hardly compare a team of people playing against a single expert senior player. In fact, a collective intelligence will generally give the best response in situations where there are no experts.
    But, because of errors committed by people and the difficulty of making a good voting system, a good expert can be more efficient than hundreds of people.
    Partly because of the internal decision process which helps to select the right response, which is a very complex task (like the neural network policy part of the AlphaGo program). Remember Kasparov playing against the “World”, a game that he won (https://en.wikipedia.org/wiki/Kasparov_versus_the_World). And that was not the only time that Kasparov won against a community of people.

  28. Chris Miller says:

    I’d be very curious to see how AlphaGo would handle playing with a handicap. No low-dan amateur could beat a human 9p even with an 8 or 9 stone handicap. But AlphaGo, with a near-zero probability of winning and therefore no good moves, might fall to pieces as it did when behind in game 4. I’d be very curious to see such a match.

  29. Greg Christensen says:

    The debate about whether or not AG is unbeatable is an interesting one.

    Consider the fact that if AG plays AG, one of the AGs loses. Games are interesting in part because there will generally be a winner and a loser. In Go, AG doesn’t have any particular information that a human player doesn’t. AG can certainly make what look like “divine” moves, but there is simply nothing preventing a human from doing this. While AG has some advantages, technically there is nothing preventing a player from playing the perfect game against AG and winning, even if only by 0.5 points.

    A couple of other prerequisites for any such debate concern the assumption that the counting rules are fair. There have been komi rule changes, and in fact in all games Black plays with a “handicap.” If that handicap is not precise, it will certainly make games appear more one-sided than they may actually be. One interesting result of AG playing AG in hundreds of thousands of perfectly played games would be to understand whether Black or White has any advantage in the current system. As the rules are adjusted, players will have an even better chance of beating AG in a fair match.

    Anyways, in my opinion there is nothing preventing a human from beating AG except for near perfection. We’ve already agreed pro players are very close to that space and in a game like Go there is an imposed limit on that perfection as while the number of moves is vast it is still a finite number.

    • Crystal Shepard says:

      I think your arguments only hold in some theoretical sense. There is nothing technically to prevent a player from playing a perfect game against AG and winning. However, there’s nothing technically preventing me from playing a perfect game against Lee Sedol and winning. Yet at my skill level Lee Sedol simply *is* unbeatable.

      As for pro players being near perfection, I see no reason to believe that whatsoever. Why should I? The fact that there is such a huge range between a rank beginner and a 9-dan pro seems to me to argue that there might well be quite a range above them. It might simply be that human players are very close to being as good as human players can be.

      • Greg Christensen says:

        You’re certainly right, and I admit I could have more clearly added “as unlikely as it might be.” Regardless of the likelihood, or the probability being astronomical, I think an even more dangerous statement is “human players are very close to being as good as human players can be.” This seems to me like a pessimistic view, especially considering it is widely understood that the marvel that is AG wasn’t even possible without recent human ingenuity. It’s ironic how we look at AG and perceive it as mostly limitless when in fact AG is limited by the capabilities of its programming. As much as the doomsayers like to believe otherwise, there is no way that AG, without the intervention of humans, can do anything besides play Go at this time. When we look at the human in this equation we assume a number of limitations, when so far in all cases it has only been the human that has not yet met its limits, and we certainly cannot begin to assume what those are, in my opinion.

        • Crystal Shepard says:

          It seems to me that “pro players are near perfection” and “human players are very close to being as good as human players can be” provide exactly the same diagnosis for the future of human advancement in Go – a fairly static one. Where they differ is in the space beyond human capability – the former asserts there isn’t much, the latter is neutral on the matter.

          One thing that a player better than humans will do is help us gauge that. At the moment, Lee Sedol plays very close to the quality of game that AlphaGo does, and understands all of its moves. If that’s still the case in a few years, that would provide evidence that the best pro players are fairly close to the perfect game. If pro players are beaten by a future version of AlphaGo and have significant trouble analyzing the games, it’ll be evidence that there’s room for better play that’s currently, and perhaps forever, beyond us.

          Any way it goes, however, I see no reason to despair. Humans can’t run as fast as planes, but we still find footraces a worthwhile endeavor. Chess didn’t die when computers got to the point where the best players couldn’t manage to beat them at all. Go will continue to be a beautiful and elegant game that challenges and improves those who play it, no matter what happens with computers.

          • This was a thoughtful post, though I disagree a bit (possibly).
            One, I’m not sure we know that Lee Sedol is close to alphago. We simply have far too small of a sampling size to make a judgement, I think.

            Two, I’m also not sure that we can use analysis as a metric to determine how advanced is an AI. For one thing, it’d be, perhaps, impossible to separate post hoc “understanding” from a genuine understanding of the tactics. There’s also the current issue where, for the top players, there can be great differences in the analysis of certain moves, or indeed, even the standing of a game (as I understand it).
            All of this is by way of saying that humans seem to be right at the limit of our ability to analyze something like Go unless we use heuristics (which machines will also do, but they’ve the advantage of not having any memory/compute limits… at least as far as things are now).

            • Crystal Shepard says:

              I think the fact that Lee Sedol won a game, and that experts believe there were points in the other games where he was ahead, gives us good reason to believe that Lee Sedol is close in skill level to AlphaGo right now. We’d use those standards to judge whether two human players were close in skill level, and I don’t see why they wouldn’t apply when a program is involved. Though we’ve already seen differences in what AlphaGo’s skill looks like from what human skill looks like – it missed a tesuji that no human playing Sedol would have missed. But wins and losses and competitive games are still a good overall measure, I think.

              I think the way to distinguish between genuinely understanding (future) AlphaGo’s tactics and only having a post hoc “understanding-lite” is whether we are capable of using those tactics in our human games. Or, at least, whether people at the top levels of the game are capable of doing so. That’s also true of whether we truly understand go heuristics, or whether we only understand how to say them back to our sensei. 😉

      • Humans, good as they are, can only perform certain calculations at a certain speed. The AI can (and will in the future) perform those calculations far faster. The time will come when, after the first move, the AI has calculated every combination – right down to a 0.5 point win. A human has to play the perfect game, which is impossible under time pressure.

  30. Tom Hancock says:

    People comparing AIs to minds are missing the fact of subjective experience. Human minds aren’t primarily about calculation. Did Alphago enjoy the matches?

  31. forevergo says:

    I wonder if Lee lost due to playing too much “prevent defense” after he gained the lead by winning the corner fight brilliantly. Instead of continuing on the attack, he started to play safe and decided instead to try to nurse his slim lead. One thing we can all agree on is that you will never beat AlphaGo by trying to out-grind it 1 point at a time. You have to be constantly attacking!

    Anyone else agree?

  32. There is a certain symmetry in how AlphaGo 9p and human masters will proceed to advance the game from now on. AlphaGo cannot be taught human strategy; the corpus of knowledge developed over time and transmitted via textbooks and word of mouth is not accessible to it, and we cannot explain to it why “sente” is important – and this contributed to perceived weaknesses in pre-match analysis. Absent this channel of knowledge transfer, it learns by observation and experience, developing its own strategic and tactical patterns. These patterns, potentially revolutionary contributions to the game, in turn cannot be learned by human players, except by observing AlphaGo’s games. We cannot learn its strategy in the terms that AlphaGo formed internally; the bridge of pattern transfer is closed both ways. It is truly a foreign intelligence – but clearly intelligence.

    • Anonymous says:

      AlphaGo does not have or use any strategy. It just runs an algorithm that decides which move it assumes will result in the largest winning probability. Of course, this algorithm makes use of a database of games/sequences/positions/moves which can continuously be modified and/or expanded. But “strategy” is a human term that simply does not apply to a computer algorithm.

  33. Anonymous says:

    I wonder whether top pros will sooner or later (in two or three years?) accept the idea of taking handicaps against AG. This would be the metric for measuring how far beyond human beings an AG has become.

    • The handicap between 2 consecutive professional dan levels is about a third of a stone. I don’t think AlphaGo (currently) is 2 stones ahead of top 9p professionals. At the very most AG can allow top pros to take black without komi.

  34. Yes, the current version. But if you consider the progress AG has made since October..

    • Indeed. But more generally, the question is “what is the distance between top pros and God’s play?”. Who knows, it may be almost anywhere… In a world with IQ-limited people, the best player could be what is today a not too bad amateur, i.e. a low dan player. That would be around 6 to 9 stones away from top pros and even more from God’s play. On the contrary, in a world with super-clever people where average Go players are easily 3 dan amateurs without working at it a lot, the threshold to become an insei would probably be Lee Sedol’s level ;-), and top pros would be significantly better! But of course they would still have zero chance against God’s play without handicap… All that to say it is hard to assess where the roof is: it may be several stones above, or not…
      My guess: God may give 2 to 3 stones to top pros… (By the way, I read “about 3 stones” somewhere, but I don’t remember where.) I don’t think it is significantly more because 1) despite significant differences in strength between pros, they cannot sustain a handicap. If we were far away from the roof, from time to time an exceptional player would arrive who could sustain a handicap against most of the other pros, but that’s not the case. 2) AlphaGo is clever but very different, so there is no reason it should arrive at a very comparable level vs top pros. However, it seems it is very close in strength to top pros (or slightly better, but it may be a “novelty effect” and we will need more time and games to assess its “true strength”), and hence AlphaGo can probably not sustain a handicap against top pros. If the roof were several stones above top pros, that would be a surprising coincidence: possible, but statistically unlikely. So my explanation is that it is not a coincidence; it just reflects the fact that we are quite close to God’s play in terms of handicap (i.e. less than “a few” stones).
      Altogether, I think God’s play would of course win 100% of games against top pros and AlphaGo, but it would be close with a 2 or 3 stone handicap.
      The good point is that we will know the answer within 10 years!
      Indeed, once AlphaGo’s successors are way more advanced than today and are winning all games against top pros without handicap, either they will sustain a handicap of 2 or more stones, which means we are further away from the top (at least 3 stones), or, on the contrary, if in 10 years’ time they win almost always without handicap but rarely with 2 stones, then we are surely very close to the roof, probably around 2 stones.
      Hence, let’s wait 10 years ;-)

      • I think it was Go Seigen who estimated that perfect play would be 3 stones above the level he was playing at. Don’t know how he arrived at that figure, but that guy’s intuition was always spot on.

        If that was the case, even if AI could never reach perfect play (near-infinite possibilities), it could help humans improve by at least a stone in the next decade. That would be a 9 1/2 – 10 dan level, very very close to God’s play.

      • bobiscool says:

        One of the biggest problems with giving stones as handicap to see how far away we are from perfect play, is that with handicap stones, we’d play differently.

        Imagine Barcelona or Real Madrid, playing against some Japanese team. They’d dominate, no question, right? But what if Barcelona had to give a 3 goal handicap? What would happen?

        Well, the Japanese team will simply put all 11 players on defense, and waste as much time as possible. They won’t even try to attack. So the game becomes completely different.

  35. Chris Miller says:

    Not sure how “God” or AG’s successors would handle playing w/ a handicap, where there are no good moves, no move on the complete decision tree is winning, and so where overplay is necessary. Then it would have to decide which move is most apt to elicit mistakes. I’d be very curious to see AG play against an amateur 1 or 2 dan with an 8 or 9 stone handicap. My guess is it would fare far worse than a human 9p.

  36. True Chris, but:
    – that’s not the case for other programs such as Crazy Stone or Zen: they handle handicap games with 6 or more stones efficiently… So even if AlphaGo doesn’t handle it well, future versions should be able to handle it.
    – and it’s apparently not the case anyway, as they explained that it wins about 70% of its games against other top programs with a 4 stone handicap, if I remember correctly.

    • Chris Miller says:

      Wonder how it decides which plays are most apt to evoke mistakes from its opponent.

      I think in a fair(er) test of human vs. artificial intelligence via Go, time should not be a factor. The human should be given time to sleep between moves. Biological endurance should not factor in. Also, the human should be allowed to analyze by playing out and studying positions (like the computer can). Humans, unlike computers, do not have perfect eidetic visual memory, which should not be a factor in comparing Go-playing intelligence.

      • Crystal Shepard says:

        A significant part of human intelligence is how quickly and easily you can accomplish mental actions. I don’t see why AlphaGo should not have that counted in their favor, just as any human would.

        Also, some human players are better able to concentrate than others. Some have more mental endurance over time than others. Some have better visual memory than others. Some are able to play out complex positions in their heads, others can’t. We call the people who can do these things more effectively *better players* than those who can’t. What we don’t do is give the less capable player a board to play out variations on, extra time and rest, etc.

  37. I checked:
    “To provide a greater challenge to AlphaGo, we also played games with 4 handicap stones (i.e. free moves for the opponent); AlphaGo won 77%, 86%, and 99% of handicap games against Crazy Stone, Zen, and Pachi respectively”.

  38. Thank you. Great analysis! I think the AlphaGo revolution bodes well for both the game of Go and for the ability of humanity to collectively get smarter and live better.

    I wonder if, for a little while longer, teams of pros will be able to beat AlphaGo, especially with increased time limits. I’m surprised I haven’t seen much discussion of that – is there any experience with ways that teams of humans can play better than the smartest one of them does?

    I’d like to see Deepmind release lots more data, which should help us further develop the game. E.g., they should release a lot of AlphaGo’s games against itself. And they should release lots of data about AlphaGo’s assessment of the game at each move – who was ahead, by how much, chances of winning, list of moves ranked by preference in terms of winning and preference in terms of gaining points, etc. Best of all would be to release their massive training sets and trained neural networks, so that open-source projects like https://github.com/Rochester-NRT/AlphaGo can take advantage of it.

    I do sadly expect that professional go players will see reduced income potential, both from teaching and from tournaments, but I hope that as with Chess, Go sees increased popularity among amateurs.

    I agree that there are risks in terms of AIs replacing more and more jobs, including both blue-collar and white-collar jobs. Hopefully that will lead to an overall improvement in economic efficiency, which can help make a “guaranteed minimum income” sensible politics.

    • @Neal McBurnett your Web site is cool (and very old school hehe!) but I guess you need to update your computer Go section about Alpha Go 🙂

  39. @Vlad. Thanks 🙂 And wow – I forgot that I wrote (back in 2001) that “I think it will take decades before Go programs achieve mastery.” Yeah – that got pretty dated 😉 Fixed now.

    • There’s also this in the “What is go?” introduction page: “Computers and Go — One thing that differentiates Go from similar board games is the amount of freedom and creativity it allows. For this reason it has long been regarded as one of the most difficult challenges in the field of artificial intelligence (AI). Despite recent advances, computer programs are still a long way off being able to compete with the top human players.”

  40. They published the 15 minute summary on the DeepMind channel. Redmond didn’t like Black 31 and preferred taking the 3-3 point instead (because the ladder on the outside favors Black).

    • Interestingly, both Lee Sedol and Nie Weiping independently assessed B69 to be an overplay, in effect the losing move. This seems to differ from An’s assessment.

      Any comment, An?

  41. Alphadol says:

    I demand a rematch! Lee Sedol vs AlphaGo in 2017. With extra training I believe Lee can easily defeat AlphaGo, because he has learnt its weakness.

  42. Would just like to say a big thank you to An & David for a great job in doing these commentaries. Keep it up!

  43. Gil Dogon says:

    Hello An/David.

    I wanted to ask you a question and hear your opinion. To my not so trained/subtle understanding of the game (I’m just a 1d on IGS), if I try to understand the ‘style’ of AlphaGo’s play, I would say that it is a pretty solid/calm style, placing emphasis on superior positional judgment of the board situation, almost without any need to play ‘brilliant/sharp’ moves, especially in contrast to Lee Sedol’s style, which seems much sharper. Even the most surprising of AlphaGo’s moves, the now famous shoulder hit from game 2, does not look that sharp/surprising, at least from a tactical/local standpoint. It certainly does not seem that it tries to make the game complex/chaotic. If anything, I guess the most human analog to AlphaGo would actually be the famous chess champion Capablanca. To quote Wikipedia:

    “Capablanca excelled in simple positions and endgames, and his positional judgment was outstanding, so much so that most attempts to attack him came to grief without any apparent defensive efforts on his part. However, he could play great tactical chess when necessary ”

    Maybe in Go the equivalent style is the earlier Lee Changho’s. What do you think? Which other players’ styles may be similar?

    • Yes, I think Lee Changho comes closest to AlphaGo’s style of play.

      Lee once said that his style wasn’t spectacular because he would choose the most efficient move with only a 50+% efficacy but which will allow him to calculate and analyse to a greater depth than a more ‘brilliant’ move. This squares with AlphaGo’s move optimization which is based on winning percentage instead of winning margin.