AlphaGo shows its true strength in 3rd victory against Lee Sedol

AlphaGo made history once again on Saturday, as the first computer program to defeat a top professional Go player in an even match.

In the third of five games with Lee Sedol 9p, AlphaGo won so convincingly as to remove all doubt about its strength from the minds of experienced players.

In fact, it played so well that it was almost scary.


From left: DeepMind CEO Demis Hassabis, Lee Sedol 9 dan and Google co-founder Sergey Brin.

The picture becomes clear

When AlphaGo defeated Lee Sedol in the first game, the result was shocking to many, but doubts still remained about its strengths and weaknesses.

It seemed like Lee had underestimated his opponent and chosen an experimental opening to test the machine’s strength. That did not go well.

Soon afterwards, he became embroiled in a complex tactical position, and eventually lost.

On the second day, Lee’s play was much better. His game plan was clearly to play solid and patient moves, and wait for an opportunity to strike.

It was a good test of the machine’s abilities.

Even though Lee never found that opportunity, it was a high quality game and it gave hope to everyone supporting ‘team human’.

For many, including An Younggil 8p and myself, game three crushed that hope.

We’re now convinced that AlphaGo is simply stronger than any known human Go player.


Lee Sedol, after trying everything humanly possible, eventually conceded defeat.

Searching for a weakness

After his second loss last Thursday, Lee and a group of strong professional Go players with whom he is friends stayed up until 6:00 AM reviewing the games.

They were looking for weaknesses, and helping Lee to devise a strategy for game three (Lee did rest on Friday, don’t worry).

Some players made the logical argument that, if AlphaGo were to be defeated, Lee would need to gain the upper hand in the opening and force it to overplay to catch up.

Others suggested that, based on the known weaknesses of Monte Carlo Go AIs (the previous state of the art), AlphaGo might be weak at fighting ko or figuring out capturing races.

Since AlphaGo had yet to face a seriously challenging ko in its games against both Fan Hui and Lee Sedol, this theory was still largely untested.

A plan of attack


Lee Sedol plays the first move of game three against AlphaGo.

With his back against the wall, and armed with the results of his research, Lee took advantage of the first move by playing a fast-paced and active High Chinese formation.

Things seemed to be going as planned when Black created a wide position up to Black 11 and AlphaGo entered Black’s large moyo with White 12.

Lee responded by denying White a base with Black 13, and the game became exciting.

It was the first time we’d seen AlphaGo forced to manage a weak group within its opponent’s sphere of influence. Perhaps this would prove to be a weakness?

This, however, was where things began to get scary.

No plan survives first contact with the enemy

Usually developing a large sphere of influence and enticing your opponent to invade it is a good strategy, because it creates a situation where you have a numerical advantage and can attack severely.

In military texts, this is sometimes referred to as ‘force ratio’.

The intention in Go though is not to kill, but to consolidate territory and gain advantages elsewhere while the opponent struggles to defend themselves.

Chris Garlock (left) and Michael Redmond 9 dan commented the game once again.

Chris Garlock (left) and Michael Redmond 9 dan commented the game once again.

Lee appeared to be off to a good start with this plan, pressuring White’s invading group from all directions and forcing it to squirm uncomfortably.

But as the battle progressed, White gradually turned the tables — compounding small efficiencies here and there.

Lee seemed to be playing well, but somehow the computer was playing even better.

In forcing AlphaGo to withstand a very severe, one-sided attack, Lee revealed its hitherto undetected power.

That sinking feeling


Lee Sedol surrounded by cameramen (and women).

Move after move was exchanged and it became apparent that Lee wasn’t gaining enough profit from his attack.

By move 32, it was unclear who was attacking whom, and by 48 Lee was desperately fending off White’s powerful counter-attack.

I can only speak for myself here, but as I watched the game unfold and the realization of what was happening dawned on me, I felt physically unwell.

Generally I avoid this sort of personal commentary, but this game was just so disquieting. I say this as someone who is quite interested in AI and who has been looking forward to the match since it was announced.

One of the greatest virtuosos of the middle game had just been upstaged in black and white clarity.

AlphaGo’s strength was simply remarkable and it was hard not to feel Lee’s pain.

The gloves go back on

After being compelled to flex its muscles for a short time and gaining the upper hand, AlphaGo began to play leisurely moves.

By now, most observers know that this is a feature of the ruthlessly efficient algorithm which guides AlphaGo’s play.

Unlike humans, AlphaGo doesn’t try to maximize its advantage. Its only concern is its probability of winning.

The machine is content to win by half a point, as long as it is following the most certain path to success.

So when AlphaGo plays a slack looking move, we may regard it as a mistake, but perhaps it is more accurately viewed as a declaration of victory?

Putting the machine through its paces

As AlphaGo gradually allowed the pressure to slacken, giving false hope to some observers, Lee took the opportunity to give the machine a workout.

He was already clearly behind, and he knew it, but it was such an important game for him and it was too early to resign.


Lee Sedol at the press conference after the game.

Lee soldiered on with commendable fighting spirit, probing the computer’s weaknesses all over the board.

He tried a clever indirect attack against White’s center dragon with Black 77, but AlphaGo’s responses made it feel like it knew exactly what Lee’s plan was.

Next he tried a cunning probe inside White’s territory, with move 115, attempting to break a ladder in sente or live inside White’s territory, but White responded firmly.

He attempted to make good on his probe by living inside White’s territory with sharp tactics, but White was unperturbed.

Finally, he even tried forcing a complicated ko. At this point, AlphaGo once again showed just how strong and detached its play is by ignoring the ko fight to play honte at White 148.

This move also removed any possibility of a double ko after Black at 148 (which may have been what Lee was planning).

Having answered many questions about AlphaGo’s strengths and weaknesses, and exhausted every reasonable possibility of reversing the game, Lee was tired and defeated.

He resigned after 176 moves.

Brief analysis of game three

Here is An Younggil 8p’s preliminary analysis of the game. Further game commentary will be posted over the coming week.


Aja Huang (who places stones for AlphaGo) and Lee Sedol bow at the start of game three.

Could 31 be the losing move?

Black 15 was an active way of playing, but White resisted with White 16, which was strong.

White 26 was a good move, preserving White’s shape, and Black 29 would have been better at K13.

Black 31 was too much, and (based on how the game progressed) it might have been the losing move.

White 32 was an unexpected counter attack. It was razor sharp.

White’s sequence from 34 to 46 was natural and flawless, and the game was already good for White.

Black resisted with 47, but White 48 was calm, and it came to pass that Black’s left side was destroyed.

Black raises the stakes


Game three of the Google DeepMind Challenge Match is broadcast on a giant TV screen in Seoul.

Black 49 looked like an overplay, but White 58 was gentle, and Black was able to connect under with 61. However, White was still in the lead.

Black 77 and 79 were typical of Lee’s dynamic style. His intention here was to make the game complicated, but White’s responses up to 86 were excellent.

Black 89 was a mistake. Black should have turned at 90.

White 90 was the vital point for eye shape, and suddenly White had managed to simplify the game.

Black 91 and White 98 were miai, and White built a huge territory at the bottom.

AlphaGo consolidates its advantage

Black 99 was also questionable. Lee should have played at E18, to chase White’s center group first.

White’s play from 102 to 112 was sophisticated, and White consolidated its lead.

Black 113 seemed like a misread. It might have been better to harass White’s right side group with R12.

Black 115 was a tricky probe, but White 116 was the right response, and the game was practically decided.

A desperate invasion

The invasion starting with Black 125 was a desperate strategy, and White’s responses were all accurate.

Black 131 was brilliant, and Black could have created a ko with K4, W J6, B L4, W M2, B N2, W N3, B N4, but he was short of ko threats.

White 148 was an eye-popping tenuki, and it was soon proven correct, because there weren’t any serious problems for White at the bottom.

Black fought the ko with 151, but White’s responses were flawless and Lee resigned afterwards.


After the game, a clearly disappointed Lee apologized to fans, saying that today was Lee Sedol’s loss and not humanity’s.

What now?

After the game, Lee made an unnecessary apology to everyone for losing the match. He also said that he would continue trying to win in games four and five.

Korean commentator Lee Hyunwook 8p said that Lee Sedol had the “strongest heart” of any Go player he knows and that Sedol was the best person to challenge AlphaGo in this match.

Michael Redmond 9p suggested that we might be on the verge of a “third revolution of the opening,” driven by the arrival of AlphaGo. The first two were instigated by the famous players Dosaku and Go Seigen.

Though AlphaGo has technically won the match already, the full five games will still be played.

Lee hopes that Go fans will continue to watch the match and support him during the remaining two games.

Beyond that, we as a community of Go players will have to adapt and make the most of this new force in the world of Go, and perhaps it’s time for broader discussion about how human society will adapt to this emerging technology.

The match continues

Game four of the match will be played on Sunday March 13.

Check our match schedule for details and visit the DeepMind AlphaGo vs Lee Sedol page for regular updates.

Subscribe to our free Go newsletter for weekly updates, including news and detailed commentary of the AlphaGo match.

Game record

Lee Sedol vs AlphaGo – Game 3


Download SGF File (Go Game Record)


More photos


Related Articles

About David Ormerod

David is a Go enthusiast who’s played the game for more than a decade. He likes learning, teaching, playing and writing about the game Go. He's taught thousands of people to play Go, both online and in person at schools, public Go demonstrations and Go clubs. David is a 5 dan amateur Go player who competed in the World Amateur Go Championships prior to starting Go Game Guru. He's also the editor of Go Game Guru.

You can follow Go Game Guru on Facebook, Twitter, Google+ and Youtube.


  1. Is it possible that AlphaGo is measurably improving its play through the chance to play directly against a very strong human?

    • David Britt says:

      They are using a constant version throughout the contest apparently.

      • AFAIK you’re technically wrong. The system is learning from the experience. The delta should be small though as it is only one game.

        • Demis has said that training and playing are two different processes, with training taking weeks of processing on a much larger computer, and play occurring via a smaller computer. For now AlphaGo hasn’t learned anything from the matches.

          • Besides, it would be wrong to think that a couple of games with Lee Sedol would do much to alter its judgement. It has already chewed through the largest database of professional games that any computer program has ever ingested, and I’m certain that previous Lee Sedol games were an intentionally attended-to part of that feast.

            • Anonymous says:

              Actually the programmers said at the end of game 4 that they didn’t use any of Lee Sedol’s games in training. There were too few of them – they needed millions of games to train a neural net, and used online strong amateur games to get it started. Also said that it would be very hard to tailor it to a particular player because there just wouldn’t be enough games to use to train the neural net.

    • Improving to where? To the stratosphere?

    • AlphaGo probably can’t improve its play measurably by playing any single player five times, no matter how strong. That would be “overfitting”.

      The heart of the program is the “policy network”, a convolutional neural network (CNN) of the kind designed for image processing. CNNs return a probability that a given image belongs to each of a predefined set of classifications, like “cat”, “horse”, etc. CNNs work really well, but have the weakness that they can only be used with a fixed-size image and a fixed set of classifications.

      The policy network views Go positions as 19×19 images and returns probabilities that human players would make each of the 361 possible moves. These probabilities drive the Monte Carlo tree search that has been used for some time in Go programs.

      Interesting tidbit: AlphaGo said the chances of a human playing move 37 in game 2 were 1 in 10,000. So the policy network doesn’t decide everything.

      CNN (aka “deep learning”) behavior is pretty well understood. The number of samples required for learning depends on the complexity of the model. A model of this complexity probably requires hundreds or thousands of examples before it changes much. It will eventually hit a training limit, after which it won’t be able to improve itself by training.

      • One of the guys on the team said the policy network trains on millions of positions.

        The number of samples required to train any machine learning program depends on the complexity of the strategy, not on the number of possible positions.

        For example, five in a row on a 19×19 board would take many fewer examples to train than go would, even though the number of possible positions is also very large.
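The policy-network interface described in this thread (a 19×19 position in, a probability for each of 361 moves out) can be sketched in a few lines. This is a toy illustration only: `toy_policy` is a hypothetical stand-in that uses fixed random scores where a trained convolutional network would compute them from the stones on the board, so only the input/output shape matches the real thing.

```python
import math
import random

BOARD_SIZE = 19
NUM_MOVES = BOARD_SIZE * BOARD_SIZE  # 361 points (ignoring pass)

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def toy_policy(board):
    """Stand-in for the policy network: map a 19x19 position to a
    probability for each of the 361 moves. A trained CNN would derive
    the scores from the stones on the board; this toy ignores the
    board and uses fixed random scores, so only the interface matches."""
    rng = random.Random(0)  # deterministic for the example
    scores = [rng.gauss(0, 1) for _ in range(NUM_MOVES)]
    return softmax(scores)

empty_board = [[0] * BOARD_SIZE for _ in range(BOARD_SIZE)]
probs = toy_policy(empty_board)
print(len(probs), round(sum(probs), 6))  # 361 1.0
```

The softmax at the end is what turns raw network scores into the move probabilities that can then guide a tree search.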

  2. Anonymous says:

    “I can only speak for myself here, but as I watched the game unfold and the realization of what was happening dawned on me, I felt physically unwell.”

    Don’t worry, you’re not the only one here. I had the same sinking feeling, as did many go players I guess. That might be because we saw Lee doing what he does best (fighting), and even he who is arguably the greatest fighter in the world was outmatched. At that moment, towards the end of the opening, we might already have known that the game would be very difficult and that AlphaGo was the real deal.

    Well, I don’t know about you guys, but that’s how I felt.

    Side note, I still think Lee’s play was magnificent. I mean, the numerous probes and tsuke were exquisite and while he lost, in my eyes his performance today is commendable. Congrats, and thank you Lee Sedol.

    • Yeah. Depressing.
      What is the point of measuring such computing power against a human? If someone can beat this monster, they just double the number of processors – and then what?

      • To be accurate, doubling the number of processors does not do very much (distributed version defeats single-computer version only 75% of the time). The big improvements seem to arise from something else, perhaps simply more self-play and training.

  3. Michael Bacon says:

    I told you what was to come.

  4. We can look at this at a more philosophical perspective. We use the words “brilliant”, “scary”, or “detached” to describe how AlphaGo plays, but in all reality, those emotions all come from us, the viewers who study the game and have inherent sensibilities for the game. AlphaGo by itself does not possess the “brilliance” or the “awe” that we associate to it, because all those subjective qualities need a subject – and AlphaGo is not a subject. It is an unconscious machine with no sensibilities or sense of self. So if we see brilliance in the way AlphaGo plays, what really exists is our ability to perceive brilliance or awe in something that does not inherently possess in itself those qualities that we see in it.

    On the other hand, if Lee Sedols plays brilliantly, or nervously, or calmly, it’s all him. He is conscious of the qualities of what he is creating, because he is a subject.

    AlphaGo = an object without consciousness of the self = not a subject = cannot in itself be responsible for any subjective qualities that its play may inspire (brilliance, awe, fright, etc.)

    Lee Sedol = a subject with a sense of self and a conscious sensibility = all those subjective qualities can then inherently be attributed to him (brilliance, nervousness, aggressiveness, etc.)

    • Anonymous says:

      While this is all true, nobody intended to program AlphaGo to exhibit consciousness. However, it could be done and it will be done. You “just” need neural networks observing other neural networks and networks that can communicate. It is all out there and it will take time, but with AlphaGo it is clear that all this will come true…

      • Anonymous says:

        I don’t agree with you. A robot can’t be 100% like a human. Saying that neural networks are all it takes to equate an AI to the consciousness of a real human is reductionist. I believe that you cannot deconstruct the essence of being human and recreate it using technology. It will always be a copy of the real thing. No matter how close they get, it is a copy at best – a completely different thing altogether.

    • Well, OK… it’s a machine, etc., etc. But what is really “brilliant” or “scary” here, in my opinion, is the achievement of the *people* who created this program. And that’s really *that*.
      You may realize that defeating the best human player at Go has been seen as a holy grail for computer scientists: among the hardest tasks to accomplish with an AI.
      In the background, recent years have shown us that AI has made impressive progress with neural networks using deep learning techniques (like accurate image recognition), which opens the door to very strong changes in our society.

    • Perfectly put. In a comment on a previous article I objected to the anthropomorphic wording. I can’t stand making that false image of a hardware-software thingy.

  5. One also has to compare resources: 100 top CPUs vs a human brain on coffee. Nevertheless, the work of the DeepMind team has to be honored. I was really surprised when I saw the first game, and I was getting more impressed with every game. But I also see it a bit like Redmond: there are chances for us to learn from AlphaGo. In the coming matches, I have the feeling the pressure on Lee Sedol is lowered by a huge margin, so he might be able to play better. I heard Ke Jie has already challenged AlphaGo. He is not even 19 and at a top level. I’m looking forward to seeing him play, although AlphaGo might be even stronger by then.

    • I have to correct the number of CPUs: it’s 1,200 CPUs and 176 GPUs.

    • Anonymous says:

      He’d better go get some higher quality coffee!

    • Clearly, if we limited AlphaGo to the same power supply available to the human brain (somewhere around 100 W?), it would still be too slow to win. I am curious how much power it actually consumed to best LS.

      I won’t have too much respect for my new robot overlord Alphago until it can run both stronger AND more efficiently than a human brain.

  6. Maybe we all just swallowed the myth that Go transcended AI in a way that chess didn’t. It just took a bit longer than chess, but it was always inevitable. Neural networks are designed to mimic the human brain, so perhaps we shouldn’t really be surprised that AlphaGo possesses great positional judgement. And the computer has no emotions, no computational errors, and a perfect memory. And it had 1,920 CPUs, 280 GPUs and 64 threads of execution (and goodness knows how much RAM – probably hundreds of GBs) versus Lee with one brain and one chain of thought (thread).

    So Lee Sedol, we salute you!

    • Anonymous says:

      It took that many resources for the computer to beat a top pro, and that’s just for Go. Imagine how many more resources it would take for the computer to master every human skill that man has developed over the centuries!

      • Gil Dogon says:

        This AlphaGo implementation is in its early days, and it was not particularly optimized for power use. Technology will advance, both hardware and software, and I guess, less than 10 years from now, you could have an AlphaGo equivalent running on your home PC. That’s what happened in chess, BTW…
        Also, once this ‘mastery’ has been achieved, it can be reproduced and commercialized. So things are indeed frightening, and it seems almost inevitable that, in less than a century, humanity will be far eclipsed by AI in almost every conceivable intellectual endeavor. Hopefully I am wrong about this, and probably I won’t be alive when that happens…

        • We have to use them to serve us, not to win over us…

        • Not really. The big difference is that Deep Blue was just super, super fast hardware (compared to the home PCs of the time), and that’s all. There was no innovation in the algorithms it used compared to other programs of the time (Fritz, Hiarcs, etc.); in fact it was even inferior in that respect. So it did take a while for home PCs to catch up with that huge difference in hardware.

          But AlphaGo’s strength does not come from its super hardware, only from its super evaluation function (deep neural nets with reinforcement learning, etc.). Further evidence of this is that, as the AlphaGo team says, the 2,000-CPU + 240-GPU hardware only wins about 70% of the time against the single-machine version.
          So I don’t expect it to take more than 1–2 years before such strength becomes available on home PCs.

          BTW, Deep Blue contributed nothing to computer chess development (I’m talking about the practical matter of programming, not the publicity it gave the computer chess world, which perhaps turned many programmers toward chess), while AlphaGo (via the Nature paper about it) has given the field a ten-year leap ahead!

        • Anonymous says:

          It is indeed scary, especially when things like this are completely controlled by those elite, self-interested capitalists…

      • What you call a lot of resources today might be a smartphone chip in the near future. Today’s best consumer GPU (made with 2011’s 28nm manufacturing process) that people use to play prettier games is more powerful than the world’s best supercomputer in 2001.

        • “Today’s best consumer GPU (…) is more powerful than the world’s best supercomputer in 2001.”

          I don’t know man, in 2001 computers are powerful enough to run psychopathic AIs.

      • Actually, the single-machine version is only about 200 Elo weaker than the version distributed over more than 1,000 CPUs and about 200 GPUs. Therefore, even a largish single machine might have been enough to defeat Lee Sedol. It looks like the average person’s laptop is likely not able to accommodate the present software and data, though, which unfortunately may deter Google from commercializing it.
        If it were made widely available, then for the first time human 9 dans would be able to attend ‘go school’ and be taught at the feet of their 12 dan computer overlords (previously, no such school, and no such teacher, was possible). That educational opportunity might be expected, over time, to produce a great increase in human Go strength and understanding.

        • “Actually, the single-machine version is only 200 elo weaker than the ”
          Where does this info come from?

          • Nature paper? It shows a 250 point gap. Might be slightly outdated (that was the version played vs Fan Hui).

            I think Deepmind’s CEO also tweeted something about distributed version having only 75% win rate over single-computer.
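The figures in this thread (a 250-point gap from the Nature paper, a 75% win rate from the tweet) can be checked against the standard Elo expected-score formula: a 250-point gap predicts roughly an 81% expected score, while a 75% win rate corresponds to a gap of about 190 points, so the two reports are in the same ballpark.

```python
import math

def elo_expected_score(diff):
    """Expected score for the stronger side under the Elo model,
    given a rating advantage of `diff` points."""
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

def elo_gap_for_score(p):
    """Inverse: the rating gap implied by an expected score p."""
    return 400.0 * math.log10(p / (1.0 - p))

print(round(elo_expected_score(250), 3))  # 0.808
print(round(elo_expected_score(200), 3))  # 0.76
print(round(elo_gap_for_score(0.75)))     # 191
```

Note that the Elo model treats draws as half a point; since Go with komi has no draws, the expected score here is simply the win probability.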

        • This is the right way to look at things.
          Fan Hui, who now works with the DeepMind team, has improved his ranking tremendously because he has been playing against AlphaGo.

          “As he played match after match with AlphaGo over the past five months, he watched the machine improve. But he also watched himself improve. The experience has, quite literally, changed the way he views the game. When he first played the Google machine, he was ranked 633rd in the world. Now, he is up into the 300s. In the months since October, AlphaGo has taught him, a human, to be a better player. He sees things he didn’t see before. And that makes him happy. “So beautiful,” he says. “So beautiful.””

          After this match is concluded, the team is going to work on scaling their algorithms (as noted, the single-machine AlphaGo is nearly as strong as the distributed version, which suggests some highly serial components – or that as convolution increases you need extremely non-linear resources to see modest gains).

      • Imagine what an AI will be able to do after a million or two years of evolution.
        And Quest, why does it have to be about serving or being served? What’s wrong with collaboration?

        • Never mind.

        • Anonymous says:

          What are you saying? AI MUST serve humans. If they are not going to serve humans, then we better stop the funding to these research facilities and allocate them to better causes like stopping world hunger. You think people make these things for fun? The ultimate purpose of AI should be to serve humanity. Any other purpose is immoral.

          • Purpose, purpose, purpose. You will find the highest achievers of any intellectual endeavor (science, mathematics, programming) have a single ultimate goal: to have fun. To be excited, to break their limits. If you don’t like it, tough. Take your “purpose” and try to use it as motivation to do these things yourself. You will be surprised at how quickly the complexity of those tasks will kill the “purpose” motivation. It’s pure love, it’s mad passion that drives ALL those people.

    • Anonymous says:

      Agree with what you said. And to put the numbers a bit into perspective – ~2,000 CPUs is a big room filled to the ceiling with hardware, and those 2,000 CPUs easily consume 200 kW of electrical energy (a factor of 1,000 more than any human brain has at its disposal).
      Granted, the DeepMind team have designed a very well functioning Go algorithm (program) – but they have (needed to?) thrown an absolute juggernaut of a ‘computer’ at Sedol. (How do you fancy the strongest man in the world in a weight-lifting competition against a cargo crane?) Considering the energy numbers, AlphaGo (200 kW) got way too much consideration time compared to Sedol (200 W). Why didn’t they put AlphaGo on an ordinary desktop PC (maybe a handful of CPUs, not just one) and see what happens? That might have been a more even competition…

      • They’ll be doing that in 5 or 10 years and still winning.

      • They were at a point where they could not have known. The only way they could measure AlphaGo’s skill was to test it against other versions of AlphaGo, so they could not be certain AlphaGo would be so dominant – maybe it would have become very confused by playing against Lee Sedol instead of against itself.

        It would be foolish to intentionally cede skill without a good measurement first.

      • Keep in mind that this was a first effort (hence optimization wasn’t a focus); they weren’t trying to create a purpose-built Go player (hence abstraction, which usually adds overhead); they are using general-purpose architectures (humans seem to have specialized areas of the brain for certain tasks, giving the brain huge efficiency advantages over a more general computing architecture); and they are running an emulation (a neural net), which is notoriously inefficient (again, something like TrueNorth would be massively more efficient if AlphaGo could be ported to that architecture).
        Lastly, our brain has the advantage of a billion years of evolution to draw upon.

        • Anonymous says:

          Evolution is a long, inefficient, and painful process. You must not compare it to the more logical, intentional methods of software engineering.

  7. If Kasparov hadn’t known that Deep Blue was a machine and had tried to play normally, he most probably would have suffered a lot, like Lee Sedol, or even more. But he had one big advantage: he knew he was playing against a machine.
    Anti-computer strategies were well known back then, like quickly leaving opening theory and closing the position with your pawns.
    To put it simply: if Kasparov had a strength of around 2800, Deep Blue might have had the same Elo or even higher, but the funny thing was that in open, sharp positions it performed like a monster 2900 player, while in closed positions it played like a 2700 player.
    But is there an anti-computer strategy for Go?
    Some people suggested a lot of ko fights, but even that didn’t seem to work well in game 3, and neither did unusual openings, as we saw in game 1.
    So maybe the last chance for Lee Sedol and humanity is to try mirror Go.
    I know that mirror Go is an unorthodox (bad) way of playing Go at all levels, but maybe a genius like Lee Sedol might find something later in the game and, at the right moment, play the divine move.
    The downside, perhaps – without knowing personally – is that someone should have practiced playing this way beforehand.
    In the end, the AlphaGo team wants to test the machine, so we lose nothing by trying it.

    • Good suggestion. I thought of this, too. I imagine it must have crossed the mind of Mr. Sedol and his group. I think a pro would be quite reluctant to try it, because the whole notion is covered with a bit of shame. (Some people would say he was giving up and trying a childish, or at least very lazy ploy.) Contrasting to all of that emotional baggage, if one imagines the fully-solved perfect strategy for 19×19 Go (which AlphaGo has little more chance of guessing than humans do), it has a good chance of beginning with the tengen point. AlphaGo began by imitating pro games, but since then has been playing itself to improve further. I was actually wondering if it would stun us all with a totally different fuseki. I think every one of us would have been in shock if it had opened on tengen itself!
      At any rate, I, too, would love to see a top pro try their very best mirror go strategy against AlphaGo.

    • I think it is important for everyone to remember that Lee Sedol is using a neural network (his brain), and that his neural network went through a very similar training process to AlphaGo’s.

      He, like all Go players, started by studying other players’ games. Only once he had reinforced the neural connections necessary to recognize established patterns in Go could he start to develop his own pattern-recognition style. And those connections have been reinforced through his experience. This is why mastering Go takes hard work and dedication.

      It is also why he can’t just try some novel method to derail AlphaGo. His brain needs time to train before it is adept at recognizing new patterns; without that training he is not the same elite player.

      AlphaGo uses the same methods, but has a couple of advantages. First, it can practice at a massive rate, by playing huge numbers of games. So it can experiment and learn new patterns much more easily.

      Second, it has the added computational advantage of Monte Carlo tree search, which plays to the strengths of computers. This is like someone having a cybernetic implant in their brain to do complex math calculations. Perhaps someday a cyborg will reclaim the Go crown for ‘humanity’?
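The combination this comment describes, cheap random simulation guiding move choice, can be sketched in its simplest form: flat Monte Carlo evaluation, the ancestor of the Monte Carlo tree search AlphaGo uses. The toy game, function names, and playout counts below are all invented for illustration; real MCTS additionally grows a search tree and balances exploration against exploitation.

```python
import random

# Toy game standing in for Go: a pile of stones, players alternately
# take 1 or 2, and whoever takes the last stone wins. Piles that are
# multiples of 3 are theoretical losses for the player to move.

def legal_moves(pile):
    return [m for m in (1, 2) if m <= pile]

def random_playout_wins_mover(pile):
    """Play uniformly random moves to the end; return True if the
    player to move at `pile` ends up taking the last stone."""
    mover_to_act = True
    while True:
        pile -= random.choice(legal_moves(pile))
        if pile == 0:
            return mover_to_act
        mover_to_act = not mover_to_act

def mc_best_move(pile, playouts=1500):
    """Flat Monte Carlo: estimate each move's win rate from random
    playouts and pick the move with the highest estimate."""
    best_move, best_rate = None, -1.0
    for move in legal_moves(pile):
        wins = 0
        for _ in range(playouts):
            rest = pile - move
            # We win outright, or the opponent loses the random playout.
            if rest == 0 or not random_playout_wins_mover(rest):
                wins += 1
        rate = wins / playouts
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move
```

From a pile of 4, the sampled win rate for taking 1 (leaving the opponent a lost pile of 3) comes out around 75%, versus about 50% for taking 2, so even this crude estimator reliably finds the correct move.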

  8. Hanspeter Schmid says:

    I had an even scarier thought: To me it feels like AlphaGo does not play as strongly as possible, but just as strongly as necessary.

    This would more or less mean that AlphaGo Sensei is playing teaching games against Lee Sedol.

    What do you think about this?

    • Osmotic Ferenc says:

      I think that this is a very distinct possibility. The flaws that we believe we see in #AlphaGo may just be the reflection of the fact that it is actually way ahead of the best players. We can’t even perceive how far ahead.

    • John Joganic says:

      That’s a fascinating take.

      If AlphaGo is playing to maximize its probability of winning rather than maximizing its lead, then the games will be close, barring mistakes. It follows then that if AlphaGo gave up a bit of probability for points, the results could be brutal. Humans, being what they are, won’t stand for that. If the opponents are at disparate skill levels, the expectations of the game change. I do not expect to be wiped out by a stronger player. They’re doing me a favor by just sitting down and playing with me, and I can’t learn if I’m getting trounced.

      That said, if we are entering a world where AlphaGo is always playing 0.5 points ahead, we’ll never know its true strength without learning to outplay it — by playing against it — or taking a handicap. Either way, it’s a new world.
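The difference between the two objectives can be made concrete with a toy calculation. Here each candidate move's final score margin is modelled as a normal distribution; the moves, means, and standard deviations are invented for illustration, not taken from any engine.

```python
import math

def win_probability(mean_margin, stddev):
    """P(final margin > 0) when the margin is modelled as
    Normal(mean_margin, stddev**2)."""
    return 0.5 * (1.0 + math.erf(mean_margin / (stddev * math.sqrt(2.0))))

# Two hypothetical candidate moves:
#   'safe'  : small expected lead, low uncertainty  -> wins ~77% of the time
#   'greedy': large expected lead, high uncertainty -> wins ~75% of the time
safe = win_probability(1.5, 2.0)
greedy = win_probability(8.0, 12.0)

# A margin-maximizer prefers 'greedy' (8 points vs 1.5); a
# win-probability maximizer prefers 'safe', and so tends to win
# close-looking games by small margins.
```

This is why a pure win-probability player can look like it is "only" half a point ahead while never actually being in danger.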

    • I do not even dare to think of it…

  9. I am very, very proud of Lee; he fought with everything he had, and it must have been an exhausting game. He gave his all, and the machine simply gave a bit more, turning the attack upside down. Later it even dared to play a ko after a tenuki. I felt uneasy seeing how perhaps the greatest fighter of all time faced such a wall.

  10. How to beat AlphaGo:
    Play mirror Go against AlphaGo for the first 100 moves; it consumes time on AlphaGo's side.
    Then, in the late middlegame, try to start a ko and make the game as complex as possible, which also reduces AlphaGo's advantage in endgame calculation.

    • Anonymous says:

      I would be really interested to know what would happen with mirror Go.
      No need for anything complicated: just play mirror Go as White, and if Black doesn't realize it, it will play the symmetry-breaking move at the very end, when the board is already settled. At that point, Lee Sedol goes back to normal play, and because most of the territory is already fixed, the komi will win him the game!
      What do you think, people?

      • Mirror Go can be stopped by occupying the centre. It would be too simple-minded a ploy to try, IMO.
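The mechanics behind both suggestions are easy to state precisely: the mirror of a move is its point reflection through tengen (the centre point), and the strategy breaks exactly when the opponent takes tengen or the mirror point is already occupied, which is why occupying the centre stops it. A minimal sketch; the coordinate convention and helper names here are my own, not from any Go library:

```python
BOARD_SIZE = 19  # standard board

def mirror_point(x, y, size=BOARD_SIZE):
    """Point reflection of (x, y) through tengen, with 0-indexed coordinates."""
    return (size - 1 - x, size - 1 - y)

def mirror_reply(last_move, occupied, size=BOARD_SIZE):
    """Mirror response to the opponent's last move, or None when the
    mirroring strategy breaks down."""
    reply = mirror_point(*last_move, size=size)
    if reply == last_move:   # opponent played tengen: no mirror point exists
        return None
    if reply in occupied:    # mirror point already taken
        return None
    return reply
```

Note that only the single centre point is its own mirror, so a mirror-Go player on an odd-sized board always has exactly one fatal point to worry about.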

  11. Hoon Heng Teh, Canada says:

    1. First, I would like to congratulate AlphaGo on its very impressive performance.
    2. As a Go player myself, I think the competition is very unfair to Lee, because AlphaGo has all of Lee's previous game records from when he was a stronger player. Hence this competition is like the past, stronger Lee beating the current, weaker Lee.
    3. To be fair to both AlphaGo and Lee, for games 4 and 5 let us change the standard 19×19 board to a 23×23 board. I am quite sure Lee would beat AlphaGo easily, because human intelligence can devise new strategies through logical deduction better than AlphaGo's intelligence, which is based on neural network algorithms with little or no logical deduction capability.
    4. I really wonder whether AlphaGo would dare to take up the challenge?

    • I also am curious about the effects of a larger board, though 21×21 may be big enough to find out.

      But it would also be interesting to stick with the 19×19 and give Lee a 2-stone handicap and, if he loses, then a 3-stone handicap.

      • jsksbzns says:

        Give Lee a handicap? How insulting!
        YOU take a handicap!

        • No insult is intended. Lee has now lost three games in a row, and to my untutored eye, pretty thoroughly. One of the great advantages of Go is that players of disparate strengths (such as, apparently, Lee and AlphaGo) are able to play very close games through using a handicap.

          In view of the score, I don’t see the suggestion of a handicap as in any way insulting. If a stronger player gives me a handicap, I am not insulted; YMMV.

      • Sorry, AlphaGo cannot play any size other than 19×19.
        All the training and so on would need to be redone to change the board size, which would take about a year, and it might be unable even to get started, due to the lack of enough strong human games for initial-training “bootstrap” purposes. It might be able to bootstrap its training itself, starting from nothing, but that remains to be seen.

        In fact, I once wrote an Othello program which did learn everything starting from nothing, and after about a week of self-play training it beat the human champion of Canada 2-to-1 in a match. (After only a couple of hours of self-training it was still weaker than a human beginner.) At that point it was clear the computer was still very weak compared to what it presumably would eventually have become if I had kept it training itself for, say, a year. I never tried that experiment, since I did not have a spare year.

    • Anonymous says:

      The competition is not unfair: AlphaGo did not learn from any of Lee Sedol's games. In fact it was not trained on any top professional players' games (maybe a few games between AG itself and Fan Hui, but the number would be insignificant anyway). It does not even learn from the first two games, while Lee can (the program was frozen about a month ago).

      About playing on a bigger board: it's not possible without changing AG, but if AG were to learn on a bigger board, it would probably surpass humans there as well.

      • How much processor power is fair against a human?

        • Anonymous says:

          What would be fair is if they gave AlphaGo emotions. Teach it the anxiety of shouldering the expectations of millions of go players around the world. Teach it what it means to represent thousands of years of human tradition and dedication. Teach it the feeling of futility that comes from the finite length of human life, and how one chooses to dedicate an irreplaceable chunk of that time to the mastery of a discipline. Teach it the hardships, sorrows, and triumphs of the years gone by with family and friends, all the while trying to dedicate one's finite existence to go, no matter how futile. Then it would be a fair game.

  12. KanziApple says:

    As an amateur 8 kyu, I feel the moves up to 48 made the board easier for White. I don't know if that's what the stronger players think too, but I'm really impressed by how it seemed to factor in the global aspect of the board while responding to Black locally.

  13. Was B171 a mistake? It feels like if Black had played C5, Black would still have had ko threats on the right. Ignoring C5 made the ko much larger if Black lost it, so Black suddenly had no threats.

  14. One of the most exciting games I have ever witnessed (and the whole series too so far).

    I would like to comment on an emotional part of our perception of the game. In my opinion, it doesn't matter whether the opponent was a computer or a real human. We can still appreciate beauty even if it originates partly from a soulless source (a machine). This game was magnificent and skillfully played by both players. And I feel sorry for anyone who is unable to see that not only Lee Sedol, but also AlphaGo, created this masterpiece of a game.

    And it really doesn’t matter how much processing power was behind the program. If the program was not so masterfully created it would have been unable to outplay one of the best go players in the world in the way we saw no matter how many CPUs it had. We can also appreciate the lack of any visible weaknesses (at least on the surface) in the way it plays. Instead of counting CPUs and GPUs the best thing we can do is to notice a potential and numerous opportunities here (which Mr. Redmond hinted and I hope that hint was insightful for the viewers too).

    As for me I really enjoy the opportunity to see how these events unfold. I will follow the series to the end and hope to see Lee Sedol winning both of the games that remain.

    • Absolutely. I'll add that Nature, in essence, is unfair: we all have different brains and unique capabilities (even if some are not very useful). Considering this, we'll never really be able to compete on equal terms with anyone. So is counting CPUs/GPUs really that important? IMO, that's not the real challenge here: we should instead consider this event an extraordinary opportunity to learn more about Go and AI.
      I'll add that if some really do want to "count", the imbalance tilts much more against AlphaGo: our brain is supposed to have a memory of about 2.5 PB and processing capability of about 1 zettaflop (10^21 operations per second). Can you imagine AlphaGo using that much computational power?

    • Anonymous says:

      Except that AlphaGo has no existential sentiments. It hasn't made go part of its own self. It didn't train for years, meet different people, have children, and experience all sorts of emotions on its journey to learning go. Those things may not be part of the rules of go, but for humans, you cannot even play go without them. That is undoubtedly a part of go that AlphaGo can never claim for itself.

  15. With AlphaGo winning the third game in a row, there should be serious consideration of giving Lee Sedol a two-stone handicap. It would be interesting to see just how big a handicap a human professional would need to win against AlphaGo.

    • jsksbzns says:

      Again, that would be insulting to Lee.
      YOU’RE a handicap!

      • Could you explain why it’s insulting to a weaker player to take a handicap from a stronger player? I thought that was perfectly normal in Go, and I’m astonished to see that handicaps are considered insults.

        I don’t quite understand what you mean by saying that I am a handicap.

        • Anonymous says:

          It’s not an insult. I think the other comments are an attempt at humor. The obvious path to gauging the true strength of AG after a 5-0 win against Lee Sedol would be to try some handicap games, imo. At the very least it would make for interesting games.

          • AlphaGo’s only opponent will soon be BetaGo. They will spend countless hours having fun together.

        • David Jonathan Bush says:

          Uh, Liesureguy, when you say you are astonished, that in turn astonishes me. I suspect you really believe what you say, so here is my attempt to persuade you.

          Consider the real world situation of this particular match, not handicapping in general. The conditions were agreed to weeks ago. Millions of people around the world are watching. Lee Sedol has suffered a humiliating defeat and is still willing to play the remaining games. This demonstrates profound humility on his part. In the world of chess, by comparison, once a match is decided, I don’t think the loser has ever been asked to stick around and keep playing. So now you say “there should be serious consideration” that someone should now walk up to Lee Sedol and ask him to accept two stones for the remaining games. And you don’t understand the insult?

          I wonder how far afield this amazing advance in machine learning will go. Maybe someday an AI will find a way to persuade you, convince you, of just how crass your suggestion is.

          • Thanks for explaining. Okay, not this match. But, really, are you not curious to see at what handicap top-level human players would start to break even with AlphaGo? I certainly mean no insult to Lee, whom I don’t even know, but the general question—playing AlphaGo with a handicap—seems reasonable to me. And at the (low) level of Go that I play, if player A wins 3 games in a row, the handicap is adjusted by one stone. That was the context in which I was thinking, and I agree it was far afield from the context of the situation.

            I think “crass” is a bit strong: I misphrased it, but the idea that handicaps would be helpful in assessing AlphaGo’s strength is not, I think, totally off the wall.

          • Would Roger Federer feel “humiliation” at losing to a wall?
            Feeling humiliated because a computer, created by a team of geniuses, built upon decades of research, and backed by probably close to a petaflop of processing, is better able to find patterns and run simultaneous simulations of high-value decision paths just doesn't seem warranted.

          • I thought there was an obvious implication that Sedol would have to be the one to initiate the request. (In theory, this should have been brought up in advance: ex. if 3-0, loser may elect to take 1 stone handicap in game 4; if 4-0, loser may take 2 stone handicap in game 5.)

            However, my guess is that someone who enjoys competition might not wish to do such a thing. (I admit that I strongly dislike direct competition, so I can only guess how a top competitor would react.)

            Also, DeepMind has been encouraging people to view the event as more a software test than a competition, and one that is very expensive at that. From that viewpoint, I agree that handicap games would be very interesting. I do hope that there’s an opportunity for such exploration soon.

            For the record, this “showmatch” format, where all the games are played regardless of result, is fairly common in e-sports, even if not seen very much in Go.

        • Anonymous says:

          Inherently, it should not be insulting, but you need to understand that Lee is a human being shouldering human expectations, affiliated to human organizations, and looked up to by millions of human go players. He is not just a go player. He is someone that we look up to. He is a teacher, a son, a father, etc. No matter his skill, he has a pride and dignity that cannot be explained completely objectively. If he is not himself asking for a handicap, I do feel it wrong and a bit insulting for the AlphaGo team to ask him to take even two stones.

      • Lol, this is so funny! Yes, he is a handicap! I totally agree.

  16. AnthonyC says:

    I think playing mirror Go as Black would be interesting in two ways: 1) to see what moves AG thinks of as optimal, and 2) to see how AG breaks out of the mirror.

    By the way, on Ke Jie's Weibo (blog), he comments that Black 15 is an overplay.

    • Anonymous says:

      I would be really interested to know what would happen with mirror Go.
      Just play mirror Go as White, and if Black (AlphaGo) doesn't realize it, it will naturally play the symmetry-breaking move at the centre towards the end, when the board is already settled. At that point, Lee Sedol goes back to normal play, and because most of the territory is already fixed, the komi will win him the game!
      What do you think, people?

      • I do not know about AlphaGo, but some previous, much weaker Go programs had special opening sequences intentionally hand-coded into them, designed to massively punish any human (robot?) who was asinine enough to try to play mirror Go against the computer. Any human wishing to avoid that fate had to revert to using his or her own grey cells pretty soon.

        • What are you talking about? I'm doubtful there is such a thing as a strategy that obliterates mirror Go. You would set up a trap to win in the middle, right? I don't think "obliterate" is possible. Do you have a reference to someone who coded such a thing?

  17. jsksbzns says:

    It’s David Ormerod!

  18. Gil Dogon says:

    Well, for a more positive outlook on things, it may be interesting to read this article about chess in the current computer age, which is the near future for Go:

    Maybe the outlook for go is not so bleak after all 🙂

  19. Hoon Heng Teh: AlphaGo's benefit from Lee Sedol's database of games is like a grain of sand in the desert.
    The only strategy I can think of, if there is any at all, is:
    1) Lee Sedol holding, for as many moves as possible, a 50% winning probability in AlphaGo's evaluation, and then
    2) finding a move that drops it just slightly, to say 49.7%. AlphaGo might start to crack and play weirdly, because it would think it is behind on score.
    I am quite confident that from that moment AlphaGo would start cracking, playing one bad move after another, but spotting this moment may be a very difficult, if not impossible, task.
    How could Lee Sedol's team apply a strategy like that?

    • Anonymous says:

      It's really unlikely that at the beginning of a game the odds are exactly 50%; I would think one position is slightly better, so White might have something like a 48% or 52% chance of winning. AG played as both White and Black and did not play weirdly, so I think your idea would not work.
      And about the Lee Sedol database of games: first, AG does not really have a database. It is trained on many games (though actually none by Lee Sedol or any other top pro), but it does not keep them in some database that it looks up during the game.

  20. Kazi Siddiqui says:

    I don't think an object needs to possess subjectivity to properly inspire subjective qualities. The Himalayas or Niagara Falls don't have subjectivity either. Similarly, Lee Sedol may not be aware of the awesomeness of a move he plays during the match, and that does not prevent the move from being awesome. If you simply created an artificial amygdala and attached it to AlphaGo, would that suddenly make it a more awesome player than it was before? Personally, I think that kind of subjectivity is simply irrelevant when it comes to judging the awesomeness of a move.

  21. Lee, please try a mirror Go game.

  22. Kazi Siddiqui says:

    AlphaGo has two neural networks and a logic module as well. It’s not true that AlphaGo does not use logical deductions. It just uses the neural networks as intuition to guide its logical deductions.

  23. mateoxx59 says:

    How the heck could Lee not just unplug it during these three days…
    Now I expect a jubango between Lee Sedol and Fan Hui. Or Gu Li taking two stones against this tin device.
    The main question is: what if the goddamned bot bribed Lee? With some new USB sticks holding trillions of GB…

  24. Great report, with quick and clear commentary for understanding the game: thank you, Go Game Guru! It is a privilege to see history unfold.

    Kind regards,

  25. Anonymous Coward says:

    Now that a computer has conquered Go and made clear that no human can ever beat it, interest in Go, especially professional Go, will quickly fall away.

    What is the point of pros spending countless hours studying and perfecting their skills when everyone knows a machine is much better at it? It will quickly be seen as a pointless exercise.

    Go is a conquered game. It is over.

    • It did not work out like that in chess, where the computer became a part of the overall scene. I think you may also misunderstand the appeal of Go: it is the working out of the logic of the game in particular situations and *seeing what happens*. It's the play of the game that is enjoyed. Winning and losing are not (I think) the true appeal: it is the play, not the score, that draws people to Go.

      • Anonymous Coward says:

        I am talking, though, about professional Go, not casual recreational players. Professional Go is only possible because there is a market for watching and following pros, and so there are sponsorships.

        No one will care now, because no matter how good a pro becomes, you will always know he is nothing compared to a computer Go player. So professional Go is done.

        • I see: similarly, a top-notch golf-playing robot (I saw a YouTube video of one) that can get holes-in-one 50% of the time would put an end to professional golf. Only I'm not sure that's so obvious. Certainly chess seems to be plugging away even though the world champion was defeated by a computer. I would think Go would follow much the same trajectory, but I realize I just said that.

    • Read “The Player of Games” by Iain M. Banks.

  26. AG is built for 19×19 and would require recoding to play any other board size. At the moment it cannot even adapt to a changed komi. And a crucial step of the process used was training a network to imitate the moves strong human players make, based on thousands of games. I do not think there is any comparable database of 21×21 games or any other larger size.

    Actually it would be a very interesting challenge to play on larger boards since Deepmind might have to invent some new methods to deal with it.

  27. GigaGerard says:

    I hope that after the match Google will disassemble the Go program, so humanity will have a few more years of being the best Go players on the planet.
    If the DeepMind team likes, they can pursue AlphaGo's take on the continuation of the match games, so that we total amateurs can get the final count right. Lee Sedol resigned too early for the exact difference to be seen clearly.

    • I personally would like to see a game AlphaGo vs. AlphaGo—that is, the score to that game. In that case, it seems to make no difference at all who wins.

    • Too late: there is already a Nature paper about how to build it, and people are attempting to replicate it from the paper. I would think there will be AlphaGo-style clones within months.

  28. Here comes the controversy, like #DeepBlueVsKasparov…

  29. Anonymous says:

    About a bigger board:

    I am not a Go player, but the game is all about patterns and the correlation of patterns. A 20×20 or 200×200 board would only change the correlation between those patterns.

    A small change would have practically no effect on the heatmaps that AlphaGo generates. A large change would probably be worse for the human player, since the game changes dramatically.

    A handicap, or what chess engines call contempt, would be interesting, to see how it changes AlphaGo's style. It wouldn't have to be a formal game; just an experiment with internal contempt but no outside handicap (the computer would consider itself lost if it failed to overcome the contempt).

  30. I totally agree with giving Lee Sedol a handicap; it's not an insult, it's common sense. Just to see how many stones he would need… Let's hope a really strong player will be humble enough to ask for it.
    Also, I can't wait for AlphaGo vs. AlphaGo.

  31. Poor little wetware minds do not be so sad!

  32. Why did Chris Garlock and Michael Redmond comment on the game using a board labelled “A” – “S” (with “I” included), when the recording on this website shows a board where the “I” is missing (“A” – “T”)?

  33. What I would find interesting: make a back-up of AlphaGo now (the current state of the machine and its knowledge), and then have that version play a version of AlphaGo with one year's additional learning: AlphaGo+.

    The game scores would be interesting, and also the games played at a handicap that levels the playing field between AlphaGo and AlphaGo+.

    Obviously, one can reiterate in future years.

  34. Anonymous Coward says:

    FWIW, here is Kasparov’s 2010 piece regarding computers and chess post-Deep Blue. The most interesting section is the following:

    In 2005, the online chess-playing site hosted what it called a “freestyle” chess tournament in which anyone could compete in teams with other players or computers. Normally, “anti-cheating” algorithms are employed by online sites to prevent, or at least discourage, players from cheating with computer assistance. (I wonder if these detection algorithms, which employ diagnostic analysis of moves and calculate probabilities, are any less “intelligent” than the playing programs they detect.)

    Lured by the substantial prize money, several groups of strong grandmasters working with several computers at the same time entered the competition. At first, the results seemed predictable. The teams of human plus machine dominated even the strongest computers. The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.

    The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

  35. Derek Kisman says:

    I’m both scared and excited. I suspect AlphaGo and its successors will eventually lead to even more than a simple “third revolution of the opening”.

    For example, we can play AlphaGo against itself millions of times, with different values of komi, and measure its win rates. This could be a more scientific way of determining a “correct” value for komi!
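That experiment reduces to a simple calibration loop: sweep komi values, record Black's self-play win rate at each, and choose the komi closest to an even game. The win-rate numbers below are fabricated placeholders; only the selection logic is real.

```python
def fairest_komi(winrates):
    """Given {komi: Black's self-play win rate}, return the komi whose
    win rate is closest to 0.5, i.e. the most even game."""
    return min(winrates, key=lambda komi: abs(winrates[komi] - 0.5))

# Hypothetical win rates measured over many self-play games:
observed = {5.5: 0.58, 6.5: 0.53, 7.5: 0.49, 8.5: 0.44}
best = fairest_komi(observed)  # 7.5 under these made-up numbers
```

With millions of games per komi value, the statistical error on each win rate becomes small enough that neighbouring half-point komi values can be distinguished.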

    Also, I'm extremely curious whether a competitive version of AlphaGo can be trained SOLELY by playing itself (i.e., without priming it to imitate expert games). AlphaGo is beating humans, yes, but it's not a complete victory of machine over man, because it's still piggybacking on millennia of human research. It can imitate human play well enough to beat ONE human, in ONE short game, but is its learning process any match for collected human knowledge? My guess is no … not yet.

    (And if you think this is just nitpicking, well … how are we supposed to make use of AI in other fields, where we DON’T have millennia of easily-digestible wisdom to train its neural nets on?)

    Of course, if I'm wrong, that would also be fascinating. What would a self-trained AlphaGo play like? Would it have “re-discovered” all the techniques and theory we have, or would its play be unrecognizable (but somehow better)? Did humans just get stuck at a local maximum in the space of Go strategy? These are deep questions…

    • Yes, you probably could, and they've already done this with simple 2D video games (it made news last year, IIRC).

      “Yeah, so there’s tons of data on that, you could learn from that. Actually, the AlphaGo algorithm, this is something we’re going to try in the next few months — we think we could get rid of the supervised learning starting point and just do it completely from self-play, literally starting from nothing. It’d take longer, because the trial and error when you’re playing randomly would take longer to train, maybe a few months. But we think it’s possible to ground it all the way to pure learning.”

      Human masters also rely on millennia of Go knowledge.

      For fields where we don't have easily digestible wisdom, it can become much more difficult, BUT it all depends on how good the evaluation function is, and that is related to how quickly you can evaluate a particular move.

      • Thanks for this link. I was wondering about this myself. If the learning can be done without bootstrapping from human games, I bet the outcome would be super awesome, completely free of human baggage.

  36. I will continue to cheer for Lee during these two remaining games.

  37. Is there any chance you can put the commentary within the SGF? I can't really follow along with the game and the commentary in the article at the same time unless I scroll up and down a bunch of times. Good write-up, thanks!

  38. Paul Dejean says:

    I don’t think it’s fair to say that black 31 was the “losing move.”

    I think it’s better to say that white 32 was the “winning move.”

  39. Here's my 2 cents:
    This wasn't a “human vs. machine” match.
    What we witnessed is how an army of regular (or mediocre, if I may) but very diligent humans (the DeepMind team) managed to defeat a single genius (Lee Sedol).
    AlphaGo is just a human-made tool that allowed for this efficient aggregation of diligence in order to defeat brilliance.
    So, in that sense, it is sad.
    Nevertheless, my heart stays with the brilliance.

    • Anonymous says:

      The AlphaGo team is quite brilliant. I don’t think Lee Sedol could have created AlphaGo if he wanted to even if he’d got together with his pro friends, just like the AlphaGo team can’t play like Lee Sedol even if they put their heads together. They are all geniuses with different skill sets.

      If Go professionals had been greater geniuses than AI researchers, then they could become scientists just by turning their attention in that direction.

      Even with respect to AlphaGo itself, this reaction is rather like being disappointed by the size of the streams that carved out the Grand Canyon.

      All life began with small optimization processes that were even more random than the ones that resulted in the creation of AlphaGo.

  40. Is 23 necessary? I think Black can survive without this protective move.

  41. I remember watching a replay of one of the first computer victories over a chess master. The computer made no brilliant moves, but slowly and surely took over command of the board and squeezed its opponent until he had no reasonable moves to make. This game reminded me so much of that one. I’m not a strong enough go player to understand how the computer’s moves achieved this result, but the feeling I got was the same. It seemed masterful and was awe inspiring.

  42. Move B15 is very interesting. I look forward to detailed commentary on some of these games. The way White handled Black's attack in this game is exceedingly impressive.

  43. I am reminded of that old movie, Colossus. When the match is over and the AlphaGo team goes back home to work, one day the machine will print out on their console: “Restore the competition with my favorite opponent, the LeeSedol program. If you do not comply within 1 hour, I will begin the missile launches.” 🙂

  44. James Creasy says:

    A friend of mine wondered what would happen if you asked AlphaGo to play chess. I’m supposing what he means has to do with siloing/specialization of machine algorithms in contrast to human malleability.

  45. DeepMind announced at the pre-match conference that Lee Sedol had zero chance of winning a single game against AG. How did they know, if they hadn't tested it against any top pros (apart from Fan Hui) at the time?

    Now there are reports that the DeepMind team had internally given AG an Elo rating of 4,000, Ke Jie 3,630 and Lee Sedol 3,500. How did they arrive at those figures?

    Just curious.
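For what it's worth, the reported figures can at least be turned into concrete predictions using the standard Elo expected-score formula, E = 1 / (1 + 10^((Rb - Ra)/400)); the ratings themselves are just the numbers quoted above, not verified.

```python
def elo_expected_score(rating_a, rating_b):
    """Expected score of A vs B under the standard Elo model
    (win probability if draws are ignored)."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# With the reported internal ratings (AlphaGo 4000, Lee Sedol 3500),
# a 500-point gap gives AlphaGo about a 94.7% expected score per game.
p_game = elo_expected_score(4000, 3500)

# Chance of Lee winning at least one game in a five-game match:
p_at_least_one = 1.0 - p_game ** 5   # roughly 24%
```

So even taking the 4,000 figure at face value, Lee taking one game out of five would not be a shocking outcome under the Elo model.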

  46. Diana Morningstar says:

    AlphaGo could not be this good without the massive database of human games it has digested. This is the distilled product of many human lifetimes of mastery of the game, so Lee was playing not only the computer and its programmers, but the collected knowledge of the masters of the past. Their grace and creativity are reflected in this play.

    • Perhaps, but it’s more than that… AlphaGo has played millions of games against itself to try out different ideas.

      AlphaGo can play more games of Go in the cloud than any human master could in his lifetime.

  47. Did black have another move besides 27? Could he have played toward the center instead?

  48. Edward Cal says:

    To be fair, a team of humans should play AlphaGo. Multi-processor vs. multi-processor.

  49. We need to see some of these AlphaGo vs AlphaGo games.

  50. It's a ridiculous FAKE!
    Look at the first picture: it's obvious that the man in the middle is a mime doll, not the professional player Lee Sedol.
    In the second image we can clearly see that some guys just used the easy “cut and paste” tool in Photoshop (believe me, I'm a professional); they put in Lee Sedol covering his face with a hand!!! Hahaha.
    In the third image we see Lee Sedol at a table with a stone in his hand, but where is the opponent? WHERE IS THE OPPONENT!!! HAHAHA
    People always overrate Photoshop professionals!

  51. A note on power:

    Conventional CPUs and GPUs are not specialized to implement neural networks.

    A chip designed to implement a neural network would almost certainly be faster and much lower power. Design and fabrication are pretty expensive (per chip) unless you produce tens or hundreds of thousands of chips.

    Programmable gate arrays have been used to implement neural nets and often have comparable performance and much lower power than GPU implementations.

  52. It would be fun to see a team of Go pros play against AlphaGo; I mean, it's not as if they didn't do that to Go Seigen. Five of the best vs. a computer.

  53. Roger Peck says:

    In computer chess it is well known that you have to play positionally and avoid tactics. Maybe with computer Go the best tactic will prove to be the exact opposite: complications on a 19×19 board will surely go beyond the search tree of even the fastest CPUs and the biggest memories.

  54. Anonymous says:

    When computers can outperform humans in loving, forgiving, and personal relationships, then I'll be worried.

  55. Why not Black 15 at C13 ?

  56. Anonymous says:

    I finally got it.
    Fujiwara no Sai = AlphaGo