Computer Go player ‘AlphaGo’ defeats pro in even game for first time – challenges Lee Sedol next

For the first time ever, a computer Go program has defeated a professional Go player on even terms – an event that experts previously thought was at least a decade away.

This startling announcement comes from a paper [PDF] published in the science journal Nature, by members of Google DeepMind, on January 27, 2016.

Fan Hui 2 dan (pro, left) was defeated 5-0 by AlphaGo

The program, named AlphaGo, defeated Fan Hui 2p (three-time European Go Champion) by a surprising 5-0 score! The games were played from 5 to 9 October 2015, but the news was only revealed upon completion of the paper.

Fan Hui is a professional with the Chinese Go Association and has been living in France, where he teaches and promotes Go, since the early 2000s.

The holy grail of AI

Fan Hui pondering his next move against AlphaGo.

Many people outside of the Go or computer science communities would have been surprised to see an obscure board game splashed across news headlines around the world today.

And they would have been equally surprised to learn that Google had invested in DeepMind’s artificial intelligence (AI) software, AlphaGo, in order to beat humans at an ancient Asian board game.

Since IBM’s Deep Blue defeated Garry Kasparov in 1997, defeating top Go players has been something of a holy grail for AI researchers.

For a time, it was thought by many to be impossible due to the sheer number of possible Go games, but for most Go players and AI researchers, the question was always when, not if.

A few years ago, Crazy Stone caused a stir by defeating Ishida Yoshio 9p with a four stone handicap (meaning Crazy Stone was allowed to play four moves in a row, before Ishida started to play).

Since then, many more human-AI matches have been held.

Lee Hajin 3p, Secretary General of the International Go Federation, who herself has played against the Go AI AyaMC, was impressed by the strength of AlphaGo:

“AlphaGo seems to be stronger than Fan Hui, but we don’t know how much stronger yet… Other strong AIs generally play four stone handicap games against pros, so AlphaGo is clearly stronger than other programs.”

AlphaGo to face Lee Sedol next

DeepMind AlphaGo will challenge Lee Sedol 9 dan next.

Off the back of this recent success, the AlphaGo team has now challenged Lee Sedol 9p to a five-game match.

According to the Korean Baduk Association, the match will take place from March 9 to 15, 2016, in Seoul, Korea.

Google DeepMind chose to challenge Lee Sedol because of his record as the best Go player in the world over the last decade.

The games will be even (no handicap), with $1 million USD in prize money for the winner.

Lee is quietly confident about his prospects in the DeepMind AlphaGo vs Lee Sedol match.

He said:

“I’m honored to play against an AI invented by Google. I regard this to be an important match in the history of Go, so I accept the challenge. I’m confident that I can win the match.”

Here’s a video produced by Nature about AlphaGo and the match with Lee Sedol:

Most Korean professionals favor Lee

Most members of the professional Go community in Korea were also backing Lee:

“Lee Sedol’s never played against a strong AI, so the computer might be able to win a game or two, but it won’t defeat Lee.” 

– Yang Jaeho 9p, Secretary General, Korean Baduk Association


“I’m very interested in this match. I’d heard that computer Go was around four stones behind pros, but if an AI has defeated the top European player, that would indicate a big improvement. I think most pros will be interested in following this match, but I don’t doubt that Lee will win.”

– Choi Cheolhan 9p


“It will be an interesting match, but I think it will be impossible to defeat Lee Sedol this time.”

– Choi Myeonghun 9p, coach of the Korean National Baduk Team


But Choi Moonyong 6p, who in addition to being a pro has also pursued a career in IT, wasn’t so sure:

“The match will be interesting, and AlphaGo might have a chance to win. Or at least, they are on track to defeat the strongest human player in a few years.”

He is also worried about the developers of other Go AIs, because AlphaGo seems so much stronger than other contemporary computer Go programs.

Why do this?

You may be wondering why Google is willing to invest so many resources into an ancient board game.

The answer is that DeepMind sees Go as a stepping stone on the way to a more general purpose AI which is able to adapt to different problems the way a human does.

Part of AlphaGo is actually based on such a general learning algorithm, which has already mastered classic arcade games and has future applications in fields like medical imaging and machine translation.

Demis Hassabis explains it best in a speech he gave to the Royal Society in London (the videos of the AI playing arcade games are great fun):

More to come soon

This is an exciting development for Go players and we’ll be back with game commentaries and more news about the match soon.

Subscribe to our newsletter to follow the match with us.

Game records

Fan Hui vs AlphaGo – Game 1


Download SGF File (Go Game Record)


AlphaGo vs Fan Hui – Game 2


Download SGF File (Go Game Record)


Fan Hui vs AlphaGo – Game 3


Download SGF File (Go Game Record)


AlphaGo vs Fan Hui – Game 4


Download SGF File (Go Game Record)


Fan Hui vs AlphaGo – Game 5


Download SGF File (Go Game Record)


About Jing

Jing likes writing, and can occasionally be convinced to play a game of Go. Even though she doesn't play Go as often as she once did, she still enjoys following the professional Go scene and writing about it on Go Game Guru.

You can follow Go Game Guru on Facebook, Twitter, Google+ and Youtube.


  1. Thank you for the nice article, Jing.

    It should be noted that they played an unofficial match before this one and the score was AlphaGo 3-2. And as your article mentions, the games were last October, which may be a lifetime, or more, ago for the AI. It will likely be much stronger when it faces Lee Sedol in March.

    • Glad you liked it, Logan. 🙂

      If you have a look at page 31 of Google’s paper, you’ll see that two games were played each day. The informal games were played after the formal games last October.

      It’s interesting that Fan did better in the informal fast games, when usually computers do better on shorter time settings. Not being a computer scientist myself, I thought perhaps Fan was more relaxed during these games – what do you think?

      • Maybe it has to do with the fact that the AI isn’t aimed at relying on computing power / Monte-Carlo evaluation alone but on these “neural networks”?

        I’m not a computer scientist either, but that’s what I took away from reading Google’s paper.

        >“Even without rollouts AlphaGo exceeded the performance of all other Go programs, demonstrating that value networks provide a viable alternative to Monte-Carlo evaluation in Go. However, the mixed evaluation (λ = 0.5) performed best, winning ≥ 95% against other variants.”
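        For the curious, the “mixed evaluation” mentioned there is just a weighted average of the value network’s estimate and the rollout outcome. A minimal sketch in Python (the function and variable names are mine, not DeepMind’s):

```python
# AlphaGo's mixed leaf evaluation, as described in the paper:
#   V(s) = (1 - lam) * v(s) + lam * z
# where v(s) is the value network's estimate of position s and z is the
# outcome of a fast rollout played from s. Names here are illustrative only.

def mixed_evaluation(value_net_estimate, rollout_result, lam=0.5):
    """Blend the value network's estimate with a fast rollout outcome."""
    return (1.0 - lam) * value_net_estimate + lam * rollout_result

# lam = 0 relies on the value network alone, lam = 1 on rollouts alone;
# the paper reports the lam = 0.5 blend performing best.
print(mixed_evaluation(0.5, 1.0))  # value net says 50%, rollout was a win -> 0.75
```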

  2. These games were complete crushes. It looks like AlphaGo could have given a 2 or 3 stone handicap. Give me even money and I will bet that Lee Sedol will lose at least one game.
    Give me 10-1 and I’ll bet that Lee won’t win a single game.
    What people may not be taking into account is that the program is rapidly improving, according to its creators. In one year, human domination of Go will be obliterated; count on it.

    To me, it was interesting that AlphaGo’s play looked very classical and natural—much more so than modern pros.

    • “To me, it was interesting that AlphaGo’s play looked very classical and natural—much more so than modern pros.”

      So maybe that’s its current level 🙂

  3. In the matches AlphaGo played in October, there are apparent mistakes in some moves. If Lee Sedol were playing it back in October, I’d be pretty confident of the human win.

    But six months will have passed, the machine will learn from its mistakes, and will become that much stronger. It will always be playing at the top of its ability against Lee Sedol – so Lee will need to be on top form to beat it.

    AlphaGo was able to give 4 stones to the other top computer programs and win. Ishida Yoshio didn’t manage the same feat. I expect and fear an AlphaGo victory overall.

    I think a small group of top pros (say 3) would be able to beat it reasonably consistently, because it’s possible – and common – for individuals to err, but together they’d avoid minor and major errors in judgment.

    • “It will always be playing at the top of its ability against Lee Sedol”

      That’s a myth. Bots may never tire, but sometimes their blind spots are exposed, and sometimes not.

  4. Looking at these games, the computer seems much stronger than Fan Hui. A super development, and I hope it doesn’t stop after the computer starts beating the strongest human beings consistently. Smart of Lee Sedol to accept the challenge now: he may not be able to collect the money next time. Really looking forward to that match. Each player gets one hour to think about his moves? A bit more would be nice, for both sides.

    I wonder what is going to happen to Go as we know it in a few years’ time: different fusekis and josekis, like in chess? A different kind of game, where the computer quite probably has a much bigger reading ability on a grand scale, and a much better assessment of the resulting position? The latter is the most interesting to me: which concepts and proverbs will go out the window, and which ones will prove to be true? Very exciting!

    By the way, you should change the names of the players in games 2 and 4.

    Kind regards,

    • “Looking at these games, the computer seems much stronger than Fan Hui”

      Myungwan Kim has said that he might lose, so he agrees with you. And 5:0 is certainly impressive. But to me Fan Hui seemed off balance after the first loss. And in the quick informal games they played before, AlphaGo only won 3:2. Since quick games should normally be to the bot’s advantage, this supports my feeling that Fan Hui was off his game.

      • Anonymous says:

        Actually the informal games were played together with the formal ones, i.e. on each day they played two games: one formal, with longer time control, and one informal, quick game. What’s fascinating, and I think what most people including Fan Hui expected, was that the computer would have a greater advantage in the informal “quick” games, but it turned out the exact opposite. Maybe Fan Hui indeed lost concentration or got too tired in the longer games…

      • Paul Dejean says:

        In chess, quick games are to the human’s advantage.

        • Hi Paul,
          That’s very interesting. Have there been any hypotheses as to why that might be the case?

          • Because in quick games, intuition becomes more valuable, and humans are really good at that. In comparison, bots have to spend time reading through implausible sequences unless they have something to guide them, like the highly effective human pattern recognition which AlphaGo seems to emulate so well.

        • Michael Bacon says:

          This is not true.

        • Where do you get that idea, Paul? It has been several years since I observed chess games online, but computers have been absolutely killing the top pros at Blitz for 10 years.

          • Warren Dew says:

            And Deep Blue defeated Kasparov almost 20 years ago. By 10 years ago, computers were just plain better than humans at chess.

    • Interestingly, in the paper linked above, Fan was the one who chose the time settings.

      I wonder if Lee Sedol will specify longer time settings for his match?

  5. Impressive achievement for sure. It really is only a matter of time before computers reach a similar status as they have in Chess. I may be wrong, but it looks like AlphaGo is running on some pretty impressive hardware utilizing both CPU and GPU resources. I’d be interested to see how other programs (CrazyStone, Zen, etc…) would run if they had access to similar resources. I don’t think it’s impossible (or even improbable) that a computer can beat a top pro, but this strikes me much more as hype and I’ll believe it when I see it.

    • “…of computers that spanned about 170 GPU cards and 1,200 standard processors, or CPUs”

      According to the Wired article on this.

      • “170 GPU cards and 1,200 standard processors, or CPUs”

        This refers to the computational capabilities of their cloud-computing super cluster, which they actually didn’t use against Fan Hui. For the match they used a single computer (8 GPUs/48 CPUs – which you could build yourself).

        • Gil Dogon says:

          No, they used the best hardware they had (1,200 CPUs + hundreds of GPUs). To quote the original paper:
          “Finally, we evaluated the *distributed* version of AlphaGo against Fan Hui…”

            Oh thanks, I had missed that detail. I was also finding it weird that they’d mention it but not test it. Another weird thing is that the KGS database used to train the neural network was from 6d to 9d amateur games, not an exclusively pro one. That could explain the style it adopted at times.

    • One of the exciting things mentioned in the paper (at least to my lay person’s interpretation) is that Deep Blue was more of a triumph of throwing a lot of hardware at a problem, while AlphaGo is more of a triumph of software. Sure, it has a lot of hardware but it seems to make more efficient use of it.

      See page 13 of the paper linked above.

      “During the match against Fan Hui, AlphaGo evaluated thousands of times fewer positions than Deep Blue did in its chess match against Kasparov 4; compensating by selecting those positions more intelligently, using the policy network, and evaluating them more precisely, using the value network – an approach that is perhaps closer to how humans play. Furthermore, while Deep Blue relied on a handcrafted evaluation function, AlphaGo’s neural networks are trained directly from game-play purely through general-purpose supervised and reinforcement learning methods.”
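      As a rough illustration of “selecting those positions more intelligently”: AlphaGo-style tree search scores each candidate move by its results so far plus an exploration bonus weighted by the policy network’s prior, so high-prior moves are examined first and most positions are never visited at all. A toy sketch in Python (a standard PUCT-style rule; all names and numbers are illustrative, not DeepMind’s code):

```python
import math

# Toy prior-guided move selection: each candidate move is scored by its mean
# value so far (Q) plus an exploration bonus (U) proportional to the policy
# network's prior. Moves with high priors get explored first, so the search
# evaluates far fewer positions than exhaustive expansion would.

def select_move(stats, c_puct=1.0):
    """stats maps move -> (prior, visit_count, total_value)."""
    total_visits = sum(n for _, n, _ in stats.values())
    def score(move):
        prior, n, w = stats[move]
        q = w / n if n else 0.0                                  # exploitation
        u = c_puct * prior * math.sqrt(total_visits) / (1 + n)   # exploration
        return q + u
    return max(stats, key=score)

# Q16 has the best results so far, so it is selected despite D4's higher prior.
stats = {"D4": (0.6, 2, 1.0), "Q16": (0.3, 1, 1.0), "A1": (0.01, 0, 0.0)}
print(select_move(stats))  # -> Q16
```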

      • Gil Dogon says:

        Well, they still needed quite a lot of hardware, which in fact, compared in absolute terms, is much more powerful than the one Big Blue had at the time. Of course they needed a lot of software advances as well. And indeed, the machine learning methods of neural networks are the interesting part. I do hope Lee can win this March, but the eventual defeat of the top humans now seems inevitable, as the hardware/software will soon improve to a superhuman level…

      • Michael Bacon says:

        The current World Computer Chess champion, an “engine” named “Komodo” sold by Chessbase, hired Larry Kaufman, a gentleman who earned the Grandmaster title by winning the World Senior Championship. He helped Komodo use what has been called a “selective search”, which differs from the “brute force” used to defeat Garry Kasparov in a match many believe was rigged.

      • No doubt they are using some novel software approaches, but still I would be interested to see how well it would perform using the same hardware that Zen or CrazyStone used. As I said, it’s an impressive achievement, but it strikes me as hype. I don’t think AI will lag behind the best human players forever, but I don’t think Lee Sedol will lose in March. I’m sure the Google team is beefing up their program, but I’m also sure that Lee Sedol is studying these games and coming up with his own game plan. It reminds me of when certain top players come to dominate their peers like Lee Changho did. He was thought to be unbeatable till players discovered his weakness. If something is following an algorithm, it has a weakness. It is just a question of whether people can discover it.

        • “If something is following an algorithm, it has a weakness.” This statement is not true. If, for instance, the algorithm calls for a full tree search to find the best move, and the algorithm succeeds in doing so, you will lose whatever you try. Of course, the example is not possible for Go, but it is for tic-tac-toe, or for every chess endgame with seven pieces or fewer.

          Kind regards,

      • Anonymous says:

        The paper link is not working

        • David Ormerod says:

          It looks like Google took it down for some reason, and made the link die. We’ve updated it here.

  6. Looking at three of the games, I felt Fan Hui was not on top form. Perhaps 1 hour was a bit in favour of the computer, as I would expect human intuition still to be better than computer algorithms and learning, so Fan may have had time problems with the detailed reading that a computer tends to focus on more than a human. Lee will probably not fall over in the fighting as long as he has the time to read out normally and accurately, and count. If he plays today I think he wins, but how much stronger will the computer be in a few months’ time?? My money stays in my pocket! 🙂

  7. Can anyone tell me the time limits in Fan’s match, or what is planned for Lee’s? The previous best Go programs seemed to mostly be strong in fast-paced matches.

    • Anonymous says:

      1 hour and 3×30 seconds byoyomi. The unofficial games (Fan Hui won 2 of 5) were only 3×30 seconds byoyomi.

  8. Game 5: w54-b55. White should have omitted this exchange if it wanted to play elsewhere. When black plays 73, white can no longer make two eyes unconditionally with t12. But why doesn’t black play 75 at t11? T11 would take 7 stones and white would still have to play s16 to save half the group. Black 93 at s16 is a move worth over 25 points and it makes the central black group 100 percent alive. I find it hard to believe black played 75 and 93.

    Game 2: the timing of 31 is unusual. Usually the order is b33-w34 first, then 31 is one choice. Having played 31, black should continue with either a6 or a7. In the game, the b31-w32 exchange is not only unnecessary; because of it, after b41, the block at g2 is no longer sente, thus making w42 possible.

  9. Note for game 2: having played 31, black should play a6 or a7 at move 35, after white’s push at d10.

    Game 3: 93 was not a lucky number in this series. Black should play p17; white p18 – black p16 – white n17 is then expected. This makes black unconditionally alive in sente (o14 and s13 are miai). Black could then continue with h12 (the place of 93). After white o15 (move 94), black has to resort to a ko where white takes first. Black does not have any ko threats, so the game is decided by the elementary mistake at 93.

    • You’re wrong… Even if white defends at N17 in your sequence, black is dead when white plays O14, because it threatens to remove black’s eye at P15 and there’s no way to make two eyes on the right.

      Maybe you should be a little more careful before pointing out the “elementary mistake” of the European Go champion.

      Just saying.

      • My bad. Black would indeed be alive. But I believe that if black chooses to live, white will not defend the top area, and will focus on attacking the middle group, which is much more valuable. Anyway, I’m looking forward to seeing a commentary of these games.

        • In case white doesn’t defend but chooses a move to attack, black should still be untroubled, as white will have two serious weaknesses at both n17 and o18. Plus, the black group in the middle already has one eye, so it’s not that weak.
          Anyway, black had to live, and p17 seems to be by far the most effective way.

          • Update: black can play p16 at move 93 and leave a cutting point at n17 and there is still p17 to destroy white’s top area.

            Black p17 may not be the best, as it can be answered by white o18, and then black still has to play p16 to live. In this case the cutting point at n17 would be eliminated.

            So, black should choose between p16 and p17.

  10. Gil Dogon says:

    Will there be any analysis by An of one of those games? It surely would be fascinating. It would also be interesting if he can find and show us some mistakes/slack moves by the computer…

    • Hi Gil,
      Younggil’s working on a commentary of game 5, which will be up later today. He’s also going to review game 2 at the Sydney Go Club, which we’ll record and post online.

  11. Lee Sedol for the win! Also, please analyze the games, GoGameGuru!

  12. Game 2: move 82. White should of course have turned at p9. Black still may very well cut at p3 to remove the corner aji. If black jumped to r7 instead, for example, white could play s2, black t2, white r1, making o2 and the s4-t4-t1 combination equivalent options. So black will probably cut at p3.
    White then can play q9 in sente. (If black doesn’t answer, the white o3 – black o2 – white p2 – black n3 – white p3 – black r2 – white o1 – black n2 – white s2 sequence wins the capturing race for white.) So black will answer at o3.

    White then could switch to f17 himself. Instead, he basically ended up having a stone at s2, which has little use as the corner is still dead, and black got precious sente to play at f17 with move 87.

  13. As a humble KGS 9 kyu, I think this event has a huge impact on non-Go players. Indeed, on the day of the event I was told several times by non-Go players that this had happened. This is a great opportunity to reach potential new players.

    I’m quite disappointed that a computer beat Fan Hui 5-0. The fact that Google did this makes it even worse. I hope Lee Sedol will kick out this computer brain.

    I’m not surprised that with more time the computer won more easily than with only byo-yomi. This is because a computer can explore tree branches much faster.

  14. Looking again, there were a lot of strange moves by both in those games. I think Lee Sedol would make mincemeat of the computer as it stands. I am not sure why, but Fan Hui seemed to me to miss a lot of chances. Hard to explain, as he is quite a bit stronger than I am…

    • Younggil is working on two commentaries. Game 5 should be posted later today and he’s also going to review game 2 at the Sydney Go Club, which we’ll record and post online. Hopefully that will answer some of your questions.

    • Vincent Pan says:

      Fan and the computer both made big mistakes in their games. I want to know when AlphaGo will be online to play with normal Go players.

  15. Roland California says:

    The Crazy Stone program was able to play Cho Chikun with a 3-4 stone handicap. With Google in the ring, I expected they would be able to play the 60 year old Cho without a handicap, which would have been more impressive than beating Fan Hui, ranked #632.

    However, challenging Lee this soon was surprising. Google must be confident enough that they have a decent chance. Perhaps the newer version can beat Fan Hui with a 2 stone handicap, or Crazy Stone with 5 stones? What do you guys think?

    It seems certain that Google can win vs. the old Cho now; my guess is they can win 1 or 2 games vs. Lee.

    Well, at least G was not bold enough to challenge Ke Jie. In 1 or 2 hour games, Ke has been 3-0 vs. Lee. If Google wins next month, then they should challenge the current #1, don’t you think?

    • Ke and Lee have different styles, so it would be interesting to see both play I think. I would back the top pros this year.

  16. Most professional Go players expect Lee to win 4-1 or 5-0.

  17. Most professional Go players have no idea what the current state of AI is and how Deep Learning is completely different from previous approaches. This match happened many months ago. By March 2016 AlphaGo will have been further trained and fine-tuned. Lee Sedol may get crushed.

  18. Charlie H (3D) says:

    Most Go pros back Lee Sedol to win. Most AI people think AlphaGo will win. I just wish the vast majority of the media didn’t write their articles as though this is already a done deal. While Fan Hui is a far stronger player than I will ever be, I think there are few at GGG who do not realise the gulf in class up to Lee Sedol or Ke Jie (I predict that if Lee wins this ‘Kasparov’ match, then Ke will get it next time, given his current form).

    • It’s something to marvel at; a few on the DeepMind team express quiet but sure confidence that AlphaGo will win. This may be partly because most amateur Go players would have dismissed the possibility of a Go AI winning against any pro, so they may be making the same Dunning-Kruger mistake in underestimating AlphaGo’s technical advancement over time. However, the impression is that pros in Korea (including Lee Sedol himself) are just as quietly confident in the same way. This is understandable: the Dunning-Kruger effect, which people with only basic knowledge of an advanced field are prone to, leads to a false estimate of the levels of strength AlphaGo would have to progress through, and of the nuances required to gain each level. AlphaGo would have to become very detailed in its analysis. Both “sides” are generally predicting a 5-1 or 5-0. I’d lean towards a 5-1 or, more probably, a 5-0 victory for Lee.

      • I know how Koreans love to bet. I would like some action on the 5-0 prediction for Lee to win. I could retire off the winnings.

  19. Michael Bacon says:

    Kii says:

    As a humble 9kyu KGS, I think this event has a very huge impact on the non-go players.

    Kii is correct. A thread was started on the forum of the United States Chess Federation concerning this development in Go. It can be found under “All Things Chess” of all places…

    Jing replied:

    Hi Kii,
    This happened to me as well! 🙂
    It’s definitely a good opportunity to promote Go.

    Jing is mistaken. Yes, it will bring more attention to the game of Go, but for the wrong reason. This is indeed a sad day for the wonderful game of Go. If only Kasparov had not played Deep Blue…Lee Sedol should not play the program, and all other professionals should reject playing the thing. The result of the match with Deep Blue has had a deleterious effect upon Chess. The same will happen to Go. As NM Chris Chambers said, “GMs used to be thought of as Gods. Now the mystery is gone.”
    You Go people simply do not understand. Cheating is rampant in Chess now because it is far too easy to cheat using a gizmo. In chess there is a saying, “The threat is stronger than its execution.” Suspicion surrounds every game. Every result is now questioned, and the game is in its last throes. The same will happen to Go.
    This is part of an email I sent to a chess person yesterday:
    The implications of what the Google DeepMind team has done are frightening… It is possible that, in future, yesterday will be looked upon in the same way as the day Sarah gave birth to John Connor!

    Go was the last bastion, Jennifer. I am sick of hearing some GM tell me what some ‘puter “thinks” about a position. The great thing about Go was that when a 9 dan commented on a game, I was hearing his thought process, not some machine. When that goes we humans will have lost something extremely valuable.

    • What would it do for the game of go if Lee Sedol wins?

      One way to put it: Tong Yulin, winner of the recent US Go Congress, seemed to be stronger than every other player there (only losing once, from an unbelievable blunder). Despite being ranked around 100 in China, able to beat most other human Go players with his eyes closed, maybe, and very capable of beating a top-level player, it doesn’t necessarily translate to him winning a gobango against a top-level player. One gets the impression Lee sensei has already lost from saying it’s a sad day.

    • I question your assessment that chess is dead or dying. In fact, computers have allowed more people to train at a high level than ever before.
      The prize money for chess tournaments is also way up, as far as I can tell.
      When we have top level go programs in a year or two, then some kid in Zambia can have a chance to learn go and possibly compete professionally —a chance he would not have otherwise. I fully sympathize with the feeling of sadness that comes when humans are bested by a computer. However, there is only the way forward, into a world of new possibilities beyond our imagining. Go-playing computers are only the beginning.

  20. Roland CA says:

    I heard Google sent Lee the invitation in Dec. before M-Lily. G would have heard about the 5% chance for Lee to win. So they thought AI has a good chance. But Lee put up a tough fight in M-Lily. So maybe it is 5% chance for the AI to win? 🙂

    And no wonder Lee fought so hard in M-Lily… to deserve being representative of top human players vs AI.

    When will Ke Jie get a shot at the million dollar prize?
    The political issues may prevent it, but I just saw on the web that Sergey Brin has been changing his tune and wants to re-engage China.

    Perhaps they can have the match with Ke in Hong Kong, or Singapore like last Ing cup. They can challenge Ke 3 months after the match with Lee.

    Unlike with Lee, the AI can only study one year’s game records for Ke; the games from when he was younger than 16 are not of much use. Thus it will be much harder for the AI to crack Ke.

    I think the AI can fine-tune its play depending on the opponent. Learning from Lee’s ~1,000 games will train the AI on Lee’s strengths and weaknesses.

  21. Anonymous says:

    What is the difference in strength between Lee Sedol and Fan Hui? I thought that there was at most 1-2 stones between the weakest and strongest pros. I guess there is no way Lee Sedol could win a 3-4 stone handicap game against Fan Hui, and even if he managed to win at 1-2 stones, AlphaGo might already be able to do the same. If we consider that AlphaGo will be stronger (+1 stone?) in March, then it might be very close to Lee Sedol’s strength…

    • Younggil An says:

      That sounds reasonable.

      I don’t think it’s easy to improve by one stone, even though a computer has no limitations of time.

      However, if it really improves that much, the match against Lee Sedol will be very exciting.

  22. The AlphaGo paper estimates its strength as about 5p, which is consistent with its triumph over Fan Hui and may be consistent with reviews by stronger pros such as An. Only after it has played many games against many pros can its strength be determined reliably. It will have had 6 months to improve before playing Lee, but will that make a difference? Personally, I suspect not, unless it is completely retrained from scratch, from only 9d pro games. Whether the AlphaGo team is doing this I cannot guess. The version that played in October used data from KGS “expert” games for its “Supervised Learning”, although they did not say what “expert” meant. 30 million positions were used, so “expert” probably meant around 7d amateur or possibly less.

    It’s easy enough to use electronic game records from a server, but are there enough electronic 9 dan pro games on archive to train it? I suspect not, so I predict that it won’t improve much before March, even though it can engage in many, many more self-play training runs to provide a more solid statistical basis for its “Reinforcement Learning” algorithm. But given that its playing strength has informally been quasi-estimated by competent reviewers like An to be less than that of a top pro, my prediction is a resounding defeat for it in March.

    However, my track record as a prophet is not very good… in 1979, in a New Scientist article, I predicted that the “brute force” approach to Go was a no-hoper. AlphaGo and Zen are both kinds of brute-force approaches, and both have achieved at least 7d amateur, so I was wrong, wrong, wrong! 0/1 for me!

    In 1982 I made another prophecy: that UNIX would displace DOS within 10 years. I was only half right on that one, as Windows 3.1 was an implementation of UNIX-like multiprogramming on a DOS basis, and it became the de facto standard. So as a prophet, my score is 0.5/2, which is unlikely to win me any apostles (and unlike Emperor Flavius, I don’t have an army to force my opinions upon the populace either).

    Nevertheless, AlphaGo’s and Zen’s successes perhaps tell us something about the nature of Go and the nature of intelligence.

    On the one hand, since both MCTS-based algorithms are better at Go than 99% of humans but can’t tie their own shoelaces, this may suggest that Go skill and intelligence are not highly correlated – and that Go is more like chess than like being able to survive in the world. Such a view would not be popular within the Go playing community, but it might resonate with people who regard pursuits like Go, Rubik’s Cube or mental arithmetic as “nerdy”, best suited to autistic savants.

    But on the other hand, the information processing functions performed by convolutional neural networks (CNNs) may be closer to those of human neural networks than a cursory glance at their wildly different architectures would suggest. In both cases, pattern learning is achieved by a network of local hill-climbers (individual synapses, or connections). We use language to describe what we have learned using a hierarchical ontology of sets and subsets and so forth (things like sente, aji and group safety), whereas all AlphaGo can say is what it thinks is likely to happen at the lowest level of reasoning (i.e. move sequences). But that will probably change in the future, as CNN developers come up with a way to create symbolic expressions of the pattern configurations represented by the convolutional planes.

    This may sound fanciful, but in 1976 I was able to write a program that could create English-like expressions of decision rules induced by a very simple pattern-learning algorithm, so I expect others to be able to do so for the patterns induced by CNNs. When that happens, computers will be able to talk to us in their own language, in a way that lets us understand what they are “thinking”. Exciting times to come; my only regret is that I won’t live long enough to see it happen.
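    To illustrate the “local hill-climber” idea mentioned above: each connection weight simply nudges itself downhill on its own error gradient, with no global view of the network. A toy sketch in Python (the data, learning rate and step count are invented purely for illustration):

```python
# Toy illustration of a "local hill-climber": a single connection weight
# repeatedly adjusts itself by following its own error gradient.
# All numbers here are made up for illustration.

def train_single_weight(samples, lr=0.1, steps=200):
    """Fit y ~ w * x by gradient descent on mean squared error."""
    w = 0.0
    for _ in range(steps):
        # Each step: the weight nudges itself downhill on its local
        # error surface -- the "hill-climbing" a synapse is said to do.
        grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
        w -= lr * grad
    return w

if __name__ == "__main__":
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying rule: y = 2x
    print(round(train_single_weight(data), 3))
```

    A real CNN does the same kind of local update simultaneously across millions of weights, but each individual weight is just as blind as this one.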

  23. Roland CA says:

    Aja Huang researched AI ko fights.
    Google bought DeepMind, which hired Huang and Silver. Between them, they have more than 10 years of experience with Go AI.

    The likely worst case for Google: a 1:4 defeat;
    the best case: a 4:1 win, if the AI has learned the skills of Lee at 27 years old – his peak.

    I think it is possible for the AI’s training to focus on Lee, and particularly on his games from around age 27.

    • Anonymous says:

      Hmm, I thought that the current problem with the AI (and something Dr Hassabis and his great team are trying to improve upon) is that it requires a large number of games/a large amount of study material to learn the same concepts a human would learn from a fraction of the material. So it is difficult to see how AlphaGo could use several hundred of Lee Sedol’s game records as “training” (I believe his games are already in the database of games). And apparently, DeepMind doesn’t want to train AlphaGo against a specific player (which makes sense).

      • Roland CA says:

        AlphaGo has already learned most general Go skills. It is likely at 9p level now. And I doubt Google would challenge Lee if it were not ranked in the top 20 by March. Lee is top 3 right now.

        They can train it specifically for the match against Lee, by assigning a higher learning weight to Lee’s ~1,000 games, and especially his ~300 best games, played between the ages of 25 and 27.

        During the AI’s five games with Fan, it was already learning Fan’s weaknesses. It learns quickly.

        The good thing about deep learning is that it can be applied to mastering language, programming and psychology. Your phone will have an app as your assistant and close friend 🙂
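        A minimal sketch of that weighted-training idea, assuming simple proportional sampling of game records (the game names, weights and seed below are all invented for illustration):

```python
import random

# Hypothetical sketch: sample training games with probability
# proportional to a weight, so a targeted player's games are
# seen more often during training.
def sample_training_games(games, weights, k, seed=0):
    rng = random.Random(seed)  # fixed seed for a repeatable sketch
    return rng.choices(games, weights=weights, k=k)

if __name__ == "__main__":
    games = ["lee_game", "other_game"]
    picked = sample_training_games(games, weights=[9, 1], k=1000)
    print(picked.count("lee_game"))  # roughly 900 of the 1000 draws
```

        Whether DeepMind actually does anything like this is pure speculation; the paper describes training on a broad pool of games, not on a single opponent.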

  24. By the way: we have seen more pictures of Fan after losing 5 games than after winning the European Championship.

    Kind regards,

  25. This is a great outreach opportunity for clubs to host tournament parties at pizza places, universities etc…

    Could you please post the upcoming match schedule, to make planning easier? It’s not on the Korean Baduk Association site, nor the AGA’s. Thank you so much!

    • Younggil An says:

      Thanks Anthony for your interest.

      Here’s the planned schedule of the match according to Korean news.

      Game 1: March 9
      Game 2: March 10
      Game 3: March 12
      Game 4: March 13
      Game 5: March 15

  26. Thank you so much, but perhaps I wasn’t totally clear: what time will the matches be held? It’s important so we can plan. Thank you again.

    • Younggil An says:

      Thanks Anthony.

      The time of the matches hasn’t been officially announced yet. We will update the details of the match when it’s all fixed.

  27. I was surprised to read djhbrown’s comment that “AlphaGo and Zen are both kinds of brute-force approaches”. My understanding from reading the Nature research paper is that AlphaGo is not a brute-force approach, but a general-purpose learning method similar to the way humans learn.