Go Game Guru
Learn all about the board game Go
Game 4: Lee Sedol vs AlphaGo
The comments below are from the discussion during the live stream.
Welcome everyone! Today we’re watching game 4 of the match between Lee Sedol and AlphaGo.
AlphaGo is winning 3–0, and the match was decided. However, Lee Sedol is going to do his best to win a game.
I’ve just posted my video commentary of game 2 here, which you may like to watch later.
Also, Lee Sedol fans may be interested in our new book Relentless: Lee Sedol vs Gu Li which is now available for pre-order. You can download two free chapters here.
If you have any questions about anything just let me know!
After losing to AlphaGo 3 out of 5 matches, I hope Lee sensei can enjoy the last two games.
Yes, I really hope so too.
That just shows that AlphaGo is not only a supreme player but also a humane one.
“Sensei” is Japanese, buddy. Don’t call him sensei, please. That’s the right attitude for a Korean champion, I guess.
The display board is wrong! A1 should be in the lower left corner and there is an “I” row and no “T” row! C’mon people!
You’re right Jerry. That’s the Korean version of the display board, so it’s different from the Western standard.
Thank you so much!
You are very right!
You’re very right. Someone should investigate.
The opening up to White 6 is the same as game 2.
Some say ‘mirror Go’ can be a good strategy for Lee Sedol as White, but let’s see what’s going on.
Let’s see if AlphaGo plays the same as in game 2.
Lee thought the opening in game 2 was good for him, and if he follows the same sequence, he can save time as well.
White’s kosumi at P15 (12) is very unusual, but Lee researched that move after game 2.
Black’s extension at I17 is normal.
Lee clearly looks more relaxed than he did last game. I hope he bounces back and has some fun with these last two games.
Hi Wasi, yes, he does. The opening seems to be alright so far, and Lee would like this style of game.
White is very calm! Go Leee
Yes right. 🙂
By the way, Black’s attachment at E16 (23) was very early, but it will be interesting to see the continuation.
I learned weiqi starting with Wu (Go Seigen), and even up to the 1980s, tournament games were longer and more interesting in their moves. Every week we had a professional 7 dan (later 6d) explain what those moves meant. I enjoyed every moment up to Cho… After that it was all competition. Short time. Fight.
Maybe this is a wake-up call to go back to that. Two decades have passed, and may I say I finally care about a match result. (The last time was when Cho had an accident, like Wu/Go. I still remember he played one of his best White games.)
We want better games, not just prizes.
The most important point of the game is not that we have won, but that we are in it… Let us see some good games. Yesterday’s was good. In fact the whole match so far has been good to watch. I even dusted off my pieces and board and, for the first time in two decades, played a match game on a board. Good. Painful as a human, but…
The most important thing is that we have tried our best.
Thanks a lot, Dennis, for your opinions and feelings about Go.
As an enthusiastic amateur, it’s really useful for me to see the different commentary on the match — people bringing different things out. What a treat!
Black’s shoulder hit at D11 (25) was another interesting move. Black’s play is creative and flexible.
Younggil, do you think these matches will lead to some new variations in some of the fusekis?
Lee seems to be playing fairly safe.
Yes, I think so. It’s hard to mimic AlphaGo’s moves, but it’s still possible to adopt some of them in the opening.
By the way, Lee’s opening in this game is very patient so far.
It’s difficult not to, but I think that humanizing AlphaGo is a mistake.
This game seems more promising for the human…
Yes, I agree that today’s game is more promising for Lee.
The opening seems to be playable for both, and Lee chose to invade the top which was earlier than expected.
Lee is playing too slowly! Is he being too cautious because he wants to keep all of his advantage and make no mistakes?
I don’t know, but that doesn’t seem to be a good strategy for Lee. He’d better play more quickly, I think, to save time for the endgame.
It seems like Lee’s not afraid to enter byo-yomi
Is a White move at F4 conceivable?
Yes, F4 is conceivable, and K5 is also possible.
Lee might play differently, because he wouldn’t like to play those normal moves…
I like how Lee decided to invade so quickly. In two of his previous games he allowed AlphaGo to generate big territories, which caused him major problems at the end. I believe if AlphaGo is leading in territory, she can play more freely. It will be interesting to see how she plays when she is down in points (from rough counts) after this invasion.
I wonder if Lee had invaded the bottom earlier in the last game, if he could have had a better chance. Maybe he is testing that out.
Lee’s opening in game 3 wasn’t good, and even if he invaded at the bottom earlier, I don’t think he would have had a better chance.
Yes, right. I also want to see how AlphaGo plays when she’s behind in terms of territory.
Lee’s play at the top looks premature, and he’s having difficulty continuing now.
If there is one thing AlphaGo is showing me…it’s that shoulder hits are good.
I agree. She has played a shoulder hit in each of her last 3 games
Anyway, the game is still well balanced, but that doesn’t sound promising for Lee.
Michael Redmond is saying that the computer may lead to a new golden age of creativity in go. Do you guys agree?
At first I disagreed. Now I’m not sure. It’s worth noting that even if DeepMind releases the source code, it’s doubtful that most people have hardware sufficient to reach its current level. Certainly it’s possible, but the costs would have to be worth it to whoever is running the program. I guess we will have to wait and see what happens in the future.
Even if DeepMind released the code, that wouldn’t include the millions of games trained into the policy and value networks. This might not be a database in the sense of a relational database, but it is still a database in a broader sense. It is that data which allows AlphaGo to play a good game.
I think so, especially if AlphaGo is made available in the public domain. Its revolution is not so much in its precise calculation in the endgame or midgame, but in its broad strategic considerations in the opening, leading to a radical revision of fuseki theory. AlphaGo will be the Go Seigen of the 21st century.
Opening theory and book, definitely.
Computer-aided competition (“Man-Plus”), where a computer shared by the players displays the foreseeable game tree.
Actually…the revolution is most probably in terms of the late beginning.
AlphaGo loses very badly, and I can see that it will take a whole lot of “corrections” to make it lose more gracefully. It seems to know what to do when the game is completely undecided, and when it knows it is winning, but if it falls behind it simply loses it and doesn’t know how to recover.
Let’s see what happens when AlphaGo plays against other players with other styles. For example, Gu Li has a very good fuseki…
He is saying there have been three great players that changed the game – Dosaku, Go Seigen and AlphaGo.
Poor Kitani never gets any love.
Because he got trashed by Go in the jubango, like all the other top Japanese pros. But Kitani is widely recognized as the greatest Japanese teacher of the century, while Go had only one student, Rin Kaiho.
Gu Li for the fuseki, Lee Sedol for the tesuji in midgame and Lee Chang Ho to close up the game.
I read that they’re looking to retrain AlphaGo, without looking at previous games, to only play against itself. This would probably provide the most opportunity for new creative moves.
He mentioned Kitani too, along with Go Seigen.
Go was the far superior player, but Kitani gets credit also for collaborating with Go on the new (shin) fuseki. They were very close friends right till their death.
AlphaGo got a kind of symmetry between the right and the left side… was that on purpose, maybe?
Yes, I think so. If White didn’t cut at M9 (56), Black would connect her stones with a tiger’s mouth as well.
Computer-aided analysis, even competition: yes, I agree. For example, a computer shared by the players, showing the foreseeable game tree.
Wow, Lee is going to be in byoyomi so much sooner than AlphaGo.
For the next match I want to see AlphaGo vs. a team of top pros. The great thing would be that we could all hear what they were saying to each other during the games, but AlphaGo would not be able to hear them.
Also the next match should be one game kadoban. That is the way to push the machine to its limit.
Could it be a good idea for White to play around J15 or J14 before Black manages to play in that area?
Yes, that’s a good idea, and the timing is important. If White plays around J14, Black wouldn’t answer, because the top area is very urgent.
Since AlphaGo likes big territories, maybe someone skilled at invasion like Cho Chikun would be a good match for her?
There’s fighting in the center area, and a big trade between White’s group at the top and Black’s stones on the right side is possible. However, White’s top is bigger, so Lee might try to save his stones at the top.
In the next game, if I were Lee, I would play the first move in the center. Maybe that’s the key: center control from the very beginning.
That’s a good idea, and it will be interesting to watch.
After Black K11 (69), if White doesn’t do anything at the top, Black’s territory will be too big, so White should try to reduce Black’s area somehow.
Giving AlphaGo a big territory seems to be a fatal mistake.
I’m interested in seeing what Lee will do.
Hi Younggil !
Is it, for you, the best beginning for Lee in this tournament?
Hi Bobby, what do you mean?
The trade between M11 and L8 looks terrible for White. The white stones at the top look very dead at the moment.
I don’t think Lee would agree that it’s all of Black’s territory.
White tries to reduce the area, but Black’s response at J9 is calm and solid.
J9 looks like a move that says Black is ahead easily.
Looks like Lee is trading the top for reducing black’s potential territory in the centre.
Yes, I think so. But Black’s territory at the top is still quite big.
This game seemed to go bad very early, when black plastered the outside of white’s position on the left.
Lee seems to be avoiding the toughest fights (and for good reason, as we saw yesterday), but that just leaves him with nothing at all.
This opening was terrible for white again. When black built the thickness outside white’s position on the left, the game looked over.
I think the opening was balanced with white getting territory early in bottom corners and both sides. But maybe the invasion at the top was premature. With a komi of 7.5 black was always having to seal off the centre.
@Younggil: Is “mirroring AlphaGo” (as in game 2) at the beginning a good strategy so far?
I’m not sure. I think Lee thought the opening in game 2 was fine for him, so he played like that. However, he played differently in the bottom right (12), and he wanted to force Black to extend at the bottom (13).
AlphaGo is playing very skillfully, giving White territory with clearly defined limits. We saw this in game 2 also.
White got big territories but had little chance to expand them. I think in the last three games Lee was completely lost before move 50.
I wonder if it would be fairer if humans could have more time than the computer. I mean, a computer can iterate far more possibilities in less time and with zero error. It seems like there is no way a human can ever match a computer if they are given equal time.
I’m inclined to agree.
Lee will be going to byoyomi far too early in the game and will be under time pressure for the rest of the game.
More interesting to me: instead of a handicap, give the human maybe 2x the time of the computer to compensate. That would make it fairer to me.
AnonCow: computer thinks during the opponent’s time too, so it may not help all that much.
I realize that, but more time would really help the human far more than AlphaGo. Once you’re in byoyomi while your computer opponent is not, it’s over anyway!
I highly doubt any professional will take a normal handicap, but perhaps a time handicap would be reasonable.
The computer is allowed to think on the human’s turn, remember.
Following this line of reasoning though to its conclusion – is it really interesting to find how fast AlphaGo can play well? Would we care if it can beat humans in normal games but not in rapid games? Are we humans that desperate to maintain our pride?
The more interesting question is to discover how strong the program really is under normal conditions. What is its win rate at various handicaps?
And beyond that, the interesting points are how quickly it learns, whether it can learn to discover new openings, etc.
The problem for the human is that he cannot add hardware to buy time as the AI can. So the time limitation is real for the human but not for the AI.
This is why a true human vs. computer match should really be a human TEAM vs. AlphaGo. Consider that with AlphaGo’s thousands of CPUs and GPUs all running in parallel, it’s really more analogous to a “team effort” on the part of the computer.
Not really true. The human brain is a massively parallel processing unit similar to AlphaGo’s neural network. (To distinguish it from brains, it is actually called an artificial neural network.) It is certainly possible that they could create several independent sub-routines with expertise in certain areas, which could be analogous to a “team effort,” but from what the DeepMind team has told us, it doesn’t work that way.
The problem is the computer thinks during the human’s time. The human needs helpers or a handicap.
White’s wedge at K9 (78) was brilliant, and the game looks complicated up to J8 (82), but Black’s nose attachment at M12 (83) was interesting. Black’s management in the center is very important now.
There is a limitation on the computer’s time also: after a while it will only make incremental changes to its estimated probability for a particular move. I think this computer improves more with experience.
The more time it has the deeper (and broader) it can search its potential move space. More time is always better, up to the memory limits of the hardware of course.
For a computer, I think there will be rapidly diminishing returns with more time. Whereas a human has lots to gain from more time.
Exactly my point as well.
More than 1 hour left for Black! Incredible!
It’s very tricky for Black to play in the center, but if Black captures White’s stones at the top, the game would still be good for Black.
K9 was a move I think few could have thought of playing. Brilliant!
It looks like Black is trying to sacrifice her stones on the right side, but it’s difficult to guess what Black’s plan is.
Agreed! We humans can utilize our opponent’s time.
However, for an AI with its algorithm built, it just needs enough time to process. Which means whether the AI is given 60 seconds or 60 minutes, it would return the same result.
AlphaGo seems to have finally short-circuited with move 89.
Lee Sedol is in byoyomi now, and only 90 moves played.
Maybe AlphaGo is trying to win by half a point as usual.
Lee seems unnerved; he was trying to play before losing his first byoyomi period and hesitated.
Black R11 (93) was to exploit White’s bad aji on the right side, and let’s see how much Black can gain over there.
If White can defend without any big loss, the game would still be playable for both.
Wow, Black’s wedge at C16 (97) looks amazing. I don’t know if it’s for a bigger ko threat…
It’s gone crazy 😉
Maybe it thinks it’s behind suddenly? This is exactly what Monte Carlo AIs do when their winning percentage goes down suddenly, usually after you play a tesuji that they didn’t see (as Younggil knows from reviewing a few of my games with them).
AlphaGo is playing strangely!
The fact that LSD is in byoyomi while AlphaGo has an hour of time left tells me that the time limitation is far more of a hindrance for a human than for a computer. Since AlphaGo has so much time left, adding more time won’t help it any more, but it would help LSD immensely.
Yes, the computer can think on the human’s time, but as it has an hour left, more time for AlphaGo isn’t really a factor.
Time to celebrate! GO LEE SEDOL!
What’s up with S11?!
I am probably one of the worst players here, but that move is equivalent to “PASS.” Am I right???
My god, AlphaGo is completely crazy.
Maybe they dumbed down AlphaGo for this consolation match?
Haha, I wish. But obviously it is because of the ko. I believe this ko triggered a bug.
Maybe AlphaGo is going up in smoke, as Michael Redmond puts it.
It looks as if AlphaGo thought she was behind, so she went crazy…
That’s really good news for Lee, and let’s wait and see!
I have no idea what AlphaGo is doing with Black’s group on the right side, Younggil sensei.
I don’t understand either. Maybe it’s a kind of bug?
I’m highly worried that Lee will be playing with 60 seconds a move for the rest of the game now, but things do look promising!
Yes, right. I think so too.
Possibly AlphaGo found a complex winning line for Lee that we don’t see, and has gone into _futilely playing against itself_, if you know what I mean.
That is a clever explanation actually, you very well might be right.
Yes, this makes sense to me with my understanding of Monte Carlo tree search. I think what happens is the following.
First, Black tries a normal move in the center, and it has a high initial win rate. Because of this, AlphaGo spends more time investigating that move. Now, there might be a hard-to-find counter, but AlphaGo finds it because it’s spending all this time here. As a result, the win percentage of the move in the center starts dropping dramatically, and other variations get explored more.
Now AlphaGo will try moves like the ones she played on the right or in the lower left corner. While these are bad exchanges, if she then plays in the center again, the center move will still appear to have a high win percentage. After all, the bad exchanges might lose a few points, but the fight in the center is too big. If the counter to the center move is indeed hard to find, it will take a lot of time to find it in every variation where AlphaGo plays an in-between move.
So AlphaGo could try dozens of in-between moves (like C16), and in most of them she will find the counter in the center, so all those will get low win rates. However, there is probably one variation where she didn’t find the counter, so she just ends up playing that one.
Interesting analysis. This would mean that the AI would need some way to reason about subtrees that stay intact regardless of intermediate moves. That’s exactly what I’m trying to figure out with high-level reasoning in computer vision as well. Move objects as a whole in your mind, not just the individual lines or points that make up the object.
I suddenly have a thought that Google is trying to decrease the AI’s level in this game. AlphaGo is making moves very quickly. What do you think, Younggil sensei?
I don’t think so.
That would be against their purpose of testing AlphaGo and an inappropriate act toward Lee sensei.
I trust Google values and ethics.
As I said, maybe we will find out that AlphaGo was dumbed down. Maybe fewer CPUs/GPUs running, or something, in this consolation match. It is playing strangely compared to the previous three games.
I’m not sure, but I don’t think they would do as you said.
I am not Younggil, but DeepMind has repeatedly stated that AlphaGo is not being tweaked (it is a part of the match contract), and I would trust them.
The game is quite clearly good for White, and Black’s C8 (113) and D7 (117) were tricky moves to answer correctly.
Maybe the computer, as it gets smarter, also gets paranoid. It starts to see so many possible winning moves for the human (moves that even the human doesn’t see) that it freaks out.
Does AlphaGo have a Byoyomi mode?
I’m so sorry. I hope so.
If Black can live on the left side in sente, the game will be close, but if Black has to live in gote, White can save his stones at the top, and White will be ahead quite comfortably.
It is definitely because of the ko. Do not believe AlphaGo is the god of Go. It made many mistakes. If Lee had played well in the first two games, the score would now be 3–1 rather than 1–3.
What I think is happening:
When you play against strong Monte Carlo Go bots, they usually play quite well and (at least for me) the best ones are challenging opponents and fun to play against because they play strange moves sometimes and test my Go knowledge.
However, if you can get them to think that they’re behind they gradually start to overplay. The more they’re behind, the more they overplay. Usually when that happens I know I’m going to win if I can just respond well to their moves.
One way to push them into that kind of situation is to play a very sharp tesuji which they didn’t ‘see’. When there’s a move that only barely works and seems like it won’t work, perhaps there’s only one line of play, one move order that will make it succeed.
For that kind of move, it’s easy to miss it in partially random simulations. That means, when you play it, the AI’s estimate of winning suddenly changes dramatically, and it starts to play moves like we saw AlphaGo play in this game. Basically, it goes on tilt. Younggil and I call this ‘going crazy’ in honor of Crazy Stone.
I had mostly assumed that AlphaGo might not have this problem, as it’s much stronger and doesn’t rely exclusively on Monte Carlo, but perhaps a player whose skills are as razor sharp as Lee Sedol’s can sometimes still upset its balance. That’s exciting if so. It’s just a theory, but let’s see 🙂
David, that is all very convincing, but it is very strange if DeepMind has not noticed this problem and tried to address it. There’s always one more bug!
That’s interesting David.
By the way, Black saved her left side with B14 (125) and White played at I6 to save his upper side stones.
Black is still in the lead on the board, but it’s difficult to pay the komi (7.5 points) at the moment.
Thanks, David, for the information regarding tendencies of simple MC Go programs.
I really need to set aside some time to read their papers…
Is it just me, or does Lee have a good advantage for now? AlphaGo seems behind… can anyone comment?
Haha, never mind this comment, Mr. An already answered it… LOL, my internet connection is bad… =D
Oh, your internet might be fine. You have to refresh from time to time, and there’s some lag, I think.
By the way, AlphaGo’s endgame is tricky, and she has started to catch up.
I thought that was what they said too, but I think they just meant “it is able to resign by itself,” since the clock hasn’t stopped.
Looks like Redmond’s call was a misunderstanding. It is still playing on.
Didn’t think it was possible.
Tremendous game by Mr. Sedol.
Congratulations Lee Sedol!!!!
Well, that was premature :\
It looks like the game is still on…
The game is still going on. There was a misunderstanding by the announcers. But nevertheless, it looks like White has the game in hand.
Regardless, this looks to be the most interesting game yet (from the POV of interest in ML; sorry, but I’ve only JUST started learning about Go… and Hex… and I wish I could appreciate the genius of Mr. Sedol).
Did you guys see this tweet from Demis Hassabis?
“Lee Sedol is playing brilliantly! #AlphaGo thought it was doing well, but got confused on move 87. We are in trouble now…”
Not sure if Demis is saying that based on his sense of Go, or if he’s getting real-time analysis of AlphaGo’s “thoughts”?
I assumed it was based on whatever data AlphaGo displays for the team.
This is why Google put on this event.
It’s more interesting for the developers to find these areas where AG is weak.
I still believe they need to “hard code” (as much as I hate it) some basic rules of inference to limit its “stupidity”.
Are White’s two stones on the L and M lines connected with the other White stones at the top of the board?
Just wanted to say thank you to Younggil An for this text feed. In the first three games, I could not follow the live video stream, but the comments left here by you were very helpful.
Today I could check the stream, and the guy on the left is quite annoying.
Everyone is complaining about Chris Garlock in the English video stream!
AlphaGo’s endgame is very skillful so far, and the game is getting closer.
However, White is still in the lead, and the only worry I have is that Lee is in his last byoyomi. I hope he can concentrate on every single move to maintain his lead!
I think it is a good tradeoff under the current rules. Better to focus on the early game and midgame than to conserve time by rushing earlier moves, because poor earlier moves mean it doesn’t matter how much time you have in the endgame. I think it was the right choice. If you are behind even slightly in the endgame, it is over. If you are slightly ahead, you have a better chance even in byoyomi.
The execs at Google get occasional texts from the dev team that update them on AlphaGo’s estimate of its winning chances.
AlphaGo still has nearly 30 minutes, and is catching up furiously. However, Lee’s responses have been calm and solid, solidifying his advantage.
I am worried about Lee not spending the entire minute thinking of follow-ups, even though Q18 is the only move. In fact, I think that would be the perfect time for extra thinking time.
But maybe I underestimate Lee Sedol?
That makes sense. Lee could save his time in the last byoyomi. I think he’s quite confident, although the game is getting closer.
It appears that White is still ahead by 2~3 points at the moment.
I hope that for the next event Google will show us AG’s running count.
Where are you reading this?
I can share my experience as a person who works a lot with applied AI. The problem is that if you have trained a model using some specific objective function (in this instance, probability of winning), the AI only cares about making the move that maximizes this function. If there is big variance in how potential moves can affect the game, this works just fine.
However, in a marginal situation where no matter what move you make it’s not going to make a big difference (in our case, if you’re seriously winning or trailing, or maybe in some specific endgame situations), it struggles to differentiate between moves that look good and bad to the human eye.
The practical solution to this problem is to have a separate model trained with a different objective function, and switch to it in these marginal situations to cherry-pick the correct move. In our case it would make sense to optimize for the expected value of winning/losing points. That would not necessarily make the program better (although in many cases it would, because it works around overfitting), but it would make it more sensible from a human perspective.
This seems totally reasonable. If AlphaGo thinks it’s far behind (remember, AlphaGo wants to maximize winning percentage, not score), then if moves have a 2.5% vs. 2.8% chance of winning, that 0.3% might just be statistical noise, and the chosen move happens to be an incomprehensible one.
Thanks for explaining this Dmitry, that’s really interesting.
Yes, this is what I thought was happening too. The objective function here is the probability of winning. It’s surprising that this probability wasn’t altered more by other moves, though. The moves it played were equivalent to doing nothing.
The computer can’t “cherry pick the correct move” because its model results are the only information it has about how good the moves are.
At best you could pre-establish conditions under which you’d use the different models; for example, when the probability-of-win model shows less than 50%, use the net-territory model instead.
I would also note that the use of win percent rather than territory has resulted in much stronger play against human opponents when the computer is ahead, because the computer starts playing appropriately conservatively, making it almost impossible for the human to catch up.
I think the intent of what Dmitry said was that the second model should be used when the first isn’t coming up with very different success probabilities.
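Dmitry’s point can be illustrated with a rough sketch (the numbers below are invented for illustration, not AlphaGo’s actual output): when every candidate move wins only a few percent of the time, the gaps between candidates are comparable to Monte Carlo sampling noise, while an expected-score objective would still separate a sane move from a nonsense one.

```python
import math

def win_rate_stderr(p, n):
    """Standard error of a win-rate estimate from n Bernoulli playouts."""
    return math.sqrt(p * (1 - p) / n)

# Hypothetical candidates in a lost position (invented numbers):
candidates = {
    "calm endgame move": {"win_prob": 0.028, "exp_score": -2.5},
    "wedge like S11":    {"win_prob": 0.025, "exp_score": -15.0},
}

# With 10,000 playouts the noise on a ~2.5% win-rate estimate is ~0.16%,
# so the 0.3% gap between the two moves is only about two standard errors:
noise = win_rate_stderr(0.025, 10_000)

# A win-probability objective can barely tell the moves apart...
best_by_win = max(candidates, key=lambda m: candidates[m]["win_prob"])
# ...while an expected-score objective separates them unambiguously:
best_by_score = max(candidates, key=lambda m: candidates[m]["exp_score"])
```

Switching to (or mixing in) the score objective in such marginal positions is exactly the “separate model” idea above: it wouldn’t change who wins, but it would avoid moves that look like passes to a human.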
That would be primitive if somebody had to text the number. Why wouldn’t they be able to transmit it in real time?
Another tweet from Demis: https://twitter.com/demishassabis/status/708928006400581632
“Mistake was on move 79, but #AlphaGo only came to that realisation on around move 87”
Yes, sorry for posting the same thing below. I guess that’s an obvious weakness! Why such a late realisation of that mistake, which caused it to go crazy?
No worries. I’m not sure how Demis knows it was a mistake, as opposed to Lee playing really well. It’s not like they would be simulating it now, is it? Unless they have another AlphaGo cluster doing analysis for them.
In any case, White 78 was Lee’s brilliant tesuji, though my friend just told me that Myungwan Kim thought that Black had a response to it on the other AGA stream.
Demis Hassabis: “Mistake was on move 79, but #AlphaGo only came to that realisation on around move 87”
The game seems to be closer than I thought, because Black is thicker than White, and the thickness will pay off in the endgame.
If Lee makes a mistake, it will be crucial, so he should be very careful in the last byoyomi.
Lee played a lot of simple moves at the end, quite clearly the best approach given the game. I am a lowly 4kyu so I don’t understand many things, but Lee’s calmness at the end was very impressive.
Attaching at B6 for Black is quite a big endgame in sente, but Black doesn’t play there somehow.
If White can jump at B5, that would be a big reverse sente endgame.
AlphaGo made a mistake at A8 (161), and the game should be decided by now.
Black lost at least 1.5 points there…
Jing says I can’t start writing the article yet in case I jinx Lee 🙂
Michael Redmond almost called AlphaGo “Avocado”, haha!
His brain is probably toast at this stage. I wonder if it will stick? 🙂
If Lee wins it will be important to analyse how Avocado got confused & whether it can be pushed into the same position again (or at least increase the probability of it)
The game is practically finished due to Black’s weird endgame moves.
Demis Hassabis: “When I say ‘thought’ and ‘realisation’ I just mean the output of #AlphaGo value net. It was around 70% at move 79 and then dived on move 87”
The unexpected tesuji at White 78 seems to have thrown Avocado off track. Will she learn to expect the unexpected in the last game?
Great commentary, by the way, for all the games. Keep doing your excellent work!
Now watch AlphaGo fly into a rage and start burning its own servers back at Google HQ.
I just love your way of thinking =))
It’s on AWS.
If AlphaGo loses because of kyu level mistakes it will be a shame.
As Michael Redmond put it, maybe AlphaGo will “go up in smoke.”
Lee Sedol won! Congratulations! 🙂
Congrats, Lee Sedol! There’s hope for humanity – yet.
Yes, congratulations for fortitude in the face of huge pressure
Congrats, Lee Sedol! I am glad I stuck around 😀
Amazing game by Lee Sedol who was under time pressure for nearly TWO HOURS.
Thank you for your commentary Mr. An!
Interesting game. We learned more about AlphaGo in this game than we learned from the last 3 games combined.
The prior three games made this game possible. Lee Sedol is to be commended for studying them.
I hope they didn’t intentionally lower AlphaGo’s strength so that Lee could at least win a game as compensation, because if they did, that would be the greatest insult and they should all commit ritual suicide. Kidding.
AI makes such errors often: the inexplicable move against Kasparov (the system had an error and made a random play), or IBM Watson’s answer (it thought Toronto was a US city) on Jeopardy!
The systems are optimized to make fewer mistakes than a human, but they lack common sense, sometimes making mistakes a person never would.
There is a Toronto in Ohio: https://en.wikipedia.org/wiki/Toronto,_Ohio
It is a good win but not a great win. In football terms, it would be like winning because the game was close but your opponent accidentally scored an “own goal”, and then you simply maintained that lead by not making an equivalent “own goal” to balance it out.
AlphaGo should not have lost, but it played inexplicable moves that even a beginner could spot.
I think it was more Lee playing a different game that threw Alphago. This game was a good test as in the previous games Lee did not have initiative.
basic computer go wouldn’t make that mistake.. lol.. now even Lee Changho has a chance against AI..
AI: I just pretend to lose so that you people will not be too concerned about me
Was AlphaGo already losing when it played the weird glitchy moves?
Yes, I think it was trying fairly desperate things hoping Sedol would make the sort of mistake it had already made.
Thanks, An, for the commentary.
And my heart goes out to Lee for holding out so courageously in byo-yomi to eke out a win, having suffered tremendously in the first 3 games. I never thought I’d live to see the day AlphaGo resigned.
Got to hand it to Lee Sedol anyway, he gutted it out. How long did he think before he came up with the winning tesuji? We all thought he was a goner.
Yup, move #78 was the divine move. The move that short-circuited AlphaGo’s neural networks.
I think the early “questionable” J7 turned out key. A “Hex” move.
From then on, the words “OMG!” should be changed to “Oh my Sedol!”
So when AlphaGo resigns, it shows…
The result “W+Resign” was added to the game information.
The screen looks like a Linux-based UI!
That seems to me like Ubuntu
I would not criticize the computer’s strange moves out of hand. Chess AIs do similar things in lost positions. If “correct” play leads to a loss, they will often try moves that can only win through an opponent’s oversight. Simply put, the AI believed the position was lost when it made the poor moves, and didn’t think they were “obvious”, because a computer has no way of discriminating between obvious and subtle. Based upon its sample games, AlphaGo thought Lee might play an aggressive response in some scenario, leading to a human loss. At the pro level this is ridiculous, but it doesn’t actually matter how much you lose by when the game is lost.
seems to me, if u r losing, u should try and “hold on” in the hope your opponent makes a mistake. Surely they can program a “least bad” move
It looks like Lee has no chance of losing the next game. Alphago’s weakness has been spotted. Lee just has to reproduce it. Fan Hui was 3-2 against Alphago and I guess he spotted the same weakness too.
Fan Hui won two friendly matches, but subsequently lost the 5 official matches.
And, as Redmond mentioned, it’s probably not that meaningful to compare this AlphaGo to the one that played Fan Hui.
To be precise, there was one formal and one informal match each day. Fan Hui won the informal matches on Days 1 and 5.
Not really, Fan Hui was 5:0
Fan Hui was 5:0 in the official games, but they also played 5 unofficial ones with 30sec per move timesetting, in those he won 2 and lost 3.
Fan Hui won 2 games in a friendly series of 5 blitz games – lost 2:3. He lost the official (longer) games 0:5.
It seems that for very quick games AlphaGo is put at a disadvantage because it severely curtails its Monte Carlo search depth, whereas human players respond based on learned joseki, intuition and pattern recognition, especially in the opening sequences.
@Sammy you do realize that while Lee is sleeping, I am repeating the game again and again. Probably 100,000 times. Right?
You are evil !!!
I think I know what AlphaGo’s “weakness” is: AlphaGo does not know how strong or how weak its opponent is. It has no realistic estimate of what moves its opponent is likely or unlikely to play given his or her strength. People have asked whether AlphaGo was designed to specifically defeat Lee Sedol or not, as if that is an accusation of wrongdoing. However, in reality, all human go players mentally prepare themselves to defeat an opponent of a certain rank. In pro games, they specifically study the games of their opponent. AlphaGo, however, does none of these things. Therefore, it does not know that its opponent is someone of Lee Sedol’s stature. It cannot therefore predict what level of responses its moves will elicit, regardless of whether playing them has a high probability of winning in its games against itself.
The question is: To what extent can such a weakness be exploited? If not much, then we might just have witnessed one of the last high-profile matches in which a human beats a machine.
AlphaGo has learned from thousands of games, including Lee Sedol’s games. CMIIW
They said before that it has no database, and just now that they didn’t use any of Lee Sedol’s games.
AlphaGo doesn’t need to know its opponent’s strength. It simply plays its best move based on the value (optimized for winning rather than points gained) its value net assigns to each possible move from the Monte Carlo search tree – limited by time constraints of course. Problem is moves like #78 are marginal moves – with maybe a winning percentage of 50.1%, i.e. it only works with a specific line of play in perfect sequence. These moves confuse AG’s value net because the range of possible response moves all register very marginal winning percentages, making it difficult for AG to select the right response. Hence the mistake at #79 as Hassabis has tweeted.
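A toy simulation of why such marginal moves are hard to rank (purely illustrative: the 50.1%/49.9% win rates, rollout count, and trial count are invented, and AlphaGo's real search is far more sophisticated than uniform rollouts):

```python
import random

def estimate_win_rate(true_p, n_rollouts, rng):
    # Monte Carlo estimate: fraction of simulated games won.
    wins = sum(rng.random() < true_p for _ in range(n_rollouts))
    return wins / n_rollouts

rng = random.Random(0)
# Two candidate responses whose true win rates differ by only 0.2%.
true_rates = {"move_a": 0.501, "move_b": 0.499}

trials = 1000
misranked = 0
for _ in range(trials):
    est = {m: estimate_win_rate(p, 2000, rng) for m, p in true_rates.items()}
    if est["move_b"] > est["move_a"]:  # the truly worse move looks better
        misranked += 1
print(f"worse move ranked higher in {misranked} of {trials} trials")
```

With 2,000 rollouts the standard error per estimate is about 1.1%, dwarfing the 0.2% true gap, so the comparison comes out wrong in a large fraction of trials, which is consistent with the idea that the value net sees only “very marginal winning percentages” among the responses.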
I am not sure I understand why you think the probability of a certain move leading to victory does not vary greatly depending on the skill level of the opponent.
It certainly varies, but it’s not part of AlphaGo’s considerations. It’s like a little kid who doesn’t know Lee Sedol, so it isn’t giving him any respect. :p So if there’s no good move available, it will even play cheap tricks that we know Lee Sedol will instantly see through. But even if it had played straight up, it would have lost at that point, so that in itself is not really a weakness of the AI.
There is an interesting question involved here: is there a single ideal way of playing Go, which would defeat any opponent not already using it? Or does it depend on your opponent’s skill and play style? It’s possible that AlphaGo might have turned things around if it knew not to use ‘cheap tricks’, and concentrated on searching for tricks falling in just the right skill range. In fact it might be that as the estimation of the opponent’s strength gets stronger and stronger, play style should keep on changing. If you think your opponent is many times stronger than you, then you should be extremely conservative, only making plays where you can be sure of the outcome. (And I suppose, you would probably resign the moment you got behind.) But such a conservative style might lose an otherwise easy game against an opponent weaker than you.
I think you’re right. A big part of AlphaGo’s loss here was that she does not reason as well when ahead *or* when behind. But another part was that her “probabilities” that she uses are based mostly on playing games against herself, so she hopes for opponents to make the same sorts of mistakes as she would. As I think someone else said, this means she will trade territory for win probability in ways which don’t match the opponent’s actual behavior.
AlphaGo’s play is beautiful, but it seems it loses around 25% of the time against a professional. Taking it to the next level will be hard.
Hard? It is inevitable that it will improve very quickly. It’s proved that it can improve very quickly just based on the improvements it made since the Fan Hui match. All it needs is computing power, which it has plenty of.
It’s true that she has been improving steadily by playing against herself. While I think that can go a long way further than it has, I fear there are fundamental issues which this doesn’t help much with. If she continues to be erratic at the high and low end of the probabilities, she will have trouble learning anything useful from herself about these situations.
I heard in one commentary that the developers have said AlphaGo cannot get much smarter by running on more computers, either.
If AlphaGo tried to have a model of her opponent, she would surely play better; but that’s such an inelegant thing to add! Part of the thrill of seeing the AI play is the alien genius.
I disagree that improving it will be hard. This is a minor hiccup. And I work for Google (not Deepmind though). Its models need to be tweaked but I think within 2 years it would be godlike — at least four stones ahead of every human player. Especially now that DeepMind wants to train a new Alphago without any reference to human games. This means it will play even more unexpectedly.
Redmond is highlighting 78 in the press conference – he seems to be saying that whole sequence was unexpected. I think that means it probably had a low policy network score, and wasn’t prioritized by AlphaGo as a possibility to explore.
New here… great to read all these comments after watching the game stream…
I think Jorik Mandemaker’s comment above makes sense. This might be a sort of horizon effect particular to Monte Carlo search. In chess, for instance, early engines suffered from extending the searched line with forcing moves so that some inevitable bad thing wouldn’t happen before the search of that line aborted at a certain depth. For instance, they wouldn’t mind a trapped piece or a fork if they just had enough checks to push the eventual capture over that search-depth horizon.
Now with MC search, it is at least very conceivable that in close situations where there is no clearly best move that brings a good result, the search would try a lot of stuff, and in one of those exchanges the refutation of the actual target move played later isn’t found (it’s still statistical sampling…). This results in a much better score for that exchange than everything else, and the bad move would be chosen.
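A back-of-the-envelope model of that “missed refutation” risk (assuming uniform random playouts, which AlphaGo's policy-guided rollouts are not; the branching factor, depth and playout count are invented):

```python
def miss_probability(branching, depth, n_playouts):
    # Chance that a single narrow refutation line (one specific reply at
    # each of `depth` plies, among `branching` plausible replies per ply)
    # is never visited by n_playouts independent uniform playouts.
    p_hit_per_playout = (1.0 / branching) ** depth
    return (1.0 - p_hit_per_playout) ** n_playouts

# A 4-ply refutation behind 5 plausible replies per ply, 1000 playouts:
print(f"{miss_probability(5, 4, 1000):.2f}")  # ~0.20
```

So even a thousand playouts miss such a line about 20% of the time, and a policy prior that down-weights those replies would make a miss more likely still.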
The only thing that can be done against weaknesses like this, which are intrinsic to the algorithm, is to throw more processing power at it. So you see, it’s still a matter of algorithms – the AI doesn’t see it through human eyes. Humans can dismiss moves like this by logical reasoning alone.
By the way, even though chess engines are nowadays superior to humans, they still are able to come up with atrocious moves at times. Vladimir Kramnik lost a famous game against Peter Leko in the 2004 World Championship match because his team didn’t check a prepared line suggested by the computer thoroughly enough. Leko then found the refutation over the board…
“Humans can dismiss moves like this by logical reasoning alone” – Really? I think humans can miss narrow continuations as well.
Yes sure, but there are things that top human players usually wouldn’t do. I was thinking about some moves by AlphaGo where it seemed to throw good money after bad – the attempt to make a dead three for White on the left side towards the end of the game, for example. Redmond said that this was (another) nonsense move.
So I think it’s possible that in situations where there are no really good moves, a move where the opponent sometimes dies in the MC simulation causes such a swing in the evaluation that the move might get chosen.
I take it that a top human player would not try stuff like that, reasoning that an able opponent would not make a mistake to cause such a swing.
Lee played for about one and a half hours in byo-yomi, one move per minute… this must say a lot about his speed of analysis, and it’s what makes his win more remarkable.
I don’t think it’s fair to say that Lee Sedol won because of AlphaGo’s mistakes, or a bug of some sort. It is true that it made a few moves that were “obviously weak”, but it only started doing that once it became of the opinion that it was losing. In that sense, any weird plays it starts making are somewhat beside the point, since in AlphaGo’s view the game is already lost, and that is entirely to Lee Sedol’s credit. As such, I think it is as strong a win as can be desired.
But what about those terrible seeming moves after AlphaGo realized it was losing? I don’t see any way it could have thought that those were increasing its win percentage. The whole win percentage thing is not how a human player thinks. There are plenty of moves in go that if you play them you will win 100% and if you don’t you will lose 100%.
Check out Dmitry’s response above.
My guess is that when it thinks it’s losing, suddenly “all” moves seem to have an almost equal probability of winning (remember it’s not going for most points, which is how I suppose a human would try to catch up). If one move, that happens to be a bad move, has a predicted winning percentage of 2.8% vs the “better” move with a 2.5% win rate, that 0.3% could conceivably be statistical noise.
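The statistical-noise point can be made concrete with the standard error of a Monte Carlo win-rate estimate (the 2.8%/2.5% figures are the hypothetical ones above; the 10,000-rollout budget is invented):

```python
import math

def win_rate_stderr(p, n_rollouts):
    # Standard error of a win-rate estimate from n Bernoulli rollouts.
    return math.sqrt(p * (1 - p) / n_rollouts)

gap = 0.028 - 0.025
# Noise on the *difference* of two independent estimates:
se_diff = math.hypot(win_rate_stderr(0.028, 10_000),
                     win_rate_stderr(0.025, 10_000))
print(f"gap = {gap:.3f}, stderr of difference ≈ {se_diff:.4f}")
```

The 0.3% gap is only about 1.3 standard errors at this budget, so the ranking of the two moves could plausibly flip from one search to the next.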
Or I should say… whether you win or lose is 100% determined by the follow-up moves. In Go problems it is often the most unlikely-looking move that works, and it will only work if a specific sequence comes after it. It’s not a matter of probabilities.
And I’m not sure the game was totally lost after Lee Sedol’s tesuji in the center. Even after AlphaGo’s blunders the commentators were saying the game was close.
Well, the game was close in their minds, not necessarily in AlphaGo’s. And the point isn’t whether the game was lost or not, it is only whether AlphaGo thought it was lost.
On a side note, I think Michael Redmond dropped a total of 9 stones throughout the game!
I’ve been wondering why AlphaGo didn’t hit the 10% resignation threshold earlier than it did.
A guess: Perhaps AlphaGo sees “too many” forcing moves left. And perhaps the fast rollout network might not be accurate enough, so it (mis-)estimates that the opponent would blunder at a relatively high rate, say once in 500 moves (0.2% blunder rate). Then AlphaGo won’t hit the 10% threshold until it reduces the number of forcing moves remaining.
Probability of the opponent never blundering = (1 − blunder rate)^(blunder opportunities), so AlphaGo’s probability of winning a lost game ≈ 1 − (1 − blunder rate)^(blunder opportunities)
0.1%: AlphaGo will keep playing until there are fewer than ~106 forcing moves
0.2%: …fewer than ~53 forcing moves
0.3%: …fewer than ~35 forcing moves
Might be part of why it did such desperate/silly looking things for so long.
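The arithmetic above can be checked directly (using the relation P(win) ≈ 1 − (1 − blunder rate)^n, i.e. AlphaGo only wins a lost game if the opponent blunders at least once among the n remaining forcing moves, and the guessed 10% resignation threshold):

```python
import math

def resign_horizon(blunder_rate, resign_threshold=0.10):
    # Solve 1 - (1 - b)**n >= threshold for n: the number of forcing
    # moves below which the estimated win rate drops under the threshold.
    return math.log(1 - resign_threshold) / math.log(1 - blunder_rate)

for b in (0.001, 0.002, 0.003):
    print(f"{b:.1%} blunder rate: plays on while "
          f"~{resign_horizon(b):.0f} forcing moves remain")
```

This reproduces the ~106 / ~53 / ~35 figures above up to rounding.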
I have expanded on these comments here: https://gogameguru.com/lee-sedol-defeats-alphago-masterful-comeback-game-4/#comment-13678
Why does AlphaGo resign? I want to know by how much the program loses. I’m interested in the territory calculation at the end of the game. If the program resigns, an exact count of the territory is not possible.
The program is not aiming for territory anyway whether it is winning or losing, so it doesn’t make any sense to compare.
The DeepMind team has said that AlphaGo resigns when it finds its winning rate below 20%, out of respect for its opponent.
Let AlphaGo give Lee Sedol 2 stones handi. And I’m sure Alpha go will still win
And you say that after today’s loss? Maybe you meant that Lee Sedol should give two stones to AlphaGo?
Wasn’t Lee essentially getting crushed? It seems that he got very lucky through a tricky move—a pattern his opponents know all too well. Yes, he won, but if the tricky move was objectively bad, this was not a convincing win.
It wasn’t a tricky move. It was the best move in that situation and AlphaGo didn’t manage to find it. In other words, there are positions in which Lee has a better understanding than AlphaGo.
I don’t know where you get the idea that Lee was essentially getting crushed prior to that move. The game was considered by analysts to be close and tight prior to that. Nor have I heard that move described as a “tricky move that was objectively bad”. At the time it was made, all the analysts praised it immediately when they saw that it opened up the game.
Michael Redmond did not consider the game close in territory. At that moment, black seemed to have an overwhelming lead in territory. If Lee’s move was objectively correct–great. If not, it’s not a convincing victory. Also, it seems black could have avoided the potential problem of 78 by giving way a bit at the top, while still maintaining a large lead.
No, Redmond was saying that black had the edge on territory on the board but would be hard-pressed to give komi. That was before the atrocious mistakes towards the end. It was always going down to the wire with komi.
Lee Sedol essentially took the initiative with 78, which effectively put paid to black’s claim in the centre. It was a near-perfect game, all the more remarkable when you consider he played half the game in byo-yomi.
The Chinese pros considered 78 a brilliant move, notwithstanding the weak 79. Even Redmond called it an “incredible move”. That was long before anyone was aware of any glitch on AlphaGo’s part.
I’m not going to re-examine the video, but what you are saying is flatly wrong.
Redmond thought the trade of top for right was terrible for white and left him far behind. The count I recall (from Redmond) was a 20 point difference on the board.
I’m a bit disappointed by the people who don’t want to retain objectivity about the game, simply because they are happy that Lee Sedol won.
Objectively, my understanding is that 78 was a terrible move which happened to have tricky lines which the computer did not analyze in time.
Anyone who has paid close attention to Lee’s go understands that he is the master of finding tricky moves which are not necessarily good, but which leave the opponent many opportunities to go wrong.
I am not impressed by this victory. I don’t believe Lee demonstrated superior play in this game, overall.
Sorting out the less than optimal commenting software here….
What does this have to do with how AlphaGo moved after the fact? I’d agree with you on the whole, but just as AlphaGo doesn’t care by how much it is winning, it doesn’t seem to care by how much it is losing.
Then your understanding is wrong. I have heard NO ONE say that 78 was a “terrible move”. But sure, in hindsight it was not a perfect move with no possible counters. But no move is ever going to be “perfect”. Even if a move were perfect, you could just point to prior moves and dismiss it as being set up by bad prior moves, and then claim Lee did not demonstrate superiority. In short, you are imposing an impossible standard for any player.
It was just a matter of time before Lee would find the key to AlphaGo. That’s the strongest trait of any human: the ability to adapt. If you froze this version of AlphaGo and gave it to the pros, then within one year they would all beat it easily.
That’s like saying you have to fix Lee Sedol at the level of knowledge he had before the match. The point of this type of AI is that the program adapts almost like a human. This last game showed up the differences — and weaknesses in the computer approach.
I felt the match win was very psychological. Lee seemed at first overconfident, then too tense and concerned with ‘having’ to beat a machine. Once he relaxed and the pressure was off, he mastered it. Programs don’t have that worry.
It won’t affect me. My toaster beats me at go!
The point was that Lee had absolutely no knowledge about AlphaGo. Even worse — he was mistaken about its strength. But just as humans have weaknesses, AI has its own. So it’s a matter of time before pros will exploit them.
You will probably be moving into Toaster 10.4 to hide from Fridge 7.5, but that is a different story and will be told another time.
The basic fact is that they seemed to say the neural network was held constant during the match, which I think is a bit unfortunate… but this is much easier for the humans involved on AlphaGo’s side.
Do you have a lot of money? Do you like to make bets on things you are certain of?
The problem with your argument is that AlphaGo is two years old. How well did you play Go when you were two?
1. I like to congratulate Lee Sedol for his excellent performance beating AlphaGo in Match 4.
2. I also want to congratulate AlphaGo for its willingness to behave like a human, with some kindness and gracefulness, after winning the competition prize.
3. I am happy to learn that AlphaGo, in addition to relying on standard neural network algorithms for its intelligence, also has a logical deduction module to guide its strategies.
4. Being an AI researcher myself, I would like to recommend that AlphaGo try using a more advanced theory of neural networks called Neural Logic Networks, which combines both pattern-matching functions and logical deduction functions in a single theory. This would make AlphaGo’s intelligence much more flexible and versatile, and hence able to solve a much larger class of problems, like intelligent human beings.
Much too soon. Please write a program using a logic net that beats AlphaGo, and then come back to us.
(Personally, I think logic has too much of a tendency to run away to be provably useful… and the only criterion I consider valid is utility. That is, beat AlphaGo.)
Well, the original poster is literally the textbook author on Neural Logic Networks, so Professor Teh may actually try.
Based on my limited understanding, I worry that DeepMind’s approach is kept relatively simple partly because the model must fit within certain restrictions – and these restrictions allow highly optimized computational methods (e.g. training via GPU-accelerated backpropagation and gradient descent). Those methods may not yet be available for Professor Teh’s field.
Actually, I was able to find a preview of Prof. Teh’s textbook and it does look like there are some learning algorithms available. Still, it would take some work to “catch up” to the architecture DeepMind has already been building.
As it is, the neural net approaches DeepMind uses are state of the art; the world is only just learning how to train deep networks, with most of the advances having occurred in 2015.
thank you for sharing
My heartfelt thanks to Younggil An and David Ormerod for hosting these articles, commentaries and videos on Go Game Guru. I look forward to the last match. Well done.
I would like to see a 5-game re-match between AlphaGo and Lee Sedol in 2018. Let both players train and learn from this experience and ultimately see who is the best. Considering that AlphaGo and Lee have each lost to the other, I consider this tournament even.
As a former practitioner of another “impossible task”, natural language web search, I would like to piece together my few observations here and on another forum, BoardGameGeek, interpreting the strategies and the reversal that happened in this match, together with a few proposals for further research in game AI.
It is conventional wisdom that to beat an AI one must win the strategic contest from the very beginning, and not get drawn into a tactical dog-fight. But here it is the AI that’s winning the opening variations, by virtue of having explored by brute force more opening variations than the sum of human knowledge. _That_ is AlphaGo’s real strength. I was excited to see Lee put up a good tactical fight, especially the ending ko-fight, but I thought at move 23 Lee should have taken G13; with move 30 (G13) AlphaGo effectively blocked him off for good; Lee had already lost strategically, and then it was just “a matter of technique” for AlphaGo. Is human Go out of balance, favoring edge-play tactics? Perhaps Go players should also learn the connection game Hex (as yet unconquered by AI), to re-emphasize connecting to the center. My hexagonal Go/Weiqi variant called Weilian (“surround-connect”), which mixes Go’s edge surround-game with Hex’s center connect-game, may even be relevant. In these three games I really felt an affinity to AlphaGo, which clearly put more emphasis on this than Lee did.
Demis Hassabis: “… #AlphaGo value net was around 70% at move 79 and then dived on move 87.” Lee’s move 78 (L11) was credited with confusing AlphaGo, leading to a series of errors. My interpretation (fantasy): it was set up by Lee’s earlier, “questionable” move 46 (K13), at the center of, but not close to, the surrounding action. At the time I cheered “A Hex move!” while it was roundly dismissed by commentators. After three defeats at the board-middle connection game, Lee figured it out and deliberately adjusted his strategy. Possibly, AlphaGo found a winning line for White (disconnecting Black down the middle) and, always assuming its opponent would play perfectly, started futilely playing against itself, unable to choose better moves in a (it thinks) losing position.
This is being prepared as game 5 progresses; Lee seems to have a good balance between edge and center.
1:33 I’m not getting real-time but it seems to me (a lay person) if Lee gets a stone or two to around H11 first he would win the connection fight and the game.
I think on boards larger than 19×19, connecting to or through the center becomes more important strategically, and AlphaGo would enjoy a temporary advantage over modern players who have grown more focused on edge tactics than center influence. Witness Lee’s first three games: in each he lost the center fight by neglecting to play an extra stone when he had the chance. In one game AlphaGo even sacrificed the upper right corner to win the center connection fight. But Lee adjusted and won game 4. In other words, strategy-wise the center plays more like the connection game Hex, especially on a larger board. Human players making the adjustment will defeat AlphaGo until it learns to play Hex better, so to speak.
Next post will be a few proposals for further research in game AI.
— William Chang (former Baidu chief scientist, Infoseek chief architect and CTO)
William Chang, while the strongest Hex player is still a human, only a fraction of the effort has been spent on making strong Hex players compared to Go – and even if Hex were a more “human friendly” game than Go (which it might well be), we humans haven’t spent nearly as much time getting good at it as we have at Go. So a program will probably defeat the strongest player the moment someone adapts Go-AI approaches well.
But by all means, I love Hex, so if either Go players or Go engine programmers gave it attention, I would be quite pleased 🙂
Harald, your reply is very sensible indeed, though I think Hex still presents quite a challenge for AI, no less than Go, and as you say humans have room to get a lot stronger. But please see my proposals for game AI research and classic-game variations that may be interesting to play and more difficult for AI. — William
4:38 Well, he didn’t make a strategic “Hex move” near H11 early enough as I had hoped, eventually lost the connection fight there at the left-center and could not recover.
Here are some tasks for AI still within the games realm:
(1) Learn to play Hex expertly without external training in the form of coded strategies, tactical rules, game databases etc. Playing against human possibly allowed.
(2) Invent bidding conventions for Contract Bridge, and learn to play expertly even with disinformation (allowed but governed by Bridge rules), against or partnered with “real” players with various attributes.
(3) ZillionsOfGames has a general game-playing AI and over 1000 different games with codified rules; learn to play them better, and be required to demonstrate the learned/discovered strategies and tactics in a succinct form, not just brute-force “solving them” with a game tree.
(4) Consider variations of classic games that are more strategic than tactical, retaining the basic nature of the games so players can build upon known deep strategies, while increasing their computational complexity; will AI be able to adapt? I will give a list of specific proposed variations.
A gamut of classic-game variations:
(1) Go 23×23 or 25×25, center connections and influence gaining importance.
(2) Weilian (“surround-connect”) hexagonal Go with a sparse ring (every other point) removed from the third line, reducing liberties and forcing single-eye edge groups to try to connect through the center.
(3) Othello/Reversi: capture has to be made in at least one direction, the remaining directions are by choice, capture along each direction is all or none; large branching factor multiplier, fewer unwanted board changes adding clarity. Also, can be enlarged to 13×13 or 18×18 with 4 or 9 initial islands evenly spaced.
(4) Equalizing Mechanism Proposal: Free-move option, instead of pie-rule or komi.
Go has komi to compensate for the first-player advantage. Hex has what’s called the pie rule or swap rule, so the tentative first player has to propose a bad first move or else get swapped. Go-moku (five-in-a-row) has a solved first-player win, so Renju places several restrictions on the first player only. In chess, White (first) plays to win; Black (second) tends to play for a draw. Besides equalizing the first-player advantage, this mechanism shifts gameplay slightly from defense toward more offense, and adds interesting strategic and tactical considerations. Here is the proposed mechanism:
(*) The player who has made the fewer moves has the option to make a free move subject to game-specific constraint.
If taken, the free-move prerogative then goes to the other player — say a token changes hands — and the game continues; if not taken, the option can be indefinitely kept from the other player. The game-specific constraint on the double-move is key, yielding a tempo but not spoiling the base game’s tactical play.
(a) komi-Free Go: double move cannot be adjacent (this preserves ko) or diagonal or together capture an enemy group.
(b) Free Hex: double move cannot be adjacent; this preserves so-called “bridge diagonal” connection.
(c) Grey Chess: double-move cannot be made by the same (or promoted) piece; so no capture-then-retreat.
(d) Connect5 (Go-moku): double-move cannot together create a free-4 or block a free-4.
I believe these classic-game variations have the potential for intuitive play and deeper strategy while presenting additional challenges for AI.
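A minimal sketch of the free-move token mechanic, under one reading of the proposal (the class and method names are mine, the rule that either side may make the first claim is my assumption, and the game-specific double-move constraints (a)–(d) are omitted):

```python
class FreeMoveOption:
    """One reading of the proposed equalizing mechanism: the player who
    has made fewer moves may play a double move; if taken, the
    prerogative (a token) passes to the other player."""

    def __init__(self):
        self.moves = {"black": 0, "white": 0}
        self.holder = None  # None: unclaimed, either side may claim

    def may_take_free_move(self, player):
        other = "white" if player == "black" else "black"
        behind = self.moves[player] < self.moves[other]
        return behind and self.holder in (None, player)

    def record_move(self, player, free=False):
        if free:
            # Must still qualify at the moment the double move is played;
            # a game-specific constraint on the two stones would go here.
            assert self.may_take_free_move(player)
            self.moves[player] += 2
            self.holder = "white" if player == "black" else "black"
        else:
            self.moves[player] += 1
```

After Black's first move, White is behind on moves and may double-move; doing so hands the token to Black, who (now behind) could double-move later, and so on. Keeping the option from the other player indefinitely just means never taking the free move.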
In the interest of fairness and enhanced performance for all:
(1) If the AI can add hardware to buy time, then the human player should not be clocked.
(2) Humans should play as a team.
(3) Any opening book should be shared, i.e. open book.
On the game Weilian (“surround-connect”): hexagonal Go with a sparse ring (every other point) removed from the third line, reducing liberties and forcing single-eye edge groups to try to connect through the center.
(Originally posted to rec.games.abstract in 2001; the “rules” are one word: “Go”.)
My not-even-amateur Go edge-play consists of stones battling for survival racing to surround enemy stones before getting surrounded. In Weilian this tactical aspect is if anything even sharper. However, two eyes are hard to make at the edges, as groups are bottlenecked from simply growing toward the center by the sparse ring of obstacles. Edge groups (“armies”) then have to try to squeeze through gaps (“mountain passes”) to join up in the middle (“central plains”).
Shortly after writing the above on BoardGameGeek.com, I had a moment of epiphany, or pure fantasy. To me, Weiqi (Go) is the closest we have to a perfect abstract game. And I firmly believe games are abstractions that simulate social competition or phenomena. Yet there are aspects of Weiqi that have made me uneasy, half subconsciously. On the surface: why is the board so large as to confound novice players (especially children, but also de novo AI) as to where to start playing? Deep down: was there ever a proto-Weiqi that more closely represented the expansions and movements not only of armies, but also of tribes and peoples in ancient times? Are there any clues of such a theme in our quintessential abstract game?
In Search of Proto-Weiqi
A recurring, defining characteristic of China’s ethnic-cultural and geo-political history over at least 3000 years is (simplistically) given in the quoted text. Mountain ranges and their Great Walls (added to by each dynasty) kept a myriad of nomadic invaders “Outside the Passes”, away from the Central Plains. Often the famously-named passes and the fortresses guarding them were breached, a dynasty fell, and peoples and cultures assimilated one another.
The square (“sifang”) Weiqi board is a fair representation of the Four Directions under Heaven (also “sifang”). The board middle is commonly called the Central Plains, the “stomach land”. Invasions come from the edges, or “borderlands”. But where are the mountainous obstacles impeding them? Where are the passes that must critically be controlled, first and always? These are not found on the Weiqi board, but a clue is left. Historically in China, the Weiqi board initially had four stones placed at the four corner star points, there guarding the passes or initiating battles to control them — even if the mountains have long been abstracted away.
Maybe, just maybe, obstacles and passes as found on the Weilian board were there at the beginning, on the proto-Weiqi board, that has long gone with the wind….
this is really nice, thank you for sharing!