A lot of Go players are interested in computer Go.
I had questions of my own, and I also didn't know how to answer some of the questions that readers were asking at Go Game Guru.
I’m certainly no expert on computer Go, but I talked to someone who is: Martin Müller from the University of Alberta in Canada. Here’s what Martin had to say…
An interview with Martin Müller
David Ormerod: To start with, please tell us a bit about yourself and your research interests. How did you learn of Go and how did you become involved in computer Go?
Martin Müller: I am a professor in the Department of Computing Science at the University of Alberta in Edmonton, Canada.
My research interests are in heuristic search, studying how to solve large, complex problems using computer searches.
The main application areas studied in my research group are games such as Go, and automated planning.
In recent years, Monte Carlo search methods have been our main focus – both for games and for planning. As part of my game-related activities, I am the leader of the team developing the open source software Fuego, which was the first program to defeat a top professional in an even game on 9×9.
I learned Go when I was 15 years old and played a lot in my teens and early twenties. I am a 5, 6 or 7 dan amateur player, depending on the country. My biggest success was probably taking 2nd place at the US Go Congress Open tournament in 1998.
I became interested in computer Go as an undergraduate in my home country of Austria, through my supervisor. This was around 1985. I have stayed with the topic ever since, doing a Diploma thesis, a PhD and a few postdocs, before getting my current job.
What’s Monte Carlo?
Most people with any interest at all in computer Go know that the strongest programs these days use a ‘Monte Carlo’ algorithm, but many people don’t know much more about it than that.
Could you briefly explain where the term Monte Carlo came from and what it means in this context?
The term Monte Carlo refers to an affluent district of Monaco which is famous for its casino. Monte Carlo methods use statistics collected from randomized simulations as a way to analyze complex systems which are too hard to ‘solve’ by other means.
They were first developed for nuclear physics and atomic bomb research in the 1940s. Nowadays they are very widely used, but their application to games such as Go took off just a few years ago.
Now that computers are powerful enough, Monte Carlo methods are used across a wide variety of disciplines.
For example, I’ve used them at work to help with risk analysis. It’s often difficult to explain to people why this approach works though, because it seems so counterintuitive at first.
Do you have a good analogy to explain how a large enough number of random simulations can provide a useful answer to a question?
Statistical sampling, which is at the core of Monte Carlo methods, is a very powerful technique.
For example, think about opinion polls. Any single random person who you ask about their opinion may be completely crazy, but if you ask one thousand people, who are carefully selected to represent the overall population, then you get quite a good idea of the general mood and can use that to make informed decisions.
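The opinion poll analogy is easy to try out yourself. Here's a minimal sketch in Python (the `TRUE_PROPORTION` value and the `poll` helper are just illustrative assumptions, not anything from a real Go program): we simulate a population where 60% of people hold some opinion, and watch the estimate from random samples converge toward the truth as the sample grows.

```python
import random

random.seed(42)

# Assumed for illustration: 60% of a large population holds some opinion.
# We can't ask everyone, so we estimate the proportion from random samples.
TRUE_PROPORTION = 0.60

def poll(sample_size):
    """Ask sample_size random 'people'; return the observed proportion."""
    yes = sum(1 for _ in range(sample_size) if random.random() < TRUE_PROPORTION)
    return yes / sample_size

# A single respondent tells us almost nothing; a thousand tell us a lot.
for n in (1, 10, 100, 1000, 10000):
    print(f"sample of {n:>5}: estimate = {poll(n):.3f}")
```

Small samples bounce around wildly, but the large ones settle close to 0.60. A Monte Carlo Go program exploits the same effect: any single random playout is meaningless, but the win rate over thousands of playouts is a usable estimate of how good a move is.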
This is why we keep getting more and more of those pesky phone calls doing surveys at dinner time!
How computer Go programs improved
It’s been more than five years since UCT (Upper Confidence bounds applied to Trees, an extension of Monte Carlo search) was first applied to Go, but the strongest programs were still at the kyu level not that long ago (at least on 19×19 boards).
In contrast, the strongest programs these days are dan level and they seem relatively sharp, even in tactical situations.
To what extent do they make use of heuristics for shape, tesuji, life and death, the opening and so on?
Many programs use learned local patterns such as 3×3 for simple shape, and they modify the playouts to avoid some bad tactical moves.
Also, when there is a single important fight going on, the full board search will be able to analyze it quite deeply, and do well in the tactics. The problems start when there are several fights going on at the same time.
For the opening, some programs simply use large-scale patterns to imitate popular openings played by human experts. But usually those are not absolute rules. These moves simply get a bonus, but the search can override them. So it is better than the hard-coded ‘expert systems’ of the 1980s.
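The idea of a pattern bonus that the search can override can be sketched in a few lines. This is a hypothetical, heavily simplified scoring function in the spirit of UCT-style move selection — the function name, parameters and weights are all illustrative assumptions, not Fuego's actual formula. The key property is that the pattern prior matters when a move has few simulations, but fades as real playout statistics accumulate.

```python
import math

def selection_score(wins, visits, parent_visits, prior_bonus,
                    exploration=1.0, prior_weight=10.0):
    """Hypothetical UCT-style score: win rate + exploration + fading prior."""
    if visits == 0:
        return float('inf')  # always try unvisited moves at least once
    win_rate = wins / visits
    exploration_term = exploration * math.sqrt(math.log(parent_visits) / visits)
    # The pattern bonus is diluted as simulation data accumulates, so it
    # biases the search early on but never overrides strong evidence.
    prior_term = prior_weight * prior_bonus / (visits + prior_weight)
    return win_rate + exploration_term + prior_term

# A move matching a popular opening pattern starts with a bonus, but if
# simulations show it losing, a plain move with better results wins out.
pattern_move = selection_score(wins=20, visits=100, parent_visits=1000, prior_bonus=0.3)
plain_move   = selection_score(wins=60, visits=100, parent_visits=1000, prior_bonus=0.0)
print(pattern_move, plain_move)
```

Here the plain move scores higher despite having no pattern bonus, because its 60% simulated win rate outweighs the prior — exactly the "bonus, not absolute rule" behaviour described above.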
What other changes and improvements have helped computers get to their current mid-dan level on larger boards since then?
I think many factors are involved. Better patterns and rules as above, better search, better parallel scaling, several years of testing, debugging and tuning the programs, and better hardware all help.
What are the pros and cons of combining a knowledge based approach with a Monte Carlo approach?
Pros: it can patch up the weaknesses in the simulations to some degree. It helps the search focus on ‘probably good’ moves.
Cons: you lose speed, and may introduce other biases in the search.
Computers vs humans in the game Go
What are the top Go programs currently good or bad at compared to human players of a similar level?
For example, they seem to be better at fast games (which surprises many people). They also seem to have problems with wasting ko threats and with fighting ko in general.
Do you have any theories about why this is?
The programs are generally good in overall balance and counting. They know what it takes to win and will not lose quietly or be overly aggressive when they are ahead. They will also avoid most blunders and never get tired.
There are many situations where humans have specialized knowledge, but current simulations are not good enough to solve them.
If these remain out of reach of the search, then the programs get into trouble. Examples are seki, semeai, ko, nakade, or situations with several weak-looking but settled groups of stones on the board.
And yes, the problem of wasting ko threats is particularly annoying for the programmer.
Will computer Go strength scale?
Since the strength of modern Go programs scales much better than it did before, does that mean it’s just a matter of waiting for hardware to improve enough to challenge professional Go players?
Or do you expect the current approach to plateau at some point, until a new breakthrough is made?
Experience in many other games tells us that steady progress in algorithms combined with faster hardware will eventually be enough. I don’t think that Go is ‘special’ in that regard.
Still, I would like to see one or two more ‘big ideas’ emerge in order to make faster progress. Maybe one more big idea is enough to seriously challenge humans on 19×19.
(The graph above was created by Don Dailey in 2008.)
What sort of problems or improvements are Go developers and researchers currently working on?
Many people are working on practical, step by step improvements such as better knowledge for playouts, or better parallel scaling.
Personally, I am very interested in adding a third dimension, local search, in addition to global (full board) search and simulations. This is what I do as a human player, and it allows me to keep track of several simultaneous fights.
However, it is a very difficult problem and it’s hard to come up with effective algorithms that work in a similar way.
What do you think we can learn, or have learned, from AI research into Go, that can be used in other areas?
The main surprise for me is how it works so well, with comparatively little domain-specific knowledge. I believe this is very encouraging news for people working on other hard problems where domain knowledge is difficult to get or to formalize.
In the last few years we have seen an explosion of new applications of Monte Carlo search methods, not just in other games, but also in areas such as energy management, optimization and automated planning.
What can people (particularly young people) do if they have an interest in computer Go and want to learn more about it?
There is a lot of information out on the internet, but it is not particularly well organized and much of the material is outdated or of poor quality.
The best way to learn would be to study with one of the active research groups, such as the MoGo group in Paris Sud, the Dutch groups in Maastricht and Tilburg, or of course our own group at the University of Alberta.
Thanks for taking the time to talk to us today.
You are very welcome.
For further reading you might also want to check out Martin’s computer Go blog, results and analysis from Zen19’s recent match with John Tromp and Chris Ball’s recent article: Computers are [now] very good at the game of Go, which was published last week.
As usual, please feel free to leave comments and questions below.