Chinese chess, also known as Xiangqi, is a different game:
https://en.wikipedia.org/wiki/Xiangqi
I remember years ago Larry Kaufman talked about how it would be almost impossible for a computer to play Go at the Grandmaster level.
Same thing used to be said about chess.
But most engineers knew it was only a matter of time (for both games).
---
One of the biggest stories about AlphaGo, though, is that this level of play wasn't expected for at least 10 more years, and that the method, deep learning (or something like it), may be applicable to many things outside of Go, unlike chess-playing programs.
Not really accurate to describe Go as Chinese chess.
Anyway, AlphaGo won the five-game match against Lee Sedol: 4-1.
Go is not as difficult as some people want it to be.
If computers didn't beat the world champion before now, it's because nobody cares about Go.
Go is not as difficult as some people want it to be.
Yeah, I think that's true.
I read an article saying "Go is many times more complex than chess."
And I'm thinking, so is basketball 10x as complex as golf?
Articles written by people who don't play either game, for people who don't play either game.
Go is not as difficult as some people want it to be.
If computers didn't beat the world champion before now, it's because nobody cares about Go.
Difficult to whom or what?
The point is that Go is a huge challenge from a computational perspective. It's difficult to come up with algorithms that allow computers to play the game proficiently with current hardware.
The only viable solution was to create a program with the ability to learn the game. That alone is a huge accomplishment, given the game's complexity.
Now that the feat seems to have been accomplished, it's tempting to say that the game is not that difficult. But that's not really true for AI: AlphaGo still had to run on 1202 CPUs and 176 GPUs to beat the human champion, as you can confirm at the link above. This is not a program you can just run on your PC at full strength, like chess engines.
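To put rough numbers on "huge challenge from a computational perspective," here's a quick back-of-the-envelope sketch in Python. It uses the commonly cited branching factors and game lengths (roughly 35 moves over ~80 plies for chess, ~250 moves over ~150 plies for Go); these are standard published estimates, not figures from this thread:

```python
import math

def tree_order_of_magnitude(branching: int, depth: int) -> float:
    """Order of magnitude of the naive b^d game-tree estimate."""
    return depth * math.log10(branching)

# Commonly cited estimates (assumed here, not measured):
print(f"chess: ~10^{tree_order_of_magnitude(35, 80):.0f}")    # ~10^123
print(f"go:    ~10^{tree_order_of_magnitude(250, 150):.0f}")  # ~10^360
```

That gap of a couple hundred orders of magnitude is why the brute-force search that works for chess was never going to scale to Go.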
The complexity of a game can be objectively measured by the number of classes that separate a beginner from the champion, where a class is defined as a group of players who can meaningfully play each other without the outcome being almost a certainty (say, a score expectation of less than 85%, or about 300 Elo). For chess there are about 7 such classes. For Go there seem to be some 40.
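For the curious, the 300 Elo / 85% correspondence in that definition checks out under the standard logistic Elo formula; a minimal sketch (the 7-vs-40 class counts are the poster's estimates, not computed here):

```python
def expected_score(elo_gap: float) -> float:
    """Standard logistic Elo expectation for the stronger player."""
    return 1.0 / (1.0 + 10.0 ** (-elo_gap / 400.0))

print(round(expected_score(300), 3))  # 0.849, i.e. the ~85% cutoff
# Seven ~300-Elo classes span ~2100 Elo, roughly beginner to world champion.
print(7 * 300)                        # 2100
```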
And, as mentioned before, Go has nothing to do with Chinese chess. You might as well have written: "Computer finally beats world chess champion"...
Wait 6 to 8 months lol :)
The fact that Lee won one game shows that Go programs haven't quite reached the level of Western chess programs like Stockfish; I haven't heard of a single game in the last five years where a human GM has even gotten a draw against a top program.
I have brought this up a couple of times, but I've read that correspondence grandmasters play several hundred points above OTB GMs because they have days to make a single move. Why is there no interest in seeing the top correspondence GM take on Stockfish or some other program? Has it ever happened?
The complexity of a game can be objectively measured by the number of classes that separate a beginner from the champion, where a class is defined as a group of players who can meaningfully play each other without the outcome being almost a certainty (say, a score expectation of less than 85%, or about 300 Elo). For chess there are about 7 such classes. For Go there seem to be some 40.
Ok, but how many months does it take to go from 30 kyu to 15 kyu...
I mean, I guess there are chess-playing kids who play at a level of 100 Elo (somehow).
But I'd like to use more realistic ranks.
Also, it's a bigger board and a longer game, so there's more opportunity for a weaker player to be outplayed. That isn't necessarily a good measure of the complexity of the play; you could simply relate it to the length of play.
For example, imagine every chess game were a best of four (two games with each color), or that every time you played someone of a certain rating, you had to play three more at that rating. Ratings would spread out; see the sketch below.
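Here's a toy sketch of that spreading effect. It assumes a fixed per-game win probability for the stronger player, ignores draws, and uses odd-length matches to avoid ties (a simplification of the best-of-four example). The per-game edge stays the same, but the effective, match-level Elo gap grows with match length:

```python
from math import comb, log10

def match_win_prob(p: float, n: int) -> float:
    """P(strictly more than n/2 wins out of n independent games)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def implied_elo_gap(q: float) -> float:
    """Elo gap whose expected score equals q (inverse logistic)."""
    return -400 * log10(1 / q - 1)

p = 0.60                              # per-game edge, roughly 70 Elo
for n in (1, 5, 15):                  # single game vs longer matches
    q = match_win_prob(p, n)
    print(n, round(q, 3), round(implied_elo_gap(q)))
    # n=1: ~0.600 -> ~70 Elo; n=5: ~0.683 -> ~133; n=15: ~0.787 -> ~227
```

In other words, longer contests mechanically stretch the rating scale even if the per-move skill difference is unchanged.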
AlphaGo still had to run on 1202 CPUs and 176 GPUs to beat the human champion, as you can confirm at the link above.
Sorry, I failed to clarify that this version of AlphaGo was the one that played Fan Hui.
The version that played Lee Sedol ran on 1920 CPUs and 280 GPUs according to an article in The Economist. Not sure how accurate it is, though.
http://www.economist.com/news/science-and-technology/21694540-win-or-lose-best-five-battle-contest-another-milestone
I remember years ago Larry Kaufman talked about how it would be almost impossible for a computer to play Go at the Grandmaster level because it was so much more complicated than Western chess. Well, folks, it has finally happened.
Google’s AlphaGo program beat grandmaster Lee Se-Dol at Go.
Lee, who has been declared world champion of Go eighteen times, entered the match predicting he would win all five games… and ended up hoping he could beat the odds to win a single game.
http://www.breitbart.com/tech/2016/03/14/computer-defeats-gaming-grandmaster-ten-years-ahead-of-schedule/