AlphaZero vs Stockfish Games


In Game 8, after 53.Ke3, why would Stockfish throw away a pawn with 53...h5? The Stockfish engine here immediately ranked all three black king moves as better than 53...h5, and the eval for this pawn move doesn't climb out of 4th place even after 10 minutes, while the game was played at 1 minute per move.
Because that Stockfish was evaluating 80 million positions per minute, compared to a few thousand on your machine. Give it 4 to 5 hours and it will come to that conclusion.
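
For anyone who wants to reproduce this kind of check, here is a minimal sketch using the python-chess library, assuming a Stockfish binary on your PATH; the Game 8 position after 53.Ke3 isn't given in this thread, so you would set it up yourself.

import chess
import chess.engine

board = chess.Board()  # placeholder: set up the Game 8 position after 53.Ke3

with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    # multipv=4 asks for the top four candidate moves, so you can see
    # where 53...h5 ranks at the match's 1-minute-per-move control.
    infos = engine.analyse(board, chess.engine.Limit(time=60), multipv=4)
    for info in infos:
        print(info["multipv"], board.san(info["pv"][0]), info["score"])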
With one engine limited to 1 GB of RAM and 1 minute per move, these games are closer to the definition of stupid games created to promote a Google product.
Do you mind linking to your source? I can't find the actual paper, and I can't find anything that tells me the specifications.
Well, we just have the information presented in the paper, and that is one issue. Another issue is the use of an opening book. There is an argument that since AlphaZero learned chess on its own, it has its own opening book of sorts; Stockfish, however, which is essentially a brute-force calculating engine, needs an opening book to level the playing field. Also, the Google team's paper does not explicitly mention which version of Stockfish was used to play against AlphaZero.
Can you please explain the opening book argument? I don't get it! Does it mean that it "memorizes" hundreds of openings and does not evaluate other lines of moves? How does it help?
Brute-forcing opening positions is very demanding: lots of pieces, lots of possible moves. Standard computer engines like Stockfish rely on a preprogrammed opening book, so there is no need for brute force; the engine can just pick one of the moves from the database. AlphaZero apparently doesn't have a preprogrammed opening book, but it kinda created one for itself, through machine learning. Obviously, Stockfish cannot do that and is severely hampered without an opening book.
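
To make the book lookup concrete, here is a minimal sketch of a book-then-search fallback using the python-chess library; "book.bin" is a hypothetical Polyglot opening book file, and a Stockfish binary on your PATH stands in for the brute-force search.

import chess
import chess.engine
import chess.polyglot

def pick_move(board: chess.Board) -> chess.Move:
    # While the game stays "in book", the move comes from a precomputed
    # table keyed by position, so no search is needed.
    try:
        with chess.polyglot.open_reader("book.bin") as reader:
            return reader.weighted_choice(board).move
    except (FileNotFoundError, IndexError):
        pass  # out of book (or no book file): fall back to full search
    with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
        return engine.play(board, chess.engine.Limit(time=60)).move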

AlphaZero apparently doesn't have a preprogrammed opening book, but it kinda created one for itself, through machine learning.
Gives new meaning to "opening preparation".
Hello,
4r1kq/p2prp1p/5RpP/2p5/7Q/1B4P1/P4PK1/8 b - - 0 49
Could you explain this bad move by Stockfish 8: rook to f8 (evaluation -50) vs. king to f8 (equality)?
Thanks
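
If you want to check this yourself, here is a minimal sketch (assuming python-chess and a Stockfish binary on your PATH) that evaluates each of the two candidate moves in the FEN above on its own:

import chess
import chess.engine

# The position from the FEN above, Black to move.
board = chess.Board("4r1kq/p2prp1p/5RpP/2p5/7Q/1B4P1/P4PK1/8 b - - 0 49")

with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    for san in ("Rf8", "Kf8"):
        move = board.parse_san(san)
        # Restricting the search to a single root move yields that move's
        # standalone evaluation, reported from Black's point of view.
        info = engine.analyse(board, chess.engine.Limit(time=10),
                              root_moves=[move])
        print(san, info["score"].pov(chess.BLACK))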

The idea that AlphaZero has taught itself chess is really interesting. If I were to do the same I wouldn't succeed in such a way :-) It's really interesting to find out why. Apart from the fact that silicon is not brain cells, what is the mechanism that makes this possible for AZ?

I wouldn't be surprised if Stockfish didn't break ply depth 20-30 in its 1-minute turns. What a joke. AlphaZero versus ICCF champions in a game of correspondence chess = AlphaZero losing every time.
True, that's just some chess gimmick. Also, Stockfish was limited to 1 minute per move, and AZ was running on a SUPERCOMPUTER compared to SF, which was running on something as small as a laptop.

I wouldn't be surprised if Stockfish didn't break ply depth 20-30 in its 1-minute turns. What a joke. AlphaZero versus ICCF champions in a game of correspondence chess = AlphaZero losing every time.
It was running 64 threads, which I infer means 32 cores. I am not sure what they used, but it could have been one of Intel's new massively multicore Xeon processors. (That is not a "laptop", raghavsan.) Thus 1 minute was equivalent to quite a long time per move on a typical fast multicore machine.
Also, AlphaZero was getting stronger with extra time per move at a much faster rate than Stockfish, according to the data.

But still, a supercomputer is certainly bigger than that.
Yes. These days the top supercomputers are like a million PCs, not like a hundred.

This is much bigger than chess. The DeepMind team doesn't care if it was a fair match or what anyone in the chess community thinks. They proved their point. What's next?