How good is your computer for opening preparation with Stockfish?

drmrboss

See the difference between amateurs and pros. Haha. 

buffqueen

preparing with a supercomputer
damn

drmrboss
buffqueen wrote:

preparing with a supercomputer
damn

He will know whether it's mate in 60 or a draw after 25-move-deep opening preparation, lol! 

DasBurner

his computer is bigger than my apartment

drmrboss
LaryMulya wrote:

Chess engines are tactically strong and have pretty decent positional understanding in most positions, but we know that they are pretty bad at evaluating closed positions and shouldn't be trusted in closed center openings like the King's Indian Defense.

Your assumption is completely wrong in 2021.

Top engines like Leela and Stockfish (similar to AlphaZero) are all Artificial Intelligence engines with neural-network evaluation. These AI engines have knowledge/statistics of wins and losses based on millions to billions of training games that include both open and closed positions. (These engines don't use material evaluation anymore!)

None of these top engines performs poorly in closed positions such as the King's Indian Defence compared to open positions like the Sicilian.

 

Show me a game where Stockfish played poorly in the King's Indian Defense. 

 

drmrboss

That is server Stockfish, which is not full-strength Stockfish. Period. 

I will bet at 100-to-1 odds against any human who can beat Stockfish on a common 4-core desktop running the developers' recommended configuration. (I will tell you which settings and which version of Stockfish to use.)

 

Let me tell you something. Do you think people like Nepo are idiots for spending thousands of dollars per hour on supercomputers while you still believe humans are superior in the opening?

I suggest you drop your outdated assumptions from 20 years ago and see how much engines have progressed since then. 

 

samuelebeckis

In fact, beating Stockfish means nothing by itself; much depends on which engine you run, and that will make the difference. Stockfish is a normal program, not an AI. Billions or even trillions of recorded positions mean nothing: after 4 or 5 moves you'll be out of your 'giant' book. The engine used is the point. Yes, the flaws of 20 years ago have (mostly) been fixed.

drmrboss
samuelebeckis wrote:

In fact, beating Stockfish means nothing by itself; much depends on which engine you run, and that will make the difference. Stockfish is a normal program, not an AI. Billions or even trillions of recorded positions mean nothing: after 4 or 5 moves you'll be out of your 'giant' book. The engine used is the point. Yes, the flaws of 20 years ago have (mostly) been fixed.

 

Stockfish has been an AI, i.e. a neural-network engine, since August 6, 2020.

Read this. 

 

https://en.wikipedia.org/wiki/Efficiently_updatable_neural_network
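The "efficiently updatable" part of that Wikipedia article can be sketched in a few lines. This is a toy illustration, not Stockfish's real network: the feature layout, sizes, and weights below are all assumptions. The point is only that the first-layer accumulator is a sum of weight vectors for the active (piece, square) features, so a move needs one subtraction and one addition instead of a full recompute.

```python
import math
import random

# Toy sketch of the NNUE accumulator idea (assumed layout and sizes).
random.seed(0)
N_FEATURES, HIDDEN = 768, 8   # e.g. 12 piece types x 64 squares; toy hidden size
W = [[random.gauss(0, 1) for _ in range(HIDDEN)] for _ in range(N_FEATURES)]

def full_accumulator(active):
    """Recompute the hidden-layer accumulator from scratch."""
    acc = [0.0] * HIDDEN
    for f in active:
        for i in range(HIDDEN):
            acc[i] += W[f][i]
    return acc

def update_accumulator(acc, removed, added):
    """Incrementally update after a move: only the changed features are touched."""
    return [a - W[removed][i] + W[added][i] for i, a in enumerate(acc)]

# A piece moves from (hypothetical) feature index 6 to feature index 21.
acc = full_accumulator({6, 100, 200})
acc = update_accumulator(acc, removed=6, added=21)

# The cheap incremental update matches a full recomputation.
assert all(math.isclose(a, b) for a, b in
           zip(acc, full_accumulator({21, 100, 200})))
```

This incremental trick is why NNUE evaluation is fast enough to run on every node of an alpha-beta search on a plain CPU.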

 

Yes, Stockfish's evaluations are based on statistics: learning and winning chances.

E.g. +1.0 does not mean Stockfish sees a one-pawn material advantage; she is evaluating her winning chances at roughly 55% based on her training experience. 
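That evaluation-to-winning-chances idea can be sketched with a logistic curve. The scale constant below is an illustrative assumption, tuned so that +1.0 comes out near the 55% mentioned above; it is not Stockfish's actual internal formula.

```python
import math

def win_probability(eval_pawns, scale=5.0):
    # Logistic mapping from an evaluation in pawns to an expected score.
    # `scale` is an assumed constant chosen so +1.0 maps to ~55%.
    return 1.0 / (1.0 + math.exp(-eval_pawns / scale))

print(round(win_probability(0.0), 2))  # 0.5: dead equal
print(round(win_probability(1.0), 2))  # 0.55: "+1.0" read as a winning chance
```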

slimshady
drmrboss wrote:

 

See the difference between amateurs and pros. Haha. 

hell yeah

drmrboss
LaryMulya wrote:
drmrboss wrote:

 

Let me tell you something. Do you think people like Nepo are idiots for spending thousands of dollars per hour on supercomputers while you still believe humans are superior in the opening?

I suggest you drop your outdated assumptions from 20 years ago and see how much engines have progressed since then. 

 

I believe that human players can still outplay strong chess engines in certain types of positions.

See this game.

 

 

Wake up, bro. I am talking about engines in 2021, not engines from 10 years ago. 

Do you know how much the Apple iPhone has progressed within 10 years? How much internet speed has improved? Technology improves very fast!

drmrboss
greatswindler wrote:
LaryMulya wrote:
drmrboss wrote:

 

Let me tell you something. Do you think people like Nepo are idiots for spending thousands of dollars per hour on supercomputers while you still believe humans are superior in the opening?

I suggest you drop your outdated assumptions from 20 years ago and see how much engines have progressed since then. 

 

I believe that human players can still outplay strong chess engines in certain types of positions.

See this game.

 

 

Interesting game, but Rybka is an old chess engine, not even as strong as Stockfish 8, which lost two matches against AlphaZero.

The same Hikaru lost 10-0 to Komodo ten years later (and Komodo is still much weaker than Stockfish).

The problem is that it is hard for some people to change their mindset! For example, if someone sees a self-driving car crash, they will carry the belief that "self-driving cars are worse than human drivers" for the rest of their life.

 

drmrboss
LaryMulya wrote:
drmrboss wrote:
greatswindler wrote:
LaryMulya wrote:
drmrboss wrote:

 

Let me tell you something. Do you think people like Nepo are idiots for spending thousands of dollars per hour on supercomputers while you still believe humans are superior in the opening?

I suggest you drop your outdated assumptions from 20 years ago and see how much engines have progressed since then. 

 

I believe that human players can still outplay strong chess engines in certain types of positions.

See this game.

 

 

Interesting game, but Rybka is an old chess engine, not even as strong as Stockfish 8, which lost two matches against AlphaZero.

The same Hikaru lost 10-0 to Komodo ten years later (and Komodo is still much weaker than Stockfish).

The problem is that it is hard for some people to change their mindset! For example, if someone sees a self-driving car crash, they will carry the belief that "self-driving cars are worse than human drivers" for the rest of their life.

 

Well, he was playing chess and reading the Twitch chat (multitasking) at the same time. No wonder he became distracted and played worse against Komodo. If Nakamura had played this match without streaming and commenting at the same time, he would have performed much better against Komodo and drawn a couple of his games. If he had to comment just to impress his Twitch followers, it's no wonder he lost all his games.

These server engines are not running at full strength.

 

If you wonder how chess engines have developed over 20 years, I will show you this thread.

"

15 Years of Chess Engine Development

Fifteen years ago, in October of 2002, Vladimir Kramnik and Deep Fritz were locked in battle in the Brains in Bahrain match. If Kasparov vs. Deep Blue was the beginning of the end for humans in Chess, then the Brains in Bahrain match was the middle of the end. It marked the first match between a world champion and a chess engine running on consumer-grade hardware, although its eight-processor machine was fairly exotic at the time.

Ultimately, Kramnik and Fritz played to a 4-4 tie in the eight-game match. Of course, we know that today the world champion would be crushed in a similar match against a modern computer. But how much of that is superior algorithms, and how much is due to hardware advances? How far have chess engines progressed from a purely software perspective in the last fifteen years? I dusted off an old computer and some old chess engines and held a tournament between them to try to find out.

I started with an old laptop and the version of Fritz that played in Bahrain. Playing against Fritz were the strongest engines at each successive five-year anniversary of the Brains in Bahrain match: Rybka 2.3.2a (2007), Houdini 3 (2012), and Houdini 6 (2017). The tournament details, cross-table, and results are below.

Tournament Details

Format: Round Robin of 100-game matches (each engine played 100 games against each other engine).
Time Control: Five minutes per game with a five-second increment (5+5).
Hardware: Dell laptop from 2006, with a 32-bit Pentium M processor underclocked to 800 MHz to simulate 2002-era performance (roughly equivalent to a 1.4 GHz Pentium IV which would have been a common processor in 2002).
Openings: Each 100 game match was played using the Silver Opening Suite, a set of 50 opening positions that are designed to be varied, balanced, and based on common opening lines. Each engine played each position with both white and black.
Settings: Each engine played with default settings, no tablebases, no pondering, and 32 MB hash tables, except that Houdini 6 played with a 300ms move overhead. This is because in test games modern engines were losing on time frequently, possibly due to the slower hardware and interface.
Results

Engine         1          2          3          4          Total
Houdini 6      **         83.5-16.5  95.5-4.5   99.5-0.5   278.5/300
Houdini 3      16.5-83.5  **         91.5-8.5   95.5-4.5   203.5/300
Rybka 2.3.2a   4.5-95.5   8.5-91.5   **         79.5-20.5  92.5/300
Fritz Bahrain  0.5-99.5   4.5-95.5   20.5-79.5  **         25.5/300
I generated an Elo rating list using the results above. Anchoring Fritz's rating to Kramnik's 2809 at the time of the match, the result is:

Engine         Rating
Houdini 6      3451
Houdini 3      3215
Rybka 2.3.2a   3013
Fritz Bahrain  2809
Conclusions

The progress of chess engines in the last 15 years has been remarkable. Playing on the same machine, Houdini 6 scored an absolutely ridiculous 99.5 to 0.5 against Fritz Bahrain, only conceding a single draw in a 100 game match. Perhaps equally impressive, it trounced Rybka 2.3.2a, an engine that I consider to have begun the modern era of chess engines, by a score of 95.5-4.5 (+91 =9 -0). This tournament indicates that there was clear and continuous progress in the strength of chess engines during the last 15 years, gaining on average nearly 45 Elo per year. Much of the focus of reporting on man vs. machine matches was on the calculating speed of the computer hardware, but it is clear from this experiment that one huge factor in computers overtaking humans in the past couple of decades was an increase in the strength of engines from a purely software perspective. If Fritz was roughly the same strength as Kramnik in Bahrain, it is clear that Houdini 6 on the same machine would have completely crushed Kramnik in the match."

https://www.reddit.com/r/chess/comments/76cwz4/15_years_of_chess_engine_development/
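The "anchored Elo" arithmetic in the quoted post follows from the standard Elo logistic model. The sketch below is a pairwise estimate from a single match score; it comes out larger than the 642-point Houdini 6 vs Fritz gap in the quoted list (3451 - 2809) because the author fitted ratings over all games, not one match.

```python
import math

def elo_diff(score):
    """Rating difference implied by an average score s, from the
    standard Elo logistic model: s = 1 / (1 + 10 ** (-d / 400))."""
    return -400.0 * math.log10(1.0 / score - 1.0)

# Houdini 6 scored 99.5/100 against Fritz Bahrain:
print(round(elo_diff(99.5 / 100)))  # 920
```

At extreme scores like 99.5% the formula becomes very sensitive, which is one reason rating lists fit all results jointly instead of averaging pairwise estimates.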

 

And that is only Houdini 6; I am not even talking about the current Stockfish 13 NNUE version, which is another 100+ Elo stronger.

Asezen
drmrboss wrote:

That is server Stockfish, which is not full-strength Stockfish. Period. 

I will bet at 100-to-1 odds against any human who can beat Stockfish on a common 4-core desktop running the developers' recommended configuration. (I will tell you which settings and which version of Stockfish to use.)

 

Let me tell you something. Do you think people like Nepo are idiots for spending thousands of dollars per hour on supercomputers while you still believe humans are superior in the opening?

I suggest you drop your outdated assumptions from 20 years ago and see how much engines have progressed since then. 

 

Why are you so angry?

drmrboss
LaryMulya wrote:
GMofAmateurs wrote:
LaryMulya wrote:
drmrboss wrote:
LaryMulya wrote:
drmrboss wrote:
greatswindler wrote:
LaryMulya wrote:
drmrboss wrote:

 

Let me tell you something. Do you think people like Nepo are idiots for spending thousands of dollars per hour on supercomputers while you still believe humans are superior in the opening?

I suggest you drop your outdated assumptions from 20 years ago and see how much engines have progressed since then. 

 

I believe that human players can still outplay strong chess engines in certain types of positions.

See this game.

 

 

Interesting game, but Rybka is an old chess engine, not even as strong as Stockfish 8, which lost two matches against AlphaZero.

The same Hikaru lost 10-0 to Komodo ten years later (and Komodo is still much weaker than Stockfish).

The problem is that it is hard for some people to change their mindset! For example, if someone sees a self-driving car crash, they will carry the belief that "self-driving cars are worse than human drivers" for the rest of their life.

 

Well, he was playing chess and reading the Twitch chat (multitasking) at the same time. No wonder he became distracted and played worse against Komodo. If Nakamura had played this match without streaming and commenting at the same time, he would have performed much better against Komodo and drawn a couple of his games. If he had to comment just to impress his Twitch followers, it's no wonder he lost all his games.

These server engines are not running at full strength.

 

If you wonder how chess engines have developed over 20 years, I will show you this thread.

"

15 Years of Chess Engine Development

Fifteen years ago, in October of 2002, Vladimir Kramnik and Deep Fritz were locked in battle in the Brains in Bahrain match. If Kasparov vs. Deep Blue was the beginning of the end for humans in Chess, then the Brains in Bahrain match was the middle of the end. It marked the first match between a world champion and a chess engine running on consumer-grade hardware, although its eight-processor machine was fairly exotic at the time.

Ultimately, Kramnik and Fritz played to a 4-4 tie in the eight-game match. Of course, we know that today the world champion would be crushed in a similar match against a modern computer. But how much of that is superior algorithms, and how much is due to hardware advances? How far have chess engines progressed from a purely software perspective in the last fifteen years? I dusted off an old computer and some old chess engines and held a tournament between them to try to find out.

I started with an old laptop and the version of Fritz that played in Bahrain. Playing against Fritz were the strongest engines at each successive five-year anniversary of the Brains in Bahrain match: Rybka 2.3.2a (2007), Houdini 3 (2012), and Houdini 6 (2017). The tournament details, cross-table, and results are below.

Tournament Details

Format: Round Robin of 100-game matches (each engine played 100 games against each other engine).
Time Control: Five minutes per game with a five-second increment (5+5).
Hardware: Dell laptop from 2006, with a 32-bit Pentium M processor underclocked to 800 MHz to simulate 2002-era performance (roughly equivalent to a 1.4 GHz Pentium IV which would have been a common processor in 2002).
Openings: Each 100 game match was played using the Silver Opening Suite, a set of 50 opening positions that are designed to be varied, balanced, and based on common opening lines. Each engine played each position with both white and black.
Settings: Each engine played with default settings, no tablebases, no pondering, and 32 MB hash tables, except that Houdini 6 played with a 300ms move overhead. This is because in test games modern engines were losing on time frequently, possibly due to the slower hardware and interface.
Results

Engine
1
2
3
4
Total
Houdini 6
**
83.5-16.5
95.5-4.5
99.5-0.5
278.5/300
Houdini 3
16.5-83.5
**
91.5-8.5
95.5-4.5
203.5/300
Rybka 2.3.2a
4.5-95.5
8.5-91.5
**
79.5-20.5
92.5/300
Fritz Bahrain
0.5-99.5
4.5-95.5
20.5-79.5
**
25.5/300
I generated an Elo rating list using the results above. Anchoring Fritz's rating to Kramnik's 2809 at the time of the match, the result is:

Engine
Rating
Houdini 6
3451
Houdini 3
3215
Rybka 2.3.2a
3013
Fritz Bahrain
2809
Conclusions

The progress of chess engines in the last 15 years has been remarkable. Playing on the same machine, Houdini 6 scored an absolutely ridiculous 99.5 to 0.5 against Fritz Bahrain, only conceding a single draw in a 100 game match. Perhaps equally impressive, it trounced Rybka 2.3.2a, an engine that I consider to have begun the modern era of chess engines, by a score of 95.5-4.5 (+91 =9 -0). This tournament indicates that there was clear and continuous progress in the strength of chess engines during the last 15 years, gaining on average nearly 45 Elo per year. Much of the focus of reporting on man vs. machine matches was on the calculating speed of the computer hardware, but it is clear from this experiment that one huge factor in computers overtaking humans in the past couple of decades was an increase in the strength of engines from a purely software perspective. If Fritz was roughly the same strength as Kramnik in Bahrain, it is clear that Houdini 6 on the same machine would have completely crushed Kramnik in the match."

https://www.reddit.com/r/chess/comments/76cwz4/15_years_of_chess_engine_development/

 

And that is only Houdini 6; I am not even talking about the current Stockfish 13 NNUE version, which is another 100+ Elo stronger.

I know that Stockfish 13 NNUE is a bit stronger than Rybka 2.3.2a, although I think 5+5 blitz results are a bad predictor of classical chess results, so using them to prove that chess engines have progressed remarkably in the last 15 years is iffy. As an example, consider two American grandmasters Fabiano Caruana and Hikaru Nakamura. Caruana's blitz FIDE rating is "only" 2711 compared to Nakamura's rating of 2900, but Caruana is overall a better chess player because his long time control (classical) results are better than Hikaru's. It's not clear that chess engines rated high in short time control would also be high-rated in long time control.

I agree. There's a possibility that older chess engines (despite being lower rated at short time controls) scale better than newer chess engines at longer time controls.

Yes. And even if the latest version of Stockfish is rated higher than Houdini when tested against other chess software, that doesn't mean Stockfish would perform better against strong GMs than lower-rated software would. A relatively lower-rated engine with an aggressive, dynamic playing style might play tactically complex moves that score better against strong grandmasters than Stockfish's moves do. Chess ratings are assigned within their own pools; we can't really compare two different pools to make a claim. That is, a chess engine at the top of a typical computer rating list might have a rating of 3500, but that number only supports predictions against nearby computer opponents. In addition, these engine lists use randomized, non-drawish opening books, whereas human grandmasters do not; in general the weaker player aims for drawish openings and avoids sharp tactical positions as much as possible.

Your logic is like saying Carlsen's 2800 rating means nothing because it is based on beating peers like Giri and says nothing about how he would do against me. Your understanding of statistics and maths is far from ours. 

 

But we know that the program has improved massively thanks to the huge investment of volunteer resources over the last 11 years. 

These numbers say that nearly 3 billion (3,000,000,000) test games have been played across roughly 100,000 patches. Probably 5,000 of those patches were accepted as improvements to Stockfish, including tests of almost all known human theory and knowledge: king safety, doubled pawns, etc. 
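Fishtest decides whether a patch is an improvement with a sequential probability ratio test (SPRT). The sketch below is deliberately simplified: it counts only wins and losses, while real fishtest also models draws and game pairs, and the Elo bounds here are assumed purely for illustration.

```python
import math

def expected_score(elo):
    """Expected score for a given Elo advantage (standard logistic model)."""
    return 1.0 / (1.0 + 10.0 ** (-elo / 400.0))

def sprt_llr(wins, losses, elo0=0.0, elo1=5.0):
    """Log-likelihood ratio of H1 ("patch is worth +elo1 Elo") against
    H0 ("+elo0"), ignoring draws for simplicity."""
    s0, s1 = expected_score(elo0), expected_score(elo1)
    return wins * math.log(s1 / s0) + losses * math.log((1 - s1) / (1 - s0))

# With alpha = beta = 0.05 the acceptance bound is ln(0.95 / 0.05) ~ 2.94:
# the test stops and accepts once the LLR crosses it.
upper = math.log(0.95 / 0.05)
print(sprt_llr(wins=5300, losses=4700) > upper)  # True: a winning patch passes
```

The sequential part is the point: the test stops as soon as the evidence is strong either way, which is how tens of thousands of patches can be screened with a finite volunteer fleet.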

 

These are the reasons why new programs score 99% against old programs, and why the programs that played at a level comparable to Kramnik and Kasparov, around 2800, are just amateurs to Stockfish. It is the product of probably $100,000 to $300,000 worth of electricity, running 500 CPUs' worth of supercomputer power for 10 years (or 5,000 CPU-years).

 

https://tests.stockfishchess.org/users