Are there any supercomputers analysing the start position?

KinkyKool

Or analysing some basic starting positions trying to find the best possible game?

mariners234

There are two parts to a chess playing machine, the hardware and the software.

A "super" computer is defined by the hardware, i.e. it's very fast. But if the software was not designed to be useful in the opening (or "basic starting positions") then it doesn't matter how fast the hardware is.

Which brings me to my point: chess engines aren't designed to play well in the opening.

All chess professionals use engines practically 24/7 to analyze opening lines (which for pros can mean moves well beyond move 15 or 20), but no one is using engines in the very early stages with a perfect game in mind.
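
For a concrete sense of what that engine-assisted opening analysis looks like, here's a minimal sketch using the python-chess library and a UCI engine. The "stockfish" executable name, the example line, and the depth limit are all assumptions chosen for illustration, not anyone's actual analysis setup.

```python
# Minimal sketch: ask a UCI engine for its evaluation of an opening position.
# Assumes python-chess is installed and a Stockfish binary is on the PATH.
import chess
import chess.engine

board = chess.Board()            # the standard start position
for san in ["e4", "e5", "Nf3"]:  # a hypothetical opening line to examine
    board.push_san(san)

engine = chess.engine.SimpleEngine.popen_uci("stockfish")
info = engine.analyse(board, chess.engine.Limit(depth=25))
print("Score:", info["score"].white())
print("Main line:", board.variation_san(info["pv"]))
engine.quit()
```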

mariners234

In fact the "perfect game" is being approached from the other direction via EGTB (endgame tablebases) which give truly perfect play for any position with X number of chessmen on the board. IIRC they're up to 7 men, and such a database is 140 TB in size.
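
To make that concrete, here's a minimal sketch of what "truly perfect play" from a tablebase looks like in code, using the python-chess library. The tablebase directory path is an assumption; you'd need the Syzygy files downloaded locally.

```python
# Minimal sketch: probe a Syzygy endgame tablebase for a 3-man position.
# Assumes python-chess is installed and Syzygy WDL/DTZ files are on disk.
import chess
import chess.syzygy

# King and queen vs. king, white to move
board = chess.Board("4k3/8/8/8/8/8/8/2KQ4 w - - 0 1")

with chess.syzygy.open_tablebase("path/to/syzygy") as tablebase:
    wdl = tablebase.probe_wdl(board)  # 2 = win, 0 = draw, -2 = loss (for the side to move)
    dtz = tablebase.probe_dtz(board)  # distance to a zeroing move under the 50-move rule
    print("WDL:", wdl, "DTZ:", dtz)
```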

KinkyKool

What about the future? In, say, 10 years' time, when we have quantum technology, faster hardware, and even stronger and more sophisticated software engines? How about in 40 years' time?

What I'm getting at is: presuming that, as time goes on, hardware technology and software engines just keep getting faster and better, will chess ever be "solved"? Will an engine ever be able to say from the start position, "OK, here is the perfect line for both white and black"?

mariners234

AFAIK quantum computing will never be a replacement for classical computers. It only offers a speed increase for certain types of operations. A program like a chess engine isn't going to be available on a quantum computer in the foreseeable future.

More interesting are learning machines like AlphaZero and Leela paired with classical alpha-beta (AB) engines like Stockfish. And yes, in the future, chess engines will be a lot stronger. There's still a lot to look forward to.

---

As for perfect play, it depends on what we mean exactly. A strict definition would mean we've solved chess, and what would come from that is likely an enormous number of perfect games that all end in a draw. But to do this we'd have to store every position and its evaluation, i.e. a 32-man EGTB. With some simple math, we can show that such a database will never exist: even with a storage device as large as the Earth itself, with each position stored on just a handful of atoms, there wouldn't be enough space.
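
As a rough, heavily hedged back-of-the-envelope version of that math (the position count is Tromp's estimate; the atom count and storage density are ballpark assumptions):

```python
# Back-of-the-envelope sketch of the 32-man tablebase storage argument.
# All numbers are order-of-magnitude estimates, not exact figures.
legal_positions = 4.8e44   # Tromp's estimate of legal chess positions
atoms_in_earth = 1.3e50    # rough estimate of the number of atoms in the Earth
bits_per_position = 2      # even a bare win/draw/loss flag needs ~2 bits
atoms_per_bit = 1e6        # roughly today's best practical storage density

atoms_needed = legal_positions * bits_per_position * atoms_per_bit
print(f"Atoms needed:   {atoms_needed:.1e}")
print(f"Atoms in Earth: {atoms_in_earth:.1e}")
print(f"Earths required: {atoms_needed / atoms_in_earth:.1f}")
```

Even under these generous assumptions the answer comes out to several Earths' worth of atoms, before you even consider the compute needed to generate the table in the first place.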

But a looser definition of "solved", and a looser definition of "perfect", will be achievable. In a few decades we might start talking in terms of probability, e.g. this move or this game is perfect with 95% probability.

Caesar49bc

I think Monte Carlo engines, both single-computer and distributed, will over time inexorably propel more chess theory in all phases of the game, excluding the 3-4-5-6-7-man tablebases, in which every possible position has already been calculated.

Currently Leela Chess Zero is the only distributed engine, and as far as I know, Komodo 13 is the only single-computer engine on the market using the Monte Carlo system.
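
For anyone curious what "the Monte Carlo system" means at its simplest: the core idea is to evaluate moves by playing many games out to the end and averaging the results. Real Monte Carlo engines like Leela and Komodo's MCTS mode use guided tree search with neural-network or heuristic evaluations rather than pure random playouts, so the following python-chess sketch is only a toy illustration of the basic idea.

```python
# Toy sketch of pure Monte Carlo evaluation: rank moves by the average result
# of random playouts. Real MCTS engines guide the search with evaluations.
import random
import chess

def random_playout(board, max_plies=200):
    """Play random legal moves to the end (or a ply cap); return White's score."""
    board = board.copy()
    for _ in range(max_plies):
        if board.is_game_over():
            break
        board.push(random.choice(list(board.legal_moves)))
    result = board.result(claim_draw=True)
    return {"1-0": 1.0, "0-1": 0.0}.get(result, 0.5)  # unfinished games count as 0.5

def monte_carlo_rank(board, playouts_per_move=20):
    """Rank legal moves for the side to move by average playout score."""
    scores = {}
    for move in list(board.legal_moves):
        board.push(move)
        avg = sum(random_playout(board) for _ in range(playouts_per_move)) / playouts_per_move
        board.pop()
        scores[move] = avg if board.turn == chess.WHITE else 1.0 - avg
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for move, score in monte_carlo_rank(chess.Board())[:5]:
        print(move.uci(), round(score, 3))
```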

You don't need a supercomputer per se for chess engines to find new theory: there are enough different engines that, taken as a group, the randomness of engines selecting different moves will ensure that new lines get tested. Over time, that makes for a lot of new theory.

The real trick is for humans to sift through all that data to determine which lines represent new theory worth publishing and adding to the canon of verified and vetted chess theory.

Prometheus_Fuschs
mariners234 wrote:

In fact the "perfect game" is being approached from the other direction via EGTB (endgame tablebases) which give truly perfect play for any position with X number of chessmen on the board. IIRC they're up to 7 men, and such a database is 140 TB in size.

Correct, although the Syzygy tablebases are "only" 17 TB.
