Can some programmer or genius please tell me how the algorithm works for chess and other games?
How does a computer work when playing chess?


How does a program know where to move? I get the "calculating moves" thing when there are checks and takes available, but small moves to improve the position seem more like a "principle" than simple math.
Since it works, there's obviously some sort of mathematical solution where the highest value is the best move. But I don't understand how they can program that.
Yeah, neither do I really, ask some computer scientist on the Rybka forums :p
But yes, honestly, they will do things like give 0.01 of a pawn's value to ____ condition... like controlling a central square. They have hundreds of conditions like this (thousands?). Then they run tests, like having it play thousands of quick games, and compare the results to when it had the old values. The change doesn't have to make it play better in every position, just in most positions. If it performs better overall, then they keep it.
There's also lots of technical stuff on how the search is streamlined so the engine doesn't waste time looking at literally every possibility.
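To make the first part concrete, here's a made-up sketch (not from any real engine) of what a handful of those tiny bonuses look like in code, using the usual centipawn convention (1 pawn = 100):

    #include <stdio.h>

    /* Hypothetical position-dependent flags; a real engine would compute
       these from the board. Scores are in centipawns (100 = one pawn). */
    int knight_on_central_square = 1;
    int controls_d5              = 1;
    int has_doubled_pawns        = 1;

    int evaluate_small_terms(void)
    {
        int score = 0;
        if (knight_on_central_square) score += 10;  /* +0.10 pawn */
        if (controls_d5)              score +=  1;  /* +0.01 pawn */
        if (has_doubled_pawns)        score -= 15;  /* -0.15 pawn */
        return score;
    }

    int main(void)
    {
        printf("bonus = %d centipawns\n", evaluate_small_terms());
        return 0;
    }

A real engine sums hundreds of terms like these; the values above are invented purely for illustration.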

Can some programmer or genius please tell me how the algorithm works for chess and other games?
See above...lol

How do I think like a GM? In what way should I think: targeting the King, or something else? Please teach me, some chess genius!

It's like asking how to build a quality modern building.
If you want to know every detail, so that you can even do it yourself, it will take years to learn.

How do I think like a GM? In what way should I think: targeting the King, or something else? Please teach me, some chess genius!
A grandmaster once said that in order to become a GM you must first lose a thousand tournament games. (Ones where you actually played seriously, not just for fun.)

Look up alpha-beta pruning for games. That type of tree search saves time because it abandons lines it can prove one side would avoid anyway. Look up rule-based expert systems for programming heuristics. Basically the computer is performing a tree search, where each branch of the tree is another possible move from that position, most likely using pointers in the program. At each position there is a static evaluation score, built by mixing the pros and cons according to expert heuristic advice; the most obvious factor is whether the sum of your material value has gone up or down relative to your opponent's. That's why programs are so materialistic, and why they tend to fall into the trap of hanging onto gambited material even though it wrecks their position: they understand the pieces' values much better than they do positional subtleties. The heuristics provide not only the good-bad scores but also, to some extent, long-term considerations; the tree search provides the tactics. Tree search is essentially what humans do when solving chess puzzles, though humans save time by using memory of similar positions, which computers don't do well, and by ignoring a lot of worthless moves that the computer would probably still consider.
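For the curious, here is a minimal, self-contained sketch of the alpha-beta idea I mentioned. It searches a tiny hard-coded game tree (made-up leaf scores, not a real chess position) instead of generating chess moves, but the cut-off logic is the same:

    #include <stdio.h>

    /* Made-up leaf scores for a complete tree of depth 2, branching factor 3.
       Positive is good for the side to move at the root.                    */
    #define BRANCH 3
    #define DEPTH  2
    static const int leaf_score[9] = { 3, 12, 8,  2, 4, 6,  14, 5, 2 };

    static int leaves_visited = 0;

    /* Negamax with alpha-beta: each side picks the move that maximizes the
       score from its own point of view, and a branch is abandoned (a "cut")
       as soon as it is proven worse than something already available.       */
    int alphabeta(int node, int depth, int alpha, int beta)
    {
        if (depth == 0) {                   /* leaf: return its static score */
            leaves_visited++;
            return leaf_score[node];
        }
        for (int i = 0; i < BRANCH; i++) {
            int child = node * BRANCH + i;  /* index within the next level   */
            int score = -alphabeta(child, depth - 1, -beta, -alpha);
            if (score > alpha) alpha = score;   /* new best move so far      */
            if (alpha >= beta) break;           /* opponent would avoid this
                                                   line: prune the rest      */
        }
        return alpha;
    }

    int main(void)
    {
        int best = alphabeta(0, DEPTH, -1000, 1000);
        printf("root score %d, leaves visited %d of 9\n", best, leaves_visited);
        return 0;
    }

Even on this toy tree it skips some leaves; on a real chess tree the savings are enormous, which is what makes deep search practical.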

You're welcome. I always like to talk about computer chess, since that's what got me interested in artificial intelligence!
Well, I am an engine programmer, and engines are basically very stupid, but very, very fast. They just generate possible sequences of moves of a certain length (the 'search depth'), and at the end (if there is no checkmate, which usually there isn't) use a very simple method to 'evaluate' the resulting position as a number indicating how desirable it is for one side or the other. This evaluation usually contains material (addition of piece values, P=1, N,B=3, R=5...), and, as much smaller terms, centralization (for pieces that benefit from it, assigning points to each board square), actual mobility (i.e. number of moves, taking account of blocking by other pieces), Pawn-structure bonuses (advanced, passed, isolated, doubled and backward Pawns) and King safety (presence of a Pawn shield, number of squares from which the King can be 'seen', number of squares next to the King actually attacked by enemy pieces).
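Purely as an illustration (the numbers and layout are made up, not any particular engine's), material plus centralization can be as simple as a table lookup per piece:

    #include <stdio.h>
    #include <ctype.h>

    /* Piece values in centipawns: P=100, N=B=300, R=500, Q=900.
       Upper case = White, lower case = Black, '.' = empty square.    */
    static int piece_value(char p)
    {
        switch (tolower((unsigned char)p)) {
        case 'p': return 100;
        case 'n': case 'b': return 300;
        case 'r': return 500;
        case 'q': return 900;
        default:  return 0;          /* King and empty squares        */
        }
    }

    /* Small centralization bonus per square (made-up numbers):
       central squares are worth a little more to occupy.             */
    static const int central_bonus[64] = {
        0,0,0,0,0,0,0,0,
        0,2,2,2,2,2,2,0,
        0,2,4,4,4,4,2,0,
        0,2,4,6,6,4,2,0,
        0,2,4,6,6,4,2,0,
        0,2,4,4,4,4,2,0,
        0,2,2,2,2,2,2,0,
        0,0,0,0,0,0,0,0,
    };

    /* Score from White's point of view: sum over all pieces.          */
    int evaluate(const char board[64])
    {
        int score = 0;
        for (int sq = 0; sq < 64; sq++) {
            char p = board[sq];
            if (p == '.') continue;
            int v = piece_value(p) + central_bonus[sq];
            score += isupper((unsigned char)p) ? v : -v;
        }
        return score;
    }

    int main(void)
    {
        /* A made-up fragment of a position: White has a centralized knight,
           Black has a pawn.                                                 */
        char board[64];
        for (int i = 0; i < 64; i++) board[i] = '.';
        board[27] = 'N';
        board[12] = 'p';
        printf("eval = %d centipawns\n", evaluate(board));
        return 0;
    }

Mobility, Pawn structure and King safety would be further terms added into the same sum, each with its own (small) weight.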
That is basically all. The evaluation usually fails badly in non-quiet positions (i.e. when one side is a Queen 'ahead', but it is not his move and the Queen is attacked and not protected), and to solve that engines always consider grabbing material at the end of a line, even if it is beyond the intended depth ('quiescence search'). So they just trade until no captures are possible anymore, and evaluate then. If no captures gain material (or other advantages), the position is by definition quiet, and they trust the evaluation for it.
Then they simply make the move that forces the best outcome for them (evaluation-wise), assuming what they consider optimal defense. Modern engines do not search all possible lines to the same depth, but 'reduce' branches they think are not promising (e.g. because they lose material early on), meaning they subtract some number of half-moves from the nominal search depth, until it is proven that the loss can be recouped (i.e. was a sound sacrifice rather than a blunder), at which time that branch gains back the full nominal depth.
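If you want to see roughly how those pieces fit together, here is a compressed sketch. It is not Fairy-Max or any real engine: the board, move generator and evaluation are stubbed out (they do nothing) purely so the fragment compiles; only the search skeleton is the part being illustrated.

    #include <stdio.h>

    /* ---- Placeholders so this sketch compiles; a real engine supplies
       ---- a board, a move generator and make/unmake.                  */
    typedef int Move;
    static int  GenerateMoves(Move *list)    { (void)list; return 0; }
    static int  GenerateCaptures(Move *list) { (void)list; return 0; }
    static void MakeMove(Move m)             { (void)m; }
    static void UnmakeMove(Move m)           { (void)m; }
    static int  Evaluate(void)               { return 0; }

    /* Quiescence search: at the end of a line, keep resolving captures
       until the position is quiet, then trust the static evaluation.   */
    static int Quiesce(int alpha, int beta)
    {
        int stand_pat = Evaluate();           /* "do nothing" score     */
        if (stand_pat >= beta) return stand_pat;
        if (stand_pat > alpha) alpha = stand_pat;

        Move caps[256];
        int n = GenerateCaptures(caps);
        for (int i = 0; i < n; i++) {
            MakeMove(caps[i]);
            int score = -Quiesce(-beta, -alpha);
            UnmakeMove(caps[i]);
            if (score >= beta) return score;
            if (score > alpha) alpha = score;
        }
        return alpha;
    }

    /* Negamax with a crude "reduction": moves late in the list are first
       searched one ply shallower, and only get the full depth back if the
       reduced search suggests they might actually be good.               */
    static int Search(int depth, int alpha, int beta)
    {
        if (depth <= 0) return Quiesce(alpha, beta);

        Move moves[256];
        int n = GenerateMoves(moves);
        for (int i = 0; i < n; i++) {
            MakeMove(moves[i]);
            int score;
            if (i >= 4 && depth >= 3) {                   /* late, unpromising */
                score = -Search(depth - 2, -beta, -alpha);    /* reduced       */
                if (score > alpha)                            /* looks good:   */
                    score = -Search(depth - 1, -beta, -alpha);/* re-search     */
            } else {
                score = -Search(depth - 1, -beta, -alpha);    /* full depth    */
            }
            UnmakeMove(moves[i]);
            if (score >= beta) return score;      /* refutation found          */
            if (score > alpha) alpha = score;
        }
        return alpha;
    }

    int main(void)
    {
        printf("score = %d\n", Search(4, -100000, 100000));
        return 0;
    }

Real engines reduce more aggressively and more selectively than the "4th move onward" rule above, which is only there to show the shape of the idea.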
That is basically it. With current computer speeds, millions of positions can be searched per second, and a simple program of about 100 lines of code, not having much more than material and centralization/pawn advance in its evaluation (like Fairy-Max), can already be a very tough opponent for a good club player.
Then how does he defeat the computers and world champions?
Magnus doesn't win against computers. My phone could probably beat him.

HGMuller,
Thanks for the additional detail. Can you tell us, if you know: (1) About how many heuristics do commercial chess programs use, and what is their method of finding a proper balance of those weights? (2) How are those heuristics usually implemented in commercial programs? (Rules, neural networks, linear algebra formulas, or other?) (3) Do programs usually use pointers for their tree searches, or is there some other method? (Vectors, variable-length lists, etc.)
There are many ideas I'd like to try programming in a chess program, but I just don't have the time to play with them: (1) recognition of miniature cyclic patterns that might arise in a game; (2) associative memory for recognizing similarities to previously encountered patterns; (3) general understanding of clusters/obstacles/objects, and paths around those obstacles; (4) smooth transition to general plans in the middle game as soon as the opening book runs out; (5) heuristic networks of observations (hanging units, potential skewer alignments, potential fork positions, etc.) for reducing search time; (6) goal-directed search instead of brute-force lookahead, also to reduce search time.

Interesting solution, having them trade until no captures are available; then, if the position is quiet, they trust the eval function.
I'm sure the modern searches get fairly technical, but it's interesting to know only 100 lines of code could be something like 1600-1800 strength.
Commercial Chess programs are of course closed source, so it is not known what exactly they do. In general pointers and linked lists are avoided, as simple arrays work faster. Most top engines represent the game state by so-called bitboards, using 64-bit integers as bitmaps to indicate board squares with a particular property (like occupied, white pieces, black pieces, white/black Kings, Knights, Pawns, diagonal sliders, orthogonal sliders) and generate move sets from there.
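To give an idea of what generating move sets from bitboards means (this is the generic technique, not any specific engine's code): a 64-bit integer holds one bit per square, and for example all knight moves come out of a few shifts and masks.

    #include <stdio.h>
    #include <stdint.h>

    /* One bit per square: a1 = bit 0, h1 = bit 7, a8 = bit 56.         */
    typedef uint64_t Bitboard;

    /* Masks that stop shifts from wrapping around the board edge.      */
    #define NOT_FILE_A  0xfefefefefefefefeULL
    #define NOT_FILE_H  0x7f7f7f7f7f7f7f7fULL
    #define NOT_FILE_AB 0xfcfcfcfcfcfcfcfcULL
    #define NOT_FILE_GH 0x3f3f3f3f3f3f3f3fULL

    /* All squares attacked by the knights on 'knights'.                */
    Bitboard knight_attacks(Bitboard knights)
    {
        return ((knights << 17) & NOT_FILE_A)
             | ((knights << 15) & NOT_FILE_H)
             | ((knights << 10) & NOT_FILE_AB)
             | ((knights <<  6) & NOT_FILE_GH)
             | ((knights >> 17) & NOT_FILE_H)
             | ((knights >> 15) & NOT_FILE_A)
             | ((knights >> 10) & NOT_FILE_GH)
             | ((knights >>  6) & NOT_FILE_AB);
    }

    /* Count set bits: e.g. how many squares are attacked (mobility).   */
    int popcount(Bitboard b)
    {
        int n = 0;
        while (b) { b &= b - 1; n++; }
        return n;
    }

    int main(void)
    {
        Bitboard knight_on_d4 = 1ULL << 27;          /* d4 = square 27  */
        printf("a knight on d4 attacks %d squares\n",
               popcount(knight_attacks(knight_on_d4)));
        return 0;
    }

Occupancy, attack maps and mobility counts all reduce to bitwise AND/OR and population counts like this, which is why bitboards are fast.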
Most heuristics are in the evaluation (e.g. how exactly do you put a number on King safety). The actual evaluation parameters are tuned by playing millions of games with different parameters and seeing what works best.
There are a few heuristics used in the search as well, such as: when passing your turn would already refute the opponent's previous move, a less deep search is enough to prove that; and moves that have already been seen to be good in very many other positions in the tree are more likely to also be good in unrelated positions, and thus deserve to be searched first.
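The second idea is usually kept in a so-called history table. A rough sketch of the bookkeeping (the moves here are made up and identified only by from/to square numbers):

    #include <stdio.h>
    #include <stdlib.h>

    /* A table indexed by from-square and to-square, bumped whenever a move
       causes a cutoff, and used to sort moves before searching them.      */
    static int history[64][64];

    typedef struct { int from, to; } Move;

    /* Called when 'm' refuted the opponent's play at the given depth.     */
    void reward_cutoff(Move m, int depth)
    {
        history[m.from][m.to] += depth * depth;  /* deeper = bigger reward */
    }

    static int by_history(const void *a, const void *b)
    {
        const Move *x = a, *y = b;
        return history[y->from][y->to] - history[x->from][x->to];
    }

    int main(void)
    {
        Move moves[3] = { {12, 28}, {6, 21}, {1, 18} };

        reward_cutoff(moves[1], 5);   /* pretend these moves caused cutoffs */
        reward_cutoff(moves[1], 3);
        reward_cutoff(moves[2], 2);

        qsort(moves, 3, sizeof(Move), by_history);  /* best-scoring first   */
        for (int i = 0; i < 3; i++)
            printf("search move %d->%d (history %d)\n",
                   moves[i].from, moves[i].to,
                   history[moves[i].from][moves[i].to]);
        return 0;
    }

The first idea (the null move) is just a recursive call at reduced depth after "passing"; good move ordering like the above is what makes alpha-beta cut off early.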

Thanks for the information.
Yes, that's what I heard in the '80s: that chess programs paradoxically weren't advancing AI or science because the methods the programs were using were proprietary secrets, so nobody outside the company could benefit by the techniques. I was hoping by now more information had become available about such commercial programs.
I know about bitboards. Those are supposed to be good for efficiency, especially if you're using a language like C/C++ that accesses bits directly, but I never needed to be so concerned about efficiency that I thought they were worth the trouble to deal with.
It's interesting about the way the evaluation parameters are done, but that makes sense: rather than involve a human expert to make a decision about what went wrong, then to code it, you can just run the program against itself for days on end, silently adjusting the parameters on its own, then hardcode those parameters when you decide it's good enough. What must make it harder is that as the game progresses, the nature of the positions changes, so the heuristics should change. For example, in the endgame you aren't trying to hide your king anymore but rather to centralize it.
Arrays to represent search trees? I can see arrays would be much faster, but how exactly would you store a tree in an array? Let's say you have the tree:
1st ply:
1. e4
2nd ply:
1. e4 e5
1. e4 c5
3rd ply:
1. e4 e5 2. Nf3
1. e4 e5 2. f4
1. e4 c5 2. Nf3
1. e4 c5 2. Nc3
Would it get stored something like this 7x4 array of strings...?
e4 xx xx xx
e4 e5 xx xx
e4 c5 xx xx
e4 e5 Nf3 xx
e4 e5 f4 xx
e4 c5 Nf3 xx
e4 c5 Nc3 xx
Or maybe...?
e4 xx xx xx
xx e5 xx xx
xx xx Nf3 xx
xx xx f4 xx
xx c5 xx xx
xx xx Nf3 xx
xx xx Nc3 xx
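Or, to be concrete about what I mean by indices instead of pointers, maybe something like this (just my guess at a layout, building that exact little tree; I gather many engines don't store the whole tree at all, since a depth-first search only ever needs the current line):

    #include <stdio.h>

    /* One way to keep a tree in a plain array: each node stores its move,
       the index of its first child, and the index of its next sibling
       (-1 means "none"). No pointers, just integer indices.              */
    typedef struct {
        const char *move;
        int first_child;
        int next_sibling;
    } Node;

    /* The example tree: 1.e4 e5 2.Nf3 / 2.f4  and  1.e4 c5 2.Nf3 / 2.Nc3 */
    static const Node tree[] = {
        /* 0 */ { "e4",  1, -1 },
        /* 1 */ { "e5",  3,  2 },
        /* 2 */ { "c5",  5, -1 },
        /* 3 */ { "Nf3", -1,  4 },
        /* 4 */ { "f4",  -1, -1 },
        /* 5 */ { "Nf3", -1,  6 },
        /* 6 */ { "Nc3", -1, -1 },
    };

    /* Print every line from the root to a leaf.                          */
    static void print_lines(int node, const char *prefix[], int depth)
    {
        for (; node != -1; node = tree[node].next_sibling) {
            prefix[depth] = tree[node].move;
            if (tree[node].first_child == -1) {       /* leaf: print line  */
                for (int i = 0; i <= depth; i++) printf("%s ", prefix[i]);
                printf("\n");
            } else {
                print_lines(tree[node].first_child, prefix, depth + 1);
            }
        }
    }

    int main(void)
    {
        const char *prefix[16];
        print_lines(0, prefix, 0);
        return 0;
    }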

Sqod, the idea of programming heuristics in is tempting, but it's probably barking up the wrong tree. You're asking the computer to duplicate your thought process, but frankly your thought process isn't terribly good. You use heuristics as a guide because you can't calculate very far, but computers are simply calculating machines.
1) Even with no heuristics at all, a brute-force computer search is already good enough to beat most humans.
2) The big problem with heuristics is assigning weights to them. How do you choose between violating "a knight on the rim is grim" and violating "don't push the pawns that protect your castled king"? (Another problem is making the heuristics flexible - as somebody mentioned, "protect your king" changes its meaning during the endgame.)
Where a heuristic might be useful is in situations where the computer is forced to evaluate a quiet position, after searching to a given depth (based on time limits) and finding no clear best move. In situations like that, imagine a competition between two position-evaluating functions. One says move A is better (based on some weighted set of heuristics, possibly informed by information from the searches that were done), the other says move B is better (based on the same computation done with different weights). The better function (the more accurate weights) will win the game. So let the program play itself with slightly different weights, and let the winning weights be selected for the next round of play, and repeat until the computer stops improving - basically let the program learn what weights to apply. In fact, the computer could even learn to identify positional patterns (i.e., heuristics that could be applied) without any hints from the programmer.
The final results won't necessarily tell you the ideal weights for various heuristics. You might get better results from other heuristics you (or the computer) hadn't considered, or by adjusting the ratio between sophistication of heuristics and time reserved for searching. Just running on a faster or slower computer might change the set of situations where the heuristics need to be considered (because the depth of coverage of a search will change), resulting in different weights or different heuristics performing better.
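Here's a toy sketch of that selection loop. Since we obviously can't play real chess games here, play_match() is a stand-in that just favors the candidate closer to some hidden "ideal" weights, with a bit of noise; in reality it would be thousands of fast self-play games.

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define NWEIGHTS 4

    /* Hidden "ideal" weights, standing in for the unknown truth the real
       process would be groping toward. Purely illustrative.              */
    static const double ideal[NWEIGHTS] = { 1.0, 3.1, 5.2, 0.4 };

    static double frand(void) { return (double)rand() / RAND_MAX; }

    /* Stand-in for "play a match between two weight sets": the set closer
       to the hidden ideal usually wins. A real tuner would play many quick
       games and count the score instead.                                  */
    static int play_match(const double *a, const double *b)
    {
        double da = 0, db = 0;
        for (int i = 0; i < NWEIGHTS; i++) {
            da += fabs(a[i] - ideal[i]);
            db += fabs(b[i] - ideal[i]);
        }
        return da + 0.3 * (frand() - 0.5) < db;       /* 1 if a "wins"      */
    }

    int main(void)
    {
        double best[NWEIGHTS] = { 1.0, 1.0, 1.0, 1.0 };   /* starting guess */

        for (int round = 0; round < 2000; round++) {
            double cand[NWEIGHTS];
            for (int i = 0; i < NWEIGHTS; i++)            /* small mutation */
                cand[i] = best[i] + 0.2 * (frand() - 0.5);
            if (play_match(cand, best))                   /* winner survives */
                for (int i = 0; i < NWEIGHTS; i++) best[i] = cand[i];
        }

        printf("tuned weights:");
        for (int i = 0; i < NWEIGHTS; i++) printf(" %.2f", best[i]);
        printf("\n");
        return 0;
    }

The point is only the loop structure: mutate, compare by playing, keep the winner, repeat.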

nartreb,
I don't believe you can omit heuristics completely. As HGMuller said, you need at least piece values and centralization heuristics, and I'd be surprised to see decent play from any program without a few more heuristics, though admittedly I haven't experimented with such modification of settings. He also mentioned that evaluation parameters (= heuristics) are determined by commercial vendors that run millions of games. Companies wouldn't be spending that kind of time and effort if heuristics weren't important.
As for exactly how heuristics are handled by a program, there are different ways. Signature tables were very effective in early checkers and chess programs, for example. Another way I imagine is a numerical array where each element is a positive or negative weight corresponding to one heuristic's estimate, so that the task to be solved is optimizing those weights so that their interaction with specific positions (probably represented as vectors that get multiplied by that heuristics matrix) gives the maximum possible value. Then it's just a math problem.
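In code, that numerical-array idea would amount to little more than a dot product; something like this (the features and weights are made up):

    #include <stdio.h>

    #define NFEATURES 4

    /* Made-up feature values for one position, from White's point of view:
       material difference (in pawns), passed pawns, doubled pawns, king
       exposure. The weights are what a tuning process would optimize.      */
    static const double features[NFEATURES] = { 1.0, 2.0, -1.0, -3.0 };
    static const double weights[NFEATURES]  = { 1.0, 0.5,  0.25, 0.1 };

    double evaluate(const double *f, const double *w)
    {
        double score = 0.0;
        for (int i = 0; i < NFEATURES; i++)
            score += f[i] * w[i];          /* weighted sum of heuristics    */
        return score;
    }

    int main(void)
    {
        printf("score = %.2f pawns\n", evaluate(features, weights));
        return 0;
    }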
At any rate, we can see now that the OP wasn't really asking about computer programs, but rather how GMs can be so good at calculating so many moves ahead when the position requires it. The answer is that GMs still do tree search, but they do it differently than computer programs: they don't search as many possibilities, they match against previously learned patterns, they reason at more abstract levels than the computer can, and they probably use heuristics more than computers do. Also, you have to realize that most chess positions don't call for that kind of deep search, only positions with certain characteristics like high pressure, unusual piece alignments, overworked units, etc.
GM reasoning processes are probably very different than computer processes altogether. For example, when I see a chess puzzle where there is an empty fianchetto hole where the side to mate has a bishop and queen on the color of the diagonal of that hole, I start looking for how to move those two pieces into their typical mating positions. In doing so, I cut out huge numbers of possible moves, and I'm not doing forward search, but rather reverse search using goals. A computer would merely try every possible move, even moves that obviously have no bearing on the situation, and do it in a forward search manner, without considering goals, abstracted knowledge, stored knowledge, associative memory, or anything else. If we knew more about human reasoning processes, we might be able to put those into a computer, whereupon it would be *really* good since it would combine extremely rapid search with extreme efficiency in pruning, but we don't know how to do that yet.
What Sqod said.