Can a computer learn from its mistakes to increase its level?

pdela

AI – Computer Learns From Mistakes – Becomes ‘Chess GrandMaster’


A computer program called Giraffe has taught itself to become an international grandmaster after playing against itself for just 72 hours. The software was able to learn from its own mistakes as it played against itself, and it can now beat most humans who have spent a lifetime learning the game.

The software plays against itself and perfects its moves by detecting previous mistakes. It can also find alternative move combinations for every position.

To be crowned “Grandmaster” in chess, a player must achieve a rating of more than 2,500. The world’s current number one, Magnus Carlsen, is rated 2,853.

It’s been nearly 20 years since IBM’s Deep Blue supercomputer beat the reigning world chess champion, Garry Kasparov, for the first time under standard tournament rules. Since then, chess-playing computer systems have become considerably more powerful, leaving the best humans little chance even against a modern chess engine running on a smartphone.

Garry Kasparov playing against Deep Blue, 1997. Source: Reuters

However, while computers have become faster, the way chess engines work has not changed. Their power depends on brute force: the process of searching through all possible future moves to find the most effective next one.

Of course, no human can match that or come anywhere close. While IBM’s Deep Blue was searching some 200 million positions per second, Kasparov was probably searching no more than five per second. And yet he performed at essentially the same level. Clearly, humans have a trick up their sleeve that computers don’t (yet).

This trick lies in evaluating chess positions and narrowing down the most profitable avenues of search. That dramatically simplifies the computational task because it prunes the tree of all possible moves down to only a few branches.

Computers have never been good at this, but that is changing thanks to the work of Matthew Lai at Imperial College London. Lai has created an artificial intelligence machine, which he calls Giraffe, that has taught itself to play chess by evaluating positions much more like humans do, in a completely different way from conventional chess engines.

Lai generated his dataset by randomly choosing five million positions from a database of computer chess games. He then created greater variety by adding a random legal move to each position before using it for training. In total, he generated 175 million positions in this way.

The usual way of training these machines is to manually evaluate every position and use this information to teach the machine to recognize which positions are strong and which are weak.

However, this would be an enormous task for 175 million positions. It could be carried out by another chess engine, but Lai’s objective was more ambitious: he wanted the machine to teach itself.

Instead, he used a bootstrapping approach in which Giraffe played against itself with the aim of improving its prediction of its own evaluation of future positions. That works because there are fixed reference points that ultimately determine the value of a position: whether the game is later won, lost or drawn.
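The core of this bootstrapping idea is temporal-difference learning: each position’s evaluation is nudged toward the evaluation of the position that follows it, anchored at the end of the game by the actual result. The sketch below uses an invented linear evaluator purely for illustration; Giraffe itself trains a deep neural network.

```python
# Minimal sketch of bootstrapped (temporal-difference) evaluation
# learning. The feature vectors and linear evaluator are invented
# for illustration; Giraffe uses a deep neural network instead.

def evaluate(weights, features):
    return sum(w * f for w, f in zip(weights, features))

def td_update(weights, positions, result, lr=0.01):
    # positions: feature vectors for one game, in move order.
    # Target for each position = evaluation of the NEXT position;
    # the final position's target is the actual game result.
    targets = [evaluate(weights, p) for p in positions[1:]] + [result]
    for features, target in zip(positions, targets):
        error = target - evaluate(weights, features)
        for i, f in enumerate(features):
            weights[i] += lr * error * f
    return weights

# Repeated self-play games pull the evaluation toward the outcome:
weights = [0.0]
for _ in range(500):
    weights = td_update(weights, [[1.0], [1.0], [1.0]], result=1.0)
print(round(evaluate(weights, [1.0]), 2))  # approaches the result, 1.0
```

No hand-labelled position scores are needed anywhere: the game outcomes are the only ground truth, which is what let Lai train on 175 million positions without evaluating them manually.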

Lai says this probabilistic approach predicts the best move 46 percent of the time and places the best move in its top three 70 percent of the time, so the computer doesn’t have to bother with most of the other moves.
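Back-of-the-envelope arithmetic shows why narrowing to a few candidate moves matters so much. The figure of roughly 35 legal moves per chess position is a common textbook estimate, not a number from the article:

```python
# Game trees grow as branching_factor ** depth, so trimming the
# branching factor pays off exponentially with search depth.
full = 35 ** 6    # all legal moves (~35 on average), six plies deep
pruned = 3 ** 6   # only the top-3 candidate moves at each ply
print(full // pruned)  # → 2521626, a ~2.5-million-fold reduction
```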

Source: http://www.lkessler.com/brutefor.shtml

In a report from Daily Mail, Mr. Lai admitted that his software was not as good as the best chess software.  “Giraffe is able to play at the level of an FIDE International Master on a modern mainstream PC,” says Lai. By comparison, the top engines play at super-Grandmaster level.

Source – Giraffe: Using Deep Reinforcement Learning to Play Chess 

Pulpofeira

Seems like not, Computer Hard still allows you to play it again and again.

pdela
Pulpofeira wrote:

Seems like not, Computer Hard still allows you to play it again and again.

Yeah, Computer-Hard let me take advantage of the same blunder forever

Sqod
pdela wrote:

In a report from Daily Mail, Mr. Lai admitted that his software was not as good as the best chess software.  “Giraffe is able to play at the level of an FIDE International Master on a modern mainstream PC,” says Lai. By comparison, the top engines play at super-Grandmaster level.

Then why does the article title say "grandmaster" if the program's rating is only master? The article never even mentions the program's rating. Thanks for posting this, but really this is a stupid, sensationalistic article that shows great naivety about A.I. and machine learning, which the best researchers in the world still don't understand and can't program, at least not for human-style learning.

mutualblundersociety

Possibly the software can learn to improve its book openings if it doesn't just trot out moves but also analyzes them.

DannyReed123

good job

EscherehcsE
Sqod wrote:
pdela wrote:

In a report from Daily Mail, Mr. Lai admitted that his software was not as good as the best chess software.  “Giraffe is able to play at the level of an FIDE International Master on a modern mainstream PC,” says Lai. By comparison, the top engines play at super-Grandmaster level.

Then why does the article title say "grandmaster" if the program's rating is only master? <snip>...but really this is a stupid, sensationalistic article...

I think you just answered your own question... Laughing

To give a serious answer, in another forum, Mr. Lai stated that his paper only said that the evaluation function is at the level of top programs. The writer of the (Technology Review) article apparently misunderstood and thought he meant the entire program. (Mr. Lai wasn't consulted for input to the article.)

pdela

@sqod

Maybe you prefer to read this, enjoy it

http://arxiv.org/abs/1509.01549

Benedictine
Sqod wrote:
pdela wrote:

In a report from Daily Mail, Mr. Lai admitted that his software was not as good as the best chess software.  “Giraffe is able to play at the level of an FIDE International Master on a modern mainstream PC,” says Lai. By comparison, the top engines play at super-Grandmaster level.

Then why does the article title say "grandmaster" if the program's rating is only master? The article never even mentions the program's rating. Thanks for posting this, but really this is a stupid, sensationalistic article that shows great naivety about A.I. and machine learning, which the best researchers in the world still don't understand and can't program.

This is about the standard for a Daily Mail article. Smile

EscherehcsE

If anyone wants to take Giraffe for a spin, have at it:

http://talkchess.com/forum/viewtopic.php?t=57558&postdays=0&postorder=asc&topic_view=&start=0

(The first link in the first post, engine version 20150908.)

Ziryab
pdela wrote:

However, while computers have become faster, the way chess engines work has not changed. Their power depends on brute force: the process of searching through all possible future moves to find the most effective next one.

Deep Blue was brute force. The engines that I run on my laptop look at fewer positions, but play stronger. There are perhaps a dozen or so pruning methods that programmers have developed that lead to stronger performance than brute force.

A few years ago, Hiarcs 12 could beat Houdini 1.5 looking at half the number of positions that Houdini saw. Hiarcs had better positional algorithms. Then Stockfish rose up and beat all the commercial engines. 

 

See https://chessprogramming.wikispaces.com/Pruning

As for learning, this too is built into the programming of certain engines. Several years ago I played the same set position several times against the same engine. To achieve the same result that I achieved the first time, I had to play better. The engine improved its ability to set problems in my path.

LoekBergman

@pdela: you might be interested in some comments in this thread:

http://www.chess.com/forum/view/general/how-good-do-chess-engines-play-chess?page=8

The same article was used in this thread I started.

The claims in that article are neither very precise nor the first of their kind. And as Ziryab already showed, and as can be found on numerous other pages as well, its description of how established chess engines work is incorrect. It is about as precise as an American senator claiming at election time that he or she does not belong to the establishment.

