"Moves" do not have objective strengths. This system would be inherently awful for that reason alone.
New chess rating system based on single moves evaluation

@Aquarius - I have to ask - why is 1 d4 only a 1200 move? Is it weak?
With regard to this as a concept: if the game is relatively simple or one-sided, then it is relatively easy to play moves that are more or less the strongest available, and hence players would score highly under those circumstances. Conversely, in difficult positions any player would struggle to consistently play the strongest moves.

I believe Dr. Ken Regan uses a system somewhat similar to this for his anti-cheating program, although he doesn't really try to come up with an actual rating for each move. It was in the June 2014 issue of Chess Life. Great article...
And you can read the article here:
http://www.uschess.org/content/view/12677/763/
Regan's web site with supplementary information concerning the article:
http://www.cse.buffalo.edu/~regan/chess/

@Aquarius - I have to ask - why is 1 d4 only a 1200 move? Is it weak?
With regard to this as a concept: if the game is relatively simple or one-sided, then it is relatively easy to play moves that are more or less the strongest available, and hence players would score highly under those circumstances. Conversely, in difficult positions any player would struggle to consistently play the strongest moves.
1200 means baseline. The system I was using tracked the baseline as steadily increasing. I abandoned it because it was tedious to track and probably wasn't doing what it was supposed to.

So after a good move the player's score goes up, whereas after a bad move it goes down, is that right? If so, I don't see why White's score goes down on moves 6, 10 or 12.

"Moves" do not have objective strengths. This system would be inherently awful for that reason alone.
Well, they do actually. Tactics Trainer is based on this, and every tactics problem has a rating based on the success rate, time taken and rating of its solvers. If you don't believe this, try to consistently find the right first move in tactics problems rated 1000 points above your rating within the required time. You don't have to tell me how it went :-)
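(For anyone curious how a puzzle can carry a rating at all, below is a minimal sketch of a standard Elo-style update between a solver and a puzzle. The K-factor and the formula are the textbook Elo ones; the exact scheme Tactics Trainer uses, including how time taken is weighted, isn't public, so treat this purely as an illustration.)

```python
# Minimal sketch of an Elo-style update between a solver and a tactics puzzle.
# The K-factor and expected-score formula are the standard Elo ones; the exact
# Tactics Trainer scheme (including time taken) is an unknown, so this is only
# an illustration of "puzzle ratings from success rates".

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats (solves) B under the logistic Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(solver: float, puzzle: float, solved: bool, k: float = 32.0) -> tuple[float, float]:
    """Return (new_solver_rating, new_puzzle_rating) after one attempt."""
    exp = expected_score(solver, puzzle)
    score = 1.0 if solved else 0.0
    solver_new = solver + k * (score - exp)
    puzzle_new = puzzle + k * (exp - score)   # the puzzle "wins" when the solver fails
    return solver_new, puzzle_new

if __name__ == "__main__":
    # A 1400 solver failing a puzzle rated 1000 points higher barely moves either rating.
    print(update(1400, 2400, solved=False))
```

Over many attempts the puzzle's rating settles at a level where stronger solvers usually succeed and weaker ones usually fail, which is the sense in which a single position (and its key move) ends up with an objective strength.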
To continue with clarification, check out the graphs from the following link:
You can see the signature graphs produced by engine evaluation of games by rookies, intermediate players, and one of Fischer's games. This kind of data is not gathered or stored automatically on chess.com for every game played (it is mostly done on demand or manually), but if this data were collected, analyzed by a 4000 ELO engine, compared with large databases of chess positions, and evaluated in a similar way to chess tactics problems, we could get the Chess Business Intelligence I suggested previously.
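(To make the "each game analyzed by an engine, move by move" part concrete, here is a minimal sketch using the python-chess library and a locally installed UCI engine such as Stockfish. The engine path and search depth are assumptions; the output is the per-move centipawn losses that graphs like the ones above are built from.)

```python
import chess
import chess.engine
import chess.pgn

# Minimal sketch: per-move centipawn loss for one game, using python-chess
# and a local UCI engine. The engine path and search depth are assumptions.
ENGINE_PATH = "stockfish"
DEPTH = 18

def centipawn_losses(pgn_path: str) -> list[int]:
    losses = []
    with open(pgn_path) as f:
        game = chess.pgn.read_game(f)
    board = game.board()
    with chess.engine.SimpleEngine.popen_uci(ENGINE_PATH) as engine:
        for move in game.mainline_moves():
            # Evaluation (from the side to move) before the move is played...
            before = engine.analyse(board, chess.engine.Limit(depth=DEPTH))
            best = before["score"].relative.score(mate_score=10000)
            board.push(move)
            # ...and after it, negated back to the mover's point of view.
            after = engine.analyse(board, chess.engine.Limit(depth=DEPTH))
            played = -after["score"].relative.score(mate_score=10000)
            losses.append(max(0, best - played))
    return losses

if __name__ == "__main__":
    # White's and Black's moves alternate in the returned list.
    print(centipawn_losses("my_game.pgn"))
```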

Your game annotated with proposed system
Aquarius, that was a blitz game, what did you expect?
Anyway, I would mop the floor with anyone rated 400 points below me.

Yes, I couldn't work out Aquarius's comment about the first few moves of your game being at the 1100 level either, even after I went through the moves with an engine.

Your game annotated with proposed system
Aquarius, that was a blitz game, what did you expect?
Anyway, I would mop the floor with anyone rated 400 points below me.
Is that a challenge? :)
If so what time control? ;)

The problem is, this would just come out as a centipawn loss graph.
Which is not very useful for determining strength, at least not nearly as useful as the system we have now.
The proposed system of measuring advantage doesn't work either. Suppose my opponent makes very bad moves: then it would be very easy to get a big advantage, wouldn't it? You might say "aha, but you can modify how much the advantage counts based on the opponent's rating"
...but then you are back to square one, because basing things on rating is what you are trying to avoid.
My other issue is that if you played something like the Scotch Gambit, the system would give you a penalty and dismiss you as playing bad moves right out of the opening. This isn't really fair: while the Scotch Gambit is shown to give up White's advantage with perfect play from Black, it certainly doesn't mean only amateurs play it.
Large numbers of games have strength, individual moves don't.

The problem is, this would just come out as a centipawn loss graph.
Which is not very useful for determining strength, at least not nearly as useful as the system we have now.
The proposed system of measuring advantage doesn't work either. Suppose my opponent makes very bad moves: then it would be very easy to get a big advantage, wouldn't it? You might say "aha, but you can modify how much the advantage counts based on the opponent's rating"
...but then you are back to square one, because basing things on rating is what you are trying to avoid.
My other issue is that if you played something like the Scotch Gambit, the system would give you a penalty and dismiss you as playing bad moves right out of the opening. This isn't really fair: while the Scotch Gambit is shown to give up White's advantage with perfect play from Black, it certainly doesn't mean only amateurs play it.
Large numbers of games have strength, individual moves don't.
Hi xman,
I was not suggesting replacing the existing system, but adding more information, so we could have greater opportunities for analysis, learning, choosing which opponents to play, ...
Wouldn't you like to have your games rated so you could know which of your games are your best ones and which are junk? To have your games rated, you need to have them analyzed (not by a novice who can't rate the Scotch Gambit, but by a 4000 ELO engine) move by move and compared with other games played by you and by other people within your rating domain.
Imagine if you could have advanced search options for opponents based not only on their overall rating score (whether they won, lost or drew against someone), but also on ratings that include info on each and every move made in their games.
This way, you would get your overall rating for each game type (blitz, bullet, standard, or with more custom fine-tuning, like separate ratings for 5|0 blitz and 3|0 blitz), BUT you would also get a rating for each phase of the game, like the opening, middlegame and ending, or whatever criteria you set, by move number in the game or by the number and type of pieces left on the board.
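(A minimal sketch of how such per-phase numbers could fall out of the per-move analysis shown earlier: the cut-offs at moves 10 and 30 below are arbitrary illustrative assumptions, not part of the proposal, which could just as well define phases by the material left on the board.)

```python
# Minimal sketch: split per-move centipawn losses into game phases and report
# an average per phase. The cut-offs (move 10 and move 30) are arbitrary
# illustrative assumptions. For a single player's numbers you would first
# take every second entry of the list (that player's moves only).

from statistics import mean

def phase_averages(losses: list[int],
                   opening_end: int = 10,
                   middlegame_end: int = 30) -> dict[str, float | None]:
    phases = {
        "opening": losses[:opening_end],
        "middlegame": losses[opening_end:middlegame_end],
        "endgame": losses[middlegame_end:],
    }
    return {name: (mean(vals) if vals else None) for name, vals in phases.items()}

if __name__ == "__main__":
    # e.g. the list returned by centipawn_losses() in the earlier sketch
    sample = [5, 0, 12, 3, 0, 8, 40, 2, 0, 6, 15, 90, 0, 4, 7]
    print(phase_averages(sample))
```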
With all this information you could:
- Pinpoint your weaknesses in the game by comparing them with the average of other players, giving you an excellent basis for learning and improving your chess skills.
- Eliminate cheaters, or have a tool that lets you avoid players with suspicious ratings for parts of their games.
- Choose to play against the player type that is perfect for your training schedule or for your playing style:
  - For example, you could choose to play against someone rated around 1800 ELO overall in 3|0 blitz, but you would also have additional info about that same player, like whether he is rated, say, 1600 ELO (below average) for move strength in openings, or the first 10 moves of the game, above average or say 1950 ELO in the middlegame, and 2257 ELO in endings.
  - Or you could make a custom search that would only choose titled players rated 400+ ELO above you overall, but those that tend to make big blunders ("from winning to losing in 1 move" :-)) in 10% of their games, and hope to get lucky and win a trophy that gives you bragging rights about how you destroyed a much stronger player :-)
- You would know in which games you played near-perfect chess from start to finish; otherwise you only have a good or bad feeling about how well you played, and that can be very far from an objective evaluation. The final result alone is a poor indicator of how good your game was.
- You would know which human player plays the best chess moves in the world for each part of the game, and have separate new trophy categories.
To get this type of rating on sites like chess.com, you would need each and every game to be evaluated by a God-level engine rated, say, 4000+ ELO (a 3400+ ELO engine would have to do these days :-)), comparing those evaluations with moves made by other human players and determining a rating score for each and every move.
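(The last step, turning engine evaluations into a rating per game or per move, would need some calibration against players of known strength. The sketch below is one hypothetical way to do that: fit a simple curve from average centipawn loss to rating on made-up sample data, then read a new game off the curve. Both the straight-line model and the numbers are illustrative assumptions, not a real model.)

```python
# Hypothetical calibration: fit a simple curve from average centipawn loss
# (ACPL) to rating, using games whose players' ratings are already known,
# then estimate a "move rating" for a new game from its ACPL. The sample
# data and the choice of a straight-line fit are illustrative assumptions.

import numpy as np

# (ACPL, known player rating) pairs from engine-analyzed games -- made up here.
known = np.array([
    (95.0, 1100), (70.0, 1400), (55.0, 1600),
    (40.0, 1850), (28.0, 2100), (18.0, 2400),
])

acpl, rating = known[:, 0], known[:, 1]
slope, intercept = np.polyfit(acpl, rating, 1)   # least-squares straight line

def estimate_move_rating(game_acpl: float) -> float:
    """Very rough rating estimate for a single game's average centipawn loss."""
    return slope * game_acpl + intercept

if __name__ == "__main__":
    print(round(estimate_move_rating(33.0)))     # ACPL of 33 -> roughly 2000 under this toy fit
```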
What do you think about this?