
"Game Reports" of the Silicon Beast: Engage, Question, Challenge
I came across an interesting video by IM Andras Toth ("Game Report - Does It Help?") in which he takes up the question of how effective game reports actually are. Chess.com spits out an insta-report after each game (or however many your membership will permit) with a bunch of !'s, ??'s, ?!'s, and talking-toy responses. Lichess.org does a quantitative version that doesn't even pretend to be (awfully) qualitative, whereas DecodeChess does something similar to chess.com -- though it came earlier, was probably slightly better, and I don't know where it's gone. This is the evolving field of "explainable AI." AI, machine learning, neural networks...this is the stuff of AlphaZero, the greatest Silicon Beast unleashed thus far. But explainable AI is a whole different dimension...
It's one thing to play like a beast; it's quite another to communicate like one. Whereas previously computers could only play as well as their human commanders, game reviews / explainable AI have us in a slightly more evolved version of that same predicament. The program can't intrinsically explain its moves the way a GM would. Even if a GM could hypothetically imbue the bot with his own linguistic tendencies, the computer would still need to attach meaning to those explanations and figure out when they apply to each novel, nuanced position.
Nuance is the operative word here; humans are very good at explaining it to other humans, but computers simply haven't solved this richly complex qualitative problem. It's like solving the problem Google Translate has grappled with, combined with ascertaining how the computer "thinks" and converting that into comprehensible language that applies unambiguously across an unfathomable number of highly nuanced positions, all while delivering meaningful didactic value.... Keep in mind, too, that although translation admits myriad nuanced renderings, and it's quite difficult to pass the Turing Test there while channeling the faculties of a sophisticated human translator, the universe of possible moves and inherent nuances is exponentially greater in a game of chess than on a translated page -- and the bot must not only explain them convincingly and comprehensively, but do so in a way that delivers value to the bewildered student that's even in the ballpark of a reasonably good chess coach. And if you think that's a mouthful, leave it to Grammarly to set us straight. Again, just a slightly easier problem for the bot to solve!
Clearly the strongest GM is far weaker, in the aggregate, than AlphaZero, but an explanation as good as a strong GM's, or even that of a solid NM or FM who's a great coach, would at this point just be amazing. We're far, far from that reality. Instead, the language the bot spews at you is grossly oversimplified and sometimes just plain confusing. Why is that a blunder?? There's also little distinction between the first, second, and third best moves (I imagine the latter is a favorite of the devious cheaters out there). Often the human move would be the second best, and that would do just fine. (One of my students mentioned that some of the computer moves are just plain bizarre. Those are the ones that really throw human "consumers" of these reports off. As I suggest below and in the video, challenge those silicon moves with less ridiculous ones that look reasonable to the human eye and see how the engine reacts; you might be surprised that your move is just fine after all -- in your own ultimate judgment, not the deity's +.5(!!) validation.) Certainly no explanation for those fine distinctions is provided. No positional analysis. No discussion of planning. No narrative of the game, which is essential for human understanding.
Ultimately, here's my beef with those who say that computers invariably make us stronger: if abused, engines make us really good at analyzing one particular line or at finding one incredible move, yet that skill is often completely divorced from the broader narrative and understanding of the game. So we can end up becoming, to apply this to careerism or academia, hyper-specialists who have no understanding of how other fields affect our own and no notion of the value of interdisciplinary thinking.
So what's the solution to all of these problems? Well, for one thing, coaches aren't out of business. This is a point Andras makes (not for his own sake, he notes, but because it's simply true). Beyond a coach, as I suggest in the video, if you can recognize the area where you need work (e.g., an isolated pawn, a pawn sac for development, the nuances of the Nimzo-Indian structure, etc.) based on the mix of the computer's nit-picking and your own self-critique, you can also independently direct your self-learning by getting a book, reading an article, or watching a YouTube/Chessable video on the given topic. Plus, you can engage with your peers (i.e., kibitz at your local chess club).
I think the suggestion I'd emphasize most is to grapple with the computer lines yourself -- don't just allow the engine to spoon-feed you as if it's Caissa herself. Be curious. When you see a weird suggestion, ask yourself, "Why is that? What if my opponent makes this reasonable reply instead?" Curiosity is at the core of chess mastery. Chess is about the search for the truth of the position. You have to bring that curiosity with you into all aspects of your training and play. It'll inoculate you against becoming an engine zombie. (Think of it like sitting on your couch watching movies all day, getting wasted, and passing out...or sitting on your couch watching movies all day, chewing on ginseng Larry Christensen style, then writing a really good, self-congratulatory post on Reddit about them.)
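For the tinkerers in the group, here's a minimal sketch of what "challenging the engine" can look like if you script it yourself, using the python-chess library with a locally installed Stockfish binary. None of this comes from Andras's video; the STOCKFISH_PATH value and the compare_moves helper are just placeholders of my own. The idea is simply to see the engine's evaluation of its top choice, then force it to evaluate your candidate move in the same position and compare.

```python
# Rough sketch, assuming python-chess is installed and a Stockfish binary is on your machine.
import chess
import chess.engine

STOCKFISH_PATH = "stockfish"  # adjust to wherever your engine binary lives

def compare_moves(fen: str, my_move_san: str, depth: int = 20) -> None:
    """Print the engine's top choice next to your own candidate move."""
    board = chess.Board(fen)
    engine = chess.engine.SimpleEngine.popen_uci(STOCKFISH_PATH)
    try:
        limit = chess.engine.Limit(depth=depth)

        # The engine's preferred move and its evaluation (from White's point of view).
        top = engine.analyse(board, limit)
        top_move = top["pv"][0]
        print(f"Engine's choice: {board.san(top_move)}  eval {top['score'].white()}")

        # Now force the engine to evaluate *your* move instead of its own.
        my_move = board.parse_san(my_move_san)
        mine = engine.analyse(board, limit, root_moves=[my_move])
        print(f"Your candidate:  {my_move_san}  eval {mine['score'].white()}")
    finally:
        engine.quit()

# Example: is 1.d4 really any worse than whatever the engine prefers?
compare_moves(chess.STARTING_FEN, "d4")
```

If your move scores within a few centipawns of the engine's pick, you've answered your own question: the report's ?! says more about the report than about your chess.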
The recording below on this topic is from my adult international class, a free resource that I offer every Saturday at noon Eastern. I love the discussion among my awesome students here. Check it out and sign up for the weekly email with the info at chessprofessor.net.