
An Interview with Dr. Ken Regan, IM and Anti-Cheating Expert
Dr. Ken Regan, an International Master, was kind enough to sit down with me over the summer for an interview. We discussed his research and perspectives as they relate to topics such as computers, cheating, and performance in chess. Unfortunately, due to a hectic move from NY to SC, it took me several months to finish the transcription. Dr. Regan has spoken about his research and life in many other venues, such as Chess Life and NPR, and The New York Times has also covered his work. In this interview, we discussed some topics related to those pieces, but I tried not to duplicate material from earlier interviews.
I have had the pleasure of knowing Dr. Regan for the past several years through the chess community in Buffalo, NY. In addition to finding his research fascinating, I have found Ken to be an excellent ambassador of chess. He is always welcoming to travelling chess players of all skill levels, and he has been very supportive of local chess players and organizers. My thanks to Ken for sharing his time and perspectives!
Audio (apologies for the poor audio and interviewing quality) is available on SoundCloud (Part 1 and Part 2).
____________________INTERVIEW____________________
Sam: I'm happy to be here with you, Ken. I should probably introduce you - You're an International Master and a Doctor of Computer Science [correction - Ken's doctorate is in mathematics, but he currently teaches computer science.]. You were a child prodigy or a teen prodigy...
Ken: Yes, I was one of many who were the youngest master since Bobby Fischer in the 1970's.
Sam: Right! And you had success at the Student Olympiad or the "Student Olympics".
Ken: In 1976, I was the only non-Russian to win a gold medal at the student Olympiad. Incidentally, I beat everybody I played, but lost a game, and it was a played game. How could that happen?
Sam: ...I don't know.
Ken: Ah, the answer is that two teams from our qualifying group advanced to the finals so I played two games against Yehuda Gruenfeld of Israel, beat him once and he beat me the other time. So I did beat everybody I played.
Sam: Right, and I know that Lone Pine in 77 was a big success for you as well.
Ken: That's right - a last-round win in an endgame against candidate Laszlo Szabo, where, however, the computer shocked me by revealing that what I thought had been a good move was in fact a roll of the dice. It enabled me to win, but I could have lost.
Sam: Was that new analysis when computers came along or was that a revelation of the time period?
Ken: Oh, it was when computers were already good - about, uh, six years ago that I noticed this. I was going over the endgame, and I've long believed that it was tantamount to solving a composed study over the board, but my key move actually loses.
Sam: Out of curiosity, does that affect the game for you? Do you appreciate it more or less because of the computer's insights?
Ken: Well... yea... that's a difficult question. In some things, I think the computer deepens it. I really don't know. I hear through the grapevine that the computer's ability to find flaws in what are humanly brilliant combinations does take a lot of the luster off them.
Sam: So I have a few questions here that are sort of general questions - related to your research just a little bit, because maybe your perspective is a little different. It seems that your relationship with chess is really unique. You're both a really strong competitive player and you are also able to mix research with chess. That's not entirely unique, but it's close to unique in the chess world. I was wondering how you were introduced to the game and what it is about chess that resonated with you in both a competitive and a research vein?
Ken: Ok, well a broad question... I was introduced to the game when I saw my father playing my uncle at age 5 and wanted to learn, and my dad and I played a lot of games. It took me 6 months. I used to like to give up my two minor pieces for a rook, but eventually I won, and then at age 10 I discovered that there was such a thing as a chess tournament. It was the NY area Under 12 championship, and I won it in a walkover because I was the only entrant. So Bill Goichberg put me in the Under 16, and I held my own with a score of 4 and 1, which was a tie for second.
Sam: So in the same tournament, you won one section and scored second in another.
Ken: That's right. Yes. So this was in August 1970, just before my birthday. And then I got good fast and by two summers later during the Fischer-Spassky match I was already briefly a master - it was never an official rating - and I was one of the commentators on the games. Bill Goichberg brought me up to Albany twice for two of the games and that was great fun. So anyway, I loved the game. There's a lot about it that appealed to the inner workings of my mind. I like the fact that it's not just coldly mathematical. There's color and vitality to the pieces and how they interact. There's a lot of literature and lore to the game as well, but I never intended to play it professionally. I made up my mind about that by age 13. I wanted to go to Princeton and be a double math/physics major. I went to Princeton and was math but more on the side of computer science. Although...now I've just uh... what's on the blackboard is related to a textbook on quantum computing which I've just been correcting galleys of with Professor Lipton. So I'm getting my physics in at last. So while I was an academic - I've been on the faculty here for almost 25 years, I received lots of interest from people wanting to do computer chess, meaning write a computer chess program. I never wanted to do something that's “me-too”. I didn't think that would be a real use of my time. So I always said no. From 1996 to 2006, maybe I thought about chess relatively little except for the Kasparov vs. the World match which is a story unto itself.
Sam: You did an extensive amount of analysis on the match.
Ken: Right, I meant from 1986 to 1996 is where I didn't play and never thought about the game. That was my time to publish or perish, get tenure, do postdoc, do work in computational theory which I am still doing very actively. So I had all that going on.
Sam: So this is completely unrelated to your research, but as a strong player, you have an interesting approach to the opening. You play sort of offbeat systems. You aren't going to be found 26 moves deep in the Dragon or something like that.
Ken: Right.
Sam: How would you characterize your approach to the opening and why is it your approach to the opening?
Ken: No time to read a book, um, and I play a few things I'm comfortable with. Mind you I'm on the “Chess Variants” website as advocating something called “Baseline chess with Fischer Rules” which basically takes David Bronstein's idea of allowing people to arrange their pieces how they see fit on the back row, combines it with Bobby Fischer's brilliant castling rule, and I think that eventually, maybe we're talking 20 or 30 years, well more 30 or 40 years, that's how competitive chess should be played. The opening lore is getting to the point that you can find Nigel Short saying that the Sveshnikov Sicilian is a drawing line because it has been worked out so much.
Sam: It was only, I think, about a week ago during the Norway tournament that I heard Jan Gustafsson during commentary saying that about the Botvinnik variation, which is of course remarkably complex, but he just sees it as a drawing line, and he was wondering why people aren't playing it as Black to draw.
Ken: Ok, well goodness, yea. These things take time, and I think they will take time, but I think with baseline chess with Fischer rules you are multiplying not just by 960 as in Fischer Random chess. You're multiplying by the square of 960 divided by a little bit because for instance I'm toying with the idea of requiring at least one rook to be in the corner, to cut down on the automatic fianchettoing of bishops. Although someone pointed out that chess programs think the position is about two tenths of a pawn worse, other things being equal, with the bishops in the corner. I'm not sure if chess players would agree.
Sam: That's an interesting question. I don't know. Actually, let's switch to a different question because you've just touched on one. Your research as you've just mentioned deals with complexity. Obviously chess is a situation in which you are dealing with problems of complexity in many, many cases. You've just raised some. There's been a lot of talk that chess is being exhausted. It kind of ebbs and flows. When there's some super-tournaments where there are a lot of draws, chess is boring. And then there's a really great tournament or two and we feel better about it. So do you foresee a point where chess players have solved chess to such an extent that it's almost impossible to win at the highest levels because of the width of the drawing margin in chess?
Ken: Yea, I honestly don't know. I mean Magnus Carlsen maybe has found a way to push that specter back quite a bit. One thing that I am researching is the notion of creative challenge in chess. That's a very difficult and deep topic, but it may be that if it becomes established then we'll reward players for seeking challenges. That might preserve the vitality of the game that several players are instilling.
Sam: That's interesting. It's been really interesting to me in the past ten years that Carlsen's style has proven to be really effective at creating challenge when you typically think of a player like Aronian or Tal as being the definitive sort of challenge creator at the board. I think...
Ken: Yea, I tried to capture it with the word "nettlesome". Now, that has two meanings. One is just someone who is grumpy and uncooperative, but the other meaning of nettlesome, I would say, is trappy, or making things difficult for you to get through. Yea, strewing golden apples and briars in your path. So that's what I meant, and I originally saw that because I noticed that opponents played with markedly higher error against Carlsen, and the question that arises is: is that because of Carlsen fear, the way there used to be Fischer fear, or is it mathematically quantifiable as a result of Carlsen generating positions that have a higher curve of expected error in the opponent's moves? I haven't even yet solved, however, the analogue of the famous "Brachistochrone" problem in physics. The Brachistochrone problem is to find the arc that makes a ball roll down it and get to the bottom, with a certain horizontal travel, in the fastest time. You're not dropping it straight down. It has to go horizontally too. So the first thing you might think of would be to roll it down a diagonal plane, but that doesn't accelerate it fast enough. You want something a little steeper than a diagonal, so that it drops right away, picks up steam, and then converts that speed as it goes around. Isaac Newton famously solved the problem independently of one of the Bernoullis, I think. The mathematics of the same problem in chess are very analogous. You have a chess position, and the question is what drop-off in quality of moves... If you have a lot of moves that are nearly good, a player is going to find one of them and not be too much at a disadvantage. If you have a lot of moves that are obviously bad, then the player's going to find the good one. So if you have a few good moves and then a few moves that aren't so good, what's the right curve? What's the distribution that maximizes the expected error?
I haven't even solved it mathematically, but I'm wondering if Magnus Carlsen and some other people are finding a way to solve it in large scale over hundreds of positions.
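[Editorial aside: Ken's question about which distribution of move qualities maximizes a player's expected error can be sketched numerically. The toy model below is not Ken's actual model; the exponential-decay choice rule and the sensitivity parameter `s` are assumptions made purely for illustration.]

```python
import math

def expected_error(deltas, s=0.5):
    """Toy move-choice model: a player picks move i (with error
    delta_i in pawns, 0 = best move) with probability proportional
    to exp(-delta_i / s), where s loosely reflects skill.
    Returns the expected error of the chosen move."""
    weights = [math.exp(-d / s) for d in deltas]
    total = sum(weights)
    return sum(w * d for w, d in zip(weights, deltas)) / total

# Several near-equal good moves plus obviously bad ones: the player
# almost always lands on a near-tie, so expected error stays small.
cliff = [0.0, 0.05, 0.05, 2.0, 2.0]

# Moves that fall away gradually in quality: plausible-looking
# alternatives carry real cost, so the expected error is higher.
gradual = [0.0, 0.2, 0.4, 0.6, 0.8]

print(f"cliff:   {expected_error(cliff):.3f} pawns")
print(f"gradual: {expected_error(gradual):.3f} pawns")
```

Under this toy rule the "gradual" profile induces several times the expected error of the near-tie profile, matching Ken's intuition that positions whose alternatives "don't drop off too precipitously" are the nettlesome ones.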
Sam: Ok, so your impression then is that a narrower range of choice with difficult bad moves mixed in is more likely to result in errors than a wide range of slight choices.
Ken: Yea, there's some combination of it. You know, you want a few nearby moves, and then the moves drop off in quality, but they don't drop off too precipitously, and that therefore gives a fair chance that you'll play a move that's judged at two tenths or three tenths of a pawn of error - which is a significant error, considering that the average error is judged at about one tenth of a pawn. If you make one tenth of a pawn of error per move, then by my equation you're playing about 2500 strength.
Sam: Ok, so something to aspire to.
Ken: Ok, I'm sorry, 2100 strength, since the formula right now is 3475 minus 1.4 times the average error multiplied by 10,000.
Sam: As a 2200, I'm not quite sure I'm getting down there yet.
Ken: That means that you are making less than a tenth of a pawn of raw error per move. I call it pawns in equal positions because one thing I have discovered mathematically is that the error scales up as the position becomes more unbalanced in value as one side has a greater advantage, and the question is do you scale away that effect - do you subtract it out - I believe the answer is yes. It's a thorny question. It's addressed in the latest paper of mine with high school student Jason Zhou and my graduate student Tamal Biswas which one of them, Tamal, will be presenting at a conference in Quebec at the end of next month.
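[Editorial aside: the figures Ken quotes - an intercept of 3475, with one tenth of a pawn of average error per move corresponding to roughly 2100 strength - imply a simple linear fit, sketched below. The slope of 14,000 Elo per pawn (1.4 per ten-thousandth of a pawn) is inferred from those quoted figures, not taken from Ken's papers, so treat it as an approximation.]

```python
def implied_rating(avg_error_pawns):
    """Linear fit implied by the interview's figures: intercept 3475,
    slope 14,000 Elo per pawn of average error per move, where the
    average error is measured in "equal positions"."""
    return 3475 - 14000 * avg_error_pawns

def implied_error(rating):
    """Inverse: average error per move implied by a given rating."""
    return (3475 - rating) / 14000

print(implied_rating(0.10))  # one tenth of a pawn -> "about 2100 strength"
print(implied_error(2200))   # a 2200 errs by under a tenth of a pawn per move
```

This reproduces both of Ken's remarks: 0.10 pawns of average error maps to roughly 2100 strength, and a 2200 player's implied average error comes out just under a tenth of a pawn per move.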
Sam: So if I could change tack a little bit, part of your research has dealt with looking at historical players, and you've published research showing that in general chess strength has been increasing across history, but there are certainly some blips in there.
Ken: Sure, like Capablanca performed at 2950 in New York 1927 and the Capablanca-Alekhine match was way over 2700 strength in quality and that stood up for a long time as at least the most accurate. Right now my work may be measuring accuracy more than challenge.
Sam: So I was wondering if there were any particular players that may be under-appreciated that you thought were playing at a really high level - that you thought their play was particularly interesting.
Ken: No, I haven't gotten into that aspect of it yet. I pegged Morphy in his most important games at 2350, which generated a bit of controversy. Steinitz grew... I mean the intrinsic level of high 2400s to 2500 was definitely established by the second half of the 1800's - about the 1880's, 1890's. Yea, I don't know. Lasker had a few performances above 2700, so these guys could play. It's really what's happening in the population on the whole - the players about 20 or 30 away from the top.
Sam: I recently heard, and I've not been able to find a source for this, I heard that Kramnik once said that Lasker was the first 2700 level player.
Ken: My work certainly agrees with that. That's absolute. My compendium is still there. I've pulled it back because I'm in the process of transitioning my system from relying on just Rybka to using a troika of other engines - Houdini, Stockfish, and Komodo - but I haven't had time to really finish the job. Then I will redo my calculations and put my historical compendium back online.
Sam: I know that John Nunn very recently, maybe last week, published a book on Lasker where he looks at a hundred games because he was really interested in his ability to pose challenge. He found him to be one of the players who was best able to pose challenge and so he looked at a hundred games and found a bunch of different lessons and such. I've not got my hands on the book yet, but I hear it's very good.
Ken: It is very interesting for me to see how well my numbers agree with assessments of players during the games. For instance, in the latest Candidates, I think my math picked up Sergey Karjakin's surge, and I certainly picked up that Anand played at a completely steady 2900+ level and really deserved it, but it's interesting to see how my numbers jibe with people's impressions, grandmaster analysis, of the games.
Sam: So you obviously have been largely involved in all the work relating to anti-cheating measures, particularly after Topalov - Kramnik, which really brought a lot of allegations and discussions of cheating to the forefront of the chess community, and you've been, sort of, at the forefront of that discussion since then. I know that you're working on this at the moment, so you can't really say...
Ken: Really in the past two years actually, I've been at the forefront.
Sam: Ok, I know that you've been working on committees and such discussing anti-cheating measures and you can't necessarily discuss all of that, but I was wondering if you could share any personal thoughts on where the balance lies between effective anti-cheating measures and, sort of, convenience for players because as you raise the number of anti-cheating measures you...
Ken: Yea, we've attended very much to that. In fact, in another interview I was asked, "So, you're going to make your presence felt in tournaments?" and I replied right away, "No, we're trying not to make our presence felt. We're trying to be effective while minimizing the alteration of conditions for players." And, umm, so this has informed a fair number of our rules. I mean, people on the English message boards have justifiably said it's treating players like crooks if you're going to ban them from having their cell phone in their vest pocket. But we're trying to establish rules of behavior that are fairly clear and simple to follow. Our committee decided that what we desire positively is for a person to stow the cell phone somewhere. It can be in a bag by the table. It can be on the table if that's deemed safe; that depends on local conditions. If the organizer is providing a stowing service for cell phones, that's great. We didn't feel we could mandate something like that on organizers at all levels. The important thing is that you're not carrying it with you during the game. We note, for instance, the fairly sensible, practical, and nicely brief stipulation for the World Open by the Continental Chess Association saying you are specifically not allowed to carry your phone into a bathroom, and you should check it with the TD if you want to go to the bathroom. They're able to provide that service because they're a nice large organization, but that gets at some of the finer points, as it still doesn't prohibit wearing your cell phone during the game, and that was evidently the modus operandi at the Dortmund Sparkassen Open last August, where an ostensibly switched-off cell phone was still found to give vibrations and the player had his left hand in his pocket operating it.
Also, even while you're carrying your cell phone in the playing hall you could receive messages. We don't want to accuse people and say that's what you are going to do, but on the other hand there's the perception by the opponents and the players in general. The type of incident that we definitely want to curtail is what happened at last month's Iasi Open, where a player who had been getting up from the board was evidently suspected by his opponent, followed into the bathroom, and confronted there. We don't want any kind of physical confrontation, nor anything like what happened in Dublin, Ireland last year, to take place. So we feel that we can forestall that if you have a clear rule against carrying your cell phone. We've tried for that to be the Goldilocks, meet-in-the-middle type of criterion that's clear, and we give leeway to the tournament directors: they should announce this before the round, and if a player is forgetful, I hope it's obvious that the player is simply forgetful. I hope that common sense and reasonableness will prevail, but this will still become a criterion that people will understand and be able to follow naturally.
Sam: Right, so you already touched on this in your previous answer, but it's very interesting that while you've been at the forefront of this discussion of chess cheating, there's been maybe a little bit of pitchforking, if I can say that - a bit of a mob mentality you sometimes see on discussion boards and such - where people feel that cheating is fairly rampant, and your research seems to indicate that that's not necessarily the case. There are certainly cases of cheating, but in many cases you've disproved false allegations of cheating. I understand that there have been more cases where you've shown allegations to be false than true.
Ken: That's right. It really depends on the multiplier effect. So let's speak in terms like those just being talked about with the rating system. There's a "K factor" of a cheating case. Ok, so they're fairly uncommon, thankfully. But on the other hand, they generate a lot of bad perception and unease among players, and, you know, there is the view that one case is too many. So what's the multiplier, what's the K, on the damage to chess in the public sphere? I know that tournament organizers are very, very sensitive to the appearance of cheating at their events, not least with their sponsors. So that's very hard to quantify. If you read the document that FIDE has posted, we address this in the opening paragraph. If I were online, I could read it. You will see very clearly from a few sentences that there are both sides to it. It's not as rampant as some people would have you believe, but on the other hand, there have been several noted cases and, we feel, too many of them.
Sam: What do you think should be done about false allegations? It seems that in some cases before anyone has a chance to weigh in, such as yourself, that someone's name can be dragged through the mud on message boards. It's relatively easy for someone's name to get out there and for people to, somewhat recklessly sometimes, slander a player.
Ken: Yea, that's a very difficult question. We had so much on our plate while we were meeting - the meetings we had were very intense - that we didn't get to the details of it. So we're looking for a track record on the positive side of dealing with incidents. The other side - that's still in the future. I will say that between the summer of 2007 and January of 2011 - that's three and a half years - I did not have any scientifically solid case of cheating by a player with an established rating of 2400 or above. So I really thought that my market was going to be completely on this debunking, explaining side. For instance, at the 2009 Aeroflot Open, Shakhriyar Mamedyarov withdrew from the tournament after losing in 21 moves as White to Igor Kurnosov, who was observed getting up from the board and, incidentally, going out to smoke a lot. Because 9 of 10 of his moves, according to people who have reproduced this, matched Rybka, Mamedyarov pretty much accused his opponent of cheating in a letter that he wrote - "which allowed my opponent to win the game" were the words he used in English. Now, my model reproduces this. If you leave the computer on all night, with one of the moves you get 10 out of 10 matches, but my model also notes that the positions were very forcing and generates an expectation for a 2600+ player of 8.35 matches out of 10, so 9 is right adjacent to that. Obviously from 10 moves you are not going to get anything solid, but you do get an explanation of sorts. This was a forcing game. There was always a clearly advantageous move, and a respectable opponent, a strong player, 2600+, is going to find these moves more often than not. I mean, my model is created, parameterized, by working over training data from games actually played by people at these levels. So I'm not just saying this is what I think a 2600+ player could find.
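[Editorial aside: given the model's expectation of 8.35 matches out of 10 for a 2600+ player, one can ask how surprising 9 matches actually is. The sketch below treats the per-move match probability as a uniform 0.835 and the moves as independent, which is a simplification (Ken's model assigns a separate probability to each move), but it makes the point.]

```python
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Per-move match probability chosen so the expected number of
# engine-matching moves is 8.35 out of 10, as in the Kurnosov example.
p = 0.835
print(f"P(9 or more matches out of 10) = {prob_at_least(9, 10, p):.2f}")
```

The answer comes out at roughly a coin flip: under these assumptions, matching the engine on 9 or more of 10 forcing moves is entirely unremarkable for a player of that strength.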
Sam: The last question I have is just that there's a lot of really interesting discussion and applications for statistics and good research methodology in sports and games these days. I think a good example is "Moneyball" with sabermetrics where you are applying some new rigorous statistics to baseball and achieving some really great results with that, and I think that your research is in that field to some extent. You’re finding new ways to look at games and to understand games with research methodologies.
Ken: Yea, it'd be interesting to see how much is "commonality". I mean it's more the mindset. One thing that impressed me at a New England sports statistics conference that I attended last September, which incidentally was co-chaired by Mark Glickman of the Glicko rating system, the USCF rating statistician, was the idea that it's not just statistics. It's analytics. It's the way that quantified, structured data is influencing a lot of our thought processes, not just compiling aggregate statistics.
Sam: So perhaps you would disagree, I would love to hear other examples, but it would seem that you have possibly had the greatest success in applying research methodologies to chess and finding new ways to look at chess problems.
Ken: I'm not the first. I mean obviously Ivan Bratko and Matej Guid published a study with what I guess I would call my screening test methods. Not as thoroughly as I did. For instance, when they used Rybka 3, they stopped at depth 10 whereas I went to depth 13 which is about 12 times as much work per move. I think that's about the right tradeoff point for time versus the value of the data that you're getting. And I've done many millions more moves. So I'm not the first, no, in fact there's a fairly large research literature of approaches on this, and I've just been named to the editorial board of a newly formed journal called the Journal of Chess Research which was created with the help of Susan Polgar's Foundation. So that's an excellent development.
Sam: So is there any research that you are particularly interested in seeing in the coming years of chess. Possibly out of your field, but just topics and questions that you'd like to see addressed.
Ken: Oh yes, there's a lot, and I guess the most interesting thing for me, actually, of general significance, is the question of variation in performance and motivation. This is going to be very difficult to quantify, but one of the things my model does is provide intrinsic estimates of the variance of a performance. So, for instance, if I say that a certain player performed at a 2450+ level in a 9 game tournament, very often my model will say that's plus or minus 150 points or 200 points over 9 games. These intrinsic error bars come from a model where probabilities are assigned to each move as if you were rolling dice - which may not sound nice, but I have ways of adjusting factors to make that work - and the thing is, I regard this as a kind of natural variability, a kind of quantum fluctuation that we can't avoid. There's less difference, perhaps, between performing at the 2450 level and performing at the 2250 level or a 2650 level than we think. That's why we do get upsets, where I'll lose to a 1900 player who may be performing at a 2300 level, naturally, while as a 2400, I was performing at a 2100 level, so I lost. So if there's less of a difference, then what is it that distinguishes those who are able to keep their performance at a maximum? What motivational factors are there? You know, can we get 200 Elo in strength from some personal consideration? How much are our chess games affected by just the exigencies of, say, a very busy work week or other things that naturally happen?
Sam: It's a bit like identifying the term "clutch player" that gets thrown around in football. They'll say that there's one player who's "clutch" and another player who's not. Tom Brady versus Peyton Manning maybe or something like that.
Ken: Yea, that's possible too. Who rises to the occasion? The thing is, I'm sort of saying both ends of the sword though, because I'm saying that there's a lot of variation that's going to come just from random factors, and this has been the message of a lot of studies, for instance in basketball, saying the hot hand phenomenon is mostly an illusion. But one of the papers at the New England Sports conference said that it's not completely an illusion. So, but, it's all going to be very difficult to quantify.
Sam: Is it related to this idea that some players have their nemesis whom they always perform badly against? Like Shirov vs. Kasparov for instance.
Ken: That might be. Right. That might also be quantifiable, yes.
Sam: Thank you so much, Ken. Thank you for answering these questions and providing all of your excellent insight.