Great idea!!!
Anybody want to do a science project on chess?

1. Some of those characteristics are measurable.
What characteristics? How are they measured? If you can think of several examples, let's hear it.
Some examples of values for each opening:
(1) drawishness
This is directly obtainable from the middle value in online chess databases, which show the percentage of draws between the percentage of wins for White (on the left) and the percentage of wins for Black (on the right). The Petroff will have high drawishness and the Sicilian low drawishness, for example.
(2) decisiveness
decisiveness = % of wins for White + % of wins for Black
(3) unit denudation rate
A simple but imperfect way to calculate this is to count how many of the starting 32 units have been lost in the first 30 moves, then divide by 30 to get the average unit loss per move. The Ruy Lopez will have very low denudation, the Austrian Defense high denudation, for example. Problem: this curve is not linear, so you'll have to figure out the mathematical function that describes it.
(4) forcingness
1. Nf3 openings would have a low forcingness value, 1. e4 openings would have a high forcingness value, for example. You'd have to figure out how to calculate this. I'm pretty sure I know how, but I haven't tried it yet.
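For concreteness, attributes (1)-(3) could be computed roughly like this. This is only a sketch: the percentages are invented, and forcingness (4) is left open, as noted above.

```python
# Sketch of attributes (1)-(3) from a database's W/D/L breakdown.
# All numbers below are invented placeholders, not real database values.

def drawishness(white_pct, draw_pct, black_pct):
    """Attribute (1): the middle value in a W/D/L breakdown."""
    return draw_pct

def decisiveness(white_pct, draw_pct, black_pct):
    """Attribute (2): % of wins for White + % of wins for Black."""
    return white_pct + black_pct

def denudation_rate(units_remaining_after_30, moves=30):
    """Attribute (3), the simple linear version: units lost in the
    first `moves` moves, divided by `moves`."""
    return (32 - units_remaining_after_30) / moves

# Hypothetical W/D/L percentages for two openings:
petroff  = (35.0, 45.0, 20.0)
sicilian = (37.0, 28.0, 35.0)

print(drawishness(*petroff), drawishness(*sicilian))    # 45.0 28.0
print(decisiveness(*petroff), decisiveness(*sicilian))  # 55.0 72.0
print(denudation_rate(26))                              # 0.2
```

Note that with percentages that sum to 100, decisiveness is just 100 minus drawishness; the nonlinear denudation curve is where the real work would be.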
2. Machine learning. Machines DO NOT learn. A computer is just an abacus. It's millions and millions of abacuses with beads that move very, very fast. That's it.
Examples:
(1) unit placement
Using database statistics, find the most common next square for each unit on the board. This will help with proper piece placement to avoid opening errors. Humans know when a move looks right or wrong, but computers don't because they haven't been taught to do so.
(2) B+N mates
There is probably a simple geometrical pattern or numerical value to use in order to play the center-of-the-board phase of this tricky endgame mate, and a computer could probably find it. Catches: there may be a *combination* of patterns to use, and there is an astronomical number of possible things to look for.
(3) common temporal motifs
For example, the move sequence Bb5 ...a6 Ba4 ...b5 Bb3 is a very common temporal motif a program could learn. That's an overly simple example, but better and more useful examples could be found by the computer. This would also help players with learning an opening, especially one with which they are unfamiliar.
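As a rough sketch of how a program could mine such motifs, here is a simple n-gram count over move lists. The games below are made up for illustration; a real study would run this over a large database.

```python
# Count every length-5 move subsequence ("temporal motif") across a
# collection of games and report the most frequent ones.
from collections import Counter

def motifs(games, length=5):
    counts = Counter()
    for moves in games:
        for i in range(len(moves) - length + 1):
            counts[tuple(moves[i:i + length])] += 1
    return counts

# Invented toy games (move lists without move numbers):
games = [
    ["e4", "e5", "Nf3", "Nc6", "Bb5", "a6", "Ba4", "b5", "Bb3", "Nf6"],
    ["e4", "e5", "Nf3", "Nc6", "Bb5", "a6", "Ba4", "b5", "Bb3", "Bc5"],
    ["e4", "c5", "Nf3", "d6", "d4", "cxd4", "Nxd4", "Nf6", "Nc3", "a6"],
]

for motif, n in motifs(games).most_common(3):
    print(" ".join(motif), n)
```

On this toy data the Bb5 ...a6 Ba4 ...b5 Bb3 motif shows up in two of the three games; on real data the interesting output would be the frequent motifs nobody has named yet.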
7. Style. Style is NOT an objective thing. Computers have polluted our thinking; we're thinking like computers.
What works for one person doesn't work for another person based on each person's ability and preference.
Don't think of it as a science fair project. It can be a lifelong pursuit. It still is for me. It's not something trivial. Chess is really a martial art, like boxing, judo, wrestling, karate, and so on. People always compare things to chess: MMA is like a chess game, coaching football is like a chess game, and so on.
I claim otherwise. For example, Petrosian is known for keeping all his pieces coordinated, never hanging. That means the heuristics/criteria Petrosian used for selecting his preferred continuation were influenced by his personal taste. Coordination is another attribute that can be measured (you'll have to figure out how), so a player's style could be summarized by the values of a small set of variables. This could even allow someone to recognize who played a certain game, in the same way that programs can now determine whether a newly discovered play was written by Shakespeare.
Another idea related to this is a table that shows the main attributes of an opening; by weighting a person's tastes against the weights in the table, a ranked set of openings can be selected for that person, with the best recommendations at the top of the list. That's a very common FAQ in this site's Openings forum, in fact, which immediately shows how useful it would be. Such table matching is almost trivial to do, but I've never seen anybody do it. That's another example of how primitive the state of the art in chess is, compared to what it could be with more research, more applied math, and more computer tools.
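A minimal sketch of such a matching table follows. Every opening name, attribute value, and taste weight here is an invented placeholder; the real project would be filling the table in from measurements.

```python
# Rank openings by matching a player's taste weights against a table of
# per-opening attribute values. All numbers are illustrative, not measured.

openings = {
    # (drawishness, denudation, forcingness) on a 0-1 scale, invented:
    "Petroff":   (0.8, 0.2, 0.5),
    "Sicilian":  (0.3, 0.5, 0.7),
    "Ruy Lopez": (0.5, 0.1, 0.6),
}

def recommend(tastes, table):
    """Rank openings by the dot product of the player's taste weights
    (positive = likes the attribute, negative = dislikes it) with each
    opening's attribute vector."""
    scored = {
        name: sum(w * v for w, v in zip(tastes, values))
        for name, values in table.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# A player who dislikes draws and likes sharp, forcing play:
for name, score in recommend((-1.0, 0.5, 1.0), openings):
    print(f"{name}: {score:+.2f}")
```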

Confirmation bias and its impact on decision making in chess is an interesting topic. Apart from one old paper on this, though, I haven't seen any other work (not that I have looked deeply).
One way to look at this in a science project would be to get some chess beginners, present them with positions, and ask them to pick a move. Use three groups: one is asked something like "looking for reasons why a move is bad, choose your best move," the second is asked "looking for reasons why a move is good, choose your best move," and the third is simply asked "choose your best move." Then examine the responses and see whether there is any significant difference in the objective strength of the moves from each group.
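The analysis step could be as simple as a permutation test on the groups' move-strength scores. This is only a sketch: the per-subject scores below are invented (imagine something like average centipawn loss per chosen move, lower being better).

```python
# Two-sample permutation test on the difference in group means,
# using only the standard library. Scores are invented placeholders.
import random

def perm_test(a, b, trials=10_000, seed=0):
    """Approximate p-value for |mean(a) - mean(b)| under random relabeling."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / trials

# Hypothetical centipawn-loss scores for two of the three groups:
look_for_bad  = [32, 41, 28, 35, 30, 38]
look_for_good = [55, 48, 62, 51, 44, 58]
print(perm_test(look_for_bad, look_for_good))
```

With three groups you would run the test pairwise (or use a proper multi-group test) and correct for multiple comparisons.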

Confirmation bias and its impact on decision making in chess is an interesting topic.
Possibly you could show a video clip of, say, three queen sacs and predict that it would influence a player to solve a puzzle with a queen sac.
In Eastern Europe you could probably get some interesting results. In North America the percentage of people who play chess is minuscule.
Maybe you could show a video on horses and predict how many people call the Knight an N.

I'm interested in probability theory and want to connect chess with that!
Are you interested in statistics, or just probability? Mathematical statistics is based on, and is a generalization of, probability theory, and I can think of many more chess problems based on statistics than on probability theory.
I'm reluctant to post any more lists of ideas. I posted some for long_quach because I feared people might not know what I was talking about in my generalities. But now that those ideas are public, a student choosing one of those topics would theoretically rate 0 points in originality, which would theoretically drop their project evaluation 13% (per our local science fair scoring system), which theoretically would disqualify them from a sweepstakes award. Worse, now that an idea is posted publicly, it's very possible one of the 13 million members here might start thinking about it and post a solution before you could finish your project. That would negate everything you'd done and could theoretically render your entire project inadmissible to a science fair, since it could be claimed you did nothing except copy work and results already posted on the Internet. There's a good reason I (later) decided not to post lists of ideas. That's part of where creativity will come in: thinking of an interesting and novel idea by extrapolating from the examples given, thinking of a similar idea that isn't mentioned, or combining or interpolating between the examples given.
One of my main points is that chess is very fruitful ground for good projects, for several reasons. I believe one of the most valuable things I could do under these constraints would be to give an educated opinion about the worth of an idea for such a project. As I mentioned, lack of a good idea as a foundation is the biggest and most annoying problem I've seen in science fair projects. If you can clear that hurdle, then you're on your way: the project will largely run itself and will have great future potential for applying to other fields of science, which is partly what makes a science project valuable to society, whether that project is in a science fair, a commercial setting, an academic setting, or elsewhere.
An idea just hit me this morning. Since I've been active as a judge in our county science fair for the past three years (I was an entrant myself in high school), since I have many interesting ideas for projects/studies involving chess that I don't have time to investigate myself, and since a lot of members here are probably young enough to enter science fairs, this would be a really good opportunity for some middle/high school student here to get top-level advice to help them win in a science fair. At the same time you could learn more about your favorite recreation: chess. Of course science fairs don't have a "Game" category, but this would usually qualify as a Math project, or possibly a Computer or Psychology project, depending on the specific project.
(I deleted my list of ideas that was here.)
There are several problems with my posting a list of project ideas that I think are good: the ideas are supposed to come from the students themselves, judges frown on projects from students who just pick ideas off online lists, students are supposed to demonstrate creativity (often important in science, at least at the higher levels), it wouldn't be ethical for me to help somebody working on my own idea, the competition may also be using that list, and so on. In place of a list of specific ideas, here is a list of some areas of chess where I believe many promising research ideas exist:
(1) Openings. Each opening has a different set of values for certain characteristics that make a player choose vs avoid that opening, according to taste. Some of those characteristics are measurable, but nobody seems to have attempted to measure most of them. I can think of several examples.
(2) Machine learning. Computers can notice patterns that no human has noticed before. Some of these patterns would be very useful for humans to know explicitly, and could aid them in their play.
(3) Representation. Any time you play with different representations of a system, you're on very fruitful ground. This general idea spans the type of notation used, how the board is represented, Informant-type symbols, chess concepts, names for those concepts, and more. Look up knowledge representation online to convince yourself of that, especially in conjunction with A.I.
(4) The physical board and chessmen themselves. Think not only about chess but games similar to chess, why one game is more popular than another, if popularity can be improved by making modifications, etc. Also think about improvements to boards and the chessmen.
(5) Software tools. This includes cheater detection, hints like key squares or hanging pieces, unit placement, and much more.
(6) Engine fundamentals. Until somebody finds out how humans think, there will be a huge amount of fruitful ground in attempting to get a digital computer program to do better than its predecessors.
(7) Style. There are many ideas here: computer analysis of a given player's style, comparison of the effectiveness of different styles, selection of the "best" opening based on a person's style, etc.
I'm surprised there hasn't been more interest in such areas. In my opinion, opening books and chess terminology are still in a primitive state, and improvements in chess can involve ideas that reach far into many other fields.
One of the main problems I see with most science fair projects in every category is that the fundamental idea is often very weak. Starting a science project with a weak idea is like starting to write a song with a weak chord progression, or starting a chess game with a weak opening: the end result is almost guaranteed to be disappointing. It's very frustrating to me to see a lack of creativity in project ideas every year. It's equally disappointing to realize that their teachers or mentors didn't recognize that the idea was weak and steer the students into a more productive direction. Maybe nobody cares anymore.
My advice and guidance is free, although I can't spend a lot of time at it. Message me if you are interested. There may or may not be time to complete a project before the spring science fairs, but it's an idea.