Summary: I describe how the TrueSkill algorithm works using concepts you're already familiar with. TrueSkill is used on Xbox Live to rank and match players and it serves as a great way to understand how statistical machine learning is actually applied today. I've also created an open source project where I implemented TrueSkill three different times in increasing complexity and capability. In addition, I've created a detailed supplemental math paper that works out equations that I gloss over here. Feel free to jump to sections that look interesting and ignore ones that seem boring. Don't worry if this post seems a bit long, there are lots of pictures.
[...] Skill is tricky to measure. Being good at something takes deliberate practice and sometimes a bit of luck. How do you measure that in a person? You could just ask someone if they're skilled, but this would only give a rough approximation since people tend to be overconfident in their ability. Perhaps a better question is "what would the units of skill be?" For something like the 100 meter dash, you could just average the number of seconds of several recent sprints. However, for a game like chess, it's harder because all that's really important is if you win, lose, or draw.
It might make sense to just tally the total number of wins and losses, but this wouldn't be fair to people that played a lot (or a little). Slightly better is to record the percent of games that you win. However, this wouldn't be fair to people that beat up on far worse players or players who got decimated but maybe learned a thing or two. The goal of most games is to win, but if you win too much, then you're probably not challenging yourself. Ideally, if all players won about half of their games, we'd say things are balanced. In this ideal scenario, everyone would have a near 50% win ratio, making it impossible to compare using that metric.
Finding universal units of skill is too hard, so we'll just give up and not use any units. The only thing we really care about is roughly who's better than whom and by how much. One way of doing this is coming up with a scale where each person has a unit-less number expressing their rating that you could use for comparison. If a player has a skill rating much higher than someone else, we'd expect them to win if they played each other.
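The article doesn't give TrueSkill's math at this point, but the idea of a unit-less rating that predicts outcomes can be sketched with an Elo-style logistic curve (Elo is the classic precursor that TrueSkill builds on). This is my own minimal illustration, not TrueSkill itself; the function name `win_probability` and the `scale=400` constant (borrowed from classic chess Elo) are assumptions for the sake of the example:

```python
def win_probability(rating_a, rating_b, scale=400):
    """Probability that player A beats player B, given unit-less ratings.

    `scale` is an arbitrary constant (400 here, as in classic Elo)
    that controls how a rating gap translates into a win probability.
    The absolute numbers mean nothing; only the difference matters.
    """
    return 1 / (1 + 10 ** ((rating_b - rating_a) / scale))

# Equal ratings: a coin flip either way.
print(win_probability(1500, 1500))  # 0.5

# A player rated 400 points higher is roughly a 10-to-1 favorite.
print(win_probability(1900, 1500))  # ~0.909
```

Note that shifting both ratings by the same amount leaves the prediction unchanged, which is exactly the "no units, only comparisons" property the paragraph above describes.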
Older article from 2010, but still interesting.
(Score: 0) by Anonymous Coward on Friday November 27 2015, @02:38PM
I agree with what you are saying. It did get me thinking of statistics and measuring in general, be it "who is a good programmer," "which is a good company to invest money in," and "who is a good baseball player."
Everything you say about how the statistics reflect truth, but can easily be gamed or misrepresented based on circumstances (e.g. a good player has high APM, but high APM doesn't necessarily make a player good), equally applies to the other fields I listed above. A company could have a good cash flow because business is good... or because they just took on a massive loan. A programmer could write a lot of SLOC because they are really talented and working on a hard feature... or because they are shoveling out unoptimized garbage.
They've mostly figured this out for evaluating business worth, in that there are many well-paid accountants who can judge how good a company is. They've mostly figured this out for baseball, as demonstrated by Moneyball. I imagine they could do the same for any given video game, only there are so many of them and they are so short-lived that nobody is willing to dedicate the time and resources to do so.
It does make me wonder how come they haven't figured out any good mechanisms for calculating how good a software program or a software programmer is, though.