Summary: I describe how the TrueSkill algorithm works using concepts you're already familiar with. TrueSkill is used on Xbox Live to rank and match players and it serves as a great way to understand how statistical machine learning is actually applied today. I've also created an open source project where I implemented TrueSkill three different times in increasing complexity and capability. In addition, I've created a detailed supplemental math paper that works out equations that I gloss over here. Feel free to jump to sections that look interesting and ignore ones that seem boring. Don't worry if this post seems a bit long, there are lots of pictures.
[...] Skill is tricky to measure. Being good at something takes deliberate practice and sometimes a bit of luck. How do you measure that in a person? You could just ask someone if they're skilled, but this would only give a rough approximation since people tend to be overconfident in their ability. Perhaps a better question is "what would the units of skill be?" For something like the 100 meter dash, you could just average the number of seconds of several recent sprints. However, for a game like chess, it's harder because all that's really important is if you win, lose, or draw.
It might make sense to just tally the total number of wins and losses, but this wouldn't be fair to people who played a lot (or a little). Slightly better is to record the percentage of games that you win. However, this wouldn't be fair to people who beat up on far worse players, or to players who got decimated but maybe learned a thing or two. The goal of most games is to win, but if you win too much, then you're probably not challenging yourself. Ideally, if all players won about half of their games, we'd say things are balanced. In this ideal scenario, everyone would have a near 50% win ratio, making it impossible to compare players using that metric.
Finding universal units of skill is too hard, so we'll just give up and not use any units. The only thing we really care about is roughly who's better than whom and by how much. One way of doing this is coming up with a scale where each person has a unit-less number expressing their rating that you could use for comparison. If a player has a skill rating much higher than someone else, we'd expect them to win if they played each other.
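To make the idea of a unit-less comparison scale concrete, here is a small illustrative sketch. It uses the Elo convention (a logistic curve over the rating gap, with a 400-point scale), which predates TrueSkill and is not the TrueSkill model itself; the function name and numbers are just for illustration.

```python
def win_probability(rating_a, rating_b, scale=400.0):
    """Map a unit-less rating gap to an expected win probability.

    This is the Elo-style logistic curve, shown only to illustrate
    how a rating scale turns "higher number" into "expected to win";
    TrueSkill itself uses a different (Gaussian-based) model.
    """
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / scale))

# Equal ratings give a 50% expectation; a 400-point edge gives ~91%.
print(win_probability(1500, 1500))            # 0.5
print(round(win_probability(1600, 1200), 2))  # 0.91
```

The exact curve and scale are arbitrary; what matters is that a single number per player lets you predict outcomes between any two players without ever defining what a "unit of skill" is.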
Older article from 2010, but still interesting.
(Score: 2) by darkfeline on Friday November 27 2015, @12:19AM
I think this kind of thing is common knowledge now (TFS mentions it's from 2010 too). I wouldn't really call it measuring skill, though; it's more like using historical data and statistics to predict the future.
What's the difference? Consider a player who is skilled at using weapon A and another player who is skilled at using weapon B, both ranked about the same; however, weapon A in general trumps weapon B (and B trumps C, C trumps A, so the game is balanced). Player A would beat player B every time, but that nuance is not something a statistical prediction rank can capture, whereas a true skill ranking system would capture it.
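The rock-paper-scissors situation described above can be checked directly: if matchups form a cycle, no single ordering of the players (and hence no scalar rating) can be consistent with all the observed results. This is a hypothetical toy check, not part of the article.

```python
from itertools import permutations

# Hypothetical intransitive matchups from the comment above:
# A's weapon beats B's, B's beats C's, and C's beats A's.
beats = {("A", "B"), ("B", "C"), ("C", "A")}

def ranking_explains(order):
    """True if ranking players in this order (best first) agrees
    with every observed matchup."""
    rank = {player: i for i, player in enumerate(order)}
    return all(rank[winner] < rank[loser] for winner, loser in beats)

# A cycle admits no consistent ordering, so any scalar rating
# system must mispredict at least one of these matchups.
print(any(ranking_explains(order) for order in permutations("ABC")))  # False
```

This is exactly why a one-number-per-player system, however it is fit to historical data, can only approximate games with intransitive strategies.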
(Score: 2) by mhajicek on Friday November 27 2015, @06:53AM
I had this in SCA swordfighting a number of years ago. A friend and I both specialized in sword and shield, but he could beat me 2/3. He had trouble defending against another friend who used two-sword, who could beat him 2/3. I didn't have that problem and could beat the two-sword friend 2/3.