

posted by martyb on Friday September 04 2015, @03:08AM

It's always hard to take your eyes off Serena Williams. But it'll be especially tough at this year's U.S. Open, where the tennis champ is currently working toward a single-season Grand Slam. She's just so darn good. But what is it, exactly, that makes her so good?

Sure, we can all speculate—it’s her power, her serve, her stamina, the way she controls a point. But we can’t calculate precisely what makes her game so special. IBM believes it can.

Since 1990, IBM has been working with the United States Tennis Association to support the technological infrastructure of the U.S. Open. Back in the day, that meant generating scores and keeping the website up and running. Today, it means doing those things while also analyzing millions of data points about every player, every stat, every point, in every tournament, extending back for decades to derive insight about how a given match—or career—will play out.

http://www.wired.com/2015/09/ibm-us-open-serena-williams-data/


Original Submission

  • (Score: 1, Touché) by Anonymous Coward on Friday September 04 2015, @03:29AM

    by Anonymous Coward on Friday September 04 2015, @03:29AM (#232112)

    TFS sucks - content-free clickbait.

    • (Score: 2) by tangomargarine on Friday September 04 2015, @03:47AM

      by tangomargarine (667) on Friday September 04 2015, @03:47AM (#232117)

      I was waiting for the punchline where she was wearing an IBM product, a smartwatch or something.

      Snatching victory from the jaws of defeat

      Well, less victory than not looking like a *total* asshat

      --
      "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
    • (Score: 0) by Anonymous Coward on Friday September 04 2015, @01:10PM

      by Anonymous Coward on Friday September 04 2015, @01:10PM (#232230)

      Yeah, the editors couldn't even find enough content for a dept. line. ☺

  • (Score: 4, Funny) by penguinoid on Friday September 04 2015, @03:31AM

    by penguinoid (5331) on Friday September 04 2015, @03:31AM (#232113)

    Fortune tellers will soon be replaced by robots. I bet none of them saw it coming.

    --
    RIP Slashdot. Killed by greedy bastards.
    • (Score: 2) by Bot on Friday September 04 2015, @05:43PM

      by Bot (3902) on Friday September 04 2015, @05:43PM (#232352) Journal

      I see fewer jobs in your future, humans.

      --
      Account abandoned.
  • (Score: 4, Insightful) by physicsmajor on Friday September 04 2015, @04:04AM

    by physicsmajor (1471) on Friday September 04 2015, @04:04AM (#232119)

    The article goes through how Watson simply knows a whole bunch of sports stats. That's just random correlation! They say it can predict the next Serena, but they're just extrapolating with no real basis.

    • (Score: 1, Touché) by Anonymous Coward on Friday September 04 2015, @12:30PM

      by Anonymous Coward on Friday September 04 2015, @12:30PM (#232211)

      If it really can predict the next Serena, they should bet a substantial amount of money on its prediction. It would be a sure bet. If they don't, it must be because the prediction is in reality nothing more than a guess, no better than the guess of anyone knowledgeable in the sport (indeed, probably worse, because humans can factor in things not easily put into numbers, like how exhausted she looks after her games, or yellow-press knowledge of things happening in her life which may occupy her mentally and thus affect her performance).

    • (Score: 0) by Anonymous Coward on Friday September 04 2015, @12:49PM

      by Anonymous Coward on Friday September 04 2015, @12:49PM (#232217)

      This. Give me the same data and Excel and I'll produce the same results. WTF?
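
      For what it's worth, the "same results with Excel" claim amounts to roughly a one-line least-squares fit over a handful of stats. Here is a minimal sketch in Python/NumPy, with made-up numbers purely for illustration (nothing here comes from TFA or from IBM's actual data):

          import numpy as np

          # Hypothetical per-season stats: first-serve percentage vs. matches won.
          serve_pct   = np.array([58.0, 61.0, 63.0, 65.0, 68.0])
          matches_won = np.array([40.0, 44.0, 49.0, 52.0, 57.0])

          # The kind of thing a spreadsheet trendline gives you for free.
          slope, intercept = np.polyfit(serve_pct, matches_won, 1)
          r = np.corrcoef(serve_pct, matches_won)[0, 1]

          print(f"correlation r = {r:.2f}")
          print(f"'predicted' wins at 70% first serves: {slope * 70 + intercept:.0f}")

      Whether such an extrapolation actually means anything is exactly the point being made above.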

  • (Score: 4, Insightful) by aristarchus on Friday September 04 2015, @07:23AM

    by aristarchus (2645) on Friday September 04 2015, @07:23AM (#232157) Journal

    The Holy Grail of programming is expert systems. If we could produce an algorithm that would replicate the best tennis player/diagnostician/strategist/McDonald's clerk, we could in fact replace all those professions. If we could. But this runs into an interesting point in philosophy. (OK, I admit that as a philosopher I am professionally insulted by some of the comments about math(s) and logic in previous articles, but bear with me.) Hubert Dreyfus published "What Computers Can't Do" in 1972. There is much written on his claims; I recommend some research on the topic. But for me, and many other philosophers, it comes down to this: can a skill be reduced to an algorithm? On the one hand, the idea that anyone, with the proper cookbook (algorithms), could be a master chef is appealing. This is the Democratic view of expertise.

    On the other side are the conservatives. (Not Republicans, because they are just insane, and do not enter into the discussion here.) The position here is that the skill involved in complex operations cannot be reduced to algorithms, but must be passed down through long apprenticeships, supervised by masters. Part of the claim is that the algorithms will always miss nuance and make trivial mistakes that an actual master, or competent practitioner, would not make.

    The classic example is machine translation, where well before "all your base are belong to us!", there was the mythical CIA translation computer, designed to handle massive amounts of Russian-language input and give the Agency a synopsis in English. You may know this one. First, they put in an English phrase. The phrase chosen was from the New Testament, as befits the holy warriors of the CIA: "The spirit is willing, but the flesh is weak." OK, the machine translates this into Russian. They take the Russian statement and feed it back into the translation program. What comes out? "The vodka is good, but the meat is rotten." So, point being, translation takes a human touch, a bilingual operator, and that expertise cannot easily be automated. Hey, it is just like Fox News!

    • (Score: 2, Interesting) by Anonymous Coward on Friday September 04 2015, @01:08PM

      by Anonymous Coward on Friday September 04 2015, @01:08PM (#232229)

      Of course skills cannot be reduced to an algorithm alone. You also need the proper hardware (if your sense of taste is gone, you'll never be a good chef) and the proper data (part of becoming a good chef is learning how things taste).

      And of course, being a good chef is a special case because there's no objective measure (unlike in sports, where you can clearly say more goals win, or first across the finish line wins); the goal is to match the unconscious evaluation done by human brains. A real chef has a real human brain that works the same as other human brains (with variations, of course, but such variations also exist between different guests of the restaurant, so unless the chef is an outlier, it can still be relied on). A computer program, on the other hand, would need to reverse-engineer that evaluation (or use a reverse-engineered algorithm provided by the programmer), and with reverse engineering you are never sure when you'll hit an incompatibility (which in this case would mean preparing a meal that doesn't taste good).

      The language translation problem is, of course, based on the fact that the translators simply lack knowledge. Humans, too, can be trapped by a lack of knowledge.

      Indeed, I got trapped by it myself:

      I once read about a nice example where a computer meant to analyse sentences was given the sentence "time flies like an arrow" and could not decide whether it was about a special type of flies that happen to like arrows, or about time flying, just as an arrow does.

      Now, when they cited that sentence, they didn't say what it meant, because "of course" everyone knows it anyway. I'm not a native English speaker and hadn't come across the sentence before. However, I didn't notice that I had no idea what it meant, because the meaning seemed clear to me: I thought it referred to the arrow of time, that is, the fact that the past is different from the future.

      Later, I read about it again in a German text. There they explained what the sentence means by translating it into the equivalent German saying (which doesn't feature an arrow). Only at that point did I recognize that the "clear" sentence hadn't been clear to me at all.
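
      Incidentally, the parser's dilemma over "time flies like an arrow" is easy to reproduce. Here is a minimal sketch using NLTK with a toy grammar (both the grammar and the code are illustrative only; they are not how Watson or any production parser works):

          import nltk

          # Tiny ambiguous grammar: 'flies' can be noun or verb, 'like' can be verb or preposition.
          grammar = nltk.CFG.fromstring("""
              S   -> NP VP
              NP  -> N | N N | Det N
              VP  -> V NP | V PP
              PP  -> P NP
              N   -> 'time' | 'flies' | 'arrow'
              V   -> 'flies' | 'like'
              P   -> 'like'
              Det -> 'an'
          """)

          parser = nltk.ChartParser(grammar)
          for tree in parser.parse("time flies like an arrow".split()):
              print(tree)  # one reading about time flying, one about arrow-loving flies

      The program dutifully prints both readings; picking the right one is where world knowledge comes in.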

      Note that training is also just a way to pass on information. And it is indeed quite common for programs to be trained (think of spam filters, for example; no one would be so silly as to try to write down a strict rule for what spam is, and Watson is trained too).
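
      A minimal sketch of what "trained, not rule-written" looks like, using scikit-learn and a made-up four-message corpus (illustrative only; a real filter trains on far more data):

          from sklearn.feature_extraction.text import CountVectorizer
          from sklearn.naive_bayes import MultinomialNB
          from sklearn.pipeline import make_pipeline

          # Hand-labelled examples stand in for the rules nobody writes down.
          texts  = ["cheap pills buy now", "meeting moved to 3pm",
                    "win a free prize click here", "lunch tomorrow at noon?"]
          labels = ["spam", "ham", "spam", "ham"]

          clf = make_pipeline(CountVectorizer(), MultinomialNB())
          clf.fit(texts, labels)

          print(clf.predict(["free pills, click now"]))  # learned from examples, not from rules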

      Also, if you've ever been in a language-learning class, you have certainly experienced how different the translation attempts can be from the actually correct translation, even with a common human knowledge background aiding you. Indeed, I wouldn't be surprised if an American beginner learning Russian, or a Russian beginner learning English, produced the very same translation you gave (especially if he happens not to be a Christian and has never heard that Bible quote).

      I also once came across an example of double translation by humans that changed the meaning (although quite subtly). I don't recall the details, unfortunately, but I had read a German translation of a French book (the German title is "Zufall und Notwendigkeit"; the English edition is presumably titled "Chance and Necessity"), and then I came across a quote from that book in the German translation of an English book; being an English book, its author surely quoted it in English.

      Now I was curious whether the translator of the English book had looked up the quoted sentence in the German translation of the French book, or translated it himself. So I looked up the sentence in my copy of the German translation of the French book and compared. At that point I noticed the subtly different meaning. My conclusion is that the quote in the German translation of the English book is a translation of the English translation of the French original, and it was this double translation that caused the slight change in meaning compared to the direct translation. (I was, of course, assuming that all the translators involved were up to the task; otherwise the discrepancy could be explained by simple incompetence.)

      So my conclusion is that all those examples of machines failing are nothing machine-specific, but just amplified versions of the same errors humans make, the amplification being caused by the fact that the computer programs and data are not yet advanced enough.