
posted by martyb on Friday October 03 2014, @06:12PM
from the over-here-we-use-moderation dept.

A coding scheme for interactive communication is the first to near optimality on three classical measures.

Error-correcting codes (ECC) are one of the glories of the information age: They’re what guarantee the flawless transmission of digital information over the airwaves or through copper wire, even in the presence of the corrupting influences that engineers call “noise.”
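
To make that concrete, here is a minimal sketch (ours, in Python; the article itself contains no code) of the simplest possible error-correcting code, 3x repetition with majority voting: each bit is sent three times, and any single flipped bit per triple is corrected transparently.

    def encode(bits):
        # Repeat every bit three times.
        return [b for b in bits for _ in range(3)]

    def decode(received):
        # Majority-vote each triple back to a single bit.
        return [1 if sum(received[i:i + 3]) >= 2 else 0
                for i in range(0, len(received), 3)]

    message = [1, 0, 1, 1]
    sent = encode(message)
    sent[4] ^= 1                     # noise: flip one transmitted bit
    assert decode(sent) == message   # the flip is corrected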

But classical error-correcting codes work best with large chunks of data: the bigger the chunk, the higher the rate at which it can be transmitted error-free. In the Internet age, however, distributed computing is becoming more and more common, with devices repeatedly exchanging small chunks of data over long periods of time.
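
That block-size/rate tradeoff is easy to see in the classical Hamming code family, where a block of n = 2^r - 1 transmitted bits carries k = n - r message bits (again our illustration of the general principle, not the construction from the paper):

    for r in range(2, 9):
        n = 2**r - 1          # block length
        k = n - r             # message bits per block
        print(f"Hamming({n},{k}): rate = {k/n:.2f}")
    # Hamming(3,1): rate = 0.33 ... Hamming(255,247): rate = 0.97
    # Bigger blocks spend proportionally fewer bits on redundancy.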

So for the last 20 years, researchers have been investigating interactive-coding schemes, which address the problem of long sequences of short exchanges. Like classical error-correcting codes, interactive codes are evaluated according to three criteria: How much noise can they tolerate? What’s the maximum transmission rate they afford? And how time-consuming are the encoding and decoding processes?

At the IEEE Symposium on Foundations of Computer Science this month, MIT graduate students past and present will describe the first interactive coding scheme to approach the optimum on all three measures.

  • (Score: 4, Interesting) by kaszz (4211) on Friday October 03 2014, @06:40PM (#101483)

    MIT link with HTTPS protection: New frontier in error-correcting codes [mit.edu]

    The technique can probably be summarized as:
    "Ghaffari and Haeupler adopted a technique called list decoding. Rather than iterating back and forth between message bits and extra bits until the single most probable interpretation emerges, their algorithm iterates just long enough to create a list of likely candidates. At the end of their mutual computation, each of the interacting devices may have a list with hundreds of entries."

    • (Score: 3, Interesting) by kaszz (4211) on Friday October 03 2014, @06:43PM (#101484)

      Btw, newsoffice.mit.edu seems to have included some CPU-sucking JavaScript from hell, so turn JavaScript off before visiting.

    • (Score: 1) by mathinker (3463) on Saturday October 04 2014, @08:16PM (#101763)

      The paper is on arXiv: http://arxiv.org/abs/1312.1763 [arxiv.org]

      Be aware that Haeupler's page at CMU claims he has a patent pending for "Error Correction for Interactive Message Exchanges Using Summaries". (Don't know how well that's going to fly in the current backlash against method patents, tho.)

  • (Score: 2) by cafebabe (894) on Friday October 03 2014, @11:32PM (#101555)

    Is linear decode especially important? As I understand it, Reed-Solomon decoding is O(n^2), and that's an issue for a 9,000-byte jumbo Ethernet frame. However, rather than O(n), would something with a worst case of O(m log2 m) be too onerous? If n > 14m in time and/or energy, linear decode may not be the best option.
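
    A back-of-envelope check on that 14m figure (assuming m is the 9,000-byte frame size and log2 means log base 2):

        import math

        m = 9000                 # jumbo frame size, in bytes
        print(math.log2(m))      # ~13.1, so m*log2(m) is roughly 13-14m
        # i.e. an O(m log2 m) decode costs about 13-14x a plain O(m) pass,
        # which is where the n > 14m crossover against linear decode sits.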

    --
    1702845791×2