Math enhanced web traffic
Rad
Interesting. Anybody got a link to the actual study?
Shortstraw
Linky

Would you like to know more?
Rad
Sweet, thanks for the info-dump, Shortstraw.
nezumi
Just glancing through, it seems like the idea is to reduce the impact of dropped packets by including enough extra data to easily rebuild the missing ones. The cost, I imagine, is that the overall transmission size increases. So it goes faster at the cost of being bigger.
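
The simplest version of that idea, just to make it concrete (the actual scheme in the paper is surely fancier than this): send one extra parity packet that is the XOR of the data packets, and any single lost packet can be rebuilt from whatever arrives.

[code]
from functools import reduce

# Toy illustration of "send enough extra data to rebuild missing packets":
# one parity packet that is the XOR of all the data packets lets the
# receiver rebuild any single lost packet from the survivors.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data = [b"packet-1", b"packet-2", b"packet-3", b"packet-4"]
parity = reduce(xor, data)                  # the one extra packet we send

survivors = data[:2] + data[3:]             # say packet-3 got dropped in transit
rebuilt = reduce(xor, survivors + [parity]) # XOR of survivors + parity = the missing one
print(rebuilt)                              # b'packet-3'
[/code]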
Draco18s
It's actually not "bigger" though. You have to include the resending of dropped packets as part of the size you're comparing against (that is: if a packet is sent twice, you have to count it twice).
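
Back-of-the-envelope version of that accounting (my own numbers, not from the article): with loss rate p, a packet that is simply retransmitted until it gets through costs 1/(1-p) sends on average, and that retransmission tax is what any coded scheme's overhead should be compared against.

[code]
# Expected sends per delivered packet under plain retransmit-until-it-arrives:
# each attempt succeeds with probability (1 - p), so the number of sends is
# geometric with mean 1 / (1 - p). That extra traffic is the baseline any
# coding overhead gets compared against.
for p in (0.01, 0.05, 0.10, 0.20):
    expected_sends = 1 / (1 - p)
    overhead_pct = (expected_sends - 1) * 100
    print(f"loss {p:.0%}: {expected_sends:.3f} sends/packet "
          f"(~{overhead_pct:.1f}% extra traffic from resends alone)")
[/code]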

That said, I don't really understand the math of what it's doing. It uses Markov Chains, which could allow for packet reconstruction, but that's about all I gleaned from the white paper.
pragma
I can shed a little light: I have some training in this.

Markov chains are a tool used to model sequences of random decisions. Consider a goofy example: a drunk guy at a bar walks randomly one step to the left or to the right each second. Five steps to the left is his home, and five steps to the right is a three-step cul-de-sac. If you want to figure out how long it will take the guy to stumble home, given the kind of weird positions he can get into (stuck looping around the cul-de-sac, for instance), you use a Markov chain. This shows up a lot in communication theory (what the research paper is about at heart) because each transmitted packet goes through a few states -- initial transmission, receipt of an acknowledgement or error signal, possible retransmission, etc. -- and moves between them randomly, so it ends up looking a lot like a drunk guy wandering around a cul-de-sac.
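
If you want to play with the numbers, here's a quick simulation of that chain (my own toy version: I treat the cul-de-sac end as a simple wall he bounces off, which keeps the code short):

[code]
import random

# Simplified drunk-walk Markov chain: home is 5 steps to the left (an
# absorbing state), the cul-de-sac side is modelled as a wall 5 steps to the
# right that he just bounces off, and each second he steps left or right
# with equal odds. We estimate the expected time to stumble home by
# simulating many nights out.
HOME, WALL, TRIALS = -5, 5, 50_000

def walk_home():
    pos, seconds = 0, 0
    while pos != HOME:
        pos = min(pos + random.choice((-1, 1)), WALL)
        seconds += 1
    return seconds

average = sum(walk_home() for _ in range(TRIALS)) / TRIALS
print(f"average time to stumble home: {average:.1f} seconds")
[/code]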

The authors are planning to change how packets are encoded, using something called (I think) 2nd order Galois Field encoding, such that it is easier to catch and correct errors at the receiver. This is interesting because it apparently translates into higher throughput in the demonstration shown to the reporter in the initial link. However, the tradeoff is that it pushes more computational effort onto the receiver, which, if it's your cellphone, might not be able to handle the increased power budget. So, nezumi, the right way to think about it is to ask where the computational effort has to live: the packet size will probably stay the same or even shrink, but you'll need to do more math to decode it.
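
I don't know the exact code they're using, but here's a toy sketch of the general network-coding idea over GF(2) (plain XOR, so cruder than whatever Galois field the paper actually works in). The point is to show where the receiver's extra math lives: it collects enough coded packets and then solves a small linear system to recover the originals.

[code]
import random

K, PLEN = 4, 8        # packets per generation, toy payload size in bytes
originals = [bytes(random.randrange(256) for _ in range(PLEN)) for _ in range(K)]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode():
    """Sender: one coded packet = a random nonzero XOR-combination of the originals."""
    coeffs = [random.randrange(2) for _ in range(K)]
    if not any(coeffs):
        coeffs[0] = 1
    payload = bytes(PLEN)
    for c, pkt in zip(coeffs, originals):
        if c:
            payload = xor(payload, pkt)
    return coeffs, payload

def decode(stream):
    """Receiver: keep coded packets that add new information until K linearly
    independent ones arrive, then solve the K x K system over GF(2). This
    Gaussian elimination is the extra work pushed onto the receiver."""
    rows = []                                   # pivot rows: (coeffs, payload)
    for coeffs, payload in stream:
        c, p = list(coeffs), payload
        for rc, rp in sorted(rows, key=lambda r: r[0].index(1)):
            if c[rc.index(1)]:                  # cancel that pivot from the new row
                c, p = [x ^ y for x, y in zip(c, rc)], xor(p, rp)
        if any(c):                              # innovative packet: keep it
            rows.append((c, p))
        if len(rows) == K:
            break
    assert len(rows) == K, "need more coded packets"
    rows.sort(key=lambda r: r[0].index(1))      # leading 1s are now exactly 0..K-1
    for i in range(K - 1, -1, -1):              # back-substitute
        for j in range(i):
            if rows[j][0][i]:
                rows[j] = ([x ^ y for x, y in zip(rows[j][0], rows[i][0])],
                           xor(rows[j][1], rows[i][1]))
    return [p for _, p in rows]

# Losses stop mattering per-packet: the receiver doesn't ask for specific
# retransmissions, it just keeps listening until it has K useful packets.
print(decode(encode() for _ in range(1000)) == originals)
[/code]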

I'm not sure this is really a big story. It's clear the authors charmed the journalist they were talking to, but arXiv doesn't have peer review, so the paper doesn't carry the authority of a journal behind it; there could be obvious errors that an internet reporter, or a glance from Shadowrun forum members, won't catch. On top of that, the technology is being popularized by a startup that doesn't necessarily have any traction. I'm in wait-and-see mode on this one.
Sendaz
Indeed.

I had heard about something along these lines many years ago, but it never got far back then, so I was interested to see whether they have made any progress on it.
Shemhazai
I recommend this short lecture by Frank Fitzek, one of the study authors, before digging into the study itself.

Frank Fitzek, Aalborg University: Network Coding for Future Communication and Storage Systems

For me, it's fun to speculate that something like this partly underlies a revolutionary change from Internet to Matrix.