Parasitic Computing
Crusher Bob
Going over some computer ethics dilemmas, I found this neat article about parasitic computing. Anyone want to think about its applicability to SR? I'm a bit too tired to do so. Parasitic Computing
Cray74
As I understand it, the architecture of the Matrix is designed for ultimate isolation of systems to prevent a wildfire-like spread of a virus, like what happened in the Crash of 2029. My first reaction is, "no, you can't do that."

On the other hand, all navigation of the Matrix involves "interrogating" all objects within "sight" of the cyberdeck or cyberterminal and getting a response that supplies the files containing the objects' virtual appearance and position relative to the viewer. One could imagine hiding a simple computation in the request for icon appearance information. So on second thought, I'll say, "Some kind of parasitic computation is probably possible."
Crusher Bob
You can also take a look at 'the drummers' in Stephenson's The Diamond Age for a sort of 'Universal Brotherhood sets out to solve an NP-complete problem' angle. Why stop at taking over silicon computers when there are all those biological ones just wandering around?
Siege
An earlier take on the concept, even before "Matrix Reloaded": the "Lich" programs from CP 2020.

Fun stuff.

-Siege
krishcane
That parasitic computing article was interesting to read, but it's fantasy the way the Internet is designed today. As it admits itself:

QUOTE

The implementation offered above represents only a proof of concept of parasitic computing. As such, our solution merely serves to illustrate the idea behind parasitic computing, and it is not efficient for practical purposes in its current form. Indeed, the TCP checksum provides a series of additions and a comparison at the cost of hundreds of machine cycles to send and receive messages, which makes it computationally inefficient. To make the model viable, the computation-to-communication ratio must increase until the computation exported by the parasitic node is larger than the amount of cycles required by the node to solve the problem itself instead of sending it to the target.


Duh. It goes on to say:

QUOTE

However, we emphasize that these are drawbacks of the presented implementation and do not represent fundamental obstacles for parasitic computing. It remains to be seen, however, whether a high-level implementation of a parasitic computer, perhaps exploiting HTTP or encryption/decryption could execute in an efficient manner.


This is wishful thinking. That last sentence in particular essentially says, "We haven't thought of a way in which this is actually efficient, but that doesn't mean there isn't one."
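
For anyone who hasn't read the paper: the core trick is that every TCP stack verifies a ones'-complement checksum before it will pass a packet up and reply, so the parasite can bake the answer it is hoping for into the checksum field and let the target's mandatory verification do the testing. Here's a toy sketch of that principle (not the paper's actual 3-SAT packet construction, just the basic idea, and the candidate-sum question is made up):

CODE

def ones_complement_sum(words):
    # RFC 1071 style ones'-complement sum of 16-bit words
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # fold any carry back in
    return total

def make_checksum(hoped_for_words):
    # The checksum the parasite writes into the packet, computed from the
    # answer it *hopes* the payload represents.
    return (~ones_complement_sum(hoped_for_words)) & 0xFFFF

def receiver_accepts(payload_words, checksum):
    # What every TCP stack does for free: verify that payload + checksum
    # sums to all ones. A reply only ever comes back if this passes.
    return ones_complement_sum(payload_words + [checksum]) == 0xFFFF

# Parasite question: "which candidate pairs (a, b) sum to 10?" It never adds
# a and b itself; it bakes the hoped-for total into the checksum and lets
# the target's verification do the test.
checksum = make_checksum([10])
hits = [(a, b) for a in range(8) for b in range(8)
        if receiver_accepts([a, b], checksum)]
print(hits)  # only pairs with a + b == 10 -- only those packets earn a reply


The target does the additions as part of ordinary packet handling; the parasite just watches which packets come back.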

There is a fundamental obstacle they haven't touched on here. Let's assume, just to be generous, that there is no overhead data going along with the computational data -- just the query and potential solution and almost no extra bits to get it there. That will never be true, but in the right protocol, it might be 99.9% payload, so let's call it 100% for fun.

The time to get an answer will be the time for the system to send all the queries/solutions out to the network (possibly plus the time for one computation, if the last solution sent was the correct one; let's assume that one computation is fast and doesn't add meaningful time compared to sending all the solutions).

If this system is to be usefully efficient, then the parasite computer must be able to send all the query/solution pairs out to the network faster than it could compute the solution itself. Even assuming the worst case, that the parasite computer would have to try every possible calculation to get to the answer, we are comparing computation speed one-for-one with query/solution transmission speed. In other words, the system has to be able to send a given query/solution out to the network faster than it could compute that same query/solution.

The problem is: If we are talking about any standard network protocol, then the computation in question is an algorithm that is part of the protocol. For the parasite system to send the data more quickly than it could perform the algorithm itself, it means it has to have more network bandwidth than it could actually use to receive data in normal operation.

Let me say that again. If the network bandwidth of the parasite host is fast enough to send the solution/query out faster than the parasite host could calculate the related protocol algorithm, then that bandwidth is faster than the parasite host could receive and process data if it were, say, just trying to surf the Internet normally.
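
To put rough numbers on that (all of these figures are made-up assumptions for illustration, not measurements):

CODE

# Back-of-the-envelope version of the argument above.
packet_bits = 40 * 8       # a minimal TCP/IP packet carrying one candidate
checksum_cycles = 100      # rough local cost to checksum that packet
cpu_hz = 1e9               # a 1 GHz parasite host

seconds_per_candidate = checksum_cycles / cpu_hz      # time to just do it yourself
breakeven_bps = packet_bits / seconds_per_candidate   # send rate needed to break even

print(f"{breakeven_bps / 1e9:.1f} Gbps")  # ~3.2 Gbps, and that's only break-even


A host with that 1 GHz processor could never usefully consume 3.2 Gbps in normal operation, which is exactly the point.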

One could in principle build such a device -- say by hooking a 10 Gbps OC-192 to your PC. However, such a device would never exist as a normal consumer object, because it can't use its own bandwidth. Your PC will never have an OC-192 NIC and a Pentium 4 processor, because it's a total waste.

Routers are built this way, because they just push traffic around and don't open it up or process it. However, they accomplish this by using dedicated processors in the network interfaces in order to push all that traffic. For the same money and effort, you could just build a fast computer to do it. There is no engineering reason to custom-design a parasite host like this.

Of course, if you could compromise or steal an existing router, you could in theory re-program it to do parasitic computing. But if you could do that, why wouldn't you just compromise or steal a powerful workstation, which has more computational punch and is less tightly secured?

In summation, parasitic computing would require effective bandwidth greater than the processing power of the machine using it. Such devices only exist in infrastructure, which is not designed for complex uses like parasitic computing. Therefore, an effective parasitic computer would require a custom-designed device connected to massive bandwidth -- in today's world, sitting inside the core of one of the top-tier ISPs.

---------------------------------------

For SR purposes, I can envision an imaginary protocol used intentionally for distributed processing (perhaps via some kind of incremented-payload multicast system, for those of you in the industry) being co-opted for parasitic computing. The corps would most likely recognize such a security hole and avoid it like the plague, but it's possible.

--K
krishcane
Adding a conspiracy theory for ya:

If the CIA, NSA, FBI, Big Brother, etc. wanted to....

They could custom-design a parasite computer to sit in the core of the ISPs (where they already have scanners for various hot topics, called "Lawful Intercept"). That machine could be given the massive bandwidth required to make it useful and could occasionally take over the ISP's core with massive amounts of data for code-breaking or other large-problem solutions. It would pretty much blot out normal Internet use around the ISP core whenever it was turned on (or would require its own parallel infrastructure out to at least the regional nodes, which would cost hundreds of millions of dollars), but it could be done.

Although, it's probably cheaper and less risky to just buy a lot of Cray supercomputers and link them.

--K
mfb
i doubt you'd be able to set up a parasitic computation thingy that forged its own connections and hacked the CPU time by itself, in SR. however, it'd be very easy (and tedious) for a decker to simply hack into a hundred or so low-end hosts, set up a superuser account, and create a script that waits for processing instructions from the supernode.
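
something along these lines, roughly (the supernode address and the wire format are made up, obviously):

CODE

# rough sketch of the script you'd drop on each compromised host
import socket, time

SUPERNODE = ("supernode.example", 4040)   # hypothetical

def crunch(work_unit: bytes) -> bytes:
    # stand-in for whatever the supernode actually wants computed
    return bytes(reversed(work_unit))

while True:
    try:
        with socket.create_connection(SUPERNODE, timeout=30) as s:
            work = s.recv(4096)          # wait for processing instructions
            if work:
                s.sendall(crunch(work))  # ship the result back
    except OSError:
        time.sleep(60)                   # supernode unreachable; lie low and retry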
krishcane
Said decker might also be able to write a strong smart frame to log in to various hosts and insert the script. I could see that within the realm of SR. That's basically a distributed-attack virus in today's terms.

--K
Crusher Bob
The main utility of parasitic computing seems to occur when the 'marginal cost' of bandwidth is less than the 'marginal cost' of more computing power. In fact, about half that cost, since you need to both send and receive your answer. However, if you can arrange it so that the 'hosts' don't send you a response unless the answer is correct, then you could theoretically still come out on top if bandwidth were only 'a bit' cheaper than processing power.
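
Put as a toy cost model (the units and numbers are arbitrary illustrations):

CODE

def parasitic_cost(n, send_cost, recv_cost, reply_only_on_hit=False):
    # hosts either answer every packet, or stay silent unless the answer is correct
    replies = 1 if reply_only_on_hit else n
    return n * send_cost + replies * recv_cost

def local_cost(n, compute_cost):
    return n * compute_cost

n = 1_000_000
compute = 1.0        # cost to test one candidate yourself
send = recv = 0.6    # bandwidth only 'a bit' cheaper than computation

print(parasitic_cost(n, send, recv) < local_cost(n, compute))        # False: send + receive loses
print(parasitic_cost(n, send, recv, True) < local_cost(n, compute))  # True: silent-unless-correct wins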