Shadowrun and Computers: Real life squared
Sabosect
Ya, I'm back. Really, really long story I don't feel like sharing.

Well, looking at the history Shadowrun has with computers got me wondering. Please excuse me if this is an old topic.

First, we have the internet. We know how that ends. Gotta love viruses. Then, the Matrix, which promptly crashes about 60 years from now. Then, the Matrix 2.0. Okay, who here wants to bet the Matrix 2.0 is going to crash at some point?

Anyway, my topic? Wait, let me rifle around... new Shadowrun notes... insane street sam... Ah, here it is.

Now, let's look at this whole record. It seems that, in 65 years' time, programmers won't have figured out what it takes to keep a global computer system up and working. After looking at it, it occurred to me why it failed. Their internet is exactly like ours. Instead of being one coherent system, you in fact have multiple systems hooked together and attempting to run as one. Well, amazingly, it works for us. For the moment, at least. In their case, they simply got the idea of throwing in regional divisions. Whoopee.

Anyway, this makes me wonder exactly how the Matrix is going to crash. Now, I'm not looking for sane, rational theories that make common sense. In my experience, those are usually the plans that fail, while the insane and psychopathic ones typically succeed. So, let's see the most off-the-wall thing you can come up with.

Personally, I vote for an insane AI that takes on the form of a rabid Monty Python rabbit and a decker who writes a program called Holy Hand Grenade and decides to go after the AI. The program crashes every server it's on, and they run all over the Matrix, destroying the entirety of it before a random lucky shot kills them both. The rest collapses inward into a continuity error.
Trax
Don't forget about Winternight; they despise the Matrix.
nezumi
They forgot to convert Metric to English.
Nikoli
The server guy was too busy downloading elf pr0n on his terminal and failed to make even cursory backups... oh wait, that's the general handwave for the first crash and subsequent loss of data.
Spookymonster
QUOTE (Sabosect @ Aug 12 2005, 07:58 AM)
Their internet is exactly like ours. Instead of being one coherent system, you in fact have multiple systems hooked together and attempting to run as one.

What you might see as a weakness is actually one of the internet's greatest strengths. One coherent system controlling the network means one ginormous point of failure; take out the master controller, and everything connected to it dies. The internet (or, more accurately, ARPAnet, the internet's 'grandfather') was designed with this weakness in mind. The distributed nature of the internet means that damage can be routed around; when one component fails, the slack is picked up by its sibling components.
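To make that concrete, here's a toy Python sketch of "routing around damage". The topology, the node names, and the breadth-first route finder are all invented for illustration; real routers are far smarter, but the principle is the same: lose a node and the path just shifts over to its siblings.
CODE
# Toy illustration of "routing around damage": a tiny mesh where a
# breadth-first search still finds a path after one node drops out.
# The topology and node names are made up for the example.
from collections import deque

LINKS = {
    "seattle":  {"denver", "chicago"},
    "denver":   {"seattle", "chicago", "atlanta"},
    "chicago":  {"seattle", "denver", "boston"},
    "atlanta":  {"denver", "boston"},
    "boston":   {"chicago", "atlanta"},
}

def find_route(src, dst, dead=frozenset()):
    """Return a hop list from src to dst, skipping any 'dead' nodes."""
    frontier = deque([[src]])
    seen = {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in LINKS[path[-1]] - seen - dead:
            seen.add(nxt)
            frontier.append(path + [nxt])
    return None  # no route left at all

print(find_route("seattle", "boston"))                   # via chicago
print(find_route("seattle", "boston", dead={"chicago"})) # reroutes via denver/atlanta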

Possible causes for a Matrix-wide collapse? Here's one suggestion. Remember the Blaster virus from a few years ago? While the shutdown loop was annoying, the IP probe was the real troublemaker; even systems that were immune to infection (Macs, Linux, mainframes, etc.) were impacted by the resulting network congestion. I personally watched as my company's mainframe environment was brought to its knees by network traffic generated by the 40,000+ infected PCs in our global enterprise. Eliminating the virus required a twofold approach: the virus code had to be removed from the infected systems, and the IP backdoor had to be closed to prevent reinfection.

But what if they couldn't prevent reinfection? What if the virus exploited a fundamental flaw in the network protocol(s)? Maybe (just maybe) you could eliminate the virus by destroying every infected system on the planet, but you'd still be sitting on a ticking time bomb, waiting for some stupid script kiddie to reintroduce the virus from some public library's terminal.

You'd have to scrap the protocol, maybe even the hardware too, depending on where the flaw was located. You'd have to rebuild your network architecture to compensate for the differences between the old and new systems. Why not build a better protocol while you're at it? More bandwidth, less overhead, increased functionality, etc...

Sound familiar?

[edit]
And if you think this couldn't really happen today, check this out.
Nikoli
Sounds like they might implement RFC 2795 to combat the problem.

After all it seems to work fine for "Project: Ancient History"
Westiex
QUOTE
After all it seems to work fine for "Project: Ancient History"


While Deus may have caused the problem that crashed the Matrix, I don't see why Ancient History would have. Unless someone made an error in compiling him.
Nikoli
No, RFC 2795 is the AH AI communications and processing protocol. Mayhap that is what will be used in Matrix 2.0.
Sabosect
QUOTE (Spookymonster)
QUOTE (Sabosect @ Aug 12 2005, 07:58 AM)
Their internet is exactly like ours. Instead of being one coherent system, you in fact have multiple systems hooked together and attempting to run as one.

What you might see as a weakness is actually one of the internet's greatest strengths. One coherent system controlling the network means one ginormous point of failure; take out the master controller, and everything connected to it dies. The internet (or, more accurately, ARPAnet, the internet's 'grandfather') was designed with this weakness in mind. The distributed nature of the internet means that damage can be routed around; when one component fails, the slack is picked up by its sibling components.

Which is, IIRC, later incorporated into the Matrix, with the groups of individual servers further separated by regions. The main problem is that viruses become harder to deal with in this kind of system. Instead of a single area you have to clean out, you have hundreds. It gets even worse when you factor in the technology differences between regions. And that's just the modern form; in the Matrix those technology differences are even more pronounced.

However, that isn't the major problem. The major problem is that, according to the canon timeline, the system has failed. Not once, but twice. I suspect the Matrix 2.0 either takes it to the next level, at which point even the regions and cities are divided further, or goes in the opposite direction and tries for less independence, much like the Matrix in the movie of the same name. Considering how Matrix 2.0 operates over the airwaves, I won't be surprised if it's the second one.

However, this still isn't a silly answer as to how the Matrix falls apart.
wagnern
Didn't Ma Bell almost lose their entire network due to a single " ; " being out of place?

Off-the-wall ideas about how this darn thing keeps crashing?

1: I always get an inner chuckle thinking of someone placing a 'Big Gulp' beverage on a crucial computer and walking away quietly, waiting for fate to do its work.

2: Microsoft* does it to sell new versions of their software. (When they finally get their stuff working properly, they cannot sell any further 'upgrades' and have to 'reboot' the computer market.)

3: The AI DEEP THOUGHT comes up with the answer to the ultimate question of the universe, and someone hires a Shadowrun team to crash the Matrix before everyone finds out. If everyone found out the answer, life would become meaningless and dull, like when someone tells you who won the game you taped. Unfortunately, the cult of Bob keeps reinstalling DEEP THOUGHT every time the Matrix is repaired.


*Yes, Microsoft, not whatever they call its merged future self. Microsoft does not merge; it conquers, consumes, destroys, stomps on things just to hear the snapping sounds. It is not the 800 lb. gorilla of the computer world, it is the Godzilla of the computer world.
hyzmarca
QUOTE (Sabosect)
QUOTE (Spookymonster @ Aug 12 2005, 08:48 AM)
QUOTE (Sabosect @ Aug 12 2005, 07:58 AM)
Their internet is exactly like ours. Instead of being one coherent system, you in fact have multiple systems hooked together and attempting to run as one.

What you might see as a weakness is actually one of the internet's greatest strengths. One coherent system controlling the network means one ginormous point of failure; take out the master controller, and everything connected to it dies. The internet (or, more accurately, ARPAnet, the internet's 'grandfather') was designed with this weakness in mind. The distributed nature of the internet means that damage can be routed around; when one component fails, the slack is picked up by its sibling components.

Which is, IIRC, later incorporated into the Matrix, with the groups of individual servers further separated by regions. The main problem is that viruses become harder to deal with in this kind of system. Instead of a single area you have to clean out, you have hundreds. It gets even worse when you factor in the technology differences between regions. And that's just the modern form; in the Matrix those technology differences are even more pronounced.

However, that isn't the major problem. The major problem is that, according to the canon timeline, the system has failed. Not once, but twice. I suspect the Matrix 2.0 either takes it to the next level, at which point even the regions and cities are divided further, or goes in the opposite direction and tries for less independence, much like the Matrix in the movie of the same name. Considering how Matrix 2.0 operates over the airwaves, I won't be surprised if it's the second one.

However, this still isn't a silly answer as to how the Matrix falls apart.

Of course, with centralized control you don't need viruses. Just blow up one computer and the entire world comes grinding to a halt. Imagine what would happen to the world economy if the de facto world currency couldn't change hands for a week or two.
nezumi
QUOTE (wagnern)
Didn't Ma Bell almost lose their entire network due to a single " ; " being out of place?

If you're talking about the big crash of '92 (it was '92, wasn't it? Right around then, anyway), the bigger reason is that the computers sent a message to each other when they booted up, but a system that received too many messages at once would shut down (in short, and assuming memory serves). So when two systems suddenly shut down, they sent messages to their neighbors when they booted back up, which overloaded those neighbors and made them reboot, shifting the traffic onto other computers and creating a giant reboot loop.

In short, it was because all the systems used the same poorly tested code. That isn't entirely true of our current internet system (although it is partially), as Spookymonster pointed out.
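If you want to see how quickly that kind of feedback loop snowballs, here's a toy Python model of the cascade as I described it above. The all-to-all topology and the one-message tolerance are invented; the point is just that identical, poorly tested recovery code on every node turns two failures into a network-wide reboot loop.
CODE
# Toy model of the cascade described above: a switch that comes back up
# announces itself to its peers, and any switch swamped by too many
# announcements at once crashes and recovers in turn. The topology and
# threshold are invented; this only shows the shape of the failure.
N = 8                      # switches, all directly connected to each other
TOLERANCE = 1              # announcements a switch can absorb per tick

crashed = {0, 1}           # two switches happen to go down together
for tick in range(4):
    announcements = [0] * N
    recovering, crashed = crashed, set()
    for node in recovering:                # everyone down last tick comes back up...
        for peer in range(N):
            if peer != node:
                announcements[peer] += 1   # ...and announces "I'm back" to its peers
    for node, count in enumerate(announcements):
        if count > TOLERANCE:              # swamped: crash and reboot next tick
            crashed.add(node)
    print(f"tick {tick}: down for reboot = {sorted(crashed)}")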
hobgoblin
the argument for the more centralized control of the matrix vs the net as we know it was that it was simpler to contain a virus spread if you reacted soon enough. ie, cut the connection to the infected area and then start the sweep.

with multiple connections that can't be done, which is one argument for why the pre-matrix crash virus had the effect it had.

also, recently there was talk about a flaw in cisco equipment. not many home users know about this company, but it's THE company when it comes to routers (the backbone of the net). basically, you don't have to take down every pc. you just nail the routers and watch the chaos roll.

but this can only happen when voip is the main form of telephony for the modern world, be it mobile or not. ie, the crash can only happen if everything moves over a single system.

it would be like if we only had rail transport between any two points in space and then suddenly the rails were cut. any system that requires the connection to be there and be highly available will then break down.
mfb
it's interesting to ponder the possibilities in Matrix 2.0. it's known that every individual node in the network is a local router in miniature--think p2p on a much larger scale, where your peers are basically anybody in range of your signal.

the problem is that, as i understand it, the p2p network would have to track physical location by GPS in order to work at all. everybody's effective network address (or, rather, the chain of routers used to reach that address) would be changing constantly as they drove around the city. so maybe the architecture turns that around, and uses GPS tracking as a helper instead of a hindrance. every node, as part of its network ID, would broadcast its current GPS location. that way, the network itself doesn't have to check the current GPS location of any point on the network except the sender and the intended recipient of a given transmission. the intended recipient's GPS (or rather, their GPS at the time of transmission) is included in the addressing information of the transmission. any node which receives that transmission can look at it and figure out whether or not it needs to repeat it, by comparing the intended recipient's GPS to its own and to the last sender's. if the last sender is closer to the recipient than you are, the packet can be discarded; if not, it's rebroadcast. the packet will necessarily reach the recipient by the fastest available route, and error-checking is easily accomplished by collating the web of transmissions that accompany each packet.

basically, finding a given recipient is a matter of including an address of "thataway!" in each packet. the individual nodes don't need to process anything except a simple yes/no based on values that are easily and quickly accessed. packets will bounce inevitably towards their targets along the shortest possible path.
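in rough python terms, the yes/no decision each node makes might look something like this. the packet fields, the coordinates, and the flat-earth distance math are all just made up for illustration:
CODE
# A minimal sketch of the yes/no relay decision, assuming each packet
# carries the recipient's last known GPS fix and the GPS fix of whoever
# broadcast it last. Node and packet shapes are invented for the example.
import math

def distance(a, b):
    """Rough planar distance between two (lat, lon) fixes; fine for a sketch."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def should_rebroadcast(my_gps, packet):
    """Relay only if I'm closer to the recipient than the last sender was."""
    to_recipient_from_me = distance(my_gps, packet["recipient_gps"])
    to_recipient_from_last = distance(packet["last_sender_gps"], packet["recipient_gps"])
    return to_recipient_from_me < to_recipient_from_last

packet = {
    "recipient_gps": (47.61, -122.33),      # downtown Seattle, say
    "last_sender_gps": (47.55, -122.30),
    "payload": b"hoi chummer",
}
print(should_rebroadcast((47.59, -122.32), packet))  # True: I'm the better relay
print(should_rebroadcast((47.50, -122.28), packet))  # False: drop it, I'm farther away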

of course, if the target is far enough away (1,000 miles? 500? not sure), this will be agonizingly slow. in that case, the original ping which locates the recipient (i'm picturing a GPS-indexed equivalent of DNS here, updated hourly) tells the sender to address his transmission to the nearest satellite uplink. the transmission gets shot into space, routed to the satellite uplink closest to the recipient, and then the directional p2p thing picks back up. similar allowances would have to be made for fast-moving nodes--people surfing while on the Tokyo-San Fran semiballistic, bullet trains, etc.
wagnern
You know what would be great for keeping a network going? AIs. No, not the superhuman ulterior-motive kind. More of a cockroach approach.

Imagine a Matrix populated with an entire self-regulating ecosystem of AI programs. True, if you muck with it enough you can damage it, but you can't destroy it. An 'asteroid' would destroy the dinosaurs, but the rats would survive.
Nikoli
Ah, a fan of Dan Simmons and his Hyperion Cantos.
nezumi
QUOTE (mfb)
the problem is that, as i understand it, the p2p network would have to track physical location by GPS in order to work at all.

That might be A solution, but it isn't the only one. Especially keep in mind the speed of semi-orbitals and bullet trains and the like, which, assuming GPS would even work for them, would shift between transmitters so suddenly it would be insane; the address would change too fast for reasonable communication. But there are other solutions (I don't imagine cell phones require GPS to operate) and, of course, whatever else people figure out in the next 65 years.


mfb
sure, i'm not saying for certain that's how it works. it's just an idea i thought of for how it could work. the real limit for it would be how fast you can propagate the p2p stuff--how fast a given node can receive the transmission, decide to send or not send, and then send if it needs to.
FrostyNSO
QUOTE (wagnern)
You know what would be great for keeping a network going? AIs. No, not the superhuman ulterior-motive kind. More of a cockroach approach.

Imagine a Matrix populated with an entire self-regulating ecosystem of AI programs. True, if you muck with it enough you can damage it, but you can't destroy it. An 'asteroid' would destroy the dinosaurs, but the rats would survive.

What happens when the rats start working together?
hyzmarca
QUOTE (mfb @ Aug 13 2005, 02:41 AM)
sure, i'm not saying for certain that's how it works. it's just an idea i thought of for how it could work. the real limit for it would be how fast you can propagate the p2p stuff--how fast a given node can receive the transmission, decide to send or not send, and then send if it needs to.

You wouldn't need GPS for that setup, mfb. All you would need is a way to establish which nodes are connected directly to each other. That is done simply enough. There is no real need for address shifts either. With a single worldwide network it is simple enough to only use hardware addresses.

Every node would determine which nodes it is directly connected to and then send that information to the nodes that it is directly connected to. The next node would add its connections to the map and send it on. The result is, eventually, a complete network map.
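As a rough Python sketch of that map-building (the hardware addresses are invented, and links are assumed to be symmetric, which real radio links aren't always):
CODE
# Each node floods the set of hardware addresses it can hear directly,
# and everyone merges those adverts into one shared picture of the network.
def merge_advert(network_map, node, neighbors):
    """Fold one node's 'here's who I can hear' advert into the shared map."""
    network_map.setdefault(node, set()).update(neighbors)
    for n in neighbors:                      # links are assumed symmetric here
        network_map.setdefault(n, set()).add(node)
    return network_map

# Adverts as they trickle in from three nodes, identified by hardware address.
adverts = [
    ("00:1a:7f:02", {"00:1a:7f:09", "00:1a:7f:11"}),
    ("00:1a:7f:09", {"00:1a:7f:02"}),
    ("00:1a:7f:11", {"00:1a:7f:02", "00:1a:7f:23"}),
]

network_map = {}
for node, heard in adverts:
    merge_advert(network_map, node, heard)

for node, links in sorted(network_map.items()):
    print(node, "->", sorted(links))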

Suborbitals are just fragged, but that isn't anything new. It isn't like you could ever get an internet or matrix connection from one.
nezumi
QUOTE (hyzmarca)
You wouldn't need GPS for that setup, mfb. All you would need is a way to establish which nodes are connected directly to each other. That is done simply enough. There is no real need for address shifts either. With a single worldwide network it is simple enough to only use hardware addresses.

I believe his problem was what to do if an object is currently moving, like your cell phone, your car, or an airplane. If they're using wireless connectivity, they'd quickly leave the range of the node they've been mapped to and reconnect to a new one. This wouldn't be a problem in and of itself, but any packets currently in transit would still be going to the old node, and you'd have pretty wicked packet loss unless the time you spend in range of each node is more than (time to remap + time for a complete packet bounce + processing time on the other side (+ time for their remap if they're moving too)).

MFB's idea, I presume, would cut that down by having a router redirect the packets as appropriate when they got close to the target, so you'd only lose packets if you changed nodes faster than (time for remap + time for one small hop), which is much kinder.
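As a back-of-the-napkin check in Python (every number here is invented; it's just the inequality above in code):
CODE
# You only drop packets if you cross the coverage overlap faster than the
# handoff can complete. Overlap width, remap and hop times are made up.
def handoff_survives(speed_mps, overlap_m, remap_s, hop_s):
    """True if the handoff finishes before the node leaves the overlap zone."""
    time_in_overlap = overlap_m / speed_mps
    return time_in_overlap > (remap_s + hop_s)

print(handoff_survives(15, 50, 0.5, 0.05))    # someone driving at 15 m/s: fine
print(handoff_survives(3000, 50, 0.5, 0.05))  # semiballistic at 3 km/s: not so fine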
hobgoblin
first off, for any handoff to be effective there has to be an area where the signals of both the old and the new "tower" are within range, so that the unit can keep getting data from the old one while signing up with the new.

at least that would make sense to me.

i'm guessing that mfb's p2p idea is only practical in the near neighbourhood, when walking or similar. most likely the long-range matrix action will be done over a system quite similar to today's mobile phone systems.

so i'm guessing that a device can connect locally 1-1, or even be routed over one or more other devices to a third party, but it's also able to connect to a long-range mobile tower when in range. point is that for a quick file transfer you just look for the person in the local area, maybe ask the other devices nearby if they can see the device you're trying to connect to. if that comes up empty, then it calls up the local mobile tower and makes a connection over that.

this allows 2 or more devices that are not within mobile coverage to still exchange data, while at the same time not connecting to the matrix directly.
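in rough python terms the lookup order would be something like this. the FakeRadio class and every name in it are made up; it's just to show the order of fallbacks:
CODE
# Sketch of the lookup order: try a direct link, ask nearby peers to relay,
# and only fall back to the mobile tower if both come up empty.
class FakeRadio:
    def __init__(self, in_range, peer_contacts, tower):
        self.in_range = set(in_range)          # devices I can hear directly
        self.peer_contacts = peer_contacts     # what each nearby peer can hear
        self.tower = tower

    def devices_in_range(self):
        return self.in_range

    def ask_peer_for_contacts(self, peer):
        return self.peer_contacts.get(peer, set())

def find_route_to(target, radio):
    if target in radio.devices_in_range():              # 1. direct link
        return ("direct", target)
    for peer in radio.devices_in_range():                # 2. one-hop relay
        if target in radio.ask_peer_for_contacts(peer):
            return ("relay", peer)
    return ("tower", radio.tower)                        # 3. last resort: the tower

radio = FakeRadio(
    in_range={"chummers_commlink", "vending_machine"},
    peer_contacts={"vending_machine": {"fixers_commlink"}},
    tower="downtown_tower_7",
)
print(find_route_to("chummers_commlink", radio))   # ('direct', 'chummers_commlink')
print(find_route_to("fixers_commlink", radio))     # ('relay', 'vending_machine')
print(find_route_to("offline_deck", radio))        # ('tower', 'downtown_tower_7')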

to rely only on short-range devices and a self-generating routing system for anything that moves as fast as a car or faster would be silly. you'd be moving in and out of so many connection zones that any device you have would use all its time setting up new connections.

hmm, i recall reading about a chip intel has made. it's able to look for all kinds of wireless connections and pick the one that's most stable and speedy at any given moment. so, if you're stationary within range of a wifi hotspot, that will be used. if you're moving so fast that normal wifi hotspots don't have time to connect, it may go to a mobile connection or even wimax or similar.

i'm not sure, but i think the diff between a mobile router and a normal router may be that it keeps around a copy of the packet sent until it's sure it's delivered, in case the user changes tower in the middle of the transfer.

basically you have x number of towers in an area connected to an area router that's in turn connected to a router further upstream. a bit like the idea of ltgs and rtgs in the classic sr matrix.

only problem may be if the user is close to changing from one area to another. but maybe the routers can talk to each other, so that the upstream router forwards the same packet to both area routers, and the user's device or one of the area routers can tell the other when the packet got delivered (or not) so that the other can just drop it.

most likely said packets are made small so as to transfer quickly.
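a very rough python sketch of that "keep a copy until one tower confirms delivery" idea. the class, the packet id, and the two-tower setup are all invented for illustration:
CODE
# The area router hands the packet to both towers the user might be under,
# and only forgets it once one of them reports delivery.
class AreaRouter:
    def __init__(self):
        self.pending = {}                      # packet_id -> (packet, towers it went to)

    def forward(self, packet_id, packet, candidate_towers):
        """Send the same packet via every tower the recipient might be under."""
        self.pending[packet_id] = (packet, set(candidate_towers))
        for tower in candidate_towers:
            print(f"sending {packet_id} via {tower}")

    def delivery_report(self, packet_id, tower):
        """One tower confirms delivery; the other copies can be dropped."""
        if packet_id in self.pending:
            _, towers = self.pending.pop(packet_id)
            for other in towers - {tower}:
                print(f"telling {other} to drop its copy of {packet_id}")

router = AreaRouter()
router.forward("pkt-42", b"small packet", ["tower_a", "tower_b"])
router.delivery_report("pkt-42", "tower_b")    # tower_a is told to drop its copy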
This is a "lo-fi" version of our main content. To view the full version with more information, formatting and images, please click here.
Dumpshock Forums © 2001-2012