Full Version: Reality Filter, AR and VR
Dumpshock Forums > Discussion > Shadowrun
Garrowolf
What if instead of having AR and VR be completely separate, you have them as a continuum? Treat AR, VR, and the Reality Filter as all versions of the same thing.

On the bottom rung are small, simple interfaces. They can be floating hologram-looking menu systems. They give you the ability to access information and have a limited feedback system that lets you push the buttons they project.

The next rung has full floating AR that uses your head as the center of a desktop. Basically it all floats around you, and you can move your head some, but not that much before the whole thing moves with it. You can pull these windows around and mentally press some buttons.

The next rung is better integration with the physical world. You can anchor a window to a meatspace object as long as it has an RFID tag there. This way you can have an office++ in meatspace. You can also close your eyes and enter into VR to see a small office or a house with fairly good rendering. You can access a VR site and assign it to a door. Then you can walk through the door in a form of Virtual Projection (kind of like Astral Projection - like that? I just made it up! At least I think I made it up). You walk around while your meat body is coma-like.

The next rung is good integration with the physical world. You can edit the world around you to look like some other place. You can give your house a window that looks like a real window, looking out onto the Mediterranean. You can change your walls. You can add new objects that are VR, which either cover up the physical or add features to it. You can feel these objects as different. The simsense is there adding to its reality. People in VR can come over to your altered house (or your real one) and interact with you as if they were actually there. They could open the door; it would show the door opening and provide the correct sounds as well. They could shake your hand, and the simsense would provide that information too.

You could either choose to Virtually Project or edit the world around you to still go to a VR site. You could have your bedroom work like a holodeck. Your reality filter would just provide cues to keep you from hitting walls or furniture. It could scroll with you so that you could move through the VR site to places beyond the size of your room. You look like you are walking straight through the room, but you are actually pacing slowly back and forth with the VR rotating around you. You tell it mentally that you want to get to a certain chair, so it steers you to your own chair at the right moment.

The last level would be fully integrated. Your reality is fluid.

I was thinking that each of these steps would require a higher and higher system and response level. This way you can have low level commlinks that act much like AR PDAs. Then you have someone who is always making the world a rosy hue.
Crusher Bob
The problem here is that the limits on the interface are generally the user's ability to comprehend the data, and the design of the interface that makes that harder or easier. Even the dirt-cheap CPUs found in t-shirt care tags can probably put up all sorts of pretty pictures in post-processor-gasm 2070.
Serbitar
Garrowolf: that's already the case. Sim modules provide AR and VR. VR is just full AR with RAS override on.

Don't ask me how that fits the rules, but that's the fluff.
hobgoblin
i would not say that AR is just VR with the RAS off, and i can't say i understand how you get that impression from the books. but then we have had a lot of those discussions lately. so many, in fact, that i sometimes wonder if we are reading the same book wink.gif
Serbitar
Effectively its just that.
lorechaser
QUOTE (Garrowolf)
The next rung has full floating AR that uses your head as the center of a desktop. Basically it all floats around you, and you can move your head some, but not that much before the whole thing moves with it. You can pull these windows around and mentally press some buttons.

Tangent.

Why is this a good thing? I can alt-tab from window to window and use keyboard shortcuts a lot faster than I can reach up, grab the dancing document icon, and move it in front of me, then pull out my virtual pen to start writing a document.

I completely buy in to the idea of the VR matrix, but at the same time, I really don't. It simplifies things to a level that anyone can grasp, but I think the hard core are still command-line hacking.

I liken it to the Matrix, where the Operators learn to actually read the code, not the digital representations. They don't watch a movie screen showing what's happening, they watch the streaming code bits to know what's going on....

And I know it's a staple of VR from way way back, it just occasionally bugs me.

Then I go model the next DatSecure haven as a neon blue sphinx with 5 wings, and get over it. wink.gif
hobgoblin
QUOTE (Serbitar)
Effectively its just that.

reductio ad absurdum in effect i take it...
hobgoblin
QUOTE (lorechaser)
I liken it to the Matrix, where the Operators learn to actually read the code, not the digital representations. They don't watch a movie screen showing what's happening, they watch the streaming code bits to know what's going on....

but operators are slowpokes compared to agents and neo.

do not forget that hot sim VR is compared to people being able to feel the code flowing past them. kinda like neo "feeling" the agents outside the door in matrix 2...

its also a case of: rather than reading, interpreting, and formulating the right command to send back, you just reach for the virtual file like you would reach for the physical document.

you are not thinking about each and every movement of the arm when reaching for a book on the table. but if you want to load a file command-line style, you have to think about the file name, the command name, and maybe some switches you want to use to get the right sorting of the file's contents.

when one talks about the power of the command line, one most often talks about the ability to chain together multiple programs using "pipes": that you can send the output of one program as input to another.

this is much harder to do when using gui style programs.
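hobgoblin's point about pipes, sketched in Python rather than a shell (the filenames and stage names below are made up for illustration; each lambda stands in for one program in an `ls | grep | sort` style pipeline):

```python
from functools import reduce

def pipe(data, *stages):
    """Chain stages like a Unix pipeline: the output of each stage
    becomes the input of the next."""
    return reduce(lambda acc, stage: stage(acc), stages, data)

# roughly: ls | grep 'log' | sort, over a hypothetical file listing
files = ["beta.log", "alpha.log", "notes.txt"]
result = pipe(
    files,
    lambda names: [n for n in names if "log" in n],  # plays the role of grep
    sorted,                                          # plays the role of sort
)
print(result)  # ['alpha.log', 'beta.log']
```

The point survives the translation: each stage knows nothing about the others, and you compose behavior by wiring outputs to inputs, which is exactly what GUI-style programs make hard.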
lorechaser
QUOTE (hobgoblin)
QUOTE (lorechaser @ Feb 6 2007, 06:12 PM)
I liken it to the Matrix, where the Operators learn to actually read the code, not the digital representations.  They don't watch a movie screen showing what's happening, they watch the streaming code bits to know what's going on....

but operators are slowpokes compared to agents and neo.

do not forget that hot sim VR is compared to people being able to feel the code flowing past them. kinda like neo "feeling" the agents outside the door in matrix 2...

its also a case of: rather than reading, interpreting, and formulating the right command to send back, you just reach for the virtual file like you would reach for the physical document.

you are not thinking about each and every movement of the arm when reaching for a book on the table. but if you want to load a file command-line style, you have to think about the file name, the command name, and maybe some switches you want to use to get the right sorting of the file's contents.

when one talks about the power of the command line, one most often talks about the ability to chain together multiple programs using "pipes": that you can send the output of one program as input to another.

this is much harder to do when using gui style programs.

Maybe that's the key. I'm not thinking of it from a DNI point of view. I'm thinking of the guys with the trodes on their heads and the gloves, actually reaching out to grab the book.

If rather than the thought process of "windows -> arrow down 3x, enter" I think "open drawer, remove book" it might not be so different.
hobgoblin
as in reaching for the AR book icon/ARO?
mfb
interfaces are probably customizable by the user. people who don't use computers often probably slave their icon's 'movement' to their gloves--when the user moves his hand, the icon moves its hand, so they look like they're groping around blind. more advanced users slave most Matrix actions to finger movements. curling your right index finger downloads whatever your left index finger is pointing at. tapping your left index finger and thumb together logs you off. tiny, intuitive gestures.

DNI would work the same way. slow users (low skill, low-end gear) concentrate on moving in the direction they want to go, and their icon moves that way. faster users select a location by glancing at it, twitch their virtual fingers, and end up where they wanted to go.
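mfb's binding scheme amounts to a lookup table from tiny gestures to whole Matrix actions. A sketch, with every gesture and action name invented for illustration:

```python
# Hypothetical gesture-to-action bindings, per mfb's examples:
# one small, intuitive gesture triggers one whole Matrix action.
BINDINGS = {
    "curl_right_index": "download_pointed_target",
    "tap_left_index_thumb": "log_off",
    "glance_and_twitch_fingers": "jump_to_location",
}

def handle(gesture):
    """Return the Matrix action bound to a gesture; unbound gestures
    do nothing (a novice flailing his whole arm maps to no shortcut)."""
    return BINDINGS.get(gesture, "no_op")

print(handle("tap_left_index_thumb"))  # log_off
```

The skill gap mfb describes is then just a difference in how dense and how subtle a user's binding table is, not a difference in the underlying mechanism.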
Garrowolf
okay so I didn't really finish.

What I was trying to say was how about this as a new stat on your commlink? Basically give rules to the fluff.

I was thinking that if it had a rating it could be linked to response. It would show the difference in another way from one level of commlink to another. The cheap stuff would be a version of a PDA. A high level one would be like walking into a desktop.
hobgoblin
i just dont see the need for that...
Garrowolf
QUOTE (hobgoblin)
i just dont see the need for that...

*sigh*

Then don't use it.
I was trying to come up with a way to integrate the pieces a little better. I also don't see the reason that there should be an IP boost when using VR.
cetiah
QUOTE (Garrowolf @ Feb 7 2007, 11:21 PM)
QUOTE (hobgoblin @ Feb 7 2007, 06:53 AM)
i just dont see the need for that...

*sigh*

Then don't use it.
I was trying to come up with a way to integrate the pieces a little better. I also don't see the reason that there should be an IP boost when using VR.

I don't really see the need either. An interface "type" works for me.

But I'll bite anyway.
In 2070, factors such as memory and hard drive space are assumed to be infinitely abundant. And yet System and Response are limited, even though they are the only things that could theoretically be made "infinitely abundant" with today's technology through distributed computing.

It's clear that we need a limiting attribute somewhere in the rules, and I think replacing Response (which is more or less useless) with Interface opens up a wide range of possibilities. I don't know for what. One idea I can think of is to have every interface device contribute some amount to Interface score. So a DNI interface will have a different Interface score than someone using gloves and glasses.

If you adopt Frank's theory that coordinating activities between multiple nodes is the primary characteristic of System and System represents the ability to orchestrate these resources, then I can even see Interface reflecting System. Alternatively, I suppose Interface could limit hits in the same manner that spells limit hits for magicians.

I don't know. There's stuff you can do.
Garrowolf
well I can see some sort of matrix score based on proxy actions you have taken. I can see what he is talking about. I'm just worried that it is leading away from regular shadowrunning. It would work well for NPC hacker information brokers though.

I have already reworked the concepts of Response and System. Response is a combination of CPU and memory. It is the hardware limiter.

System is just a package-deal OS. I don't use it as a limiter. It is basically the default rating for the common-use programs and the Firewall. It is just what comes packaged with your commlink. That way the PC doesn't have to buy up a bunch of programs. They buy the System and raise a couple of programs that they use a lot. You could have a System of 3 and a Firewall and Reality Filter of 5.

Response is the only limiter. You can have a program on your computer with a higher rating than your Response, but it only works at your Response level. If you have to bridge through a repeater, then the repeater's Response becomes the limiter.

I like the idea of calling it an Interface rating.
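Garrowolf's Response-as-sole-limiter rule is simple enough to state in code (a sketch; the function name is made up, and it just restates the rule: a program runs at the lowest Response in the connection chain):

```python
def effective_rating(program_rating, *response_chain):
    """House rule sketch: a program runs at its own rating, capped by
    the Response of the local commlink and of every node (e.g. a
    repeater) that the connection is bridged through."""
    return min(program_rating, *response_chain)

print(effective_rating(5, 3))     # rating-5 program on a Response 3 commlink -> 3
print(effective_rating(5, 4, 2))  # same program bridged through a Response 2 repeater -> 2
```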
cetiah
QUOTE (Garrowolf)
System is just a package-deal OS. I don't use it as a limiter. It is basically the default rating for the common-use programs and the Firewall. It is just what comes packaged with your commlink. That way the PC doesn't have to buy up a bunch of programs.

Hmmm. Sort of like the computer "defaults" to its attribute rating. I kind of like that.
Garrowolf
Basically I think of System as just the stuff that comes with the OS. It's like Windows. You could put more advanced programs on it than what it came with. The limiter was the hardware most of the time, not the software. People think it is the software because they didn't find the right patch to get it to work most of the time.

This way it is simple to buy commlinks but people can fine tune what they want quickly.
Spike
The way I've seen it is more or less thusly:

AR requires some sort of display/interface technology to work. This can be wireless-capable contact lenses (in the RAW, no less...) that read your eye movements, coupled with voice recognition software and earbuds for sound replay. But you DO have that display/interface requirement.

VR requires a simrig in your commlink. Simrigs require either a trode set or a datajack to function, and the data flows to and from your brain.

I don't see this as an interpretation so much as what the damn manual says. Maybe I'm reading into it, maybe not. VR requires sim, sim requires a simsense interface... trodes/datajack. AR does not require/use simsense at all; it's a completely different method of interfacing. The AR display could completely cover your reality with a reality filter, but it's a hollow illusion: take off your display set, or use a sense not covered (touch, say), and you won't feel plate armor or see knights... you'll touch and see the dirty real world without any AROs or filters.

VR is essentially real; it's all in your head, but it's real. Only by breaking the connection completely can you get out of it (vs. just peeking around your VR display).

Obviously the difference between hot and cold VR is the level of signal strength and the number of filters between your brain and nasty Mr. Black IC. Note that by purely fluffy descriptions, hot-sim VR can fry your brain even without hostile IC attacking you (and probably represents a critical glitch). It's running sim without safety nets...




All of which might explain why I find these threads a little confusing.... wobble.gif
cetiah
QUOTE (Spike)
All of which might explain why I find these threads a little confusing.... wobble.gif

"What if instead of having AR and VR being completely seperate you have them as a continuum?" -Garowolf
Garrowolf
Yes Spike I was talking about a CHANGE.

I think that the separation between AR and VR is mostly artificial. If you make it a series of Interface levels, then it makes more sense. I think by this point you could have a lot more levels of simsense access than just hot, cold, and off. You would probably need one for AR anyway, to give you tactile feedback from touching the floating windows.

I was thinking that the types of interface could give you access to certain interface levels.

Take away the bonuses from VR. It shouldn't make you faster, because everything is a physical object to be manipulated. If anything it should keep you at the same rate as you normally do things.

I was thinking that the interface type could determine that. Basically, have the passive types be things that allow you to see and maybe feel things as a receiver. Basically something like the SQUID from Strange Days that sends but does not receive. Then the datajack becomes useful again, because you reasonably would be faster if you could mentally access controls. Those controls would be faster as small floating icons and such instead of large virtual objects. Flashing menus and fast icon clusters would be much faster.

Besides, there is no real reason for VR to need simsense. It is just more convenient. You can do VR now (just low-res). I see no reason that you can't have simsense in variable levels in AR as well. Why not have a menu for a restaurant where you can taste things ahead of time? Why not have your commlink tell your body that you are warmer than it really is?

The game has packaged things together a certain way. I think that you can do a lot of interesting things by separating them.
Spike
Well, I can see that then. Rather than suggesting they ARE something which is contradicted by the RAW, you are advocating making a change to the RAW to add levels of interesting ideas to your games. I'm down with that.

Recall that as it stands, the only way to actually feel or smell anything in AR is to buy very expensive and limited tactile feedback suits, and rare and exotic (because you gotta be strange to do it...) cyberware that accepts the occasional ARO code for smells. The AVERAGE AR user doesn't feel the icons he moves around.

The reason for separating the AR/VR thing is that according to the book, everyone uses AR to some extent all day long. You don't see billboards and road signs and advertisements, you see AROs. Go to a 'bookstore' and the books are flat black boxes with no information on them; it's all in the AR interface everyone uses, fed from RFID tags. You don't talk to your buddy, you chat with him via AR; you don't flirt with that hot chixxor, you check out her AR profile and send a compatibility check. People aren't going to have it downloaded into their brains by default... maybe some would like it, but others would refuse to make that leap. Simsense is still 'dangerous' technology... and a technology that overrides your normal sensory input, making it daunting to use while attempting to use your real senses at the same time.

Now, altering sim to make it a continuum is not a bad idea for flavor reasons. Mechanically it wouldn't need to have an effect, unless you just like people being kacked by rogue IC through their cell phones as a feature of daily life. Remember, in Strange Days the 'sim' technology was fairly safe; the closest we saw to an actual murder using it involved the bad guy monkeying with the hardware while the guy was 'jacked in' to override his ability to get out. Essentially he electrocuted the man, with the special effect of burning out his brain from sensory overload...
Garrowolf
I don't think that the average user needs to worry about IC in the first place. You can very easily have a safe rig that has safety limits in the hardware. You can feel like you have touched something but not that you have been stabbed. You can feel like you have smelled something but not a poison gas. You can probably taste things but not burn your tongue.

My point is that the setting has had decades to work on this technology. It has already been on the street for years. I think that it makes sense for it to have evolved. Maybe you couldn't generate BTL signals with a normal rig. You would need a specialized or hacked rig to do that.

Simsense doesn't need to be separated off. I think that it would add to advertising and all kinds of interesting things. I see things moving away from the separation between VR and RL. We are already trying to go into escapism. Why not make YOUR world a better place without actually doing good?

Sounds pretty cyberpunk to me.
cetiah
QUOTE (Garrowolf @ Feb 8 2007, 01:28 PM)
I don't think that the average user needs to worry about IC in the first place. You can very easily have a safe rig that has safety limits in the hardware. You can feel like you have touched something but not that you have been stabbed. You can feel like you have smelled something but not a poison gas. You can probably taste things but not burn your tongue.

My point is that the setting has had decades to work on this technology. It has already been on the street for years. I think that it makes sense for it to have evolved. Maybe you couldn't generate BTL signals with a normal rig. You would need a specialized or hacked rig to do that.

Simsense doesn't need to be separated off. I think that it would add to advertising and all kinds of interesting things. I see things moving away from the separation between VR and RL. We are already trying to go into escapism. Why not make YOUR world a better place without actually doing good?

Sounds pretty cyberpunk to me.

What if your Interface score gave you benefits toward Matrix Perception but penalties to actual Perception and similar things? Or penalties to acting outside the Matrix while interfacing with the Matrix at the same time.

How's this?

"Standard" AR has interface 2.
DNI AR and simsense has Interface 4.
VR has Interface 6.
Hot-sim and BTL has Interface 8.
Maximum Interface is 10. Minimum Interface is 0.

A hacker can specify any Interface value he wants to use, up to the limit of his commlink's and associated equipment's Max Interface rating. Technomancers have a max Interface value of 10.

Some nodes or matrix activities may require a certain amount of Interface. Most nodes require Interface of 1 for matrix perception or comcalls, and 2 for most operations.
  • Black IC stun/physical damage is limited to your Interface score. Lower scores indicate less possible damage taken in each attack.
  • Each 3 points of interface grants +1 to Matrix perception tests.
  • Each point of interface grants a -1 penalty to "real life" actions taken while conducting Matrix actions, including perception.
It's a start, maybe.
Hope it helps some.
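cetiah's proposed modifiers are concrete enough to tabulate (a sketch of the house rule exactly as stated above; the function name and dictionary keys are invented):

```python
def interface_modifiers(interface):
    """Sketch of cetiah's Interface mechanics: Black IC damage is
    capped at your Interface score, each full 3 points grants +1 to
    Matrix Perception, and each point gives -1 to real-world actions
    taken while conducting Matrix actions."""
    interface = max(0, min(10, interface))  # clamp to the stated 0-10 range
    return {
        "max_black_ic_damage": interface,
        "matrix_perception_bonus": interface // 3,
        "real_world_penalty": -interface,
    }

# VR at Interface 6: capped at 6 boxes from Black IC, +2 Matrix
# Perception, -6 to meatspace actions while jacked in.
print(interface_modifiers(6))
```

The interesting tradeoff falls out immediately: dialing Interface down is a defensive move against Black IC, at the cost of Matrix Perception.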
bait
AR is designed to allow the user to operate in both the Matrix and the real world; in order to do this, the Matrix side of things is abstracted. (Kind of like the current GUI interfaces of today.)

This is also why non-Matrix initiative is used when running AR, as you're dealing with AROs and not full Matrix objects.
RunnerPaul
QUOTE (Spike)
AR does not require/use SIMsense at all, completely different method of interfacing.

While AR does not require simsense, it can and does often make use of it.

"The easiest and most common way to get your AR fix, though, is through simsense. You need a sim module for your commlink to interpret the signals and feed you the data via a cyberware simrig, worn simrig, trode net, or datajack. Partial simsense feeds take AR a step further because they can also relay emotions, though services that relay full emotive sim are rare (and sometimes illegal or downright disturbing)—do you really want a Buzz!Blitz energy drink advert to make you feel that way?" p.209, SR4 Core Rules.
Spike
QUOTE (Garrowolf)
I don't think that the average user needs to worry about IC in the first place. You can very easily have a safe rig that has safety limits in the hardware. You can feel like you have touched something but not that you have been stabbed. You can feel like you have smelled something but not a poison gas. You can probably taste things but not burn your tongue.

My point is that the setting has had decades to work on this technology. It is already been on the street for years. I think that it makes sense for it to have evolved. You maybe couldn't generate BTL signals with the normal rig. You would need a specialized or hacked rig to do that.

Simsense doesn't need to be seperated off. I think that it would add to advertising and all kinds of interesting things. I see things moving away from the seperation between VR and RL. We already are trying to go into escapism. WHy not make YOUR world a better place without actually doing good.

Sounds pretty cyberpunk to me.

Like the average web browsing member of the general public doesn't have to worry about virii on his desktop at home?

For every legitimate user and professional hacker out there, there is going to be at least one idiot out to fuck up other people for the hell of it. Of course, now, instead of stealing passwords or replacing every Word document with random pr0n images or some such, they can use black programs, downloaded for free from data havens or coded up from scratch using an online 'for dummies' guide, and send them out in waves to wreak havoc and earn ten minutes of fame.

Make the world truly dystopian and those fuckers are the NORM. You'd have more assholes committing online murder for fun, fame, or profit than you'd have honest users who'd never consider it. Every legitimate user would 'mess around' in his off time sending out psychotropic BTL signals, hardware-destroying virii and more... just because he hates his job, his girlfriend, his life.

And yeah, currently in the RAW you DO need a hacked rig to properly generate BTL-level signals. That's hot sim; it's not a default setting on commercially available simrigs, you have to do it yourself.
Garrowolf
So?

I think all of that makes sense. I don't see the reason that VR needs to have the Hot sim at all. Even the lower levels of it will have some interesting viruses. Terrorist hackers are still going to be around screwing things up.

RunnerPaul pointed out that certain levels of simsense are already attached to the AR, which I wasn't aware of. Thank you BTW.
kigmatzomat
Real-world experience says that AR will probably continue to be a keyboard-ish interface. Why? Waving your hands around is tiring, but twiddling fingers is easy. That's not to say you won't have a set of AR rings/gloves that transmit finger motions when on the road, but it's just as likely your comm would have a motion sensor and project a virtual keyboard (http://www.virtual-laser-keyboard.com/) either visually or into AR. You may even have a real keyboard at your home office just for the tactile response.

The advantage to AR is display size. With image-linked contacts you have screens as big as your eyes. Combine that with scrolling and some zoom features that operate based on your eyes' focusing, and the interface quality goes up. You could even tailor the display so that text information in peripheral vision is easily readable.

The downside is that AR is actually based on physical gestures, meaning errors will happen. I'm not a neurologist so I really have no idea if typos are due to misinterpreted or poorly executed commands at the hand/finger level, poorly translated commands at the top of the spinal cord, or bad instructions from the frontal lobe. However, VR could obviously be faster in application because it will read intent from the brain without any irritating typos.

What if the speed boost from VR isn't just that the computer can accept commands faster than meat can typically give them but that the VR gets instruction straight from the source, without any of the errors that arise in the 2-ish meters of neurons and the half-pound of muscle executing commands? It's not that VR makes you think faster but that you never ever make a typo?

I can type about 60 words a minute doing transcription after error corrections. In stream of consciousness that goes down to less than 30wpm as I make changes to my text on the fly. Typical speech is closer to 150 wpm, 2.5x faster than my transcription typing speed and 5x faster than stream of consciousness. Even after you begin to filter out edits (changing context, rewriting sentences for content, etc) you're still in the 100wpm zone.
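The throughput comparison above works out as claimed (a trivial check using only the post's own numbers):

```python
# kigmatzomat's figures: speech outpaces careful typing by 2.5x and
# stream-of-consciousness typing by 5x.
transcription_wpm = 60   # corrected transcription typing
stream_wpm = 30          # stream-of-consciousness, with on-the-fly edits
speech_wpm = 150         # typical speech

print(speech_wpm / transcription_wpm)  # 2.5
print(speech_wpm / stream_wpm)         # 5.0
```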

Voice input is available in AR, but it has problems. 1) It is irritating to people around you, requiring subvocalization. 2) How many times have you said something different from what you thought you said or meant to say? People with strokes epitomize the difference between concept and execution in speech: able to think clearly, possibly write clearly, but unable to vocalize correctly.

VR will again bypass that entire portion of the brain and read intent. Ta-da: an instant performance boost without any freaky-freaky "speed of thought" to argue about. Instead you simply have error-free data input.

Which explains why particularly wired AR users can possibly go as fast as full VR: they've got the speed and dexterity to correct their AR errors.
Moon-Hawk
QUOTE (kigmatzomat)
The downside is that AR is actually based on physical gestures, meaning errors will happen. I'm not a neurologist so I really have no idea if typos are due to misinterpreted or poorly executed commands at the hand/finger level, poorly translated commands at the top of the spinal cord, or bad instructions from the frontal lobe. However, VR could obviously be faster in application because it will read intent from the brain without any irritating typos.

What if the speed boost from VR isn't just that the computer can accept commands faster than meat can typically give them but that the VR gets instruction straight from the source, without any of the errors that arise in the 2-ish meters of neurons and the half-pound of muscle executing commands? It's not that VR makes you think faster but that you never ever make a typo?

I am not a neurologist, as that would require me to be a medical doctor, which I am not. I am, however, a research & design engineer doing brain research in a neurology lab, so I do have some slight idea what I'm talking about. wink.gif

As for where typos can come from, as you might expect the answer is all of the above. You'll definitely remove errors coming from the peripheral nervous system and the muscles, although in most people's case I would imagine that this is the smallest source of error, at least while typing.
If previous fluff on how/where datajacks are connected is to be believed, you would also be bypassing any error coming from the spinal cord (which DOES process information, contrary to popular belief) and the hindbrain; the areas often referred to as the "lizard brain". It's the part that takes the commands of intent and figures out how to turn them into coordinated, dexterous movement. There are pros and cons to taking this part of the brain out, and the cost/benefit will vary by application. This has also been discussed in the fluff previously, such as when they were talking about deckers installing their jacks on the temple and riggers installing them at the base of the skull. It was in a 3rd edition book. Anyway, for this application I would agree that removing any potential errors from the lizard brain is probably a good thing. It's great for taking intent and turning it into a coordinated physical task, but in this case the physical task of typing is only a means to an end, and if we can bypass that and go straight to the source (the intent, generated more in the front of the brain) then we can probably eliminate the largest source of typos.

You're STILL going to have errors generated, because sometimes your first intent is not the smartest idea, and you're able to generate a smarter idea and overrule the first one before it's carried out. In the setup you're describing, where it instantly reads your intent, you may actually introduce errors that you regularly generate but don't generally execute.
A very simple example of this would be, you're reading something on the Dumpshock forums, and somebody says something that makes you mad, so you hit reply and type up a quick and scathing rebuttal. But then you stop and think "Hmmm, maybe I shouldn't call that person an asshat, I need to phrase this more diplomatically." So you go back and edit it. (hey, it might happen) In your direct intent setup, you can't do this. The instant you think it it's gone. The intent is read and executed, and you have no chance once you think it to change your mind. You could introduce an artificial delay, but that basically just amounts to the computer asking "are you sure?" about EVERYTHING.
Overall, I generally agree that this setup would decrease the total number of errors, probably dramatically, and increase speed, also probably dramatically.
The Jopp
QUOTE (kigmatzomat)

The downside is that AR is actually based on physical gestures,

Why? What about those with implanted commlinks using AR? They would use mental commands, so it would be an instant click on the icon of your choice; most likely you wouldn't need a cursor, since the commands would be instant.
kigmatzomat
QUOTE (The Jopp)
QUOTE (kigmatzomat @ Feb 9 2007, 02:57 PM)

The downside is that AR is actually based on physical gestures,

Why? What about those with implanted commlinks using AR? They would use mental commands, so it would be an instant click on the icon of your choice; most likely you wouldn't need a cursor, since the commands would be instant.

Hmmm. I could be mixing information from different editions but I had the impression that without the RAS cutouts to prevent motion and the external sensory blocks that direct-neural command of external devices wasn't possible. AR could include sensory feeds to the user but the SIM module couldn't tell whether the user was responding to the AR or their actual environment.

Might not be RAW but it seems like a decent way of truly separating VR from AR.
In AR you are still aware of real-world input, with possible additional sensory input from a SIM module, but you are limited to some real-world interface, be it a chording keyboard/glove or voice command. In VR you are completely on neural input and output, with sensory blocks to filter out real-world input. Hot SIM involves VR using out-of-spec configurations, possibly involving intentional "misuse" of neural connections that provide greater response but at risk to the user beyond what is otherwise acceptable.
Garrowolf
Well, the same logic that says we would be able to mentally press buttons in VR works the same for AR, I think. I think that a floating desktop system would be faster than making each object an icon, because you would have to go through some of the same steps of thinking about your VR body doing something. You could replace this with direct intent action, but then I think you would end up with strange jerky movement, which would go against the smoothness of VR. Either way it would be slower than the mentally-click-on-the-icon stuff.

Right now it seems like a lot of ideas are being kept together. What I was thinking was to separate them out, look at them, and see if there were better ways to put things together.

1) How to receive input - HUD vs Induction (squid) vs DNI
2) How to send output - Motion Capture of hand movement vs DNI
3) How to display input - Floating Windows vs Overlay the Real World vs VR
4) How to interact with this - Desktop vs Object representation
5) Level of Simsense - Basic sensory, Full Immersion, BTL
6) Speed of Frame Rate - how fast the computer is telling you that you are moving
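These six axes could be modeled as independent fields in a configuration record rather than a single AR-or-VR flag, which is really the continuum idea from the top of the thread. A minimal Python sketch of that - every identifier here is invented for illustration, none of it comes from the books:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical enums, one per axis; the members are just the options
# listed in the post above.
class InputMethod(Enum):
    HUD = 1
    INDUCTION = 2   # "squid"
    DNI = 3

class OutputMethod(Enum):
    MOTION_CAPTURE = 1
    DNI = 2

class DisplayMode(Enum):
    FLOATING_WINDOWS = 1
    WORLD_OVERLAY = 2
    VR = 3

class InteractionModel(Enum):
    DESKTOP = 1
    OBJECT_REPRESENTATION = 2

class SimsenseLevel(Enum):
    BASIC = 1
    FULL_IMMERSION = 2
    BTL = 3

@dataclass
class InterfaceConfig:
    input: InputMethod
    output: OutputMethod
    display: DisplayMode
    interaction: InteractionModel
    simsense: SimsenseLevel
    frame_rate: float  # subjective-to-real time ratio; 1.0 = meatspace speed

# "Full VR" is then just one corner of the space, not a separate system.
def is_full_vr(cfg: InterfaceConfig) -> bool:
    return (cfg.display is DisplayMode.VR
            and cfg.output is OutputMethod.DNI
            and cfg.simsense is not SimsenseLevel.BASIC)
```

Climbing the rungs described earlier is then just a matter of which fields change - the frame-rate question from point 6 stays independent of everything else, which is the argument being made below.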

Now the only reason I can think of for VR to be faster than AR is the last one. If you tell your brain that you are moving faster, then you could interact at a much higher rate. Now I see a problem of fatigue, for one. You would exhaust yourself at a much higher rate. This would reduce your attention span quickly. It could increase your physical tension levels, possibly causing adrenaline reactions. It could actually overload your brain as well, since it would not be able to recover in specific areas as quickly. For short bursts maybe, like the fight or flight of combat, but a human isn't designed to stay at high levels like this for long.

I also don't think that it is necessary. With good automation of the system, which does what it should and then gives you a choice once it has reached a stopping place, you don't need the high speed.

Out of game, I think that the high-speed thing just causes the person in VR to take too many actions, which slows the game down. If the same results can be gained without taking as many IPs, then the game will run better. The only players who weren't bothered by this were the powergamers who also had high IPs. I know that a high number of actions seems to be the holy grail of gaming, but as a GM I consider it one of the biggest problem areas in a ROLE PLAYING GAME.

So the VR doesn't need to be faster. If you don't have the higher frame rate thing, then it would actually be a little slower if it was based on object representation.

More later
cetiah
All answers provided below are within the context of my custom hacking rules:

QUOTE
1) How to receive input - HUD vs Induction (squid) vs DNI
2) How to send output - Motion Capture of hand movement vs DNI
3) How to display input - Floating Windows vs Overlay the Real World vs VR
4) How to interact with this - Desktop vs Object representation
5) Level of Simsense - Basic sensory, Full Immersion, BTL
6) Speed of Frame Rate - how fast the computer is telling you that you are moving


INPUTING COMMANDS TO THE AR SYSTEM
The AR interface functions through AR-compatible networked devices. The standard AR interface configuration allows AR-compatible devices to share a user's senses, so that anything picked up with these sensors can be interpreted by the AR interface. For example, a user could have an AR-compatible earpiece which allows him and his AR system to share the same audio sense - hearing. The most common senses shared are sight and hearing.

With these devices, the dumb-agents built into the operating system of your AR interface can interpret whatever it senses using whatever instructions have been built into it. Most AR interfaces come pre-programmed with a variety of gestures and audio commands that it recognises, as well as helpful sense overlay tools that teach the user how to use these commands and help the user teach the agents new interface commands.

Anything that uses these senses can be used for input, and most Utility agents include additional input commands by default. For example, certain interface-agents (SpySOFT, for example) used by law enforcement will automatically react to the presence of any firearms or illegal goods spotted through their shared AR vision, pointing them out to the user who may not have noticed them yet (through a variety of output options).

DNI cannot be used for input to an AR system, but can be used in a limited capacity with simsense. Media such as tridio and simsense can be output to the user, but these do not function as an effective interface for computer operations.



OUTPUTING COMMANDS FROM THE AR SYSTEM

The AR system can interface with any AR-compatible networked device in a user's PAN. Usually this is done through shared-sensory AR devices or through remote commands sent to drones.

Users can configure their interface-agents with whatever output they are comfortable with - most configure their AR to display a series of representative graphical icons and text that overlay their vision (or just part of it). These can be as realistic or symbolic as the person wants. By default, the system automatically switches between these modes as needed, using abstract icons for basic internal computer functions and detailed realistic output to display media, information, comcalls, etc.

If the user has an AR-compatible hearing-device, then he can receive output through verbal responses (in whatever language is programmed into the interface-agents), musical tones, animal noises, etc.

A variety of companies have come up with different tools and devices for presenting tactile AR output, but so far these have only retained popularity in the adult-services market.


INTERACTING WITH THE SYSTEM

Interaction follows from two primary features: AR-compatible sensory devices and agents that work in the background, filtering and sorting through sensory data. An agent interprets this sensory data as commands to be processed whenever it finds something it recognises (like seeing a familiar hand gesture, for example) or otherwise decides that sensory input should be interacting with sensory output (like seeing or feeling hands typing on a virtual representation of a keyboard overlaid onto the user's vision). The basic AR interface is rather universal and limited, but easily customizable through the interface-agent, which interacts with the user and learns to adapt to him; even so, it can take months to properly configure an advanced interface to the user's satisfaction.

A variety of Utility-programs exist that augment the basic interface and provide it with new features, pre-programmed input and output commands, or alternative learning techniques - each utility basically enhances the "intelligence" of the interface-agent by adding more agents designed for specialized functions. For example, law enforcement personnel typically use interfaces that filter through visual sensory data looking for illegal goods, drugs, and firearms, and use the officer's visual and audio AR devices to alert him when these things have been detected. In some cases, advanced training may be required for both the user and the agent to interact properly, but once this is complete, they function together in a more efficient, symbiotic process.
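The interface-agent loop described above - a dumb agent watching the shared senses for patterns it recognises and firing the associated command - could be sketched like this. A toy illustration only; every name here is invented:

```python
# Toy sketch of an interface-agent: it watches a stream of sensed
# events, fires commands for patterns it has been taught, and ignores
# everything else in the sensory stream.
class InterfaceAgent:
    def __init__(self):
        self.bindings = {}  # sensed pattern -> command name

    def teach(self, pattern, command):
        """User trains the agent to recognise a new interface command."""
        self.bindings[pattern] = command

    def process(self, sensed_events):
        """Filter a sensory stream, returning only triggered commands."""
        return [self.bindings[e] for e in sensed_events if e in self.bindings]

agent = InterfaceAgent()
agent.teach("pinch_gesture", "select")
agent.teach("flat_palm", "dismiss")

# Most of the stream is ordinary sensory noise the agent ignores.
stream = ["dog_barking", "pinch_gesture", "passing_car", "flat_palm"]
```

Here `agent.process(stream)` yields `["select", "dismiss"]`. A Utility-program would then amount to shipping an agent with a much larger pre-trained `bindings` table - the law-enforcement firearm spotter, for instance, binding "gun-shaped object in shared vision" to an alert command.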


LEVEL OF SIMSENSE AND SPEED OF FRAME RATE

Simsense has very little to do with AR. The very basis of AR is that the user and computer share senses, whereas with simsense, a user's senses are largely overridden. Consequently, these interface methods are really only used for specific functions and not general computing needs. Many advertisements claim that the simsense interface is superior to the AR interface because it allows a "full immersive experience", but there are very few programs that actually require or make use of anything beyond typical or advanced AR.

Simsense has two major advantages:

First, there's no extra equipment. AR users typically find themselves constantly buying new interface devices to enhance the capabilities of their AR. Most AR users (which is most of the world) have more interface devices than they strictly need. Although simsense won't provide the functionality of having five or more AR-equipped sensory devices, it can be useful for people who want a full VR experience without buying several different devices. Despite its increased cost, many people see simsense as a cheaper alternative for this reason.

Second, virtual reality. The interface is just good; damn good. It's not any more convenient for computer functions than our default senses are, but for everything normally requiring our senses, simsense fulfills all of that and more. The quality of simsense is much better than typical human senses, and senses experienced through simsense can be experienced directly, without having to be translated into typical human senses. Sensory data, intuitive and conceptual ideas, emotions, even physical attraction can all function as input and output for simsense.

There are rumors of people who have plugged into simsense and never plugged out again, living their whole lives in the machine and not knowing it, but these stories are largely the domain of Horizon tabloids and urban legends. They are scoffed at by the academic community, but featured very prominently in many commercials and simsense movies. Some of those movies can be downright eerie when experienced through simsense... which is sort of the point.

Some companies have tried to experiment with actually expanding the capabilities of the mind and body with a direct simsense connection to the brain, allowing thought, emotional development, and conceptual understanding at rapid speeds through the process of Better-Than-Life conditioning. The most common applications of BTL are in the entertainment industry, where the emotions and experiences portrayed through simsense can not only be communicated to and experienced by users (like simsense), but taken to an extreme level, with states of consciousness, intensity of emotions, psychosexual development, and a variety of other aspects of the brain 'boosted'. BTL addiction is a natural consequence, not only because the experience itself is as addictive as it is dangerous, but also because the addiction itself can be 'boosted' by the BTL creator. BTL chips have also been used for constructive purposes, such as to alter the personalities of deviant sociopaths, enhance employee and consumer loyalty, or enhance the bond of love between two newlyweds. Once BTL fatalities started to rise and BTL became illegal, many companies had to divert their BTL research into new avenues and marketing packages. BTL may be a hush-word not spoken in many conference rooms, but knowsofts, personafixes, lovebonds, and angel dust are only a credit transaction away.
Garrowolf
Okay, why wouldn't DNI work as an input device for AR? It seems like it would work the same as for VR. It would also give an advantage to someone who pays for a datajack over someone who just gets the gloves.

Animal noises? Is there something you want to tell us? wink.gif

Okay, I disagree about the tactile interface. I think that would be highly useful for AR. You would feel pressure against the tips of your fingers when you press things, which would help you know instinctively that you made contact. Having the feel of grabbing a window to resize it or move it would be helpful as well. These would be the same tactile interfaces that would allow you to pick up objects in VR and move them around. Same thing.

I like the concept of the law enforcement object recognition system. I'm not sure that I like it attached to the AR. I think that I would have that running on the commlink as a function of cybereye input. To a limited extent I can see the contacts picking up your environment. You would need that if you had overlays for VR but I'm not sure. It seems like the AR input would usually need to be short range. I'll have to think about that.

I can't see simsense being cheaper than AR in any way. If you have an input device that can give you simsense, then it will also give you AR. Most of the AR stuff would come with your commlink.

I could see the addiction occurring, but that doesn't answer why VR would be faster.

BTW, I found your web site. Thanx. I'll get back to you when I get through it all.
cetiah
QUOTE
Okay, why wouldn't DNI work as an input device for AR? It seems like it would work the same as for VR. It would also give an advantage to someone who pays for a datajack over someone who just gets the gloves.


No reason. I just don't like it. It's not how I picture AR.
The way I see it is that the original Matrix was all about DNI technology and capitalizing on that as much as possible, augmenting computer technologies with the advantages of direct human brain input. The new Matrix is all AR-based... it's not just VR with a new name, but an entirely new technological foundation designed with a different goal: to augment humans with the advantages of direct computer connections. Essentially, the opposite of the DNI-based Matrix's goal.

And I like the idea of people interacting with their computers through interface devices. "Thinking to the machine" should be the exclusive province of technomancers.

As for your wanting to reward people who "buy a datajack", there's no need. People can already buy cybereyes and cyberears rather than goggles and earpieces, and the rewards are inherently obvious. It works. The gloves are stupid and low-tech, harking back to the innovation of Nintendo's Power Glove in 1989.

QUOTE
Okay, I disagree about the tactile interface. I think that would be highly useful for AR. You would feel pressure against the tips of your fingers when you press things, which would help you know instinctively that you made contact. Having the feel of grabbing a window to resize it or move it would be helpful as well. These would be the same tactile interfaces that would allow you to pick up objects in VR and move them around. Same thing.

Ultimately, this still sounds like a cheesy gimmick to me. It's like the difference between a touch screen and a stylus, except without the screen to touch. You touch the appearance of a screen instead. So long as the tactile input registers, the computer will react in some way, so tactile output to the user really isn't necessary. Further, the sense of sight can (usually) substitute for the tactile input. You don't need to physically drag the icon of a book like you would drag a book; you just have to appear to move the virtual image of the book for your AR cybereyes to register. We have the technology for tactile-response gloves now, but no one uses it. There's no reason you couldn't have a glove that let you feel a small pressure when you clicked on an icon, but so far this kind of technology still limits itself to video games with vibrating joysticks.

But that's not why I decided to get rid of it. The reason I decided to get rid of it was kind of the opposite... the AR gloves are dumb. They are. But "tactile response" covers a whole broad category beyond those gloves. Ultimately there were so many tactile interface devices I could think of that I just didn't want to deal with it. Also, for most of those devices, sight and sound would still function as better input, so those devices retain use primarily for output to the user only. Imagine pants, for example, that tucked themselves in whenever the chemsniffer or cybereyes registered a pretty girl walking by. Temperature-detecting clothes were really the only thing I could think of that I would consider a remotely essential and exclusive tactile input device. The same device would also inform the computer of your exact posture and stance for certain specialized functions.

I forgot to mention biomonitors in my initial post. Biomonitors function as an input device for your AR, and many utilities might use its data for different functions. This has so many potential practical everyday uses and functions that it should also be considered one of the "common" AR devices along with visual and hearing devices. I especially like the idea that it basically gives the information from a player's record sheet to the character... I don't see any reason why a character's condition monitor shouldn't appear on his HUD.

QUOTE
I like the concept of the law enforcement object recognition system. I'm not sure that I like it attached to the AR. I think that I would have that running on the commlink as a function of cybereye input. To a limited extent I can see the contacts picking up your environment. You would need that if you had overlays for VR but I'm not sure. It seems like the AR input would usually need to be short range. I'll have to think about that.


This is how AR works in my interpretation. It's not RAW, but I like it. If you have a program running on your agent reviewing info from your cybereyes and sending information to your HUD... that's AR. That's what it is; that's what it's all about... the expanding of human capabilities by augmenting human beings with computers and electronic devices for everyday household use. Just about everything you could think of doing could be augmented in this fashion. I'm trying to move away from the idea that computers can replace humans and humans can replace computers... together, they're something so much better. They're Augments.

QUOTE
I can't see simsense being cheaper then AR in any way. If you have an input device that can give you simsense then it will also give you AR. Most of the AR stuff would come with your commlink.

What I meant is that an AR user has the option to buy glasses, goggles, cybereyes, earpieces, speakers, subvocal microphones, cyberears, tactile wristbands, AR-compatible clothes, disposable AR skin and tongue patches, chemsniffers, radar suites, spy drones, and thousands of other options to augment their interface.

A basic AR package is cheaper than VR, until you add in how much money you'll be spending upgrading that interface over the next 5 years.

A VR package is just the one-time expense and you have a terrific interface without having to make all the decisions about what your next AR device will be.

It's like the difference between PCs and Macs. Each one will have its supporters. And each side will say theirs is the better deal. I kind of see Renraku as the principal source of simsense-type VR interface technology, with NeoNET supporting and sponsoring AR.


QUOTE
I could see the addiction occurring, but that doesn't answer why VR would be faster.

::blink:: Huh? VR's faster?

BTL's faster. That follows from the idea that the machine is modifying the brain. It's dangerous and stupid. It's better not to think of it as faster, but as very efficient. In the same amount of time it takes to edit a file, you could edit someone's entire personality or give them the collective knowledge of 10,000 convicted felons. Or live a whole lifetime of extreme sports stunts in the physical timespan of a few seconds.

QUOTE
BTW, I found your web site. Thanx. I'll get back to you when I get through it all.

I'm working on re-packaging the information on my home PC and constructing it into a wiki, along with supporting material from this forum. (I won't use anyone's posts without permission.) When it's done, hopefully someone will host it somewhere and then people can add any stuff they want to.
cetiah
QUOTE
Well, the same logic that says we would be able to mentally press buttons in VR works the same for AR, I think.


I think this is true for both AR and VR. It makes sense. Actually, I think AR would be faster because you are basically having agents do most of the digital work and only making a few choices represented through icons and windows or whatnot. I represent this through a lot of passive bonuses and functions that AR utilities give you.

But not for BTL. BTL is waaaaaaay faster.
But the difference is that BTL is not a true interface. You can interact with the BTL software but you can't really network with it. That's not what it's for.

If I played a movie for you in AR, you could see the movie on your heads up display and hear it through your audio feed. You might even be able to smell the smoke during the explosion scenes. This might take about 2 hours to watch while you also do other stuff.

If you watched the movie in VR, you could have an interactive experience as if you were there. You could watch the chase scene, decide you were bored, and then go check out the lover's quarrel in the other scene. You have a way more interactive and fun time. But it still takes about 2 hours to experience.

But if you had the movie on BTL, the comparison is no longer valid. The BTL chip itself can hack into your brain and start making modifications all willy-nilly, inserting 2 hours of movie experiences into your memory in only 1 second of time. You never experienced the movie, technically, but it's the same thing as if you did... you remember being there. You remember the sights, sounds, and images, and they seemed more real than your real life. You watched the car scene, decided you were bored, and went to watch the lover's quarrel, then had sex with all the female cast members... didn't you?

So BTL can only be used passively, or to provide the illusion of interactivity, or to "edit" someone's brain. Really, they don't even need to see the movie... just have the pleasant experience of having watched THE BEST MOVIE EVER. I can see how that can be addictive. And like I said, the BTL chip might as well insert a BTL addiction while it's hacking away at your brain. Marketing at its finest.

Disclaimer: None of this is RAW. Hell, it may not even be Shadowrun. But it works for me.
cetiah

Some ideas I've been toying with. The terminology is inspired by Netrunner, as most of the terms in my house rules are.

---

There are three degrees of Biofeedback that ICE can inflict against hackers in the Matrix.

Net Damage is inflicted against a user by exploiting whatever sensory connection is being used. The more sensorily immersive the experience, the greater the damage and disorientation from sensory overload. A hacker using visual and audio AR devices might be exposed to invasive, harmful images and sounds specially coded to damage human senses and brain patterns. Subliminal messages can be built into these to further the unpleasantness. In general, the better the interface, the greater the maximum damage that can be inflicted.

Net Damage is surprisingly effective, but easy to defend against. First of all, a hacker using only the bare minimum AR can only be minimally affected by net damage. Second, net damage is fairly easy to filter out using an advanced firewall and/or special programs designed to detect and screen out hostile sensory feedback that could damage the user or his equipment.

(Note to Garrowolf: Net Damage could be really useful if you had an interface score, as the interface score would also be the maximum net damage possible. Further, you could reflect decreased functionality and/or hardware damage by permanently lowering the interface score.)

Net damage is treated as stun damage and opposed with Willpower. Users also suffer an additional -2 penalty to all actions for 1 minute per point of unresisted net damage suffered.

While net damage can affect pretty much anyone who accesses the Matrix to some degree, Brain Damage only works against people using a direct DNI computer interface. In most cases this means VR, but not always. Brain damage is treated as stun damage, but more sophisticated programs and hardware are needed to protect the user against it. In addition, there is no maximum limit on how much brain damage can be inflicted through a DNI.

Brain damage can also be adjusted with psychotropic effects, causing temporary insanity, memory loss, disorientation, and reduced mental capability. This has the effect of temporarily lowering the hacker's Logic score. When all of the hacker's physical and stun damage is restored, the temporary Logic penalty is negated.

Meat Damage represents permanent and possibly fatal damage inflicted on a user's brain and neural system through a DNI or tactile interface. Meat Damage can only affect hackers using tactile, BTL, and hot-sim VR interfaces. If the hacker has any other form of DNI, the Meat Damage is treated as Brain Damage instead.

Meat damage can be even more lethal and difficult to defend against than Brain Damage. Utilities and hardware must be specifically designed to protect against Meat Damage. In addition, Meat Damage can also destroy your DNI or commlink unless they are Hardened.

Meat damage is considered physical damage and resisted with Body plus the rating of any Meat Filters you possess. Psychotropic effects built into Meat Damage delivering systems tend to be permanent, adjusting the attributes, skills, qualities, and personality of the hacker affected.
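Mechanically, the three tiers above could resolve like this - a rough Python sketch using SR4-style hit counting, with the caps and resistance pools taken from the descriptions in the post. The function names and the interface-score cap (from the note to Garrowolf) are my own assumptions, and the post doesn't say what opposes Brain Damage, so Willpower there is a guess:

```python
import random

def roll_hits(pool, rng=random):
    """SR4-style dice pool: each d6 showing 5 or 6 counts as a hit."""
    return sum(1 for _ in range(pool) if rng.randint(1, 6) >= 5)

def resist_net(damage, interface_score, willpower_hits):
    """Net damage: capped by the interface score, opposed with Willpower.
    Returns (stun boxes taken, dice-pool penalty while it lasts)."""
    taken = max(0, min(damage, interface_score) - willpower_hits)
    return taken, -2 * taken  # -2 per unresisted point, 1 minute each

def resist_brain(damage, willpower_hits):
    """Brain damage: DNI only, no cap, treated as stun.
    Willpower as the resisting pool is an assumption."""
    return max(0, damage - willpower_hits)

def resist_meat(damage, body_hits, meat_filter_rating):
    """Meat damage: physical; resisted with Body plus Meat Filter rating."""
    return max(0, damage - body_hits - meat_filter_rating)
```

So a bare-minimum AR user with interface score 2 can never take more than 2 boxes per net-damage attack - `resist_net(8, 2, 1)` yields `(1, -2)` - which is the "easy to defend against" property in play, while `resist_brain` has no such cap.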
Garrowolf
Okay, I really don't like the BTL stuff. I like having it as your simsense basically jacked up too high. It seems like you are making it too integral. I don't see why it would need to be so interactive, since it is going to burn out after one or two uses. Going to that much effort implies that you want them to use that copy more than once, instead of just generating a random change in each copy.

I'm also not sure I see the reason to have agents doing things for AR. I think that would just be the system (unless you see the system as the agent and there is no OS).

So why would VR be more interactive than AR for the movie? I can do all those things now with a DVD player.

I think that you are making a BTL chip into Ghost Hacking. I don't mind the concept of Ghost Hacking, but not automated in a chip. Also, you have a player pushing a movie into your memory. I think personally this would be a lousy way of watching a movie, because you would not be able to react to the movie until you sat and remembered it all. You are also moving into the realm of pushing memories, and therefore skills. I think that this is going too much into skillsoft stuff.

Now don't get me wrong, I like all of those technologies and can see them developing in Shadowrun, but I think that they need to be broken down separately and analyzed. Assuming them into a movie player isn't a good idea.

One thing that I was thinking was that you could have an interesting program for VR to cover the sex with cast members and inserting yourself into a movie. Definitely worth detailing.

The thinking-to-the-machine stuff is the effect that they are trying to get right now! There is a guy who has a chip in his brain that can move a cursor and play games with the chip controlling the mouse. He is quadriplegic, I think. We will definitely have that technology by this time.

Well, I think that you could have system damage. Then you could have Stun damage from sensory overload. I don't see the AR being as much of a limiter, unless your system can't go to VR. Otherwise it would be just as easy to manipulate the system to switch to VR and then overload as much as possible.

The problem I have with anything past that is sort of the question from Star Trek of why they don't have surge protectors on their consoles. I know that the old logic of the game was that the more immersed you got, the more bonuses you got, and the more damage you could potentially take. My point is that I disagree with this idea; if not for the whole setting, we can at least assume that they will have solved this problem after a while. Why not have your commlink have a low sensory limit for hacking and then have a separate device for BTL? I don't think that they are necessarily connected, because there would be too many consumers using them by this point.

I know that correlation is the cornerstone of the older systems. I know that I am flying in the face of a shadowrun iconic tradition. I just think that somebody would have solved the problem in all this time. The military would have wanted to at least. Somebody would have.

I also disagree with the focus on IC in the first place. I have no trouble with firewalls and IDS (security), but I don't see the need for the IC. Can you imagine the problems that could occur with IC leaving your system? You have a hacker that spoofs a data trail randomly, and you have a bunch of lawsuits on your hands because your IC attacked the daughter of a megacorp CEO in a virtual mall.

I can see the sysadmin making it harder to break into their system, but I don't think that the model of a sysadmin versus a hacker holds up when there is so much traffic on a server. It might have made sense at the beginning of computers, but now and in the future it seems like there would be too much traffic. I see hacking now as a rogue versus a trap dungeon, instead of as a complicated version of chess.


cetiah


QUOTE
So why would VR be more interactive than AR for the movie? I can do all those things now with a DVD player.

Garrowolf, I don't want to get snippy, but how could I respond to this? If you're saying that your DVD player is just as immersive a sensory experience as a sophisticated sci-fi reality simulation... well, I just don't know how to respond to that.



QUOTE
I'm also not sure I see the reason to have agents doing things for AR. I think that would just be the system (unless you see the system as the agent and there is no OS).

But you're the one who initially explained to me how useful it would be to have agents built into your local system. That's exactly what I've been describing. It's all fluff anyway - just don't call them agents. Does that make you feel better? I personally like wondering if some of my programs might have more intelligence and personality than me, and the paranoia of having a digital secretary who knows you waaay too well.


QUOTE
Okay, I really don't like the BTL stuff. I like having it as your simsense basically jacked up too high. It seems like you are making it too integral. I don't see why it would need to be so interactive, since it is going to burn out after one or two uses. Going to that much effort implies that you want them to use that copy more than once, instead of just generating a random change in each copy.


I have no idea what you mean here.
I think the confusion is that you're talking about BTL chips from the Shadowrun corebook whereas I'm trying to describe applications of overall BTL technology as an alternate interface type. But I'm not sure. Either way... I have no idea what you're saying in the above paragraph. I don't really see how BTL was interactive at all - that was the point. It's kind of a passive medium. It just hacks into your brain and starts making changes.


QUOTE
I think that you are making a BTL chip into Ghost Hacking. I don't mind the concept of Ghost Hacking, but not automated in a chip. Also, you have a player pushing a movie into your memory. I personally think this would be a lousy way of watching a movie, because you would not be able to react to the movie until you sat and remembered it all. You are also moving into the realm of pushing memories and therefore skills. I think that this is going too much into skillsoft stuff.


I've never heard the term ghost hacking before. I don't know what it is.
Too much into the skillsoft stuff? Well, yeah... I was trying to say that skillsofts, BTL chips, and personafixes evolved from the same interface technology and work in a similar manner - by hacking into a user's brain.

QUOTE
Now don't get me wrong, I like all of those technologies and can see them developing in Shadowrun, but I think that they need to be broken down separately and analyzed. Assuming them into a movie player isn't a good idea.


Why are we assuming them into a movie player? I didn't realize we were making rules for movie players. I thought we were making interface rules for the various electronic applications of technology in Shadowrun.

I thought you would like exploring how different interface technologies relate to each other rather than treating them as entirely separate entities... I thought that was the point of this thread. Was I wrong?

QUOTE
One thing that I was thinking was that you could have an interesting program for VR to cover the sex with cast members and inserting yourself into a movie. Definitely worth detailing.

Well, that was what I meant by more immersive. And it's not just movies.

QUOTE
The thinking-to-the-machine stuff is the effect that they are trying to get right now! There is a guy who has a chip in his brain that can move a cursor and play games with the chip controlling the mouse. He is quadriplegic, I think. We will definitely have that technology by this time.


Yeah? And? I think if everyone had perfect DNI interfaces the game would be boring. I like having different interface types and introducing one that's just dramatically better with no drawbacks just isn't my cup of tea. Like I said in my previous post, the shift away from datajacks was mostly cultural and sociological after the last Matrix crash. Also, there's no reason to assume the infrastructure is compatible with a Matrix specifically constructed for AR (by a company seeking to sell AR devices).

QUOTE
Well, I think that you could have system damage. Then you could have Stun damage from sensory overload. I don't see the AR being as much of a limiter unless your system can't go to VR. Otherwise it would be easy to manipulate the system to switch to VR and then overload as much as possible.

The problem I have with anything past that is sort of the question from Star Trek of why they don't have surge protectors on their consoles. I know that the old logic of the game was the more immersed you got, the more bonuses you got, and the more damage you could potentially take. My point is that I disagree with this idea; even if not for the whole setting, we can assume that they will have solved this problem after a while. Why not have your commlink have a low sensory limit for hacking and then have a separate device for BTL? I don't think that they are necessarily connected, because there would be too many consumers on them by this point.


Why would you need a commlink with a low-sensory limit for hacking and a separate device for BTL? Just don't use the BTL interface while hacking... use a low-sensory alternative. The way you are describing things SHOULD work is the way they DO work.

Also, the Interface score you proposed could clear up the issue altogether. As I said, the Interface score acts as a cap on net damage. If you could simply voluntarily lower your Interface score, then you would be doing just what you are describing... using low-sensory alternatives for certain tasks that protect the user.

As for your surge protector thing... that's what biofeedback filters are. I don't understand your point. If your point is "Why isn't defense against malicious hacking attempts perfect?" then I have to disagree there just on genre and game grounds. It's the same reason I don't let my players try to hide behind 500 different firewalls.

QUOTE

I know that correlation is the cornerstone of the older systems. I know that I am flying in the face of an iconic Shadowrun tradition. I just think that somebody would have solved the problem in all this time. The military would have wanted to, at least. Somebody would have.

I also disagree with the focus on IC in the first place. I have no trouble with firewalls and IDS (security), but I don't see the need for the IC. Can you imagine the problems that could occur with IC leaving your system? You have a hacker that spoofs a data trail randomly, and you have a bunch of lawsuits on your hands because your IC attacked the daughter of a megacorp CEO in a virtual mall.


Huh? What does that have to do with interface? Is this actually responding to my post or just a general rant? You prefer IDS instead of ICE? Fine, call it IDS. Call it "Security". Call it "Matrix Defense". I don't really care. Why are you so hung up on terms tonight?

It could help if you would quote my posts if you're going to respond to them... because I really don't know what you're responding to. What did I say that prompted this reaction?

QUOTE
I can see the sysadmin making it harder to break into their system, but I don't think the model of a sysadmin versus a hacker implies too little traffic on a server. It might have made sense at the beginning of computers, but now and in the future it seems like there would be too much traffic. I see hacking now as a rogue versus a trap-filled dungeon instead of as a complicated version of chess.


Umm... okay. I think I get what you're saying here and I think I agree, I just don't see what it has to do with my posts or why this relates to your objections to the way I described interfaces.

--
I really need some help and clarification here, Garrowolf, because I don't even know if we're on the same topic anymore. I was trying to "separate and look at" the issues behind interfaces using the six methods of analysis you listed.

I can't tell if you are objecting to my suggestions, agreeing with them, or talking about something completely different. I don't feel like I can respond constructively without a little guidance.
cetiah
Garrowolf, please don't take this the wrong way... but would it help at all if we just dropped all Shadowrun terminology for this analysis? Go back and read my post, but substitute the following terms:

AR = SSI (Shared-sensory Interface)
VR = ASI (Alternate-sensory Interface)
BTL = MMI (Memory-modification Interface)
DNI = Machine-merge (or MMI for Machine-merge interface)
psychotropic = mind-affecting
simsense = simulated environment (sometimes referred to as "holodeck")
trideo = movie(s)
ICE = security programs
node = computer system
agent = computer application
Garrowolf
Sorry I was responding to several posts all together. I'll try and quote more.

I can see why we are having problems since we are using the definitions of these terms completely differently. (your last post)

Okay

AR - Augmented Reality. The superimposition of information onto your view. This would be things like floating menus, RFID information, floating windows, floating movies, etc. You can see the information put out by RFIDs and by other commlinks. If you play a movie, then when you turn your head the floating window stays at that same point in your vision, because it has no external reference to your surroundings.

Overlays - Halfway between AR and VR. An overlay accesses input about the surroundings and puts a false image over a specific point in space whenever you look in that direction. You could have an overlay of a movie screen on your current wall; you turn your head and you don't see it anymore. You could have overlays of rooms you are not in, as well as making people look different than they really do (cops look like pig people, or an extrapolation of what people look like nude). You move around as normal.

VR - This is the immersion of a totally false environment. It could be an office in your commlink. It could be a meeting room based on a conference call. It could be a virtual mall that is run by a megacorp but has people from all across the world in it. You would normally have your body not moving while your virtual body moves around.

Virtual Projection - The act of moving around in a virtual environment without your body moving. You could move around in a VR environment or enter into an overlay running for someone else. This would allow a person to virtually enter someone else's apartment, as long as that person is there and can map their environment. A virtual person could sit on your couch or go into another room, as long as you have mapped it.

Persona - The look of a person in a Virtual environment.

BTL - Better Than Life - the use of simsense at a higher-than-realistic setting to get the user high on the output. Similar in idea to the wireheads from Niven's books: they had their pleasure centers wired so that they could get high at the touch of a button, and many died from starvation. BTLs come as one-use or short-term chips and burn out. This allows repeat business.

DNI - Direct Neural Interface - This is any technology that causes an interaction between computers and your nervous system. This is a part of cyberware and datajacks especially.

Simsense - Inputting physical feelings into your CNS from an outside source. It only applies to how you feel and has nothing to do with what you see. Think of it like additional file types: you have several visual file types, several audio file types, and you could have some for tactile, smell, pleasure, pain, hot/cold, etc. It is short for simulated sensory input.

IC - Active Intrusion Countermeasures - These don't include firewalls, passwords, and user levels for the most part. They would be agents that attack users that they think are intruders.

Agents - These are complex, independent, semi-intelligent knowbots. They are like the brain of a droid but not the body. Another term I have heard for them is Infomorph. They are a simple AI system without awareness. They are not a process; they are thousands of processes. The reason I have a problem with the way that they are sometimes used is that these things are similar in autonomy to a drone Pilot. Agents may actually be the same thing as a Pilot. It's hard to tell in the RAW. If that is true, then they are effectively sending a huge amount of irrelevant information.
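The "additional file types" framing of simsense above can be sketched as a simple data structure: one container, many sensory tracks, exactly the way a video file bundles separate video and audio streams. This is purely an illustrative sketch; the `Channel` and `SimsenseFile` names are invented here and come from no ruleset.

```python
# Illustrative sketch only: simsense modeled as extra sensory "file types".
# Channel and SimsenseFile are invented names for illustration.
from dataclasses import dataclass, field
from enum import Enum, auto

class Channel(Enum):
    VIDEO = auto()
    AUDIO = auto()
    TACTILE = auto()
    SMELL = auto()
    PAIN = auto()
    TEMPERATURE = auto()

@dataclass
class SimsenseFile:
    # Each channel maps to its own data track, just as a media container
    # bundles separate video and audio streams.
    tracks: dict = field(default_factory=dict)

    def add_track(self, channel: Channel, data: bytes) -> None:
        self.tracks[channel] = data

    def channels(self) -> set:
        return set(self.tracks)

clip = SimsenseFile()
clip.add_track(Channel.VIDEO, b"...")
clip.add_track(Channel.TACTILE, b"...")
print(sorted(c.name for c in clip.channels()))  # → ['TACTILE', 'VIDEO']
```

A plain trideo file would just carry the VIDEO and AUDIO tracks; a full simsense recording adds the body-feeling tracks on top.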

I'll send more later.






cetiah
QUOTE
Agents - These are complex, independent, semi-intelligent knowbots. They are like the brain of a droid but not the body. Another term I have heard for them is Infomorph. They are a simple AI system without awareness. They are not a process; they are thousands of processes. The reason I have a problem with the way that they are sometimes used is that these things are similar in autonomy to a drone Pilot. Agents may actually be the same thing as a Pilot. It's hard to tell in the RAW. If that is true, then they are effectively sending a huge amount of irrelevant information.


So what...? What does that have to do with our discussion here?

I can't use words like agent, terminal, node, program, hacker, connection, firewall, etc., because these things are already defined a certain way in the Shadowrun RAW? I have to invent new terms to discuss anything that deviates from a strict interpretation of the RAW? Everything would have a different term then... I don't see how that would be helpful to you.

It doesn't seem like we should be limited to only talking about RAW definitions. We're not talking about game rules here; we're talking about general concepts and theory related to computer interfaces in the Shadowrun world. I should be able to use the word 'agent' in a general way to describe a concept if it's the closest analogy to what I'm trying to describe to you, without worrying about whether or not we'll get into a discussion about subscription links and agent Response attributes, right?

Here's an example:

QUOTE
You tell it mentally that you want to make it to a certain chair so it will steer you to your own chair at the right moment.


Why is it so wrong to call this an agent?

If I said I wanted to install this feature into a commlink, how would I refer to it? I think "utility agent" is a valid category to apply to this feature, without prompting a response about how much you hate IC and disregarding everything else I said.

I can't even find where I mentioned IC in my last three posts or where your dungeon analogy applies. I mentioned biofeedback and that you could take damage in the Matrix. But the necessary response to someone talking about biofeedback isn't that you hate virtual dungeon crawls... you could still have biofeedback even if you changed that.

Honestly, I thought the worst problem you would have with the terminology in my posts was the inappropriateness of the term "meat damage" to describe physical damage to the user or his computer devices.
Garrowolf
Sorry, people showed up at my work and I had to cut that last post short. Damn, it's cold here.

Okay, what I was thinking was that we need to work as much as we can from the Shadowrun definitions, or we will end up with a conversation that ONLY we can follow. I also want people to rethink their assumptions, and that won't happen if we isolate the conversation.

I wasn't totally disagreeing with the concept of an agent, and we do need a term for it. I just think that people are taking it too far. I was saying that they are complex and powerful functions that would make more sense tied to the system, working as a secretary on a commlink. They would be for interacting with people and pretending to be a person, for things like virtual secretaries at megacorps answering questions and such.

We also need a term for a process. This can be an action on the part of the computer. It could be a memory process or a filetype process. The point is that we already use that term to mean that. If we use the term agent to cover this, then we will get confused.

I was also saying that a lot of what people are using agents for could be done with processes run BY agents on your system. The browser getting a page is a process. The secretary browsing for you is an agent.
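The process-versus-agent split described here can be sketched in a few lines: a process is one concrete action, while an agent decides which processes to run in pursuit of a goal. Everything below (`fetch_page`, `SecretaryAgent`) is an invented name for illustration, not a real API or anything from the rules.

```python
# Illustrative sketch of the process vs. agent distinction.
# fetch_page and SecretaryAgent are invented names, not real APIs.

def fetch_page(url: str) -> str:
    """A process: one concrete action the system performs."""
    return f"<html>contents of {url}</html>"  # stand-in for a real network fetch

class SecretaryAgent:
    """An agent: it decides which processes to run toward a goal."""

    def browse_for(self, topic: str, urls: list) -> list:
        """Browse a list of sites and keep the ones mentioning the topic."""
        hits = []
        for url in urls:
            page = fetch_page(url)  # the agent delegates the work to a process
            if topic in page:
                hits.append(url)
        return hits

agent = SecretaryAgent()
print(agent.browse_for("contents", ["news.example", "mail.example"]))
# → ['news.example', 'mail.example']
```

The point of the sketch is that the agent owns no new capability of its own; it just sequences ordinary processes, which matches the "secretary on your commlink" framing above.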

BTW I am not trying to disregard everything you say. I'm actually enjoying our conversations. I'm not attacking you in the least.

On that other post I was summarizing and going off on a rant for a moment. I was trying to explain my overall thinking on the subject of hacking.

I think that this is a very good exercise to explain what we are thinking and get on the same page.

I don't like a lot of the RAW, but I guess I'm trying to take at least some of it and make it more usable. Part of this is that I have players that have read the RAW and agree and disagree with me. The farther I push it to be totally different, the more they cry foul. It's an interesting struggle, because I am co-GMing with someone else; he sees my point about a lot of it, but he wants to fall back into old-style decking because he has read all the books and thinks that way. The more I create that is totally new, the more confused he gets. It's one thing to tell them that they can't use agents as a hacker-in-a-box. It's another to rename everything. I guess I may be working under more of a restriction than you.

cetiah

QUOTE
IC - Active Intrusive Counter Measures - They don't include Firewalls, Passwords, and user levels for the most part. They would be agents that attack users that they think are intruders.


As I said, everything I posted originally was in the context of my custom house rules, and they don't have IC. They have I.C.E. (Intrusion Countermeasure Electronics), which function as semi-intelligent upgrades that enhance the security features of the node's OS and various applications.


Instead of agents, I'll use the word Bot for this discussion.

Bot: Effectively a "smart" application, capable of interpreting data and solving problems. They have primitive intellectual and personality features that can be modified and upgraded by the user. A Bot isn't technically a single program - but a huge collection of "smart" programs that work together to help perform a basic function. Bots that only interact with other Bots are considered OS programs. Bots that interact with the user are utilities. Bots that perform data-analysis, filtering, and computation are applications. In general, the entire Bot Program is labelled in one of these three categories depending on what the primary function of the entire program is.

The most important characteristic of Bot Programs (as opposed to other standard programs that may or may not utilize bots) is that the Bot Program is capable of functioning "in the background" without direct user input or guidance. In many cases, Utility Programs and OS Programs work to guide the user, rather than the other way around.

Bots used for Security purposes are called I.C.E. They run in the background performing a certain function that almost never requires input from the user. They are essentially reactive, constantly monitoring certain processes and responding when necessary. Different ICE perform different functions, but most exist as enhancements to the Firewall (the most sophisticated of all I.C.E. and a built-in feature on all modern security OSes). These additional ICE work to add features into the Firewall or react to any attack against the Firewall, either by actively monitoring for intrusion or by directed communication from the Firewall bots.

Intelligent Bot Programs that work with a user to defeat ICE are called ICE-Breakers.


cetiah
QUOTE
BTW I am not trying to disregard everything you say. I'm actually enjoying our conversations. I'm not attacking you in the least.

On that other post I was summarizing and going off on a rant for a moment. I was trying to explain my overall thinking on the subject of hacking.

I think that this is a very good exercise to explain what we are thinking and get on the same page.


I just can't tell if anything I'm saying is really helpful or not.
I don't want to be unhelpful as that defeats the whole point of originally posting into this thread.
cetiah
QUOTE
I was also saying that a lot of what people are using agents for could be done with processes run BY agents on your system. The browser getting a page is a process. The secretary browsing for you is an agent.


Alright, I'll buy that. So here's the revision:


As I said, everything I posted originally was in the context of my custom house rules, and they don't have IC. They have I.C.E. (Intrusion Countermeasure Electronics), which function as semi-intelligent upgrades that enhance the security features of the node's OS and various applications.


Instead of agents, I'll use the word Bot for this discussion.

Bot: Effectively a "smart" application, capable of interpreting data and solving problems. They have primitive intellectual and personality features that can be modified and upgraded by the user. Bots are responsible for automating and expanding on a number of system processes in order to accomplish a certain specialized function.

Bots designed primarily to interact with the user - especially those utilizing Shared-Sensory Interfaces (SSI) - are typically referred to as "Applications".

Bots designed primarily to automate and expand background processes to facilitate various program functions and information transfer protocols are considered "OS" Bots.

Bots designed primarily to perform data-analysis, filtering, and computation are "Utilities." Utilities are often made to collect, analyze, and present data through Shared-Sensory Interface (SSI) devices.

The most important characteristic of a Bot Program is that it utilizes system processes "in the background" without direct user input or guidance. In many cases, Utility Programs and OS Programs work to guide the user, rather than the other way around.

Bots used for Security purposes are called "I.C.E." They run in the background performing a certain function that almost never requires input from the user. They are essentially reactive, constantly monitoring certain processes and responding when necessary. Different ICE perform different functions, but most exist as enhancements to the Firewall (the most sophisticated of all I.C.E. and a built-in feature on all modern security OSes). These additional ICE work to add features into the Firewall or react to any attack against the Firewall, either by actively monitoring for intrusion or by directed communication from the Firewall bots.

Intelligent Bot Programs that work with a user to defeat ICE are called ICE-Breakers.
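The taxonomy above (Applications, OS Bots, Utilities, and ICE, each labelled by primary function) could be sketched as a simple lookup. This is a hypothetical illustration of the classification rule being proposed, not anything from a rulebook; the function-name strings are invented.

```python
# Hypothetical sketch of the Bot taxonomy described above: a Bot Program's
# category follows from its primary function, and ICE is the security case.

def classify_bot(primary_function: str) -> str:
    """Map a Bot Program's primary function to its category label."""
    categories = {
        "user-interaction": "Application",   # SSI-facing bots
        "background-protocols": "OS",        # automate background processes
        "data-analysis": "Utility",          # collect, filter, compute
        "security": "ICE",                   # reactive, no user input needed
    }
    return categories.get(primary_function, "unclassified")

print(classify_bot("security"))  # → ICE
```

Since the whole Bot Program is labelled by its primary function, a Bot that mixes user interaction with some background analysis would still get exactly one label here, which matches the "labelled in one of these categories" rule above.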
Spike
Just because I know it could be lost in the shuffle (as in, Cetiah already pointed this out... in the middle of a long long post)

Garro: They DO have surge protectors for simsense hacking (VR). Hot sim is when you take the surge protector away.

They DO have perfect defences against brain hacking. It's called not removing the surge protector from your simsense rig.


Cetiah: Read Masamune Shirow, specifically Ghost in the Shell. A lot of what you want BTL to be is essentially stuff he's been talking about. The movies may be more accessible to you, as you can process the main one in about two hours, and they focus on the Ghost (a wireless personality construct... vaguely) and Ghost hacking with more focus than the manga gets, though the manga tells a more complete story, I think...
cetiah
QUOTE (Spike @ Feb 11 2007, 10:34 AM)
Cetiah: Read Masamune Shirow, specifically Ghost in the Shell. A lot of what you want BTL to be is essentially stuff he's been talking about.

No fair, you guys making me do research and stuff!!!! Waaahh!

Alright, well, I looked it up using what limited resources are available to me (i.e., my agent-less browser). I really wish I had a datarat I could activate while I go to school tomorrow to just collect and filter out the relevant information I was looking for.

What you called "ghost hacking" (as derived from obscure anime references) seems to be NOTHING like what I was describing. In fact, what I was describing was based on actual Shadowrun technology: personafixes, skillsofts, BTL, etc. It relies on principles of "psychotropy" rather than "ghost-hacking". This Shadowrun technology has been around for decades.

Here's a quote from Wikipedia:
"When a criminal is convicted of a crime in Masamune Shirow's future world, a detailed technical analysis is conducted upon the subject. If it is discovered that the crime was committed due to a material defect in either the biological or electronic components of the convict's brain, the defect is repaired and the convict is released. If, instead, the crime is determined to have been the result of an individual's ghost, then there is only one cure: the removal of the portion of the brain that communicates with the soul, thereby de-ghosting the criminal and preventing any possibility of future criminal behavior."

My definition of the "brain hacking" interface technology (which I like to attribute to BTL to separate it completely from the concept of VR) is much closer to the principal processes behind "repairing defects in the biological or electronic components of the convict's brain" than to hacking away at the "portion of the brain that communicates with the soul".

Ghost-hacking seems to be all about mysticism and whether or not a machine can have a soul and such things. But your basic run-of-the-mill brain-hacking just involves using electrical stimulation to make changes to the way that information and processes function in the brain... make a slight adjustment here and voilà, you have a new pre-programmed memory. Too bad about whatever was stored there before...

The closest analogy to "ghost-hacking" in Shadowrun is modifying a character's Essence score. If you suddenly had some reason to suspect that an AI had "Essence", you could pretty much re-create the themes and conflicts in that anime.
Spike
That's what you get for taking the shortcut.

Here's how I see it: YOU use BTL as a personality rewrite, and hacking into someone's brain through hot sim/BTL can allow you to alter their memories, personality, and more.

Welcome to Ghost Hacking. In GitS, a Ghost is a poorly described phenomenon which is essentially the 'personality matrix' of a meatbody. Is it a 'soul'? Who can say, and exploration of that theme DOES show up in the stories, but its not so clear cut.

What IS important, from YOUR Shadowrun perspective regarding BTL and the brain, is that the Ghost can be hacked. You can insert false memories that are entirely real; you can subvert personalities to make people do things you want them to do, or see things you want them to see. The first (and best) Ghost in the Shell movie had a minor character who had been Ghost Hacked into believing he had a wife and child he was estranged from. Every time he'd try to call them, he'd actually be hacking on behalf of the person who had hacked HIM. The photo of his dog he saw as a picture of his family.

One 'super hacker' in the setting is skilled enough to make everyone see his face as a giant smiley face in real time to preserve his anonymity. Only the exceptionally rare individuals without extensive cyberware are immune to this sort of thing, as they lack the connections that allow their Ghosts to be hacked.

I don't see BTL the way you do. According to the RAW it's just overly strong emotional tracks making the experience overwhelming, the way opiates and synthetic endorphins are. The fact that most BTLs include reality overlays is unimportant. But if you want to run your Shadowrun to include 'brain-hacking BTL'... then go to the acknowledged master of the genre for ideas. That's all I suggest.
Dumpshock Forums © 2001-2012