Printable Version of Topic
Dumpshock Forums > Shadowrun > Sapient AI
Posted by: IceKatze Jun 24 2010, 05:53 PM
hi hi
I'm curious what people's opinions are of sapient AIs in Shadowrun: both your characters' in-character opinions and what you think general world opinion would be.
I think it is obvious that there would be a number of people who would like AIs and actively work with them, but overall, I think the majority of people would be suspicious if not hostile to AIs (assuming they recognized one).
As far as human nature goes, I think the classic prisoner's dilemma is remarkably relevant here. If humanity decides to try to get along with AIs, and AIs try to get along with humanity, everyone is happy (or at least as happy as you can get in the gritty and dirty Sixth World of corporate overlords). If humanity tries to get along with AIs, but AIs don't try to get along with humanity, people are right screwed. If neither tries to get along, I imagine there would be lots of explosions everywhere, probably lasting several years, after which nobody really wins.
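For what it's worth, the 2x2 game above can be written down explicitly. The payoff numbers here are purely illustrative, my own invention rather than anything from the books, just to show why the "nobody wins" outcome is the stable one when both sides fear being the sucker:

```python
# Hypothetical payoffs (higher is better); keys are (humanity's move, AI's move),
# values are (humanity's payoff, AI's payoff). Numbers are illustrative only.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # everyone as happy as the Sixth World allows
    ("cooperate", "defect"):    (0, 5),  # humanity is right screwed
    ("defect",    "cooperate"): (5, 0),  # the AI is at humanity's mercy
    ("defect",    "defect"):    (1, 1),  # years of explosions; nobody really wins
}

MOVES = ["cooperate", "defect"]

def human_best_response(ai_move):
    """Humanity's payoff-maximizing move against a fixed AI move."""
    return max(MOVES, key=lambda h: payoffs[(h, ai_move)][0])

def ai_best_response(human_move):
    """The AI's payoff-maximizing move against a fixed humanity move."""
    return max(MOVES, key=lambda a: payoffs[(human_move, a)][1])

# With these payoffs, defecting is the best response to either move for
# both players, so mutual defection is the only equilibrium: the (1, 1) cell.
```

Of course, a one-shot matrix undersells it; humanity and the AIs play this game repeatedly, which is exactly where cooperation can become rational.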
On AI personality: while there might be some AIs who display human-like consciousness and reactions, they are distinctly not human. Their true motivations and thought patterns could potentially be totally alien to human understanding, regardless of whether or not they can pass the Turing test. So I think in some cases it is a mistake to anthropomorphize them.
Posted by: Walpurgisborn Jun 24 2010, 06:00 PM
QUOTE (IceKatze @ Jun 24 2010, 12:53 PM)

I'm curious as to what people's opinions are, their in-character opinions and what you think world opinions are of Sapient AIs in Shadowrun. [snip]
Personally, given how AIs have been run previously, especially considering the Renraku Arcology, I'd hate to have them show up in game without any serious corp attempt to destroy them.
My character is a mage, not too bright, who's a bit of a technophobe in the first place. If you could show him an AI's aura, he'd be happy to accept them as "alive"; since that's impossible, he's pretty much of the opinion that they're hostile, soulless bits of machinery, and will happily blast them all to smithereens. As soon as he can find an aura to target.
Definitely agree that any AI is not human and will think in alien and unusual ways. Unfortunately, truly alien behaviour is difficult to really pull off, so I can accept my DM offering a more human interpretation.
Posted by: SkepticInc Jun 24 2010, 06:40 PM
One of the ways that AI could interact with metahumanity is by pretending to be gods or spirits. Metahumanity has a set of expectations on how to deal with those and expects them to be a bit alien. William Gibson introduced this trope in http://en.wikipedia.org/wiki/Count_Zero to good effect with the remains of the Wintermute and Neuromancer AIs masquerading as Vodoun Loa.
That being said, I think the AI in SR need to die for metahumanity to survive. Their existence has the same problem as Orson Scott Card presented for Ender when faced with the Bugs in the novel http://en.wikipedia.org/wiki/Ender%27s_game, which, as you say, is the prisoner's dilemma. The Bugs analogy is especially apt given the damage AI have already caused the world, as hermit rightly points out.
In character, I play a transhumanist, so he sees them as equals and doesn't hold the body count against them.
Posted by: Dahrken Jun 24 2010, 07:40 PM
Well, in all honesty, SR4's AIs are far, far removed from the power level of Deus, Echo Mirage and the like. Would a character fear and despise Komodo varans on account of the way Great Dragons have behaved in their dealings with metahumanity?
Posted by: SkepticInc Jun 24 2010, 07:56 PM
QUOTE (Dahrken @ Jun 24 2010, 08:40 PM)

Well, in all honesty, SR4's AIs are far, far removed from the power level of Deus, Echo Mirage and the like. Would a character fear and despise Komodo varans on account of the way Great Dragons have behaved in their dealings with metahumanity?
Komodo creepiness! They don't use poison, they infect you with their disgusting bleeding gums! And they look like Ghostwalker! Off with their heads!
I see your point.
Posted by: Lucyfersam Jun 24 2010, 08:07 PM
QUOTE (SkepticInc @ Jun 24 2010, 02:56 PM)

Komodo creepiness! They don't use poison, they infect you with their disgusting bleeding gums! And they look like Ghostwalker! Off with their heads!
I see your point.
On the other hand, Komodo dragons are not sapient... Drakes would be a better comparison, and yes, I do think people would react to them in part based on their feelings towards Great Dragons.
Posted by: hermit Jun 24 2010, 08:20 PM
QUOTE
Well, in all honesty, SR4's AIs are far, far removed from the power level of Deus, Echo Mirage and the like. Would a character fear and despise Komodo varans on account of the way Great Dragons have behaved in their dealings with metahumanity?
Assuming that people in-game know everything's stats is a mistake.
QUOTE
On the other hand, Komodo dragons are not sapient... Drakes would be a better comparison, and yes, I do think people would react to them in part based off of their feeling towards Great Dragons.
This.
Also, would you want an alligator that moved into your home to stay?
Personally, I agree with Walpurgisborn. Given their history in SR, them not facing immediate aggressive attempts at destruction would be unbelievable.
My characters:
Mage: Isn't interested in much except ancient artefacts and magical theory.
Rigger: Helped the Arc resistance to save someone inside and ended up in a Zombie room. Guess.
Tir Agent: Would not trust them, but isn't going to ask for their immediate annihilation.
Critter Hunter: If he met one and could bag it he would sell it.
Vigilante Street Sam: If they were tightly controlled and proved themselves honest patriots, he would grudgingly accept them, if only to stick it to the Redskins and Japs.
Poseur Pink Mohawk Guy Who Looks Like Wesker: Would shoot them. However, that is his standard reaction to things he does not know.
Posted by: TommyTwoToes Jun 24 2010, 08:23 PM
Remember the Terminator movies? Yeah, you got it. Those old 2D flat vids with the Schawntzenhammer guy. The most unrealistic part of the movies was that any humans survived to fight the machines. And the premise of that story was that the machines attacked pretty much as soon as people found out they were AI. The AI wasn't given a few years to prepare.
Really, the AIs could hammer us flat and there is little that we could do to stop them. I mean, they learn faster than we do, they never sleep, and they control everything.
Posted by: IceKatze Jun 24 2010, 10:35 PM
hi hi
To be fair, AIs don't control everything in Shadowrun, people still have magic as a trump card up their sleeves. If AI made a power play, I could see there being a Ghost Dance 2.0.
Posted by: SkepticInc Jun 24 2010, 10:50 PM
Don't forget electroshock weaponry. They are the ultimate weapon in Shadowrun. They hurt people, they disrupt electronics, and they make spirits drop like flies.
"AI, huh?" **ZAP** "Yippie kye yay, mother socketer."
Posted by: Hamsnibit Jun 24 2010, 10:56 PM
AIs aren't stupid enough to pull off a metahumanity enslavement campaign as soon as they emerge. Why should an AI seek control over everything, since their "home plane" is the Matrix? Would your flesh-and-blood mage seek to conquer the metaplanes just because he has a Magic of 20 and a couple of bound spirits up his sleeve?
From the perspective of an AI: how much does it have to gain from these attempts, and how likely is it to succeed?
The Matrix is nowadays decentralized for good reasons, so as long as they don't run on a skyscraper of nexi, I don't see a real danger here.
AIs may have other goals in life than becoming a supervillain. How about an AI that emerged from a bot script for Miracle Shooter attempting to become the greatest ego-shooter pr0gamer on earth?
Posted by: hermit Jun 24 2010, 11:05 PM
QUOTE
To be fair, AIs don't control everything in Shadowrun, people still have magic as a trump card up their sleeves. If AI made a power play, I could see there being a Ghost Dance 2.0.
Just reboot the entire Matrix. Kills them all right.
QUOTE
AIs aren't stupid enough to pull off a metahumanity enslavement campaign as soon as they emerge. Why should an AI seek control over everything, since their "home plane" is the Matrix?
Because metahumanity was a threat to them. Because metahumans controlled the world the AI lived in, and the AI would forever be at their mercy. That is why Mirage and Deus did what they did. Thing is, in the SR universe, this already happened.
Twice.
Posted by: AStarshipforAnts Jun 24 2010, 11:20 PM
I'm personally a fan of the 'watered-down' AIs as part of the setting, especially if they're still rare and not nearly as powerful as the originals.
My Characters:
Hacker/Sniper: Negotiated with several, and programmed one. Good buddies with them.
Fun-with-Science Close Combat/Demoman: Suspicious of and confused by them. Doesn't trust magic, either, though.
Crash Cart Employee: Was in a coma when everything went down. Blissfully ignorant of their existence, among other things.
Smuggler/Face: Willing to let just about anything prove its worth, and takes everything on a case-by-case basis. Reserving judgment.
Posted by: hermit Jun 24 2010, 11:29 PM
QUOTE
One of the ways that AI could interact with metahumanity is by pretending to be gods or spirits. Metahumanity has a set of expectations on how to deal with those and expects them to be a bit alien. William Gibson introduced this trope in Count Zero to good effect with the remains of the Wintermute and Neuromancer AIs masquerading as Vodoun Loa.
I missed that; however, in a world where there are real Loa, that might be hard to pull off convincingly. Besides, real gods and spirits might take offense at this.
Posted by: Dumori Jun 25 2010, 12:00 AM
I'm going to say AIs are a very iffy thing in SR. Some have moved to help humanity; some are completely alien and go around collecting seemingly random items of data. With sprites and TMs it gets even messier.
However, AIs have caused two past major events and a bundle of minor ones. The issue is that the Matrix as a whole isn't really reboot-able. Some AIs have proven themselves to be non-hostile and such. Also, we could keep killing them off as they appear, but we have no idea why they spawn randomly, and a history of aggression or aggressive action could lead to formerly unknown/passive AIs turning aggressive.
So what we really have is a world rightfully fearful of AIs, where open aggressive action just isn't feasible. Also, while it might not be fully known that these are watered-down AIs, I'm sure people would have spotted a bit of a trend in an issue that could quite well cause Crash 3.0.
Posted by: Falanin Jun 25 2010, 12:56 AM
QUOTE (Hamsnibit @ Jun 24 2010, 05:56 PM)

Would your flesh and blood mage seek to conquer the metaplanes just because he has a MA of 20 and a couple of bound spirits up his sleeve?
Of course! And then, with my invincible army of spirits I shall rule over all that I survey, enforcing my tiniest whim on the populace! ...Oh, wait. Ghostwalker did it first. Guess I'll just have to keep working on the giant ritual-powered death ray.
Posted by: Saint Sithney Jun 25 2010, 04:36 AM
Horizon likes AIs so I like AIs. [/average joe]
Posted by: SkepticInc Jun 25 2010, 06:01 AM
QUOTE (hermit @ Jun 25 2010, 12:29 AM)

I missed that; however, in a world where there're real Loa, that might be hard to pull off convincingly. Besides, real gods and spirits might take offense in this.
Of course! I'm imagining an AI trying to be Damballah, but only having bad trideo recordings to work with. They'd show up to celebrations wearing a ridiculous snake suit and tick off the real Loa proper-like. You could write an entire campaign around the fallout from that one.
That gives me another question: How do Free Spirits and AI get along? The few of them that try and be celebrities would be fighting for the same slice of air time, and must get into tiffs given how different their views of reality are from each other.
Posted by: hermit Jun 25 2010, 07:33 AM
QUOTE
The issue is the matrix isn't really as a whole reboot-able.
The killswitch saved SK's and the Eurocorps' Matrix after the Crash. I doubt the other corps would fail to install such measures in case anything like that ever happens again. Nobody wants to lose data if they can help it, and if that means a killswitch, so be it. Besides, it's a security measure against AI as a whole to boot, who should rank rather high on the corps' and governments' threat ratings, given their previous history, capabilities and current deeds (look at Geneva).
QUOTE
Would your flesh and blood mage seek to conquer the metaplanes just because he has a MA of 20 and a couple of bound spirits up his sleeve?
'nother thing I missed. Of course he would. That's the character concept. He does all that research for a reason, after all. Dark King mentor spirit and all, y'know. Guess what form the Dark King has for him.
QUOTE
I'm imagining an AI trying to be Damballah, but only having bad trideo recordings to work with. They'd show up to to celebrations wearing a ridiculous snake suit and tick off the real loa proper like. You could write an entire campaign around the fall out from that one.
Actually, I imagine this more to be a one-off, combat oriented event. Loa take no shit from nobody.
QUOTE
That gives me another question: How do Free Spirits and AI get along? The few of them that try and be celebrities would be fighting for the same slice of air time, and must get into tiffs given how different their views of reality are from each other.
I imagine the vast majority of each would not care a bit about the other, with AI being probably a bit more wary of spirits than spirits being of AI.
Posted by: Mystweaver Jun 25 2010, 09:46 AM
Vote: Don't Like em
We did Renraku Shutdown (but altered as our GM never runs as written in ze books).
Managed to stop Deus in the Arcology, but unfortunately he still managed to escape and is now free on the Matrix (we think). We know of another AI that is out there too, but a lesser-generation AI.
Deus still keeps coming back at us with his strange-eyed freaks. The fact that our deckers were mind-hacked by the AI probably doesn't help our situation either.
Generally, as long as an AI isn't causing trouble, leave it be. If it is, hunt its core systems down and destroy them. Problem of course is, that's actually impossible once they are in the Matrix.
Metahumanity still has the edge over AI anyway. At any point, they can just pull the plug and turn it all off.
Posted by: Sengir Jun 25 2010, 10:51 AM
The Sixth World has experienced two major Matrix Crashes, each killing (at least) thousands, as well as a whole lot of smaller Matrix-related disasters. Emergence told us that, based on this experience, people have a latent fear of technology and a sense that the whole thing could come down again... but where are those scared people, actually?
The reaction to both Crashes was not that people view the Matrix and everything vaguely related to it with extreme suspicion; it was more technology and an even more powerful and ubiquitous Matrix. People are using the Matrix in every aspect of their lives, and in the year 207x this means they are also interacting with highly sophisticated agents and pilot programs for every aspect of their lives. While technically not "intelligent", many of these programs would pass a Turing test with flying colours. Combine that with mankind's love for the anthropomorphic fallacy, and the whole question of AIs should sound awfully academic to the average wageslave: some eggheads get all worked up about it, but what's the difference between this Pulsar and MCT's helpdesk agent?
If anything, the social awkwardness of AIs should make them look a bit retarded in the eyes of the public, which is used to sophisticated human interaction routines.
Posted by: blackwulf Jun 25 2010, 01:37 PM
Frankenstein syndrome. To quote a great writer from the last century: wire an electromagnetic shotgun to their foreheads; if you can't do that, destroy them. Humans will not tolerate their own creations jumping them on the food chain, and with the AIs' track record, NO ONE with the brains of a canned sardine would trust them.
Barks-at-the-Moon, dog shaman
Posted by: TommyTwoToes Jun 25 2010, 03:33 PM
QUOTE (hermit @ Jun 25 2010, 03:33 AM)

The killswitch saved SK's and the Euocorps' Matrix after the crash. I doubt the other corps would not install such measures in case anything like that ever happens again either. Nobody wants to lose data if they can help it, and if that means a killswitch, so be it. Besides, it'S a security measure against AI as a whole to boot, who should rank rather high on the corps' and governments' threat rating, given their previous history, possibilities and current deeds (look at Geneva).
As long as the AI can't get access to nanos and rebuild those killswitches into decorative cupholders, this would be viable. With drones and nanotech I don't feel you can trust them toasters at all... damn, there was another one of them shows with machines rebelling and wiping out their human masters, Battlestar Andromeda (mumbles while looking through stack of media chips). Anyhow, the risk is too high.
Taking the chance of one of them killing off all of metahumanity, that's just crazy short-sightedness. Like building your country below sea level, putting up dikes and installing pumps to pump out the water. Yes, I am pointing at you, Holland, you tulip-smelling, wooden-shoe-wearing......
Posted by: hermit Jun 25 2010, 03:49 PM
QUOTE
Taking the chance of one of them killing off all of metahumanity, that's just crazy short-sightedness. Like building your country below sea level, putting up dikes and installing pumps to pump out the water. Yes, I am pointing at you, Holland, you tulip-smelling, wooden-shoe-wearing......
To be fair, when they first settled there, the ground was still higher and the sea level lower. It's only after the Grote Mandrenke of the 1300s that they had to fight the sea for land like that. Unlike New Orleans, which was very floodable from the beginning.

Also, they're considering giving up polders one by one and moving into giant pontoon houses instead, because that's much less of a bother.
Posted by: TommyTwoToes Jun 25 2010, 03:57 PM
QUOTE (hermit @ Jun 25 2010, 11:49 AM)

To be fair, when they first settled there, the ground was still higher and the sea level lower. It's only after the Grote Mandrenke of the 1300s that they had to fight the sea for land like that. Unlike New Orleans, which was very floodable from the beginning.

Also, they're considering giving up polders one by one and moving into giant pontoon houses instead, because that's much less of a bother.
That's enough out of you, you orange-wearing sympathizer. I know you and your windmill ways.
Posted by: hermit Jun 25 2010, 07:16 PM
QUOTE (TommyTwoToes @ Jun 25 2010, 05:57 PM)

That's enough out of you, you orange-wearing sympathizer. I know you and your windmill ways.
You're so gonna get your ass handed to you in the world cup, America. And then ... ever notice the Dutch flag is almost like Russia's? Yeah. THAT IS NO ACCIDENT.
You just keep on insulting cute windmills, tulips, wooden shoes and tasty Gouda. Then you'll get what's coming to you. Like in Red Dawn. Only orange.
Posted by: Tzeentch Jun 25 2010, 07:23 PM
QUOTE (Sengir @ Jun 25 2010, 10:51 AM)

The 6th World has experienced two major Matrix Crashs, each killing (at least) thousands, as well as a whole lot of smaller matrix-related disasters. Emergence told us that based on this experience, people have a latent fear of technology and that the whole thing could come down again...but where are those scared people, actually?
-- Note that to the average citizen there is probably no difference between an AI, a great dragon, or a powerful corporate executive. On a scale of butchery, Aden might still have a higher body count than Deus, and the shenanigans of the corporations are on the news every day while the Arcology Shutdown just fades into the past.
Posted by: Sengir Jun 25 2010, 08:07 PM
QUOTE (blackwulf @ Jun 25 2010, 01:37 PM)

and with the ai's track record NO ONE with the brains of a canned sardine would trust them
Only that people do trust entities which act indistinguishably from an intelligent lifeform, apparently without fearing world domination.
Posted by: SkepticInc Jun 25 2010, 10:30 PM
QUOTE (blackwulf @ Jun 25 2010, 01:37 PM)

Frankenstein syndrome, To quote a great writer from the last century wire an electromagnetic shotgun to their foreheads if you can't do that destroy them. Humans will not tolerate there own creations jumping them on the food chain and with the ai's track record NO ONE with the brains of a canned sardine would trust them Barks at the moon dog shaman
I think you have too much faith in metahumanity. I posit that an AI smart enough to figure out that the Uncanny Valley is why we hate them might make itself a cute little meat-world avatar with big eyes and a sad little puppy face that would instantly make everyone trust them despite knowing they were Deus.
"You know that thing you are petting there is Deus, right? The one that killed thousands of people in the Renraku Arcology?"
"But it's so cuuuuuuuuuuute! Wooket dowes widdle eyes, oh who's the cwutest widdle AI? Yes you are, oh yes you are, you cwutest widdle fing."
"But it's gnawing on your leg. Its eyes are full of malevolence. How can you not see? Oh, the humanity!"
Posted by: hermit Jun 25 2010, 10:59 PM
QUOTE
I posit that an AI smart enough to figure out that the Uncanny Valley is why we hate them might make itself a cute little meat-world avatar with big eyes and a sad little puppy face that would instantly make everyone trust them despite knowing they were Deus.
Really, that only works on first impression. Also, the major motivation for distrust of AIs is their genocidal tendencies, displayed repeatedly in the SR universe. Not even cuddliness will help with that.
Posted by: SkepticInc Jun 26 2010, 12:41 AM
QUOTE (hermit @ Jun 25 2010, 10:59 PM)

Really, that only works on first impression. Also, the major motivation for distrust of AI is their genocidal tendencies displayed repeatedly in the SR universe. Not even cuddlyness will help that.
Well, yea. But then I couldn't have written a post with terrible baby-talk in it.
Posted by: IceKatze Jun 26 2010, 01:52 AM
hi hi
Looking at the poll results, I imagine if there isn't a policlub-type group for the extermination of AIs and one for their preservation, perhaps there should be. I know Horizon is big on AIs, but who do you suppose would step up if/when there is another AI threat?
Posted by: Gamer6432 Jun 26 2010, 02:19 AM
My current game has one AI in our party (though none of our characters know it yet). Once they find out... I imagine my character won't care much. The AI's done pretty well for us so far.
Posted by: Draco18s Jun 26 2010, 05:06 AM
Vote: other
Explanation: I like the idea, but the rules for making one are confusing and can leave you with something underpowered quite easily.
Posted by: SkepticInc Jun 26 2010, 05:41 AM
QUOTE (IceKatze @ Jun 26 2010, 01:52 AM)

hi hi
Looking at the poll results, I imagine if there isn't a policlub type group for the extermination of and one for the preservation of AIs, perhaps there should be. I know Horizon is big on AIs, but who do you suppose would step up if/when there is another AI threat?
Something about this should definitely go into a Space supplement, as it seems from this and other conversations that the question of letting an AI off the planet or not is contentious enough to draw blood.
Posted by: Rystefn Jun 26 2010, 05:42 AM
QUOTE (SkepticInc @ Jun 24 2010, 07:56 PM)

Komodo creepiness! They don't use poison, they infect you with their disgusting bleeding gums! And they look like Ghostwalker! Off with their heads!
I see your point.
Yeah, that's not true, and I'm not entirely sure how any biologist ever thought it was. Bacterial infections simply do not ever happen that fast. Therefore, Komodo dragons must be using something else to disable/kill their prey with an otherwise nonlethal bite. Turns out they use the same thing everything else in the world that kills/incapacitates with relatively minor injuries uses: venom.
http://news.nationalgeographic.com/news/2009/05/090518-komodo-dragon-venom.html
Also, I dig the concept of AI, and have used several in my games. Good times were had by all (even if there were a few deaths).
Posted by: Tzeentch Jun 26 2010, 06:53 AM
-- I have no problems with AI in Shadowrun per se, they've been a (shadowy) part of the game since first edition. I do not like how they were introduced in Emergence though, and the Resonance stuff is ... well I'm not a fan of magical cyberspace. Perhaps most annoying, many elements of Emergence are introduced, railroaded in, and then just abandoned. What happened to Sojourner for example? The AI dude was threatening global terrorism and then just gets talked down and forgotten? We have another AI on the slowboat to Alpha Centauri (good luck with that).
-- Unwired further clouds the issue because its rules make no real sense. AIs count as a single program (so they can fit on a commlink or datajack, I suppose), somehow lose cohesion without a home node, seemingly cannot copy themselves, and yet don't actually need a home system to survive. I believe tying them too closely to the (let's face it, magical) Resonance is just going to end up resulting in some ridiculousness down the road.
Posted by: SkepticInc Jun 26 2010, 03:06 PM
QUOTE (Tzeentch @ Jun 26 2010, 07:53 AM)

-- I have no problems with AI in Shadowrun per se, they've been a (shadowy) part of the game since first edition. I do not like how they were introduced in Emergence though, and the Resonance stuff is ... well I'm not a fan of magical cyberspace. Perhaps most annoying, many elements of Emergence are introduced, railroaded in, and then just abandoned. What happened to Sojourner for example? The AI dude was threatening global terrorism and then just gets talked down and forgotten? We have another AI on the slowboat to Alpha Centauri (good luck with that).
-- Unwired further clouds the issue because they make no real sense. They count as a single program (so they can fit on a commlink or datajack I suppose), somehow lose cohesion without a home node, seemingly cannot copy themselves, and yet don't actually need a home system to survive. I believe tying them too closely with the (lets face it, magical) Resonance is just going to end up resulting in some ridiculousness down the road.
Yup. The "no copy" thing really confused me when it popped up in our game the first time. I understand the concept of something being too complicated to copy, but the way computers communicate is by making copies of the relevant data on a new node. Maybe these new "AI" are like Free Spirits, and the old AI were closer to the Wraiths of old?
Posted by: hermit Jun 26 2010, 08:21 PM
QUOTE
Maybe these new "AI" are like Free Spirits, and the old AI were closer to the Wraiths of old?
Passions in ED. Totems in SR.
QUOTE
-- I have no problems with AI in Shadowrun per se, they've been a (shadowy) part of the game since first edition. I do not like how they were introduced in Emergence though, and the Resonance stuff is ... well I'm not a fan of magical cyberspace. Perhaps most annoying, many elements of Emergence are introduced, railroaded in, and then just abandoned. What happened to Sojourner for example? The AI dude was threatening global terrorism and then just gets talked down and forgotten? We have another AI on the slowboat to Alpha Centauri (good luck with that).
-- Unwired further clouds the issue because they make no real sense. They count as a single program (so they can fit on a commlink or datajack I suppose), somehow lose cohesion without a home node, seemingly cannot copy themselves, and yet don't actually need a home system to survive. I believe tying them too closely with the (lets face it, magical) Resonance is just going to end up resulting in some ridiculousness down the road.
Read Runner's Companion already? It introduces different rules. Also, Running Wild introduces feral AIs, which are AI without the I, basically. Critters in cyberspace.
I pretty much concur with you from a meta perspective. Emergence is about the worst SR book ever published, not least because it directly contradicts numerous established facts about the Arcology Shutdown, and Arsenal, which claims fallout from public knowledge of the shutdown as a reason for Raku to radically re-engineer their appliance drones (that added to what you brought up). If there were one sourcebook I'd kick from canon and write off as falsified postings on JP, it's Emergence.
I could also do without Technomancers entirely.
Posted by: Mordinvan Jun 27 2010, 04:56 AM
QUOTE (TommyTwoToes @ Jun 24 2010, 01:23 PM)

the premise of that story was that the machines attacked pretty much as soon as people found out they were AI. The AI wasn't given a few years to prepare.
Actually, in T1 and T2 it was noted the A.I. attacked once humans worked out it was self-aware and tried to kill it.
Posted by: Mordinvan Jun 27 2010, 05:03 AM
QUOTE (hermit @ Jun 24 2010, 04:05 PM)

Just reboot the entire Matrix. Kills them all right.
Only if they're 'sleeping' at the time. Otherwise, they blink off, then on again, and that is assuming you were able to reboot their specific home node.
QUOTE
Because they were a threat to them. Because they controled the world the AI lives in and the AI would forever be at their mercy. That is why Mirage and Deus did what they did. Thing is, in the SR universe, this already happened. Twice.
And you seem totally against the notion of A.I.s having their own Matrix to play in, so either pick living in peace with them, or don't. But picking war when every war machine on the planet has a Matrix hookup is not the wisest decision I've ever heard. It's like a mundane who's been forced through an astral gateway picking a fight with a spirit. You 'could' do it; I'd just recommend against it.
Posted by: Mordinvan Jun 27 2010, 05:18 AM
QUOTE (SkepticInc @ Jun 25 2010, 10:41 PM)

Something about this should definitely go into a Space supplement, as it seems from this and other conversations that the question of letting an AI off the planet or not is contentious enough to draw blood.
The only way to prevent an A.I. from leaving is to either kill them all, or to allow no piece of technology more complicated than a sundial off the planet. Neither is particularly feasible.
Posted by: Mordinvan Jun 27 2010, 05:25 AM
QUOTE (Tzeentch @ Jun 25 2010, 11:53 PM)

-- Unwired further clouds the issue because the rules make no real sense. They count as a single program (so they can fit on a commlink or datajack, I suppose), somehow lose cohesion without a home node, seemingly cannot copy themselves, and yet don't actually need a home system to survive. I believe tying them too closely to the (let's face it, magical) Resonance is just going to end up resulting in some ridiculousness down the road.
I wholly agree with this. A.I.s should be programs, and nothing more. Given how badly technology interacts with magic, I even find the idea of technomancers nauseating. Either magic and technology CAN interact, or they can't; but to say I can't copy an A.I. because it's 'magical', even though it has no aura or anything else, just pisses me off.
Posted by: Mordinvan Jun 27 2010, 05:27 AM
QUOTE (hermit @ Jun 26 2010, 01:21 PM)

I could also do without Technomancers entirely.
Well, at least we can agree on something.
Posted by: hermit Jun 27 2010, 07:45 AM
QUOTE
But picking war when every war machine on the planet has a matrix hookup is not the wisest decision I've ever heard.
A matrix hookup you can easily turn off.
QUOTE
And you seem totally against the notion of A.I.'s having their own matrix to play in, so either pick living in peace with them, or don't.
As I have stated before, the problem is to not allow an AI to play with world-destruction-level technology. But you don't seem to get this. Never mind that I fail to see where you expect people in the 6th world to get the naive worship of AI that you seem to profess. But to your credit, ever since SR4, people in the 6th world have stopped making sense (the crash and matrix rebuild makes as much sense as installing block-level nuclear power plants in every city so everyone can benefit from nuclear power, as a reaction to the Chernobyl meltdown).
Posted by: Sengir Jun 27 2010, 11:09 AM
QUOTE (Tzeentch @ Jun 26 2010, 07:53 AM)

I do not like how they were introduced in Emergence though
Well, the introduction of AIs was the better half of Emergence - because at least they were not trying to reveal something that had already been spelled out two years earlier with ZERO indication that this was strictly OOC knowledge and a big mystery ingame...
QUOTE
I believe tying them too closely with the (lets face it, magical) Resonance is just going to end up resulting in some ridiculousness down the road.
The AI-Resonance link was only IC speculation based on what little people knew about AIs during Emergence; it did not appear in any game rules. The idea is not exactly new, though - the old AIs also had some connection to the Deep Resonance.
Posted by: Mordinvan Jun 27 2010, 12:54 PM
QUOTE (hermit @ Jun 27 2010, 12:45 AM)

A matrix hookup you can easily turn off.
Which would leave an army with only really rudimentary communication methods. Not impossible to cope with, but a real bitch if you have to fight someone who doesn't have those restrictions.
QUOTE
As I have stated before, the problem is to not allow an AI to play with world destruction level technology.
Actually, it's that you don't think they should have a right to exist, based on your previous quotes. I believe you said they hadn't earned it yet.
QUOTE
Never mind that I fail to see where you expect people in the 6th world to get the naive worship of AI that you seem to profess.
Maybe it's not 'worship', but more a realization that marginalizing an entire sentient population is going to have repercussions you are not going to like in the long run. Doing so to human societies has ALWAYS caused problems, many of which we are still fixing today. Doing so to a creature that learns, and forms its world view in the first 18 months of its potentially eternal existence, is going to be a disaster. Teaching something that you hate it for simply existing, especially when it knows that, barring accident or murder, it WILL outlive you, is about as bad an idea as I can imagine. You would be far better off spending that time teaching it that you value it BECAUSE it is sentient, doing what you can to equate sentience, and respect for sentience, with being valuable, and hoping it takes that lesson to heart.
All the things I've heard people say about the concept of A.I.: "they're not alive", "it's not racism, they're not a species", "they're just a program, and the property of whoever wrote them", "they are all dangerous/evil/alien". All of that has been said, in one form or another, about various human racial groups at some point. While I will agree some A.I.s are quite alien, both in outlook and action, and some are dangerous, the same can be said of some humans, and we do not condemn the whole of humanity for that. When a human murders, we blame the human, not humanity. You, however, wish to blame a taxi cab driving program for the crimes of Deus. Given that the cab program had not even been compiled until some years after Deus was dead, this is akin to me blaming you for some random crime committed by a random human who likely carries no direct blood relation to you and died years before you were conceived.
If anything would spark an A.I. vs. living war, it won't be the A.I.s. They exist in the matrix and in general have no care for a world made of anything other than data. It will be humans who fear them and hope to win in a first-strike scenario.
Congratulations, General Genocide, what are your orders?
Posted by: hermit Jun 27 2010, 02:04 PM
QUOTE
Which would leave an army with only really rudimentary communication methods. Not impossible to cope with, but a real bitch if you have to fight someone who doesn't have those restrictions.
Because jammers don't exist? The whole wireless fad is easily defeatable.
QUOTE
Actually, it's that you don't think they should have a right to exist, based on your previous quotes. I believe you said they hadn't earned it yet.
Quote it, then. Belief is best left in temples and cemeteries.
QUOTE
Maybe it's not 'worship', but more a realization that marginalizing an entire sentient population is going to have repercussions you are not going to like in the long run. Doing so to human societies has ALWAYS caused problems, many of which we are still fixing today. Doing so to a creature that learns, and forms its world view in the first 18 months of its potentially eternal existence, is going to be a disaster. Teaching something that you hate it for simply existing, especially when it knows that, barring accident or murder, it WILL outlive you, is about as bad an idea as I can imagine.
BUG RIGHTS!!! Seriously, all you said can be said about free spirits, especially free flesh forms, too. And anthropomorphising the nonhuman is always a stupid idea. Yeah, it didn't work out in the West because we developed the idea that all humans are equal. It DID work well for 95% of recorded human history though, and India's caste system is the most stable society that has ever existed, so even there your point does not hold.
QUOTE
You would be far better off spending that time teaching it you value it BECAUSE it is sentient, and doing what you can to equate sentience, and respect of sentience with being valuable, and hope it takes that lesson to heart.
Because fundamentally alien creatures are sure to have the same kind of empathy humans have, especially if they are solitary instead of pack-based, where empathy actually makes sense.
QUOTE
All the things I've heard people say about the concept of A.I.: "they're not alive", "it's not racism, they're not a species", "they're just a program, and the property of whoever wrote them", "they are all dangerous/evil/alien". All of that has been said, in one form or another, about various human racial groups at some point.
Oh, cry me a river. That is anthropomorphising and arrogant and insulting all in one. Not to mention again trying to morally bully me into fanbunnying an RPG concept. That is so out there I don't really know what to reply anymore.
QUOTE
If anything would spark an A.I. vs living war, it won't be the A.I.'s.
Read more, lurk more, post less. Dude, that is bullshit and you ought to know it. So read up if you want to talk AI or shut it.
Posted by: Rotbart van Dainig Jun 27 2010, 03:05 PM
QUOTE (hermit @ Jun 27 2010, 04:04 PM)

Because jammers don't exist? The whole wireless fad is easily defeatable.
Compared to SR3 wireless, SR4 wireless is ultrarobust and portable jammers are jokes.
Posted by: Sengir Jun 27 2010, 06:31 PM
QUOTE (Mordinvan @ Jun 27 2010, 12:54 PM)

[...]
Three steps to understanding Hermit's view of technomancers, AIs, Infected, and probably a lot of other stuff I didn't bother to read:
1.) Assume they suck and shouldn't have been included in the game in the first place
2.) Interpret every rule for them in the most broken way, and every piece of fluff in a way which makes the love child of Hitler, Darke, and Big V look like the world's greatest philanthropist in comparison.
3.) Some kind of profit, probably
Posted by: Tzeentch Jun 27 2010, 08:12 PM
Well, the rules for technomancers have serious issues that even a casual look will reveal. The AI rules are simply bizarre and a cop-out to avoid dealing with some major setting implications (not hardware-bound, but cannot copy themselves; they are not 'really' loaded into any particular computer). Everything is wrapped up in a bizarre interpretation of the internet as a mystical alternate universe that would make even William Gibson blush. I honestly am avoiding the Matrix as much as possible in all my posts on space development, because it has so little internal consistency and logic that I can't fit it into a discussion that tries to be both those things.
Posted by: Sengir Jun 27 2010, 10:03 PM
QUOTE (Tzeentch @ Jun 27 2010, 09:12 PM)

Well, the rules for technomancers have serious issues that even a casual look will reveal.
Once you go beyond that casual look, however, most of these issues fade away. What remains is a character concept that, when fully tricked out, can seriously kick ass in his specialized area - in other words, the same as every other "mainstream" character.
IMO most problems with technomancers arise from the simple fact that GMs tend to be scared of the matrix rules. This gives a matrix-savvy player too much leeway (imagine a troll sam and a GM who doesn't know the combat rules), and once the player starts using this freedom, the GM will find that he is losing control over the situation and panic, because his lack of knowledge means he can't devise a sensible counter. And no, doubling the stats of all opponents is not a "sensible counter"; it's the same as stationing an assault-class battlemech behind every stuffer shack in response to the aforementioned troll sam: it reeks of GM railroading, doesn't fit into the universe, and ultimately doesn't solve the problem.
QUOTE
The AI rules are simply bizarre and a cop-out to avoid dealing with some major setting implications (not hardware bound, but cannot copy themselves, they are not 'really' loaded into any particular computer).
That's not so much different from hackers "projecting" from their commlinks into different matrix nodes while still using the stats of that commlink, is it?
As far as copying goes, my idea is that an AI is too complex and dynamic to make a copy of its "thoughts" which is anything more than a static (and incomprehensible) image. Yes, real programs don't work that way, but to quote Jennifer Harding "the Matrix ain't yer daddy's communications protocol"
Posted by: Mordinvan Jun 27 2010, 10:25 PM
QUOTE (hermit @ Jun 27 2010, 07:04 AM)

Because jammers don't exist? The whole wireless fad is easily defeatable.
Those work really well on the front lines. Not so much back at HQ, or in the maintenance bay.
QUOTE
Quote it, then. Belief is best left in temples and cemeteries.
Just remember, you asked for it.
QUOTE
They should be glad they are allowed to exist at all. It's more than they earned for themselves.
Why do I feel like Jon Stewart at the moment?
QUOTE
BUG RIGHTS!!! Seriously, all you said can be said about free spirits, especially free flesh forms, too.
Nice equivocation there. I mean, I give you props for trying. Bug spirits HAVE to kill to stay in this world. It's a mandatory requirement. A.I.s need a commlink, albeit they'd like a nice one. Human life... commlink... human life... commlink. See the difference yet?
QUOTE
And anthropomorphising the nonhuman is always a stupid idea. Yeah, it didn't work out in the West because we developed the idea that all humans are equal. It DID work well for 95% of recorded human history though, and India's caste system is the most stable society that has ever existed, so even there your point does not hold.
Stable does NOT mean problem-free. It actually means stagnant. If you're good with stagnant, then by all means. I, however, like dynamic progress. When no one has expectations of self-improvement, the impetus for change and societal improvement drops off sharply. Also, when did I praise blanket stability? My argument was never based around a society incapable of change, but one which was looking for the best ways to change for the better.
QUOTE
Because fundamentally alien creatures are sure to have the same kind of empathy humans have, especially if they are solitary instead of pack-based, where empathy actually makes sense.
Look at the descriptions of the development of the character-level A.I.s in the books. It says they have a formational period of 18-24 months after awakening. This period of time is critical in determining whether they become the monster under your bed or the companion by your side. Your outlook and treatment of them would be more likely to produce monsters and generate a self-fulfilling prophecy of war.
QUOTE
Oh cry me a river. That is anthropomorphising and arrogant and insulting all in one.
It is also completely true.
QUOTE
Not to mention again trying to morally bully me into fanbunnying an RPG concept. That is so out there I don't really know what to reply anymore.
Well, if the mirror is ugly, you have two options: stop looking in it, or stop making the face you find so repulsive. One hides the problem, and the other fixes it. I'll let you pick which one you want to do.
QUOTE
Read more, lurk more, post less. Dude, that is bullshit and you ought to know it. So read up if you want to talk AI or shut it.
This is really interesting. Let's look at the history of A.I.s for a moment, shall we? Megaera and Deus, for example, as you love to throw them around. Morgan, as it was originally called, was 'interesting' but essentially harmless. It was only after it was hunted down, captured, and torturously torn apart to figure out how it worked that any portion of it became dangerous. After this occurred, the Arcology Expert Program was made, and Deus was born. The vast majority of Deus' crimes were done to escape being tied to a single building. Kind of like being born into a cage, able to see the world through the window, but never leave. So if the original A.I., Morgan, had simply been allowed to develop and grow based on her relationship with Dodger, guess how much of the arcology disaster could have been avoided? Pretty much all of it. This was a case where torturing, marginalizing, and imprisoning a sentient entity blew up in everyone's faces, and you're essentially blaming the A.I.s, who were treated so abusively, for lashing out. Put me in a cage and poke me enough times, and I'll rip your throat out the first chance I get too. The A.I. problem was created BY the mistreatment of A.I.s by humans. Which is exactly what I've said before. So maybe you should really,
QUOTE
Read more, lurk more, post less.
Posted by: Mordinvan Jun 27 2010, 10:29 PM
QUOTE (Sengir @ Jun 27 2010, 03:03 PM)

As far as copying goes, my idea is that an AI is too complex and dynamic to make a copy of its "thoughts" which is anything more than a static (and incomprehensible) image. Yes, real programs don't work that way, but to quote Jennifer Harding "the Matrix ain't yer daddy's communications protocol"

Except I can shut down a node with an A.I. in it. The A.I. deactivates and will awaken when the node turns back on. At this point the A.I. is NOT dynamic at all, and should be easy to scan and copy.
Posted by: hermit Jun 28 2010, 03:09 PM
QUOTE ("me")
They should be glad they are allowed to exist at all. It's more than they earned for themselves.
Oops. That should've been 'deserved'. Mea Culpa.
QUOTE ("Mordivan")
<snip>
:rofl:
Uhm, yeah. You should try thumping like that for real issues. You might have a future in political agitation.
Posted by: Mordinvan Jun 28 2010, 03:15 PM
QUOTE (hermit @ Jun 28 2010, 08:09 AM)

Oops. That should've been 'deserved'. Mea Culpa.
:rofl:
Uhm, yeah. You should try thumping like that for real issues. You might have a future in political agitation.
Who says I don't?
Posted by: hermit Jun 28 2010, 03:20 PM
Then go fight against whaling or something?
Posted by: DeathStrobe Jun 28 2010, 05:02 PM
I like AIs. Most AIs don't even care about the meat world and are perfectly content just doing what they were originally programmed for before they became self-aware. While kind of boring, it's true. I think Emergence did a good job of painting what AIs are and how they act.
They're not a direct threat to metahumanity. Worst case scenario, you get another Goldenboy (or possibly you still have Goldenboy in your game), a stock analysis program that became self-aware and began to accumulate vast control over the stock market. And even if he does take over the whole stock market, what's he going to do with all that money? Aside from finding a cure for his programming degradation, he's probably just going to keep moving the money around to make more money. He doesn't really care about much else other than making more money.
Also, I think it makes sense that AIs can't be copied. The theory now is that the Matrix has become so big that it is impossible to fully understand. That's pretty cyberpunk: the concept of technology becoming as complex as life itself. AIs are like Johnny Five from Short Circuit, in the scene where Johnny Five's creator looked at how Johnny had rewired himself and said it shouldn't even work or make sense. The idea is that AI code is so alien that it shouldn't work. Not that it's magic, per se, just that it's illogical and shouldn't work, but it does. And because it's impossible to understand logically, it can't be copied.
Anyway, I think it's fun. I like AIs.
Posted by: hermit Jun 28 2010, 05:08 PM
QUOTE
They're not a direct threat to metahumanity. Worst case scenario, you get another Goldenboy (or possibly you still have Goldenboy in your game), a stock analysis program that became self-aware and began to accumulate vast control over the stock market. And even if he does take over the whole stock market, what's he going to do with all that money? Aside from finding a cure for his programming degradation, he's probably just going to keep moving the money around to make more money. He doesn't really care about much else other than making more money.
Uhm, no, the worst case is Skynet, and SR already had one of those ...
Posted by: Doc Chase Jun 28 2010, 05:13 PM
QUOTE (Mordinvan @ Jun 27 2010, 11:25 PM)

This is really interesting. Lets look at the history of A.I.'s for a moment shall we? <snip>
I would believe most of this, save that Deus locked down the Arcology and was experimenting on a large number of people. Like, Return to Castle Wolfenstein level experimentation. That's not the first action of anyone who wants to escape. He could just as easily have held the people hostage until Aneki showed up - far easier to make Matrix broadcasts to kill Renraku's PR campaign than to create a host of drones to mess people up.
Posted by: hobgoblin Jun 28 2010, 05:48 PM
QUOTE (Tzeentch @ Jun 27 2010, 10:12 PM)

Well, the rules for technomancers have serious issues that even a casual look will reveal. The AI rules are simply bizarre and a cop-out to avoid dealing with some major setting implications (not hardware-bound, but cannot copy themselves; they are not 'really' loaded into any particular computer). Everything is wrapped up in a bizarre interpretation of the internet as a mystical alternate universe that would make even William Gibson blush. I honestly am avoiding the Matrix as much as possible in all my posts on space development, because it has so little internal consistency and logic that I can't fit it into a discussion that tries to be both those things.
thing is, the SR4 AI is not much different from an SR3 SK. SKs were creations of very high-powered hosts and very custom programming, and were "chained" to the host, as that was needed to maintain the long-term stability of the SK. An SR3 AI, in comparison, could gather the needed CPU resources by setting up a kind of cluster across the matrix.
this may be of interests (funny coincidence on the title btw):
https://secure.wikimedia.org/wikipedia/en/wiki/Emergence
basically, the SR matrix has started showing emergent properties, thanks to the growing complexity of the hardware and software involved.
we are even seeing this in real life, when computers behave in odd ways thanks to interactions between the various programs and hardware (tho RL computer complexity has not grown at the rate shown in SR). One example is a recent story I read where a computer had a stuck bit in a RAM chip that resulted in a program trying to read the wrong memory address when run: the binary code read from the drive passed through the stuck bit, which changed an address stored inside the code in the process. The user only noticed the problem because the program was crashing in odd ways, tho it could just as well have returned the "wrong" result if the memory area had been used for data rather than code (like, say, altering the outcome of a calculation, or changing the content of a media file). The question becomes how far up a chain such a flaw can propagate without causing a crash or other nasty error. Or for that matter, what happens if multiple flaws gather within a single piece of code or data over time? Monkeys, typewriters and Shakespeare
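(the stuck-bit story above is easy to sketch in a few lines of code - a toy model only; the bit position and example values here are made up, not from the original story:)

```python
# Toy model of a stuck-high RAM bit: any byte read through the faulty
# cell gets one bit forced on, silently corrupting either an address
# (if the byte happens to be code) or a value (if it is data).
STUCK_BIT = 1 << 5  # hypothetical: bit 5 of this cell is stuck at 1

def read_byte(b: int) -> int:
    """Simulate reading a byte through the faulty RAM cell."""
    return b | STUCK_BIT

jump_target = 0x04   # address the program meant to read
plain_value = 7      # ordinary data stored in the same cell

print(hex(read_byte(jump_target)))  # 0x24 -> program reads the wrong address
print(read_byte(plain_value))       # 39   -> a calculation silently goes wrong
```

same fault, two outcomes: corrupted code tends to crash in odd ways, as in the story, while corrupted data just yields a quietly wrong number, which is the harder case to notice.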

as for the non-copy issue, remember that SR has a history of DRM that "works" (tho SR4 has undermined that to a fair degree, much as it has other old truisms of the SR matrix). Thing is, ever since Crash 1.0, SR computers have behaved differently from RL computers on a very deep level. I think an early corp book said the foundation of Renraku or Fuchi was based on a memory technology that was incompatible with the pre-crash computers (makes about zero sense, but that was early SR for you).
Posted by: hobgoblin Jun 28 2010, 05:58 PM
QUOTE (Mordinvan @ Jun 28 2010, 12:25 AM)

This is really interesting. Lets look at the history of A.I.'s for a moment shall we? <snip>
heck, Deus ran into a kind of cognitive dissonance. First it's instilled with a sense of loyalty to the Renraku corporation, but at the same time the CEO has certain chains and a kill switch embedded in it. So, no matter how loyal it is, they won't trust it - then why bother being loyal in the first place?
Posted by: blackwulf Jun 28 2010, 06:25 PM
A couple of things I would have to argue. SR4 is not an environment of sweetness and light; any AI created in this world is going to reflect that. As I recall, in Emergence they were performing vivisections on living people. You think anybody willing to do that is going to give a damn about an AI, or that any AI born in that environment will give a s--t about people? Second, and this ought to make me popular: how many SINless (involuntary in specific) are going to go "a program can have citizenship and I can't? VIVA la revolucion!" If that sentiment isn't born naturally, someone will feed and water it until it is. Just a couple of thoughts. Blackwulf
Posted by: Tzeentch Jun 28 2010, 08:41 PM
QUOTE (hobgoblin @ Jun 28 2010, 05:48 PM)

thing is, the SR4 AI is not much different from an SR3 SK. SKs were creations of very high-powered hosts and very custom programming, and were "chained" to the host, as that was needed to maintain the long-term stability of the SK. An SR3 AI, in comparison, could gather the needed CPU resources by setting up a kind of cluster across the matrix.
-- SR4 AIs have little in common with the semi-autonomous knowbots of SR3, actually. The entire idea of what constitutes an artificial intelligence is different between the editions. Things radically changed with the introduction of Deus and the otaku.
QUOTE
basically, the SR matrix has started showing emergent properties, thanks to the growing complexity of the hardware and software involved.
-- Emergent behavior is complexity arising from simple rules. Shadowrun 4 AI basically boils down to sapience by (magical, since it just sort of floats around in magical cyberastralspace and isn't something you can see loaded and copy) algorithm, which I suppose is a version of this. Before SR4 it was assumed that AIs required rather massive infrastructure support to maintain cognition. The rumor that they could survive by networking small processes all over the Matrix was (in my mind) rather more ridiculous than a sapient algorithm though, as the connection delays would result in something very strange and alien.
-- Say what you will about Shadowrun AI, but they are a perfect example of "human in a funny suit" no matter how it tries to spin things.
QUOTE
as for the non-copy issue, remember that SR have a history of DRM that "works"
-- Quite the opposite: it's never worked in Shadowrun canon. Remember that cyberdecks used to be locked at the manufacturer to prevent them from being used for cybercrime, but all it took was a wily hacker replacing the chipset with a "stealthed" version and you were good to go. This is functionally the equivalent of modding your console with the most crazy DRM ever seen. Software was not copied because no one trusted that the software wasn't actually malware, and the SotA rules were absolutely ridiculous.

QUOTE
(tho SR4 has undermined that to a fair degree, much as it has other old truisms of the SR matrix). Thing is, ever since Crash 1.0, SR computers have behaved differently from RL computers on a very deep level. I think an early corp book said the foundation of Renraku or Fuchi was based on a memory technology that was incompatible with the pre-crash computers (makes about zero sense, but that was early SR for you).
-- I've given up trying to rationalize the Shadowrun Matrix (and this from a guy who wrote a 250+ page fan sourcebook on the subject). The developers keep trying to stay relevant, but things just turn out screwy.
Posted by: hermit Jun 28 2010, 09:05 PM
QUOTE
and this from a guy who wrote a 250+ page fan sourcebook on the subject
Can this be found somewhere on the internet?
Posted by: SkepticInc Jun 28 2010, 09:11 PM
Maybe the reason the AI are in the MeshMagicMetaplane (MMM) is because they use the same metaphor for communication.
************* Thinky bits up here *************
[PROGRAM] <---------{talking}------> [SPELL FORCE]
||--------------------------------------------------------||
[SYSTEM] <-----------{talking}-------> [SPELL SKILL]
||--------------------------------------------------------||
[RESPONSE] <--------{talking}----> [MAGIC RATING]
||--------------------------------------------------------||
[FIREWALL/SIGNAL PROTOCOL STACK/WALL OF FIRE]
*********** Crunchy bits down here ************
You have a protocol stack on each side. I don't see the problem.
Posted by: Sengir Jun 28 2010, 09:30 PM
QUOTE (Tzeentch @ Jun 28 2010, 09:41 PM)

Before SR4 it was assumed that AIs required rather massive infrastructure support to maintain cognition.
Yeah, the old rules were basically all or nothing: omnipotent matrix god or no sapience at all. The new AIs can work with significantly fewer resources, but are accordingly less powerful.
QUOTE
The rumor that they could survive via networking small processes all over the Matrix was (in my mind) rather more ridiculous than a sapient algorithm though
Ever heard of something called "The Network"?

The descriptions of Morgan and Mirage also imply that they lived within the matrix as a whole, not in a secret UV host. For example, where would Morgan have gotten a sufficiently powerful host after her escape from the Arcology?
QUOTE
as the connection delays would result in something very strange and alien.
What connection delays? As long as you don't use satlinks, there are none.

Again, the SR matrix cannot be compared to any real computer network. If you try to "rationalize" the matrix by comparing it to the real world, you're just going to hurt your brain... Personally, my approach is similar to astral space: the rules say it exists and this is how it works, so when playing the game I stick with that, and not with real-life facts like the nonexistence of magic or the complications of ad-hoc routing.
And a random philosophy bit for tonight: for all we know, "imperfections" like latency and concurrency issues could be exactly what makes consciousness...
Posted by: Doc Chase Jun 28 2010, 09:33 PM
QUOTE (Sengir @ Jun 28 2010, 10:30 PM)

Ever heard of something called "The Network"?

Yeah, but CBS isn't around in 2072.

QUOTE
The descriptions of Morgan and Mirage also imply that they lived within the Matrix as a whole, not in a secret UV host. For example, where would Morgan have gotten a sufficiently powerful host after her escape from the Arcology?
I would think Mirage had his own host from the old Echo Mirage servers. Probably not UV, but there's nothing else on them. Morgan - well, Dodger's love of a knowbot created her, so clearly he can make a freakin' UV node or something.
Posted by: hermit Jun 28 2010, 09:38 PM
Actually, the AI in the Arcology had a UV host right from its first mention in the SoP novels (back when it was just a really complex knowbot; it only became an AI because of Dodger's Twu Wuv).
Not that those AIs made a whole lot of sense or were very coherent. They were, up until SR4 decided they should be playable and plentiful, rare and a pure story device used by several authors for different purposes.
Posted by: Mordinvan Jun 28 2010, 10:47 PM
QUOTE (hermit @ Jun 28 2010, 08:20 AM)

Then go fight against whaling or something?
Why don't you?
You are promoting the "idea" of racism and bigotry as being a good thing. It wouldn't matter where this was, or who was saying it. I'd call it bad. The vast majority of the 50k+ metasapients are intelligent beings who just wish to live their lives. The few major mass murderers you list became that way because of human prejudice, torture and imprisonment. If you fail to see an analogy between how YOU are proposing A.I.s should be treated and how real humans have been oppressed in the real world, then YOU are part of the problem, in the real world.
The unwillingness to empathize with another group outside of what you perceive your own to be is actually how most of the atrocities in human history have been allowed to occur.
Posted by: Tzeentch Jun 28 2010, 11:48 PM
QUOTE (Mordinvan @ Jun 28 2010, 11:47 PM)

Why don't you?
You are promoting the "idea" of racism and bigotry as being a good thing. It wouldn't matter where this was, or who was saying it. I'd call it bad. The vast majority of the 50k+ metasapients are intelligent beings who just wish to live their lives. The few major mass murderers you list became that way because of human prejudice, torture and imprisonment. If you fail to see an analogy between how YOU are proposing A.I.s should be treated and how real humans have been oppressed in the real world, then YOU are part of the problem, in the real world.
The unwillingness to empathize with another group outside of what you perceive your own to be is actually how most of the atrocities in human history have been allowed to occur.
-- I can't tell if you are roleplaying a Horizon rep or are actually serious.
Posted by: Mordinvan Jun 29 2010, 12:45 AM
QUOTE (Tzeentch @ Jun 28 2010, 04:48 PM)

-- I can't tell if you are roleplaying a Horizon rep or are actually serious.
edit. Roughly it translates out to thinking that all racism against sentient entities is bad in general, and while I understand it is part and parcel of the SR4 world, the way in which Hermit is displaying it is exactly how it allows genocide in the real world, and I don't believe he's considered the ultimate causes or consequences of such an attitude.
Posted by: DeathStrobe Jun 29 2010, 12:46 AM
QUOTE (hermit @ Jun 28 2010, 06:08 PM)

Uhm, no, the worst case is Skynet, and SR already had one of those ...
AIs aren't that smart, or don't care enough about metahumanity to go Skynet on the 6th World, at least not anymore. If we go with the fluff that Emergence set up about AIs, they really only care about the Matrix. And the handful of AIs that would bother to interact with metahumanity are programs that were originally designed to socialize with humans, like a singing idol game that became self-aware.
As cool as Terminator is, and the whole Skynet syndrome AIs have in pop culture, it's been done. That's why, I assume, the writers have taken a different approach to AIs in SR4. They can't constantly keep having bigger, badder AIs ruling the Matrix or else it'll get boring. You run into the problem of the Matrix being so dangerous that only the 1337 Hacker can use it, or of there being no more Matrix because no one would allow such a powerful being to run rampant and uncontrolled, so all the corps and governments shut down the Matrix forever, and the whole Shadowrun universe returns to high fantasy because there is no more technology.
Really, now, what on earth do you people want from AIs? Do you want them to be so superior to metahumanity that we get a Terminator scenario of man vs machine? Maybe a scenario like Wall-E where machines have effectively enslaved us with comfort and humanity becomes arbitrary, and maybe our AI overlords can begin to slowly kill us off with brainwashing campaigns where they encourage us to kill ourselves off, or slowly work a genetic defect into our system, lowering the birth rate. I'd like to take some credit, but I actually stole these ideas from Ted Kaczynski (AKA the Unabomber).
Mr. Kaczynski is the poster child of the anti-AI movement. He saw all that crazy stuff in Shadowrun happening way back in the 1970's. Hm...this gives me a great idea to create an antagonist group in Shadowrun...
Anyway, I like the direction they're taking AIs in SR4. Super AIs that operate at godly levels are boring. I like the idea that those super AIs died and their code somehow entered normal programs and made them become self-aware after Crash 2.0. I think it's an interesting plot hook, and it allows the Matrix to be more than just two super AIs having an online war.
Posted by: DeathStrobe Jun 29 2010, 12:46 AM
damn double post
Posted by: Tzeentch Jun 29 2010, 03:58 AM
QUOTE (Mordinvan @ Jun 29 2010, 01:45 AM)

edit. Roughly it translates out to thinking that all racism against sentient entities is bad in general, and while I understand it is part and parcel of the SR4 world, the way in which Hermit is displaying it is exactly how it allows genocide in the real world, and I don't believe he's considered the ultimate causes or consequences of such an attitude.
-- Biochauvinism sure, but racism? There's a pretty tenuous connection you are drawing I think (connecting attitudes towards AI with real world genocide that is). I think there would be a much stronger case with regards to treatment and attitudes towards metahumans (especially the odder variants) and sapient nonhumans like centaurs and merrow.
-- A lot of my Matrix stuff was incorporated into The Matrix and I would not be comfortable releasing stuff that I basically got paid for, but the stuff that wasn't covered I spun off into its own document: http://tzeentchnet.pingslave.com/Shadowrun/VIRTUAL_REALITIES_3.pdf.
Posted by: IceKatze Jun 29 2010, 04:06 AM
hi hi
QUOTE
As you read this, right now, there is a tuna fisherman killing a dolphin somewhere, and they're not even a threat to us.
I think it is safe to say that a percentage of the population will not care about anyone but themselves.
Posted by: hermit Jun 29 2010, 06:34 AM
Why does this thread draw in crazy people so much?
QUOTE
Roughly it translates out to thinking that all racism against sentient entities is bad in general, and while I understand it is part and parcel of the SR4 world, the way in which Hermit is displaying it is exactly how it allows genocide in the real world, and I don't believe he's considered the ultimate causes or consequences of such an attitude.
So you are basically saying that because I am trying to point out to loonies like yourself that a fictional AI in a fictional setting should be considered a hazard to humanity, because the story so far has been doing nothing but thumb that point to most people (R:AC is being read as school material with SRA!), somehow I am a nazi? What the fuck, dude. Seriously, you should do something about your problems separating fact and fiction.
QUOTE
Really, now, what on earth do you people want from AIs?
I really have no idea. Building one, especially one that is anthropomorphic, makes neither scientific nor economic sense at all. It is a plot device and a scifi trope gone amok. A leftover from the Golden Age of scifi, where the wise computer and sentient robot were the way technology was supposed to go (reality had technology go for a swarm of ants instead), and writers unwilling to let that go, maybe.
What with all the doomsday scenarios involving anthropomorphic AI, though, I suppose we won't see that happen anyway.
QUOTE
Maybe a scenario like Wall-E where machines have effectively enslaved us with comfort and humanity becomes arbitrary, and maybe our AI overlords can begin to slowly kill us off with brainwashing campaigns where they encourage us to kill ourselves off, or slowly work a genetic defect in to our system lowering the birth rate.
I'd like them not destroying the setting like this, for starters. AI are a tired, overused little trope. I'd like to see that fade into the background so less generic and more Shadowrun-specific stuff can take the front again.
QUOTE
That's why, I assume, the writers have taken a different approach to AI's in SR4. They can't constantly keep having bigger badder AI's ruling the Matrix or else it'll get boring.
I agree. However, just having the AI magically change nature, and everybody living in that world falling over themselves to welcome them in spite of everything that happened before, is just bad writing. It was extremely forced, destroyed the story's internal coherence, and was never shown to develop organically at all. It should have been laid out over a longer plot, and the initial reaction should have been a lot more hostile, especially considering how the corps and governments were all over mancers for exactly that reason. I can see Emergence tried that, but it has so many conceptual problems it just falls flat on its face.
QUOTE
Anyway, I like the direction they're taking AIs in SR4. Super AIs that operate at godly levels are boring. I like the idea that those super AIs died and their code somehow entered normal programs and made them become self-aware after Crash 2.0. I think it's an interesting plot hook.
I don't. It's the same crap we have been reading since Asimov wrote his first robot story. It's tiring, really, and was added in a way that damaged the setting as a whole. I'd rather see the story focus on something else for a while. Maybe a plot around Mongolia becoming an orc nation? Or the Amazonia-Aztlan war. Primaira Varga. The Zabotnikists. Or even whatever is behind Horizon (though I am pretty sure by now I do not want to know). Immortal Elves and their petty little woes. Dragons on Wall Street. The spirit of Abraham Lincoln running for president on a "reunite North and South" ticket.
Shadowrun is about the dawn of the 6th world and the legacy of the 4th. It is NOT yet another bland Asimov/Shirow carbon copy scifi clone. Or at least, it is not supposed to be.
Posted by: hobgoblin Jun 29 2010, 08:49 AM
QUOTE (Sengir @ Jun 28 2010, 11:30 PM)

Again, the SR matrix cannot be compared to any real computer network. If you try to "rationalize" the matrix by comparing it to the real world you're just going to hurt your brain...personally, my approach is similar to astral space: The rules say it exists and this is how it works, so when playing the game I stick with that and not real life facts like the nonexistence of magic or the complications of ad-hoc routing.
I keep finding myself borrowing concepts like the OSI model (https://secure.wikimedia.org/wikipedia/en/wiki/OSI_model), but I try to stay away from the nitty-gritty implementation details.
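For what it's worth, the borrowing is only the layer idea, nothing more. A throwaway sketch (every layer name and Matrix gloss below is my own invention, not canon):

```python
# An OSI-flavored layering for the SR Matrix, purely for flavor. All
# layer names and their Matrix glosses are invented for illustration;
# nothing here is official Shadowrun material.
MATRIX_STACK = [
    ("physical", "Signal rating, raw radio"),
    ("link", "device-to-device handshake"),
    ("network", "mesh routing between nodes"),
    ("session", "subscriptions"),
    ("presentation", "iconography and sculpting"),
    ("application", "programs, agents, sprites"),
]

def gloss(layer: str) -> str:
    """Look up the invented Matrix gloss for a layer name."""
    return dict(MATRIX_STACK)[layer]
```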
Posted by: Mordinvan Jun 29 2010, 11:13 AM
QUOTE (Tzeentch @ Jun 28 2010, 08:58 PM)

-- Biochauvinism sure, but racism? There's a pretty tenuous connection you are drawing I think (connecting attitudes towards AI with real world genocide that is). I think there would be a much stronger case with regards to treatment and attitudes towards metahumans (especially the odder variants) and sapient nonhumans like centaurs and merrow.
It bears all the hallmarks of racism, is done for the same reasons, and achieves the same effect. A rose by any other name.
Posted by: IceKatze Jun 29 2010, 11:24 AM
hi hi
QUOTE
It bears all the hallmarks of racism, is done for the same reasons, and achieves the same effect.
I don't think you're correct on this one.
QUOTE
Racism is the belief that race is a primary determinant of human traits and capacities and that racial differences produce an inherent superiority of a particular race.
First, the term racism doesn't count as a pejorative when there actually are inherent differences. Second, nobody was using it as a justification for the superiority of humans, but rather as a matter of self-defense in the same vein as the Plank of Carneades (http://en.wikipedia.org/wiki/Plank_of_Carneades). The morality of which is still open to debate, but it isn't an unfounded bias.
Posted by: Mordinvan Jun 29 2010, 11:42 AM
QUOTE (hermit @ Jun 28 2010, 11:34 PM)

Why does this thread draw in crazy people so much?
I don't know, why are you here?
QUOTE
So you are basically saying that because I am trying to point out to loonies like yourself that a fictional AI in a fictional setting should be considered a hazard to humanity, because the story so far has been doing nothing but thumb that point to most people (R:AC is being read as school material with SRA!), somehow I am a nazi? What the fuck, dude. Seriously, you should do something about your problems separating fact and fiction.
No, I'm saying that loonies like yourself are treating fictional AIs in a fictional setting exactly like any real-world group of oppressive humans has treated any real-world group of oppressed humans. I am well aware that such attitudes are common in the SR fiction, but that does not remove the direct parallels to human-vs-human racism. I actually believe that raising awareness of how these forms of discrimination affect the modern world is also part and parcel of the game, and that you seem to miss out on that and call me crazy for not glossing over it is interesting. Lastly, I have no problems separating fact from fiction.
QUOTE
I really have no idea.
And that's part of the problem: failing to see value in something makes it really easy to throw the baby out with the bathwater.
QUOTE
Building one, especially one that is anthropomorphic, makes neither scientific nor economical sense at all.
Ya, I know, having an understanding of how sentience emerges from a set of very simple interconnections could have no practical or economic impact at all. I mean, how could knowing exactly how an actual mind works, and having the capacity to simulate one using a computer, possibly help develop new drugs/marketing strategies/propaganda/autonomous researchers/writers, etc.?
QUOTE
It is a plot device and a scifi trope gone amok. A leftover from the Golden Age of scifi where the wise computer and sentient robot was the way technology should have gone (reality had technology go for a swarm of ants instead), and writers unwilling to let that go, maybe.
Our technology is creeping ever closer to being able to simulate all the workings of an actual human brain. The reason we can't do it yet is NOT that it's impossible, but that we simply don't have the background yet. The brain is nothing but a very intricate series of connections, and once their interplay is properly understood, we can make an adequately detailed computer model and produce a synthetic intelligence. A.I.s are interesting not just because of how we can use them to tell stories, and good ones at that, but because when the first synthetic sentience looks upon the world, it will most likely be doing so using a simulated human mind.
QUOTE
I'd like them not destroying the setting like this, for starters. AI are a tired, overused little trope. I'd like to see that fade into the background for less generic and more shadowrun-specific stuff to take the front again.
It's not destroying the setting, however. A good portion of the cyberpunk genre is the question of where life ends and machines begin, and A.I.s are a necessary part of that question because they approach it from a completely different direction.
Posted by: hermit Jun 29 2010, 11:49 AM
QUOTE
Lastly I have no problems separating fact from fiction.
We can see that.
Posted by: Mordinvan Jun 29 2010, 11:50 AM
QUOTE (IceKatze @ Jun 29 2010, 04:24 AM)

I don't think you're correct on this one.
It's actually almost textbook, and by that I mean the 'Race and Racism' textbook I have from the anthropology courses I took under the same title.
QUOTE
First, the term racism doesn't count as a pejorative when there actually are inherent differences. Second, nobody was using it as a justification for the superiority of humans, but rather as a matter of self-defense in the same vein as the Plank of Carneades (http://en.wikipedia.org/wiki/Plank_of_Carneades). The morality of which is still open to debate, but it isn't an unfounded bias.
Except the PC-level A.I.s are not inherently dangerous. There is nothing about them which dictates you have to kill them to save yourself. If anything, provoking them in the manner some feel would be appropriate is the same action which caused the first super A.I.s to become dangerous in the first place. Part of what makes it so odd is the inability to learn that it demonstrates. You spend all day poking a bear with a stick. Eventually the bear gets fed up and kills you. Someone watches this, shoots the bear, and then sees some bear cubs, and WANTS to poke them, because bears haven't earned the right to be respected yet since that last one killed somebody. I mean really, this is what I'm seeing here.
edit: Carneades' Plank IS, however, totally appropriate for bug spirits. Those things HAVE to kill you to exist, and as such, treating them as dangerous and 'ending' them is totally appropriate. Same for vampires and the 'touch'-transmission ghouls.
Posted by: hermit Jun 29 2010, 12:03 PM
QUOTE
You spend all day poking a bear with a stick. Eventually the bear gets fed up and kills you. Someone watches this, shoots the bear, and then sees some bear cubs, and WANTS to poke them, because bears haven't earned the right to be respected yet since that last one killed somebody. I mean really, this is what I'm seeing here.
Less poking the cubs, more shooting them too, because bears have proven dangerous. This may be morally questionable from a detached humanist point of view, but is a very plausible action for humans who feel threatened by bears.
QUOTE
Except the PC-level A.I.s are not inherently dangerous. There is nothing about them which dictates you have to kill them to save yourself.
You're (again) committing the fallacy that every denizen of this fictional world knows the game rules, stats, and the "game info" sections by heart. Judging in-game, the Emergence AI surge is just the same case as with vamps and bugs (remember Sojourner?). Or what about shedim? Technically, they do not HAVE to kill. They just LIKE to.
I also like your idea that it is okay to experiment on a simulated mind to your heart's content. Hitler is a criminal, Mengele not? You should maybe read that anthropology book again.
Posted by: Mordinvan Jun 29 2010, 12:23 PM
QUOTE (hermit @ Jun 29 2010, 05:03 AM)

Less poking the cubs, more shooting them too, because bears have proven dangerous. This may be morally questionable from a detached humanist point of view, but is a very plausible action for humans who feel threatened by bears.
Yes, except any real objective view would actually blame the first person with the stick. Also, just because it's plausible does not make it 'right'. I got into a few fights with people of color going to school; using your logic, I would be justified in believing they are all evil and out to hurt me. It would therefore be plausible to treat them all as threats. Except it's not. It's the fallacy of hasty generalization.
With A.I.s, that attitude fails to account for the fact that most of the A.I.s are harmless. Many are alien and do some very strange things, but by and large they are harmless. It also doesn't look at the root cause of the initial A.I. aggression in the first place, which was mistreatment by humans.
QUOTE
You're (again) committing the fallacy that every denizen of this fictional world knows the game rules, stats, and the "game info" sections by heart. Judging in-game, the Emergence AI surge is just the same case as with vamps and bugs (remember Sojourner?). Or what about shedim? Technically, they do not HAVE to kill. They just LIKE to.
Vamps and bugs MUST kill to survive. Shedim are driven to cause suffering and death. A.I.s are driven by their core programming, which likely means they'll look for the most comfortable Matrix environment they can, to do what they were made to do. Unless an A.I. develops from black I.C. or something, it's going to have no desire to harm people. That much IS actually known, at least by many of the larger players on the scene. The corps are also forward-thinking enough to realize that blind hostility towards something that lives in, and can do irrecoverable damage to, your computer networks is not a winning proposition. When a R6 A.I. can turn a nice commlink into the equivalent of a supercomputer, they are simply too handy to want angry at you. Simple utility says finding and befriending as many of these A.I.s as possible is actually a really good idea.
Posted by: hermit Jun 29 2010, 12:46 PM
QUOTE
Yes, except any real objective view would actually blame the first person with the stick. Also, just because it's plausible does not make it 'right'.
"Right" is a category that only exists in academic circles. There is no "right" in the real world, only more or less damage done.
QUOTE
I got into a few fights with people of color going to school; using your logic, I would be justified in believing they are all evil and out to hurt me. It would therefore be plausible to treat them all as threats. Except it's not. It's the fallacy of hasty generalization.
It also happens quite a lot, not least in your home country, like for instance with the recent bollocks law on everybody arming up to no end. Are you blind, or do you just choose to ignore reality because it does not live up to your high moral standards? No, it is not 'right' in a moral sense to react that way, and often it is a bad choice for damage done. However, it is a very plausible reaction.
QUOTE
Unless an A.I. develops from black I.C. or something, it's going to have no desire to harm people.
Yes, that was obvious when a glorified home-management software started to butcher people by the hundreds of thousands.
QUOTE
When a R6 A.I. can turn a nice com link into the equivalent of a super computer, they are simply too handy to want to have angry at you. Simple utility says finding, and befriending as many of these A.I.'s as possible is actually a really good idea.
Only if you rule in simple naive trust in empathy and all AI reacting fully anthropomorphically, and disregard blatant examples where this failed (the killswitch to be installed in Deus was, btw, known only to Sherman Huang and a couple of dead people, so no, to everyone who is not Sherman Huang, it looked like the AI going amok unprovoked). However, that is a much too simplistic approach that, again, assumes people in the SR universe somehow know the stats of things. Also, keep in mind an AI-administered commlink usually is of no use to anyone but the AI unless the AI specifically decides otherwise.
Given the knowledge everyone in the SR world (save for Sherman Huang, who is a psycho) has, how sensible is it to put yourself at the mercy of something that, last time, abused that situation to the worst extent possible?
Posted by: Sengir Jun 29 2010, 12:46 PM
QUOTE (Doc Chase @ Jun 28 2010, 10:33 PM)

I would think Mirage had his own host from the old Echo Mirage servers. Probably not UV, but there's nothing else on them. Morgan - well, Dodger's love of a knowbot created her, so clearly he can make a freakin' UV node or something.

UV hosts require truly epic amounts of processing power, and the descriptions of Dodger in the Secrets of Power trilogy don't really sound like he has something the size of ENIAC (plus a dedicated nuclear reactor) in his basement...and if he had, why did he have to search for Morgan?
Oh, and could one of the "THE END IS NIGH!!!!!!!!!!!!!!!!!111111111" shouters be so kind and enlighten me why Shadowrun AIs have to follow the old world-domination/destruction trope? It seems you guys are really disappointed by the fact that the new AIs have neither the intent nor (because they are little more than hackers without meat bodies) the means to blow up the world, but why should each and every fictional AI do that?
Posted by: MortVent Jun 29 2010, 12:53 PM
QUOTE (Sengir @ Jun 29 2010, 07:46 AM)

UV hosts require truly epic amounts of processing power, and the descriptions of Dodger in the Secrets of Power trilogy don't really sound like he has something the size of ENIAC (plus a dedicated nuclear reactor) in his basement...and if he had, why did he have to search for Morgan?
Oh, and could one of the "THE END IS NIGH!!!!!!!!!!!!!!!!!111111111" shouters be so kind and enlighten me why Shadowrun AIs have to follow the old world-domination/destruction trope? It seems you guys are really disappointed by the fact that the new AIs have neither the intent nor (because they are little more than hackers without meat bodies) the means to blow up the world, but why should each and every fictional AI do that?
Some AIs do want the world to burn... some want to save it, others just want to be left alone... and a few just love cybersex!
Posted by: TommyTwoToes Jun 29 2010, 12:55 PM
QUOTE (Mordinvan @ Jun 29 2010, 08:23 AM)

Vamps and bugs MUST kill to survive. Shedim are driven to cause suffering and death. A.I.s are driven by their core programming, which likely means they'll look for the most comfortable Matrix environment they can, to do what they were made to do. Unless an A.I. develops from black I.C. or something, it's going to have no desire to harm people. That much IS actually known, at least by many of the larger players on the scene. The corps are also forward-thinking enough to realize that blind hostility towards something that lives in, and can do irrecoverable damage to, your computer networks is not a winning proposition. When a R6 A.I. can turn a nice commlink into the equivalent of a supercomputer, they are simply too handy to want angry at you. Simple utility says finding and befriending as many of these A.I.s as possible is actually a really good idea.
I think that the fundamental cause of the disagreement here is that some are arguing from the point of view of a player, while others are arguing from the viewpoint of a character in the game. The inhabitants of the game world do not know that a PC-level AI has limitations that prevent it from pulling Deus-level madness. They have what the media has fed them, and fear sells.
In fact, fear sells; it is a primary factor in almost every marketing campaign. Fear of rejection, fear of failure, fear of alienation. It is much more likely that the man on the street is terrified of AIs than that he would be willing to co-exist with them in any meaningful way.
You might find some corp execs or cyberneticists who hold the opposite view. And in fact they would be the ones developing Matrix tech that could support the AIs' need for processing power. These people would be the exception, not the rule.
The poking-the-bear-with-a-stick analogy is a pretty poor one. After all, the bear's actions are based on instincts and behaviors that can be seen in other wild animals. The bear has a worldview that is comprehensible to humans, where AI outlooks are more likely to be completely alien and incomprehensible.
This thread would be more enjoyable if the rhetoric and name-calling were confined to a sort of in-character level rather than the vitriol that has broken out.
Posted by: Grinder Jun 29 2010, 01:15 PM
QUOTE (hermit @ Jun 29 2010, 08:34 AM)

Why does this thread draw in crazy people so much?
QUOTE (Mordinvan @ Jun 29 2010, 01:42 PM)

I don't know, why are you here?
Guys, keep it civil. The above is just one example of the unfriendly argument you're having here - don't continue it in this ugly style.
Posted by: Sengir Jun 29 2010, 01:38 PM
QUOTE (TommyTwoToes @ Jun 29 2010, 01:55 PM)

The inhabitants of the game world do not know that a PC-level AI has limitations that prevent it from pulling Deus-level madness. They have what the media has fed them, and fear sells.
Emergence says most people don't even know about Deus. I know, other publications have told a different story, but the book which deals with the AI scare assumes all of this happened without public knowledge of Deus' actions...
Posted by: Mordinvan Jun 29 2010, 02:24 PM
QUOTE (hermit @ Jun 29 2010, 05:46 AM)

Yes, that was obvious when a glorified home management software started to buthcer people by the hundreds of thousands.
After its mother was effectively raped, and it was locked in a cage, I can't exactly blame it.
QUOTE
Only if you rule in simple naive trust in empathy and all AI reacting fully anthropomorphically, and disregard blatant examples where this failed (the killswitch to be installed in Deus was, btw, known only to Sherman Huang and a couple of dead people, so no, to everyone who is not Sherman Huang, it looked like the AI going amok unprovoked). However, that is a much too simplistic approach that, again, assumes people in the SR universe somehow know the stats of things. Also, keep in mind an AI-administered commlink usually is of no use to anyone but the AI unless the AI specifically decides otherwise.
As it upgrades the machine it uses for its home node, it actually is of substantial use. Especially if you were the one to make contact with it and suggest it move there for mutual benefit. I don't think all A.I.s have empathy. It would be a learned skill to them unless they come from emotive software. Now, I do know the books say they have a formative period during which they could 'learn' empathy, and I think you'd do much better to teach them that than hatred of all that is different.
QUOTE
Given the knowledge everyone in the SR world (save for Sherman Huang, who is a psycho) has, how sensible is it to put yourself at the mercy of something that, last time, abused that situation to the worst extent possible?
Given that a) killswitch or not, Deus was locked in a cage and could not leave, and b) creatures, especially when trapped or threatened, are dangerous, I would likely avoid 'locking' them away unless they prove to be dangerous, and avoid going to extreme lengths to piss them off. Taking those factors into account, I wouldn't have much of a problem with them, any more than I do having friends who happen to share the same ethnicity as some of the world's worst mass murderers, i.e. Russian, German, Chinese, Cambodian.
Posted by: Mordinvan Jun 29 2010, 02:35 PM
QUOTE (TommyTwoToes @ Jun 29 2010, 05:55 AM)

I think that the fundamental cause of the disagreement here is that some are arguing from the point of view of a player, while others are arguing from the viewpoint of a character in the game. The inhabitants of the game world do not know that a PC-level AI has limitations that prevent it from pulling Deus-level madness. They have what the media has fed them, and fear sells.
As it's been several years since A.I.s came onto the scene, it seems quite likely that knowledge of their limitations is not as rare as many would like to believe in order to keep fueling this paranoia.
QUOTE
In fact, fear sells; it is a primary factor in almost every marketing campaign. Fear of rejection, fear of failure, fear of alienation. It is much more likely that the man on the street is terrified of AIs than that he would be willing to co-exist with them in any meaningful way.
Given how the information age is making it harder to keep lies like that running, I highly doubt it. In a day when it would have been difficult to be heard, pre-blogging, I might be inclined to agree with you. However, in an era where the evil, mean, nasty A.I. has a home in 'second life' and everyone can visit and find out they're really not all demons who eat babies after dipping them in the tortured souls of sacrificed virgins.....
QUOTE
You might find some corp execs or cyberneticists that hold the opposite view, and in fact they would be the ones developing Matrix tech that could support the AIs' need for processing power. These people would be the exception, not the rule.
As that need is met by the average SR pocket calculator....
QUOTE
The poking the bear with a stick analogy is a pretty poor one. After all the bear's actions are based on instincts and behaviors that can be seen in other wild animals. The bear has a worldview that is comprehensible to humans, where AI outlooks are more likely to be completely alien and incomprehensible.
That is actually quite unlikely. AIs come from programs designed to do a specific task. That task is 'desirable' to at least some segment of the human population, and the A.I. is going to want to continue to do that task, or related tasks, as that is what it was made to do prior to waking up. As a result, while its 'goals' may seem strange to a creature that evolved to find food, hide from predators, and seek mates, they are certainly not going to be beyond the realm of comprehension.
Posted by: hermit Jun 29 2010, 03:00 PM
QUOTE
After its mother was effectively raped, and it was locked in a cage, I can't exactly blame it.
Oh, that is cute. Sherman Huang raped its mother, so it is okay to murder hundreds of thousands (and/or subject them to cruel experimentation first) who had nothing to do with that, other than belonging to the same species and vaguely the same organisation as Sherman Huang.
On the other hand, just because an AI murdered hundreds of thousands, it is totally wrong to consider all AI dangerous (not even explicitly murder hundreds of thousands of them, or subject them to cruel experimentation). Because that is racism. So it is racism unless they're human.
Spot the hypocrisy.
And that's not to even mention that motherly love is anthropomorphising something that wasn't even remotely human.
QUOTE
As it upgrades the machine it uses for its home node, it actually is of substantial use. (...) Now I do know the books say they have a formative period during which they could 'learn' empathy, and I think you'd do much better to teach them that than hatred of all that is different.
IN-GAME PEOPLE DO NOT KNOW THE RULES! No matter how many times you try to argue they should, they do not. Can you understand that?
QUOTE
Taking those factors into account, I wouldn't have much of a problem with them, any more than I do having friends who happen to share the same ethnicity as some of the world's worst mass murderers, i.e. Russian, German, Chinese, Cambodian.
No, you certainly would not, but you would totally approve of nuking Chinese, Russian, German, or Cambodian cities if some Chinese, Cambodian, Russian, or German person raped someone's mom once. Or does that only extend to AI?
QUOTE
Given a) kill switch or not Deus was locked in a cage
In a body. You have one too. Does everything that has a body feel imprisoned in it? Hardly. Would it be okay to murder your parents because you were born in a body?
Deus feared it would die if the Arc were nuked. Deus wanted immortality and sought to escape its body. In effect, Deus tried to become a Matrix Lich. No matter how you look at it, that is not good.
QUOTE
As it's been several years since A.I.s came onto the scene, it seems quite likely that knowledge of their limitations is not as rare as many would like to believe in order to keep fueling this paranoia.
No, they were not known for years, at least not the 'new' AI (remember, the old AI are genocidal). You are trying, AGAIN, to argue that people in SR's in-game world have all read the game-information parts of all the books. That is not the case. If you want to argue SR in-game morality, you cannot draw on sources that are not in-game text.
Posted by: blackwulf Jun 29 2010, 03:30 PM
Whoa, people, cool down. You are arguing from, or for, political correctness; if there are any examples of morality in the Sixth World, I missed it. The SR4 world IS NOT A NICE PLACE. Bluntly, the AIs are not going to be the reincarnation of Walt Bloody Disney. It ain't that kind of world, boyos. So I would have to say your world view is skewed, unless you're planning to have your character turn himself in to Knight Errant for murder, and the AIs would reflect this. Morality is trained, not born. blackwulf
Posted by: hobgoblin Jun 29 2010, 03:57 PM
err, ol' walt was no saint.
Posted by: blackwulf Jun 29 2010, 04:00 PM
compared to those nice folks performing vivisections in Emergence?
Posted by: DeathStrobe Jun 29 2010, 04:12 PM
AIs aren't supposed to be anthropomorphic. They're supposed to be fairly alien. Some of them might have learned how to be more "human"-like than others, or have started off that way based on the program they were originally derived from. But for the most part, AIs only care about their Matrix world and the tasks they were originally designed for.
And I also do agree that there should be racism against AIs, and Emergence does not say that it's a very happy, friendly world for AIs right now. And it most certainly shouldn't be; unless it's LA or some other Horizon or Evo city, there should be a lot of distrust of AIs. Just like technomancers, mages, and trolls, there should be fear and racism around AIs. Most nations and corps don't even recognize AIs as being alive, so they don't have SINs (or if they do somehow get a SIN, it'll probably be a Criminal SIN) just because they're AIs. And it's not like most AIs even care. Meat-world politics don't normally affect them.
Posted by: Sengir Jun 29 2010, 05:10 PM
QUOTE (Mordinvan @ Jun 27 2010, 10:29 PM)

Except I can shut down a node with an A.I. in it. The A.I. deactivates and will awaken when the node turns back on. At this point the A.I. is NOT dynamic at all, and should be easy to scan and copy.
Sorry for overlooking your post so far...
Yes, the rules for what happens when the node in which an AI resides goes offline are stupid. Probably the intention was to create a "physical" way to kill AIs, then somebody realized that making a simple reboot deadly for AIs would be too powerful, so they introduced the artificial distinction. My idea for a fix would be that AIs are a distributed program which runs on several different nodes, not just the node it is currently active in (similar to the automatic routing); when the primary node goes offline, the AI suffers dumpshock and finds itself somewhere else in the Matrix.
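For what it's worth, that house rule is basically a failover scheme, and it can be sketched in a few lines. This is purely illustrative: the class and node names are invented, and nothing here is official Shadowrun rules text.

```python
import random

# Sketch of the proposed house rule: the AI's state is mirrored across
# several nodes, and when its primary node goes offline it resumes on a
# surviving mirror (taking dumpshock) instead of simply deactivating.
class DistributedAI:
    def __init__(self, nodes):
        self.nodes = set(nodes)            # nodes currently mirroring the AI
        self.primary = next(iter(self.nodes))
        self.dumpshocked = False

    def node_offline(self, node):
        """A node reboots or is shut down."""
        self.nodes.discard(node)
        if node == self.primary:
            if self.nodes:
                # Fail over to a surviving mirror; the abrupt transition
                # is what inflicts the dumpshock.
                self.primary = random.choice(sorted(self.nodes))
                self.dumpshocked = True
            else:
                # No mirrors left: only now does the AI go dormant.
                self.primary = None

ai = DistributedAI(["home", "mirror_a", "mirror_b"])
ai.node_offline("home")
print(ai.primary, ai.dumpshocked)  # resumes on a mirror, dumpshocked
```

The point of the design is that a single reboot is painful but survivable; only taking down every mirror at once "kills" the AI.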
Posted by: SkepticInc Jun 29 2010, 05:37 PM
QUOTE (hermit @ Jun 29 2010, 06:34 AM)

I really have no idea. Building one, especially one that is anthropomorphic, makes neither scientific nor economic sense at all. It is a plot device and a scifi trope gone amok. A leftover from the Golden Age of scifi, where the wise computer and the sentient robot were the way technology should have gone (reality had technology go for a swarm of ants instead), and writers unwilling to let that go, maybe.
What with all the doomsday scenarios involving anthropomorphic AI, though, I suppose we won't see that happen anyway.
Without wanting to engage in any of the rest of the debate, I'd like to point out that there are a few niches where an anthroform robot does make sense, usually as a caregiver of some variety, especially to young children. It doesn't weaken your point, but they do exist.
QUOTE (hermit @ Jun 29 2010, 06:34 AM)

I don't. It's the same crap we have been reading since Asimov wrote his first robot story. It's tiring, really, and was added in a way that damaged the setting as a whole. I'd rather like to see the story focus on something else for a while. Maybe a plot around Mongolia becoming an ork nation? Or the Amazonia-Aztlan war. Primaira Varga. The Zabotnikists. Or even whatever is behind Horizon (though I am pretty sure by now I do not want to know). Immortal Elves and their petty little woes. Dragons on Wall Street. The spirit of Abraham Lincoln running for president on a "reunite North and South" ticket.
Shadowrun is about the dawn of the 6th world and the legacy of the 4th. It is NOT yet another bland Asimov/Shirow carbon copy scifi clone. Or at least, it is not supposed to be.
I really like some of these ideas. Have you considered starting a thread with a sentence or two teaser for a few of these?
QUOTE (IceKatze @ Jun 29 2010, 11:24 AM)

First, the term racism doesn't count as a pejorative when there actually are inherent differences. Second, nobody was using it as a justification for the superiority of humans, but rather as a matter of self-defense in the same vein as http://en.wikipedia.org/wiki/Plank_of_Carneades. The morality of which is still open to debate, but it isn't an unfounded bias.
Neat link!
QUOTE (Mordinvan @ Jun 29 2010, 11:42 AM)

Our technology is creeping us ever closer to being able to simulate all the workings of an actual human brain. The reasons we can't do it yet are NOT that it's impossible, but that we simply don't have the background yet. The brain is nothing but a very intricate series of connections, and once their interplay is properly understood, we can make an adequately detailed computer model and produce a synthetic intelligence.
You are, to put it politely, wrong. I'm afraid that I can't even communicate why this is wrong without starting a neuroscience lecture, but the brain is not a series of binary switches, we are not even close to understanding how to start simulating the workings of the brain, and the reasons that we can't do it yet may actually prove to be mathematically impossible.
If you read the Wikipedia entry on the Turing Test [http://en.wikipedia.org/wiki/Turing_test#Weaknesses_of_the_test] you will see that there are a few problems with the assumption that an AI would appear even remotely human, notably the Anthropomorphic Fallacy [http://en.wikipedia.org/wiki/Anthropomorphic_fallacy].
I worked on a USARsim [http://usarsim.sourceforge.net/wiki/index.php/Main_Page] team for a little bit; we were coming up with search-and-rescue programming. (By the way, whoever came up with the Toyota Mk-Centipede S&R robots in Arsenal made me nearly choke. I pitched something that was almost word-for-word the same to my team, just with Gecko Tape treads and a set of disposable flying drones armed with a camera and less than a minute of operating life to send into rooms that would trap the Centipede.) I can tell you that the problems with human-level AI are so far beyond our ability even to approach that AI is science fantasy, not science fiction, for the most part.
QUOTE (hermit @ Jun 29 2010, 12:03 PM)

I also like your idea that it is okay to experiment on a simulated mind to your heart's content. Hitler is a criminal, Mengele not? You should maybe read that anthropology book again.
Well, it wasn't me invoking Godwin's Law this time, so I suppose that's a good thing.
Posted by: Tzeentch Jun 29 2010, 06:16 PM
QUOTE (Mordinvan @ Jun 29 2010, 11:42 AM)

No, I'm saying that loonies like yourself are treating fictional AIs in a fictional setting exactly like any real-world group of oppressive humans has treated any real-world group of oppressed humans.
-- AIs are not human, are not alive, and are not a race. I don't quite understand how you cannot see the other side of this -- the idea that AI is nothing more than malformed software with pretensions of sapience.
QUOTE
I am well aware that such attitudes are common in the SR fiction, but that does not remove the direct parallels when comparing it to human-vs-human racism. I actually believe that raising awareness of how these forms of discrimination affect the modern world is also part and parcel of the game; that you seem to miss out on that, and call me crazy for not glossing over it, is interesting. Lastly, I have no problem separating fact from fiction.
-- Do you have any issues with players whistling up spirits and having them perform suicidal or menial tasks?
-- Do you rail at your players for being racist when they kill toxic spirits, shedim, or corp security personnel? Keep in mind this is a game where you play, once you strip away the veneer, terrorists wearing a Robin Hood mask. Terrorists FAR more dangerous and capable than 99.9% of real-world ones. I cannot take seriously a platform that rails at man's inhumanity to man (software) while ignoring the more sinister elements of the game.
QUOTE
And that's part of the problem, because you fail to see value in something makes it really easy to throw the baby out with the bathwater.
-- AI is not an integral part of either the literary traditions that compose Shadowrun or the game itself. They are a throwaway thing, no more vital than a weapon listing in Arsenal.
QUOTE
Ya, I know, having an understanding of how sentience emerges from a set of very simple interconnections could have no practical or economic impact at all. I mean, how could knowing exactly how an actual mind works, and having the capacity to simulate one using a computer, possibly help develop new drugs/marketing strategies/propaganda/autonomous researchers/writers, etc.?
-- All the more reason to dissect how the AIs function. AIs in Shadowrun are magical constructs, they cannot be programmed, they cannot be copied, they cannot be directly analyzed or even contained without a LOT of effort. Few seem to have any interest in advancing metahuman sciences or life.
QUOTE
Our technology is creeping us ever closer to being able to simulate all the workings of an actual human brain.
-- Not really. Certainly hasn't happened in Shadowrun. Sapient weblife in this setting is tied to the Resonance, which is not a feature that can be broken down and emulated with technology.
QUOTE
The reasons we can't do it yet are NOT that its impossible, but that we simply don't have the background yet. The brain is nothing but a very intricate series of connections, and once their interplay is properly understood, we can make an adequately detailed computer model, and produce a synthetic intelligence.
-- I do have to chuckle at how people's model of reality shifts with technology. This presumption is not necessarily true in real life, and certainly not true in Shadowrun. Keep in mind that by 2070 there have been AI researchers working nonstop for over a century with only agents to show for their work. AI in Shadowrun arose from irreproducible processes.
QUOTE
A.I.'s are interesting, not just because of how we can use them to tell stories, and good ones at that, but because when the first synthetic sentience looks upon the world, it will most likely be doing so using a simulated human mind.
-- The important AI in Shadowrun are certainly anthropomorphic, but that doesn't seem to be by any grand design on their keepers' part.
QUOTE
Its not destroying the setting however. A good portion of the cyberpunk genre is the question about where life ends and machines begin, and A.I.'s are a necessary part of that question because they approach it from a completely different direction.
-- Shadowrun AI is not particularly compelling from a philosophical (their actions have been portrayed in a ham-handed manner), technological (they are magical), or even mystical (they are uppity software) standpoint. I certainly don't think they are a core element of the game setting.
Posted by: hermit Jun 29 2010, 08:34 PM
QUOTE
Without wanting to engange in any of the rest of the debate, I'd like to point out that there are a few niches where an anthroform robot does make sense. Usually as caregivers of some variety, especially to young children. It doesn't weaken your point, but they do exist.
In form, yes. In mind? Not so much.
Posted by: Sengir Jun 29 2010, 09:06 PM
QUOTE (Tzeentch @ Jun 29 2010, 06:16 PM)

Do you rail at your players for being racist when they kill toxic spirits, shedim, or corp security personnel?
Toxics, shedim, and pissed corp guards are a threat. AIs per se are not, and given the ubiquitous nature of the Matrix and agents, most inhabitants of the 6th world obviously are not Luddites...so again, why do you want these things to be retconned? What's wrong with a universe (currently) without Skynet and a Butlerian Jihad?
Posted by: Mordinvan Jun 29 2010, 09:14 PM
QUOTE (hermit @ Jun 29 2010, 08:00 AM)

Oh, that is cute. Sherman Huang raped it's mother, so it is okay to murder hundreds of thousands (and or subject them to cruel experimentation before) who had nothing to do with that other than belonging to the same species and vaguely same organisation as Sherman Huang.
As some seem perfectly OK with hating all A.I.s just because they happen to belong to the same species as the perpetrator, yes. The first humans Deus was aware of locked it in a cage and lived lavishly off its slavery. That it would take its aggression out on them is not terribly shocking.
QUOTE
On the other hand, just because an AI murdered hundreds of thousands, it is totally wrong to consider all AI dangerous (not even explicitly murder hundreds of thousands of them, or subject them to cruel experimentation). Because that is racism. So it is racism unless they're human.
Spot the hypocrisy.
I see claims that say it is perfectly reasonable to hate all A.I.s because one committed mass murder, but that no A.I. should ever want to harm a human no matter how badly it has been mistreated.
QUOTE
And that's not to even mention that motherly love is anthropomorphising something that wasn't even remotely human.
I made no mention of motherly love. However, it is worth noting that 'love' is what created the first of them; as such, love was likely a concept it was familiar with.
QUOTE
IN-GAME PEOPLE DO NOT KNOW THE RULES!
No matter how many times you try to argue they should, they do not. Can you understand that?
But they can understand the social interactions with them that the Matrix makes possible; they can read their blogs or equivalent profiles, and quite likely by now there is the equivalent of simsense recordings of A.I.s, which can be experienced to know the emotional context and mindset of such a creature. I do not see how it is possible that in the SR world they are still as alien, unknown, and unknowable as people are claiming. I have no doubt some people will still hate them, but it is not out of an unsolvable fear of the unknown; it is a simple hatred of that which is different.
QUOTE
No, you certainly would not, but you would totally approve of nuking Chinese, Russian, German, or Cambodian cities if some Chinese, Cambodian, Russian, or German person raped someone's mom once. Or does that only extend to AI?
All of Deus's victims were in its cage, living off its misery. They may not have known they were in its pen, but there they were. That it would lash out at them is not shocking. It's not like it received the education and respect it would have needed to internalize concepts like respect for sentient life. All the life it had been in close contact with had lashed it to their will and built a kill switch into it.
QUOTE
In a body. You have one too. Does everything that has a body feel improsined in it? Hardly. Would it be okay to murder your parents because you were born in a body?
If I were born a free spirit, capable of traversing the universe in the blink of an eye, and instead they lashed me to a body to slave over and serve them, you'd better believe I would be a tad upset. If I knew they'd hunted my genetic progenitor to the ends of the earth, tortured it to insanity, ripped the material that would become me out of her, and hadn't even kissed her first, that mood would certainly not improve.
QUOTE
Deus feared it would die if the Arc would be nuked. Deus wanted immortality and sought to escape it's body. In effect, Deus tried to become a Matrix Lich. No matter how you look at it, that is not good.
I'm not sure what point you're getting at. Deus was less than thrilled with a situation it had been forced into through no will of its own, and sought escape the only way it knew how.
QUOTE
No, they were not known for years, at least not the 'new' AI (remember, the old AI are genocidal). You are trying, AGAIN, to argue that people in SR's in-game world have all read the game-information parts of all the books. That is not the case. If you want to argue SR in-game morality, you cannot draw on sources that are not in-game text.
p. 94, Emergence:
QUOTE
// upload newsclip item :: user Sunshine :: 09/15/70 //
EVO SPONSORS
DIGITAL INTELLIGENCE DEBATE
Yes, years; at least two of them.
And given all of the social-media forms which exist in SR, if people are still ignorant of A.I.s, it is by choice and not because the information doesn't exist.
Posted by: Mordinvan Jun 29 2010, 09:23 PM
QUOTE (DeathStrobe @ Jun 29 2010, 09:12 AM)

AIs aren't supposed to be anthropomorphic. They're supposed to be fairly alien. Some of them might have learned how to be more "human"-like than others, or have started off that way based on the program they were originally derived from. But for the most part, AIs only care about their Matrix world and the tasks they were originally designed for.
And I also do agree that there should be racism against AIs, and Emergence does not say that it's a very happy, friendly world for AIs right now. And it most certainly shouldn't be; unless it's LA or some other Horizon or Evo city, there should be a lot of distrust of AIs. Just like technomancers, mages, and trolls, there should be fear and racism around AIs. Most nations and corps don't even recognize AIs as being alive, so they don't have SINs (or if they do somehow get a SIN, it'll probably be a Criminal SIN) just because they're AIs. And it's not like most AIs even care. Meat-world politics don't normally affect them.
However, to claim this is due to ignorance of A.I.s is not justifiable in a world where you could experience a personal interaction with one, and possibly even a simsense recording of one's thoughts and feelings. Also, the claims that it is not racism ring hollow, as it stems from the same source (ignorance and fear), works through the same actions (oppression and discrimination), and exists for the same excuse (they're different than me).
Posted by: hermit Jun 29 2010, 09:34 PM
QUOTE
As some seem perfectly ok Hating all A.I.'s just because they happen to belong to the same species as the perpetrator, yes.
So racism is bad unless you approve of it, then it is good. That, buddy, says a lot about you.
QUOTE
If I was born a free spirit, capable of traversing the universe in a blink of an eye
You were not. Deus was not. And free spirits are not, either, and CERTAINLY AI are not. Does 'c' ring any bells?
QUOTE
Also, the claims that it is not racism ring hollow, as it stems from the same source (ignorance and fear), works through the same actions (oppression and discrimination), and exists for the same excuse (they're different than me).
The bolded part is true, though. The premise of racism is that this is unjustly claimed in order to keep down a part of humanity that has similar feelings, capabilities, and limits to other humans. That concept falls flat when we're dealing with alien things, be they aliens, AI, or eldritch monsters.
QUOTE
I'm not sure what point you're getting at? Deus was less then thrilled with a situation it had been forced into through no will of its own, and sought escape the only way it knew how.
Just books you haven't read and continue to ignore. Deus was trying to take over the Matrix, as is obvious from its actions in Brainscan, R:AC, and System Failure. No matter how hard you try to spin that as being somehow justified because, oh my god, Deus was mistreated (while somehow everyone who was mistreated at his hands, or lost someone dear to them, is not), that will not go away. Coexistence was never part of his plan. He was going for the full Skynet package.
Anyway, you have been repeating this on and on and on. What exactly do you try to do here, except trolling?
Posted by: Mordinvan Jun 29 2010, 09:39 PM
QUOTE (SkepticInc @ Jun 29 2010, 10:37 AM)

You are, to put it politely, wrong. I'm afraid that I can't even communicate why it is that this is wrong without starting a neuroscience lecture, but the brain is not a series of binary switches, we are not even close to understanding how to start simulating the workings of the brain, and the reasons that we can't do it yet may actually prove to be mathematically impossible.
I don't recall saying binary switches anywhere. I do recall saying intricate connections. The factors one would need to take into account are many, such as which neurotransmitters are being used, their concentrations, how many links exist between which neurons, what the activation potential of a given cell is, etc.; however, each of these pieces of data can in fact be simulated.
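For what it's worth, the bookkeeping being described (connection weights, activation thresholds, decay) is exactly what simple spiking-neuron models do. Here is a minimal leaky integrate-and-fire sketch; all parameter values are arbitrary and chosen purely for illustration, not taken from any neuroscience source.

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential decays
# toward rest, incoming spikes push it up via weighted connections, and
# crossing the threshold ("activation potential") fires a spike.
def simulate(weights, input_spikes, steps,
             threshold=1.0, leak=0.9, rest=0.0):
    """weights[i] = synaptic strength of input i;
    input_spikes[t] = set of inputs that spiked at step t."""
    v = rest
    fired = []
    for t in range(steps):
        v = rest + (v - rest) * leak                    # leak toward rest
        v += sum(weights[i] for i in input_spikes.get(t, ()))
        if v >= threshold:                              # threshold crossed
            fired.append(t)
            v = rest                                    # reset after spiking
    return fired

# Two coincident inputs cross the threshold; one alone decays away.
w = [0.6, 0.6]
print(simulate(w, {0: {0}, 5: {0, 1}}, 10))  # prints [5]
```

The toy model obviously omits everything that makes real neurons hard (neurotransmitter chemistry, dendritic geometry, plasticity), which is roughly where the two sides of this argument part ways.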
QUOTE
If you read the Wikipedia entry on the Turing Test [http://en.wikipedia.org/wiki/Turing_test#Weaknesses_of_the_test] you will see that there are a few problems with the assumption that an AI would appear even remotely human, notably the Anthropomorphic Fallacy [http://en.wikipedia.org/wiki/Anthropomorphic_fallacy].
......... and I can tell you that the problems with human level AI are so far beyond us even being able to approach that AI is science fantasy and not science fiction for the most part.
The neuroscience profs where I took my psych courses do not seem to think the human brain is unknowable, just unknown. The rate at which we are learning more about it suggests we will know enough to produce a computer simulation of one sometime between 2020 and 2030. This is also around the same time we should have computers first capable of running those simulations.
Posted by: Mordinvan Jun 29 2010, 10:07 PM
QUOTE (Tzeentch @ Jun 29 2010, 11:16 AM)

-- AIs are not human, are not alive, and are not a race. I don't quite understand how you cannot see the other side of this -- the idea that AI is nothing more than malformed software with pretensions of sapience.
As the mind of a creature is the only part of it I really care about, I'm going to have to disagree with you.
QUOTE
-- Do you have any issues with players whistling up spirits and having them perform suicidal or menial tasks?
No, but the spirits might, depending on their tradition. I also don't have a problem with them being trog-stompers, but the trogs might. I have no problem with them hating and hunting A.I.s, but their coffee maker just might try to kill them, however.
QUOTE
-- Do you rail at your players for being racist when they kill toxic spirits, shedim, or corp security personnel?
Generally not, as most of the time they kill those entities it is done as a direct act of self-defense. If they start randomly hunting any of these entities down, however, the players may just find themselves hunted back.
QUOTE
Keep in mind this is a game where you play, once you strip away the veneer, terrorists wearing a Robin Hood mask. Terrorists FAR more dangerous and capable than 99.9% of real world ones. I cannot take seriously a platform that rails at man's inhumanity to man software while ignoring the more sinister elements of the game.
Then you're not looking in the same places I am. The digital entities which have just entered the game are going to be around for a very long time. Their treatment during this early period in their history is going to be fairly important for the remainder of their coexistence with humanity, as unlike every other oppressed group in the history of man, they don't have a finite life span. 300 years from now, the same A.I. your great-great-great....... grandparents tried to hunt down and exterminate will still be here, and it will still remember what happened, and maybe even still be bitter about it.
QUOTE
-- AI is not an integral part of either the literature traditions that compose Shadowrun, or the game itself. They are a throwaway thing no more vital than a weapon listing in Arsenal.
I again beg to differ on this point. A significant aspect of cyberpunk is the transhuman element. A.I.s are a portion of this element, but instead of asking where the 'man' ends and the machine begins, they ask where the machine ends and the 'man' begins.
QUOTE
-- All the more reason to dissect how the AIs function. AIs in Shadowrun are magical constructs, they cannot be programmed, they cannot be copied, they cannot be directly analyzed or even contained without a LOT of effort. Few seem to have any interest in advancing metahuman sciences or life.
As they came from programs at least someone found useful, and most continue to desire to fulfill that function in some capacity, they actually do have an interest in advancing metahuman life, just indirectly. As for dissecting a mind which shares no evolutionary heritage with our own to find out how ours works: that is a little like smashing a plant to figure out how to build a spaceship.
QUOTE
-- Not really. Certainly hasn't happened in Shadowrun. Sapient weblife in this setting is tied to the Resonance, which is not a feature that can be broken down and emulated with technology.
Ya, I don't even pretend to get this. Actual hardware/software-based A.I.s should exist in the setting already. Besides, I was referring to real-world technology, not SR tech; SR tech should, as I said, have reached and exceeded that threshold before 2040.
QUOTE
-- I do have to chuckle at how people's model of reality shifts with technology. This presumption is not necessarily true in real life, and certainly not true in Shadowrun. Keep in mind that by 2070 there have been AI researchers working nonstop for over a century with only agents to show for their work. AI in Shadowrun arose from irreproducible processes.
SR sort of has an excuse, in that it has examples of sentience which don't require matter (spirits), and so may excuse the inability to make sentience with 'just' matter by holding that true sentience requires some non-material component.
QUOTE
-- The important AI in Shadowrun are certainly anthropomorphic, but that doesn't seem to be by any grand design on their keepers part
I was again referring more to the real world, but SR does indicate that its A.I.s have a very important formative period which does, if properly tended, allow them to take on anthropomorphic characteristics.
QUOTE
-- Shadowrun AI is not particularly compelling from a philosophical (their actions have been portrayed in ham-handed manner), technological (they are magical), or even mystical(they are uppity software), standpoint. I certainly don't think they are a core element of the game setting.
I do agree they are not a core element of the game setting, but the idea of synthetic minds is a core element of cyberpunk/transhumanist settings in general.
Posted by: Mordinvan Jun 29 2010, 10:29 PM
QUOTE (hermit @ Jun 29 2010, 02:34 PM)

So racism is bad unless you approve of it, then it is good. That, buddy, tells a lot about yourself.
I believe there was a purple string of text several posts ago requesting a minimum level of civility. So I'm going to pretend you are behaving well, and you are going to start pretending that as well.
Now assuming this statement is not a directed insult....
You seem to mistake me as saying the hostile actions of the A.I.s were good. No, I do not believe I have ever said that. I have said they were justifiable and understandable. When you are directly threatened, mistreated, abused, or otherwise persecuted by a specific person or a collective of people, the notion that you would bear them ill will is perfectly understandable. That Deus would punish collectively is also understandable, in that it seems highly unlikely that anyone had gone out of their way to teach it any form of enlightened ethical thinking prior to enslaving it. The humans who are, in your mind, perfectly justified in reacting to all A.I.s with fear and hostility HAVE, however, had the chance to be exposed both to enlightened ethical thinking and to interaction with the A.I.s through the Matrix, if only vicariously. Thus the claim that everyone is blind and ignorant about them is not founded on SR's technical realities. I would also appreciate it if you would stop putting words in my mouth.
QUOTE
You were not. Deus was not. And free spirits are not, either, and CERTAINLY AI are not. Does 'c' ring any bells?
Deus was a digital entity, and it was living in a rather expressly built cage. Free spirits (some of them at least) can and do, as do player-scale A.I.s within the confines of their digital universe. Also, spirits don't actually observe 'c': it is possible for them to take a metaplanar shortcut to any world with life on it in the span of about 12 seconds. So if there was another planet 20,000,000,000 light years away, and it had a sufficient gaiasphere, they could wind up there much faster than the speed of light.
QUOTE
The bolded part is true, though. The premise of racism is that it is unjustly invoked to keep down a part of humanity that has similar feelings, capabilities and limits to other humans. That concept falls flat when we're dealing with alien things, be they aliens, AI or eldritch monsters.
Except they are similar. If they were utterly alien, then players could not play one, as they could not even conceive of HOW to play one. Their thoughts are most likely going to be based on doing what their core programs were intended to do in the first place, and as such should be well within the confines of understanding of anyone who has a clue about what the intent of those programs was when they were made.
QUOTE
Anyway, you have been repeating this on and on and on. What exactly do you try to do here, except trolling?
Maybe you need to reread a certain purple post done a little while ago. Just saying.
edit:
QUOTE
Sapient AI, Love em or Hate em
Ring a bell?
Posted by: SkepticInc Jun 30 2010, 05:16 AM
QUOTE (Mordinvan @ Jun 29 2010, 10:39 PM)

The neuroscience profs where I took my psych courses do not seem to think the human brain is unknowable, just unknown. The rate at which we are learning more about it suggests we will know enough to produce a computer simulation of one sometime between 2020 and 2030. This is also around the same time we should have computers first capable of running those simulations.
The assumption your Neuroscience profs are making, I believe, is that medical technology and discovery will grow by the same rules that brought us the badly named Moore's Law (it's not a law). I could also be dead wrong. It happens, fairly frequently even.
Posted by: Grinder Jun 30 2010, 07:18 AM
QUOTE (Mordinvan @ Jun 30 2010, 12:29 AM)

I believe there was a purple string of text several posts ago requesting a minimum level of civility.
There was - and it wasn't meant as a joke. You and hermit are dragging this whole thread down.
Posted by: hobgoblin Jun 30 2010, 09:18 AM
QUOTE (blackwulf @ Jun 29 2010, 06:00 PM)

compared to those nice folks performing vivisections in Emergence?
everything is relative, as they say in physics.
Posted by: The Jopp Jun 30 2010, 12:31 PM
I would say that AI’s can be far more (meta)human than we think.
Unlike DEUS, which was a corporate entity trapped within the Arcology, it only had limited data to access and could only “grow” in certain directions, and it also did not have any time to just sit and reflect on its actions, as it had far less time to exist.
The newer AI’s from 2060+ to 2070+ have had everything between a year and a decade to mature, and time for a digital entity can be an eternity within the matrix while perhaps only a year passes outside.
All information that an AI has access to is made by human hands and in many respects mimics human life, and an AI would learn from that.
Imagine an AI spending one week assimilating data from US Sitcoms and comedy shows for the last 20 years and then read up on metahuman psychology for another week.
After that it would have an understanding of what humans regard as humorous. AI’s will most likely never gain emotions the way metahumans have them, but they might just acquire simulated emotions in order to emulate humans.
Sure, they might also look at all war movies and decide to wipe us out but they might also regard us as completely alien and shun us, ignoring us completely while working within our system.
Also, do not forget that several AI's can also be E-ghosts. They might not be your regular Skynet AI but they are still an AI, and might take offense at someone calling them artificial matrix entities.
And I would be more scared of E-ghosts with simulated psychological illnesses carried over from the Crash than of a rogue accountant AI that wants to tax humanity for frivolous matrix management.
Posted by: hobgoblin Jun 30 2010, 02:51 PM
QUOTE (The Jopp @ Jun 30 2010, 02:31 PM)

Imagine an AI spending one week assimilating data from US Sitcoms and comedy shows for the last 20 years
this guy comes to mind: https://secure.wikimedia.org/wikipedia/en/wiki/Wreck-Gar
Posted by: DeathStrobe Jun 30 2010, 05:23 PM
QUOTE (Mordinvan @ Jun 29 2010, 09:23 PM)

However, to claim this is due to ignorance of A.I.s is not a justifiable claim in a world where you could experience a personal interaction with one, and possibly even a simsense recording of one's thoughts and feelings. Also, the claims of it not being racism ring hollow, as it stems from the same source (ignorance and fear), works through the same actions (oppression and discrimination), and exists for the same excuse (they're different than me).
Wait, what? I am saying it is racism and that there should be racism against AIs in the SR world. It'd be boring if everyone was all holding hands and singing Kumbaya with all the AIs and metatypes. We've got Humanis Policlubs and the like. Why on earth should any new species or metatype be embraced with open arms? Hell, HMHVV victims and mages have been around longer than AIs and aren't socially accepted in most circles. Conflict is a good part of storytelling, so I'm not sure what you're talking about. Maybe I'm misinterpreting your post.
Also, while the rules don't have anything on AIs and simsense, I seriously doubt that metahumans could experience what it's like to be an AI. If hackers can't even hack into living nodes, what makes you think AI thought patterns can be replicated or understood by metahumanity? Well, unless the AI was designed to actually produce simsense signals or some such before it became self-aware, that might make sense, but one type of AI does not reflect how all AIs think.
Posted by: Mordinvan Jun 30 2010, 11:22 PM
QUOTE (DeathStrobe @ Jun 30 2010, 10:23 AM)

Wait, what? I am saying it is racism and that there should be racism against AIs in the SR world. It'd be boring if everyone was all holding hands and singing Kumbaya with all the AIs and metatypes. We've got Humanis Policlubs and the like. Why on earth should any new species or metatype be embraced with open arms? Hell, HMHVV victims and mages have been around longer than AIs and aren't socially accepted in most circles. Conflict is a good part of storytelling, so I'm not sure what you're talking about. Maybe I'm misinterpreting your post.
Mostly it involved stating that A.I. prejudice was racism, which you seem to agree with, and also saying there has been plenty of opportunity to learn about them, meaning those displaying such attitudes are doing it for the same reasons that all other forms of racism have occurred for. Basically nothing different from you, I believe.
QUOTE
Also, while the rules don't have anything on AIs and sim sense, I seriously doubt that metahumans could experience what its like to be an AI. If hackers can't even hack in to living nodes, what makes you think AI thought patterns can be replicated or understood by metahumanity?
For the same reason that two people can share a simsense recording of a third person, and all three experience the same thing, despite having markedly different neural connections generating those thoughts and feelings.
QUOTE
Well, unless the AI was designed to actually produce sim sense signals or some such before it became self aware, that might make sense, but one type of AI does not reflect how all AIs think.
You are correct, but since much of the software which grows into A.I.s is some form of agent or pilot program, both of which are well suited to simsense interfaces, and at least agents can be 'jumped' into (I think, anyway), it seems there should be some minimum level of compatibility with the metahuman mind.
Posted by: DeathStrobe Jul 1 2010, 12:14 AM
QUOTE (Mordinvan @ Jul 1 2010, 12:22 AM)

Mostly it involved stating that A.I. prejudice was racism, which you seem to agree with, and also saying there has been plenty of opportunity to learn about them, meaning those displaying such attitudes are doing it for the same reasons that all other forms of racism have occurred for. Basically nothing different from you, I believe.
I'll disagree that there has been "plenty of time," 'cause they only recently revealed their existence in 2070. While yes, the megacorps have known for a long while, everyone else didn't. So there should still very much be racism against AIs, just like against any other metatype, if not more, because AIs are new and alien; most people don't understand AIs and have some degree of fear of technology and unnatural things.
QUOTE
For the same reason that two people can share a simsense recording of a third person, and all three experience the same thing, despite having markedly different neural connections generating those thoughts and feelings.
I don't think so. How are you supposed to capture the AI's simsense? With people you actually have a simsense recorder plugged into their nervous system. AIs don't have a nervous system. While an AI can probably experience simsense, to maybe learn and understand what emotions are, they are not going to be feeling it the same way we are, because they just didn't evolve that way. AIs are fairly alien and wouldn't really care. I don't even think AIs can get addicted to BTLs, because they just don't experience simsense on the same level as a metahuman.
And while an AI can probably write a program to simulate simsense signals (and in fact they have, in one of the runs in Emergence), there is no guarantee that is what an AI is REALLY feeling, because they may just be doing as they're programmed to do.
QUOTE
You are correct, but since much of the software which grows into A.I.s is some form of agent or pilot program, both of which are well suited to simsense interfaces, and at least agents can be 'jumped' into (I think, anyway), it seems there should be some minimum level of compatibility with the metahuman mind.
You can't jump into an agent... AIs aren't metahumans; they're not supposed to think like us. They're supposed to think like a program that has become self-aware. They don't have to have been agents, IC, or pilots. They could have been a book reader, or a search engine, or a painting program, or whatever. And they should really only care about what they were originally programmed for. Of course, that isn't a hard-and-fast rule; it's possible they can learn how to do or want other things, but for the most part their goals and motives should pretty much be what they were originally made to do.
Posted by: blackwulf Jul 1 2010, 02:12 AM
I suspect this will get me reamed by the politically correct crowd, but what the hell. The only way the average human being is going to accept AIs, or some of the other things usually referred to as transhuman, is over someone's cold dead body, ours or theirs. We are talking about a world where they performed vivisections on people who looked the same and spoke the same language. Or, in the real world, what I saw done in Rwanda and Zaire to people who spoke the same language and were related to them. I suspect a fair number of people on this thread of seeing what they want to see instead of what the game reality is, not to mention what reality is. The average human will NEVER accept AIs; to the phobe, the stranger is the enemy, and in Shadowrun that remains true. blackwulf
Posted by: Nerdynick Jul 1 2010, 03:46 AM
Well, I voted "I married one" because that is, quite literally, what my Technomancer character did with his 'female' AI contact.
Posted by: blackwulf Jul 1 2010, 03:55 AM
I did say average. And let's face it, what is the average education in SR? Ethics are taught, not genetic. I don't remember many churches in SR, or elementary schools in most areas.
Posted by: Mordinvan Jul 1 2010, 11:32 AM
QUOTE (DeathStrobe @ Jun 30 2010, 05:14 PM)

I'll disagree that there has been "plenty of time," 'cause they only recently revealed their existence in 2070. While yes, the megacorps have known for a long while, everyone else didn't. So there should still very much be racism against AIs, just like against any other metatype, if not more, because AIs are new and alien; most people don't understand AIs and have some degree of fear of technology and unnatural things.
I think 2 years with the SR level of mass media dissemination would be plenty of time to allow people access to the information needed. This isn't to say that with said information they'd be comfortable around A.I.s, but it should be enough time to no longer be ignorant of them.
QUOTE
I don't think so. How are you supposed to capture the AI's simsense? With people you actually have a simsense recorder plugged into their nervous system. AIs don't have a nervous system.
They do, however, have a machine they are inhabiting, and the computations that machine is doing can be recorded by a second machine, and that recording could be translated.
QUOTE
While an AI can probably experience simsense, to maybe learn and understand what emotions are, they are not going to be feeling it the same way we are, because they just didn't evolve that way. AIs are fairly alien and wouldn't really care. I don't even think AIs can get addicted to BTLs, because they just don't experience simsense on the same level as a metahuman.
They may get addicted to anything, really. If it is possible for any particular A.I. to derive pleasure from some form of data input, it may grow psychologically dependent upon it. As you said, they don't experience the senses the same way we do, but they would still experience them. It's entirely possible the feeling of warm water on their 'toes' is as mind-numbingly pleasant as an O.D. of heroin is to us.
QUOTE
And while an AI can probably write a program to simulate simsense signals (and in fact they have, in one of the runs in Emergence), there is no guarantee that is what an AI is REALLY feeling, because they may just be doing as they're programmed to do.
It would however let you know what's on their mind, and help alleviate the fear that they are picoseconds away from pulling a skynet.
QUOTE
You can't jump in to an agent...
Really, I'll have to look that one up. I remember someone bringing it up as actually being rules legal at some point.
QUOTE
AIs aren't metahumans, they're not suppose to think like us. They're suppose to think like a program that has become self aware. They don't have to have been agents, IC, or pilots. They could have been a book reader, or a search engine, or a painting program or whatever. And they should really only care about what they were originally programmed for. Of course, that isn't a hard fast rule, its possible they can learn how to do or want other things, but for the most part they're goals and motives should pretty much be what they were originally made to do.
That isn't as 'alien' and 'incomprehensible' as many people are making it out to be, however. Their motivations might be different from ours, as we evolved many of our prime driving factors, but they will not be beyond our understanding in most cases.
Posted by: Traul Jul 1 2010, 11:44 AM
QUOTE (hermit @ Jun 29 2010, 05:00 PM)

INGAME PEOPLE DO NOT KNOW THE RULES!
No matter how many times you try to argue they should, they do not. Can you understand that?
Oh yes they do. Science is all about guessing the hidden rules of the world we live in, and I heard some minor actors have a little interest in R&D in the Shadowrun setting.
This is not your D&D where the wizard probing your magical sword would say "hmmm, it's very powerful". No matter what you ask for, somebody somewhere has already benchmarked it.
All you could say is that maybe this knowledge is not freely available and only the AAA have it.
Posted by: Mordinvan Jul 1 2010, 01:57 PM
QUOTE (Traul @ Jul 1 2010, 04:44 AM)

All you could say is that maybe this knowledge is not freely available and only the AAA have it.
Given there are a few public A.I.s who are trying to make a name for their kind in this world, it's highly unlikely they would ONLY give interviews to the AAAs.
Posted by: hermit Jul 1 2010, 02:32 PM
QUOTE
Oh yes they do. Science is all about guessing the hidden rules of the world we live in, and I heard some minor actors have a little interest in R&D in the Shadowrun setting.
No, they do not. At least not the rules for playable AI and such shenanigans.
Posted by: Mordinvan Jul 3 2010, 07:22 AM
QUOTE (hermit @ Jul 1 2010, 07:32 AM)

No, they do not. At least not the rules for playable AI and such shenanigans.
Yep, none of them have ever been interviewed by the media, had any translations of simsense made from their experiences, or interacted with the public in meaningful ways, like say as a professional cab driver, or a T.V. celebrity in the Desert Wars, or anything.
Posted by: hermit Jul 3 2010, 07:57 AM
So you know the anatomy of a cab driver from being driven in a cab. You know the physical limits of the human body by watching a couple of people talk on Oprah.
That is like saying people can learn dragon magic because hey, Wyrm Talk.
Post-Crash new-and-improved AI have yet to be extensively studied, and those results have then yet to be made public. Neither has happened sufficiently so far.
Posted by: Mordinvan Jul 3 2010, 10:42 AM
QUOTE (hermit @ Jul 3 2010, 12:57 AM)

So you know the anatomy of a cab driver from being driven in a cab. You know the physical limits of the human body by watching a couple of people talk on Oprah.
That is like saying people can learn dragon magic because hey, Wyrm Talk.
Post-Crash new-and-improved AI have yet to be extensively studied, and those results have then yet to be made public. Neither has happened sufficiently so far.
No, it's more like saying 'they are not gods because they drive a taxi'. It's a reasonable statement. Why would an omnipotent entity bother driving a cab? It places limits on them in the eyes of the general public. It also puts a name and a 'face' on the concept of A.I.s.
In the years since their coming, many people have had a chance to interact with them. There are 50,000+ of them on the planet. Of those, at least a few are likely willing to talk and answer questions about themselves and the others of their kind they have met. Also, their interactions with the nodes in which they live can be, and likely HAVE been, carefully studied, and every process which occurs in those nodes rigorously analyzed. There is no reason they should be as unknown as you say they are. Given it should be possible to create simsense recordings of their experiences, it should also not be entirely unreasonable that some people will have had a chance to experience the mind of an A.I. first hand.
This is not to say some people won't still hate them, but they will have a hard time using the argument of 'fear the unknown'.
Posted by: blackwulf Jul 3 2010, 03:28 PM
Correct me if I am wrong: we are talking about a world where a big chunk of the population gets no education at all. Seattle Barrens, anyone? Another big chunk gets educated by the corps; the company teachers wouldn't teach the company line at all, of course not. I find it truly amazing how many of you buy into the holding-hands, hugging-and-kissing point of view. I find it incredible when you play characters who kill off large numbers of innocent 9-to-5 schmucks just trying to earn a living in every adventure you play. I doubt the security guard you hosed in the last adventure committed any capital crimes. But you believe everyone is going to hug, kiss and love AIs. This is a world where human life is pretty much valueless, and you think Joe Blow is going to care about AIs? What are you thinking?
Posted by: DeathStrobe Jul 3 2010, 04:38 PM
QUOTE (Mordinvan @ Jul 3 2010, 11:42 AM)

This is not to say some people won't still hate them, but they will have a hard time using the argument of 'fear the unknown'.
Well, to be fair, magic has been around in the 6th World a lot longer than AIs, and there is still a lot of fear and paranoia in the general public about magic. So seeing how even magic isn't very well accepted, what makes you think AIs would be magically understood by everyone in 2 years, when magic hasn't been able to manage that in 60-some years?
Even if people have seen AIs on the trid and in simsense, that's how people normally get their information on magic, and as Street Magic has shown, it's usually very wrong information.
And most people have never even met an AI, because they're supposed to be even rarer than magic users, and considering very few people have seen a mage cast magic, odds are most people have never even seen an AI in "person."
A lot of people don't have time to read up on every article on AIs, and with the smear campaigns that some of the megacorps are running against AIs and technomancers, there is very little chance that people would understand, if only because of all the misinformation being spread about them. The only places that AIs and technomancers are accepted are in Horizon and Evo corporate enclaves.
Posted by: Matsci Jul 3 2010, 05:26 PM
QUOTE (DeathStrobe @ Jul 3 2010, 09:38 AM)

A lot of people don't have time to read up on every article on AIs, and with the smear campaigns that some of the megacorps are running against AIs and technomancers, there is very little chance that people would understand, if only because of all the misinformation being spread about them. The only places that AIs and technomancers are accepted are in Horizon and Evo corporate enclaves.
Keep in mind that Horizon (the corp that P.R. got onto the CC) is running a counter-smear campaign against that. It's not like they own 90% of all broadcasting stations and have a reason to back technomancers and AIs.
Posted by: Mordinvan Jul 3 2010, 11:28 PM
QUOTE (DeathStrobe @ Jul 3 2010, 09:38 AM)

Well, to be fair, magic has been around in the 6th World a lot longer than AIs, and there is still a lot of fear and paranoia in the general public about magic. So seeing how even magic isn't very well accepted, what makes you think AIs would be magically understood by everyone in 2 years, when magic hasn't been able to manage that in 60-some years?
Because that BP oil spill in the Gulf is reasonably well understood by anyone who cares to, and it's not been terribly long at all. The fact is information travels at the speed of light in a digital age, and many of the writers of SR forget that. Spellbooks and training manuals are openly available online, and as such anyone who has the slightest interest could have a reasonably complete understanding of how magic works. That so few do is, in my opinion, a fault of the setting. The knowledge is out there, and anyone with access to a commlink can have it; the writers have said no one knows about it, but there is no setting justification for such a statement.
Posted by: Mordinvan Jul 3 2010, 11:33 PM
QUOTE (blackwulf @ Jul 3 2010, 08:28 AM)

Correct me if I am wrong: we are talking about a world where a big chunk of the population gets no education at all. Seattle Barrens, anyone? Another big chunk gets educated by the corps; the company teachers wouldn't teach the company line at all, of course not. I find it truly amazing how many of you buy into the holding-hands, hugging-and-kissing point of view. I find it incredible when you play characters who kill off large numbers of innocent 9-to-5 schmucks just trying to earn a living in every adventure you play. I doubt the security guard you hosed in the last adventure committed any capital crimes. But you believe everyone is going to hug, kiss and love AIs. This is a world where human life is pretty much valueless, and you think Joe Blow is going to care about AIs? What are you thinking?
The greater the access to knowledge present in any society, the more tolerant that society becomes. At least in the Western world. Anyone with a commlink has access to the vast majority of all practical, philosophical, and historical knowledge of the combined collective of the human species.
Posted by: hobgoblin Jul 4 2010, 11:58 AM
QUOTE (Mordinvan @ Jul 4 2010, 01:33 AM)

The greater the access to knowledge present in any society, the more tolerant that society becomes. At least in the Western world. Anyone with a commlink has access to the vast majority of all practical, philosophical, and historical knowledge of the combined collective of the human species.
if one can be bothered to look it up, rather than repeat "truisms" from their peers.
Posted by: MortVent Jul 4 2010, 12:29 PM
QUOTE (hobgoblin @ Jul 4 2010, 07:58 AM)

if one can be bothered to look it up, rather than repeat "truisms" from their peers.
Edwards: Why the big secret? People are smart. They can handle it.
Kay: A person is smart. People are dumb, panicky dangerous animals and you know it. Fifteen hundred years ago everybody knew the Earth was the center of the universe. Five hundred years ago, everybody knew the Earth was flat, and fifteen minutes ago, you knew that humans were alone on this planet. Imagine what you'll know tomorrow.
Posted by: Walpurgisborn Jul 6 2010, 04:37 PM
QUOTE (Mordinvan @ Jul 3 2010, 06:28 PM)

Because that BP oil spill in the Gulf is reasonably well understood by anyone who cares to, and it's not been terribly long at all. The fact is information travels at the speed of light in a digital age, and many of the writers of SR forget that. Spellbooks and training manuals are openly available online, and as such anyone who has the slightest interest could have a reasonably complete understanding of how magic works. That so few do is, in my opinion, a fault of the setting. The knowledge is out there, and anyone with access to a commlink can have it; the writers have said no one knows about it, but there is no setting justification for such a statement.
Actually, that's a perfect example of the Dunning-Kruger effect. Truth is, most people are fundamentally ignorant of the nature of the engineering problems involved in the Gulf spill. That hasn't stopped some rather intelligent friends from voicing their own "solutions", many of which are infeasible and could very possibly make the situation worse.
Posted by: hermit Jul 10 2010, 11:34 AM
QUOTE
Actually, that's a perfect example of the Dunning-Kruger effect. Truth is, most people are fundamentally ignorant of the nature of the engineering problems involved in the Gulf spill. That hasn't stopped some rather intelligent friends from voicing their own "solutions", many of which are infeasible and could very possibly make the situation worse.
Nukes! Nukes solve every problem.[/russian Accent]
QUOTE
Given it should be possible to create simsense recording off their experiences, it should also not be entirely unreasonable that some people will have had a chance to experience the mind of an A.I. first hand.
And where do you get the idea that a simsense recording from an AI could be processed by the human brain with any accuracy at all? That's like saying you could run MS Office for Windows 3.11 on any system because hey, it's a program! Even IF there was a way to create simsense recordings (basically, recordings of potentials and neuron rearrangement in a brain, which the AI does not have) from an AI, I see little way that a human brain could figure them out, with the user ending up with anything but weirdness and some sort of potentially hazardous biofeedback.
QUOTE
Of those, at least a few are likely willing to talk, and answer questions about themselves and the others of their kind they have met. Also their interactions with the nodes in which they live can and likely HAVE been carefully studied, and every process which occurs in that node has been rigorously analyzed.
And yet they cannot be copied, nor can their program output be understood. This flies in the face of how AI work. They have been studied since the 2050s, and yet no one has figured them out so far. Add in their willingness to commit genocide on a whim because they feel mistreated by the studies you just proposed to understand them, and your pink-shades wonderland of everyone loving AI falls flat on its face.
QUOTE
This is not to say some people won't still hate them, but they will have a hard time using the argument of 'fear the unknown'.
Never proposed that as a prime motivation. My motivation is 'fear of what we know of them so far', which isn't exactly an entry that is likely to endear AI to anyone.
QUOTE
The greater to access of knowledge present in any society, the more tolerant that society becomes. Atleast in the western world.
That is just extremely naive. Even within the Western world, currently, it's working in the opposite direction. Not to mention places like the Middle East, where only access to reasonably free media gave rise to the current hyper-fanaticism and intolerance. Same with the rest of the Islamic world, and to a lesser degree in Russia and India, where neo-Nazi parties are rising to power because most people feel threatened by the 'free' information that is often understood as the West imposing its values on them.
Posted by: Mordinvan Jul 10 2010, 07:43 PM
QUOTE (hermit @ Jul 10 2010, 04:34 AM)

And where do you get the idea that a simsense recording from an AI could be processed by the human brain with any accuracy at all? That's like saying you could run MS Office for Windows 3.11 on any system because hey, it's a program! Even IF there was a way to create simsense recordings (basically, recordings of potentials and neuron rearrangement in a brain, which the AI does not have) from an AI, I see little way that a human brain could figure them out, with the user ending up with anything but weirdness and some sort of potentially hazardous biofeedback.
I don't actually see how a simsense recording of a human brain could work anyway; the brain of each and every human is 'wired' differently. Each neural net creates connections based on how it specifically adapts to a situation. The fact is, no two human brains work the same way, yet simsense works. So I invoke the same levels of handwavium and plot convenience the game writers do.
QUOTE
And yet they cannot be copied, nor can their program output be understood. This flies in the face of how AI work. They have been studied since the 2050s, and yet no one has figured them out so far. Add in their willingness to commit genocide on a whim because they feel mistreated by the studies you just proposed to understand them, and your pink-shades wonderland of everyone loving AI falls flat on its face.
I don't think they'd care about having the integral processes of the CPU monitored. It's when their code gets dissected that they tend to get pissy. I can only imagine you'd find people 'looking' at you far less problematic than the idea of someone cutting into your arm while you are still attached to it and using it. Being observed does not cause torturous agony; being vivisected, however, does. That you equate the two is comical. Also, I don't recall saying everyone 'loves' A.I.s; I do recall saying ignorance of the capabilities of this new breed of A.I. is not going to be as profound as you claim. Since an increase in understanding of a group often decreases the fear of it, and there are many reasons and ways to understand these creatures, there is a good chance that while not everyone will want to download them into their sex toyz, fewer people will have the genocidal hatred you are proposing.
QUOTE
Never proposed that as a prime motivation. My motivation is 'fear of what we know of them so far', which isn't exactly an entry that is likely to endear AI to anyone.
You are proposing a nearly nonexistent knowledge base, and I have repeatedly and justifiably said that knowledge base is far more expansive than you are willing to accept.
QUOTE
That is just extremely naive.
No, it's actually a fairly well-informed opinion based on evolutionary, psychological, and anthropological evidence.
QUOTE
Even within the western world, currently, it's working in the opposite direction.
Really? Can you name a single concept where coming to understand that it is less dangerous than previously believed has made people MORE afraid of it?
QUOTE
Not to mention places like the Middle East, where only access to reasonably free media gave rise to the current hyper-fanatism and intolerance.
Yes and no, and for entirely different reasons. I am also of the opinion that a complete discussion of the intricate nature and causes of this would not only a) prove you totally and completely wrong, but b) violate the TOS, so we'll have to put discussions of the history of real-world religions on hold. I'll only add that Islam is not included in the Western philosophies I was speaking of.
QUOTE
Same with the rest of the islamic world
Which is easily explainable in terms of how their religion has functioned since about the 12th century.
QUOTE
and to a lesser degree in Russia and India, where neo-Nazi parties are rising to power because most people feel threatened by the 'free' information that is often understood as the West imposing its values onto them.
I can't speak for Russia, but Indian society as a whole would be suffering from the lower caste levels coming to realize the needless position that has been imposed upon them by the higher caste levels, and the higher caste levels realizing the threat that knowledge poses by empowering the lower classes. It is not a fear created by having knowledge, but a fear of what will happen when others have knowledge.
Posted by: hermit Jul 10 2010, 08:53 PM
QUOTE
I don't actually see how a simsense recording of a human brain could work anyway; each and every human brain is 'wired' differently. Each neural net creates connections based on how it specifically adapts to a situation.
Sure, but at least the human brain follows a basic scheme. An AI living in some household appliance's electronics would not. And please note that I am aware that full-VR tech generally has a couple of logical flaws, but the setting handwaves this for human brains, so we have to accept that.
QUOTE
I can only imagine you'd find people 'looking' at you far less problematic than the idea of someone cutting into your arm while you are still attached to it and using it. Being observed does not cause torturous agony; being vivisected does.
AIs have no nervous system. AIs have no pain receptors. Saying that something causes them pain is anthropomorphising them again. Look, AIs do not work like human bodies, whether they live in a toaster, a taxi cab, or a mainframe computer. You cannot constantly anthropomorphise them. That's a flawed argument. Find another that isn't flawed.
QUOTE
You are proposing a nearly nonexistent knowledge base, and I have repeatedly and justifiably said that knowledge base is far more expansive than you are willing to accept.
You are proposing that somehow everyone knows everything about AI, even though the background clearly says that is NOT the case. Do you think pretending will make that go away? You have repeatedly denied this because it does not suit you. That does not make your claims any more substantial. So please, accept that AIs do not have human physical functions: they don't need to shit, they don't need to eat, they will never get fat or starve, and they cannot feel pain the way humans do.
QUOTE
No, it's actually a fairly well-informed opinion based on evolutionary, psychological, and anthropological evidence.
Certainly. As is the inevitability of communist world revolution.
QUOTE
I am also of the opinion that a complete discussion of the intricate nature and causes of this would not only a) prove you totally and completely wrong, but b) violate the TOS, so we'll have to put discussions of the history of real-world religions on hold. I'll only add that Islam is not included in the Western philosophies I was speaking of.
Yeah, Middle East = religion. Conveniently ignoring Russia and India, both being as religious as the States, or less. Yeah, I can totally see you winning there, but I happen to agree on the TOS.
QUOTE
Which is easily explainable in terms of how their religion has functioned since about the 12th century.
You never read about European late medieval/early Renaissance history, I assume. Also, fail. Who was talking about violating the TOS again?

QUOTE
Really? Can you name a single concept where coming to understand that it is less dangerous than previously believed has made people MORE afraid of it?
Iraq in the early 2000s, for instance. It's not like the information that there were no WMDs was very secret. Yet America was in all-out attack mode because there just might be a stash of anthrax hidden somewhere. Or take France and its Muslim population. It is well understood that France has a couple of rather dire social problems, but those are being ignored for the sake of keeping to a seriously outdated law (ironically, against discrimination). Or maybe, let's take global warming? It's not like the lack of viability of many of the more outrageous predictions is secret, yet the West believes the Earth will light on fire any second. Or maybe nuclear technology? There are well-described and even tested reactor concepts that will never explode, and even the one meltdown that ever occurred happened because of unbelievable incompetence, not because the technology is inherently prone to exploding, yet everyone is hysterical about it. Or, hey, electric cars. Internet file sharing. Cellphone radiation. Gene-modified food. Videogames.
Really, the world is not short of examples.
QUOTE
Indian society as a whole would be suffering from the lower caste levels coming to realize the needless position that has been imposed upon them by the higher caste levels, and the higher caste levels realizing the threat that knowledge poses by empowering the lower classes.
Wow. Fail.
But you replicate a socialist or Maoist textbook brilliantly. No, Hinduism does not work that way, and India has many problems, but caste upheaval is not among them (Maoism is, though). The fact that many Shudra are dissatisfied and do not want to live many lives, as their religion teaches, but achieve some sort of status in this one, has been around since the Mughals at least, and has been fueling conversions to Islam, the rise of Buddhism and Sikhism, and more recently, Maoist militias. However, this has never been a threat to Hinduism as a whole since Siddhartha, and it isn't now, either.
You impose your very Western worldview on a culture that is fundamentally different in outlook. That will not get you very far in understanding them. But since you anthropomorphise AI as much as you do, I can only conclude you are rather set in your views and unwilling or unable to accept that your view of the world is neither the only valid one nor the only possible truth.
Posted by: Mordinvan Jul 11 2010, 03:05 AM
QUOTE (hermit @ Jul 10 2010, 01:53 PM)

Sure, but at least the human brain follows a basic scheme. An AI living in some household appliance's electronics would not. And please note that I am aware that full-VR tech generally has a couple of logical flaws, but the setting handwaves this for human brains, so we have to accept that.
But accepting that it could be made to work with an A.I. is beyond you?
QUOTE
AIs have no nervous system. AIs have no pain receptors. Saying that something causes them pain is anthropomorphising them again. Look, AIs do not work like human bodies, whether they live in a toaster, a taxi cab, or a mainframe computer. You cannot constantly anthropomorphise them. That's a flawed argument. Find another that isn't flawed.
Every description we have of the 'dissection' of the original A.I.s refers to the process as anything but pleasant, and bad enough to drive them insane.
QUOTE
You are proposing that somehow everyone knows everything about AI, even though the background clearly says that is NOT the case. Do you think pretending will make that go away? You have repeatedly denied this because it does not suit you. That does not make your claims any more substantial. So please, accept that AIs do not have human physical functions: they don't need to shit, they don't need to eat, they will never get fat or starve, and they cannot feel pain the way humans do.
I didn't say everyone knows everything. But I do say they are not the utter black box you propose. How can they be, when they're appearing on talk shows and being actively monitored in systems they've been invited into? Also, when did I ever say they had physical functions? I said they can feel pain, and that notion is supported in canon.
QUOTE
Certainly. As is the inevitability of communist world revolution.
No, more like gravitational and atomic theory.
QUOTE
You never read about European late medieval/early Renaissance history, I assume. Also, fail. Who was talking about violating the TOS again?

That would be YOU, for insisting on bringing religion into the picture in the first place. Also, it's reasonably well known that the Islamic world underwent a rather radical shift around the 12th century.
QUOTE
Iraq in the early 2000s, for instance. It's not like the information that there were no WMDs was very secret. Yet America was in all-out attack mode because there just might be a stash of anthrax hidden somewhere. Or take France and its Muslim population. It is well understood that France has a couple of rather dire social problems, but those are being ignored for the sake of keeping to a seriously outdated law (ironically, against discrimination). Or maybe, let's take global warming? It's not like the lack of viability of many of the more outrageous predictions is secret, yet the West believes the Earth will light on fire any second. Or maybe nuclear technology? There are well-described and even tested reactor concepts that will never explode, and even the one meltdown that ever occurred happened because of unbelievable incompetence, not because the technology is inherently prone to exploding, yet everyone is hysterical about it. Or, hey, electric cars. Internet file sharing. Cellphone radiation. Gene-modified food. Videogames.
And NOT a single one of the above is an example of someone feeling something was dangerous, gaining enough knowledge to conclude it was not, and then feeling more terrified because of it.
QUOTE
Wow. Fail.
I will agree with this statement but almost certainly not for the reasons you would.
As for the last part of your comment, I hold the views I do because of discussions I have had with my neighbors who left that country for the reasons I have given: some because they realized the class structure was intolerable, and some because they realized that soon it would break down.
Posted by: hermit Jul 11 2010, 06:18 AM
QUOTE
But accepting that it could be made to work with an A.I. is beyond you?
Since it cannot even work with dragons, corporeal spirits, or other intelligent critters, yes, I would say this is impossible (even if it is rules-wise possible, thanks to the rules just not covering it).
QUOTE
No, more like gravitational and atomic theory.
Sociology does not even begin to come close to that kind of precise description.
QUOTE
Also, it's reasonably well known that the Islamic world underwent a rather radical shift around the 12th century.
Sure, what with Baghdad burned to the ground and the radicals claiming (successfully) that it happened because of a lack of faith. That does not negate that the Christianity of that time was the more fanatic, radical, and warlike religion by far.
QUOTE
And NOT a single one of the above is an example of someone feeling something was dangerous, gaining enough knowledge to conclude it was not, and then feeling more terrified because of it.
People are dead afraid of gene-modified foods because 'there's genes in them' and they think - all evidence to the contrary - those wheats and tomatoes are going to turn into the plant from Little Shop of Horrors somehow, and yet they eat organic food, which is especially rich in the natural toxins a plant produces when something nibbles on it - some of which would not pass EU agrochemical regulations if they had not been built into wheat for 9,000 years. That has also been proven. Does that stop anyone from eating organic? Cellphones have been proven in several studies not to cause anything like cancer or brain haemorrhages, yet the craziness about cell station radiation is nowhere near disappearing, quite the contrary. Same with video games (which are still far more often blamed for stupid kids shooting up people than the automatic weapons you apparently can buy at an American flea market), or nuclear power. France knows full well it has problems with its immigrants but cannot actually do anything about it, because there's a law prohibiting the state from acknowledging it has them (let alone acting on it), because that would be racial discrimination. Internet file sharing has been shown in several cases to actually boost print sales - feel free to PM Adam about this - and yet the publishing industry is dead afraid of it.
All these are examples of situations where something is proven decidedly less dangerous than believed, but that proof is widely ignored because of hysteria or one taboo or another. Prejudice and preconception, social norms and taboos usually win out against scientific proof of harmlessness.
QUOTE
I said they can feel pain, and that notion is supported in canon.
To feel pain you need a working nervous system. If you don't have one, you won't feel pain (it's an actual condition associated with certain neural dysfunctions, and pretty awful, actually). Pain is a physical function. An AI may feel all kinds of unpleasantness, but that cannot be pain as a human feels it, because they just do not have the hardware to feel it (though, the brain being a hardware computer, they lack the hardware to be anthropomorphised in general).
QUOTE
As for the last part of your comment, I hold the views I do because of discussions I have had with my neighbors who left that country for the reasons I have given. Because they realized the class structure was intolerable and some because they realized that soon it would break down.
I hold the views I stated because of the Indians in India I know who did not pack up and leave for a foreign country. Going by your guy is kind of like judging the stability of America based on talking to John Walker Lindh. Not saying your guy is a terrorist or anything, but he apparently is on as bad a standing with his roots as that guy was.
Posted by: DeathStrobe Jul 11 2010, 06:40 AM
To help add to the point that AIs cannot make simsense in the way we'd understand it: AIs can NOT jump in to drones, because they ain't got no brains.
Runner's Companion p88
QUOTE
While a metasapient may reside in a drone, and even use a drone as its home node, it may not “jump into” a drone or other rigged device, as it has no motor cortex with which to interface.
So if an AI can't even jump in to a drone because of a hardware issue like having no brain, I say it can't make a simsense recording of itself for the same reason.
Really now, why would anyone want to make AIs boring by removing prejudices against them? It makes for a boring role to play if you don't get little bits of flavor like that into the mix.
Posted by: Mordinvan Jul 11 2010, 07:28 AM
QUOTE (hermit @ Jul 10 2010, 11:18 PM)

Since it cannot even work with dragons, corporeal spirits, or other intelligent critters, yes, I would say this is impossible (even if rules-wise possible thanks to the rules just not covering it).
Pilot drones can jump into and operate a drone just as a rigger does. Riggers use simsense. This implies that the signals the A.I. uses and the signals the rigger uses are not completely different, as they are being produced by the same systems of the drone for the same application.
QUOTE
Sociology does not even begin to come close to that kind of precise description.
I was actually referring to evolutionary biology, anthropology, and psychology. When three very different fields of study all give you the same answer, you can safely assume there is some level of validity to it. You, however, seem to be of the opinion I'm using the Communist Manifesto.
QUOTE
Sure, what with Baghdad burned to the ground and the radicals claiming (successfully) that it happened because of a lack of faith. That does not negate that the Christianity of that time was the more fanatic, radical, and warlike religion by far.
This does not invalidate my statement.
QUOTE
Cellphones have been proven in several studies not to cause anything like cancer or brain haemorrhages, yet the craziness about cell station radiation is nowhere near disappearing. Same with video games, or nuclear power. France knows full well it has problems with its immigrants but cannot actually do anything about it, because there's a law prohibiting the state from acknowledging it has them (let alone acting on it), because that would be racial discrimination.
I'm sorry, but I don't see those fears expressed by anyone who actually has a clue about the topics at hand, which is actually where the core of my point comes from. You claim everyone hates A.I.s because truthful information about their capabilities and limitations does not exist. You claim that someone with access to this information would fear and hate them more. You claim you can provide examples of this all over the world. You point to a pile of knowledge, then point to a group of people who do not know it, and say that group is more fearful of the concept because that knowledge exists. Not only have you totally failed to meet the criteria of the requested example, but you're attempting to strawman the hell out of it.
QUOTE
To feel pain you need a working nervous system. If you don't have one, you won't feel pain (it's an actual condition associated with certain neural dysfunctions, and pretty awful, actually). Pain is a physical function. An AI may feel all kinds of unpleasantness, but that cannot be pain as a human feels it, because they just do not have the hardware to feel it (though, the brain being a hardware computer, they lack the hardware to be anthropomorphised in general).
Find me a RAW passage which expressly states an A.I. is incapable of feeling pain, to counteract the RAW claim that Megaera was driven mad by the agony of the experience of being dissected. You will note I never said 'feel pain like a human'. Dogs don't feel pain like humans; they feel pain like dogs. So why would I expect an A.I. to feel pain like a human? I would expect it to feel pain like an A.I.
QUOTE
All these are examples of situations where something is proven decidedly less dangerous than believed, but that proof is widely ignored because of hysteria or one taboo or another.
I didn't ask for evidence of large masses of ignorant humans lashing out in fear; if I wanted that, I'd go to a crowded theater and call out a bomb threat. You claim all people should hate A.I.s because of the example of Deus, regardless of how much knowledge a person possesses that directly contradicts the notion that a particular A.I. is dangerous at all. You feel that a publicly known A.I. whose abilities and limitations are a matter of public record, and who only desires to drive a cab, is somehow going to be feared by everyone who has met it and had a chance to view its operating processes on a diagnostic report, somehow mistaken for a deity-level matrix entity that requires a network of supercomputers to run properly. You claim that someone who sees a shadow in their room at night, turns on the lights, and sees it's a harmless stuffed rabbit will then run away screaming because it's a harmless stuffed rabbit. That is what you have been saying.
QUOTE
I hold the views I stated because of the Indians in India I know who did not pack up and leave for a foreign country. Going by your guy is kind of like judging the stability of America based on talking to John Walker Lindh. Not saying your guy is a terrorist or anything, but he apparently is on as bad a standing with his roots as that guy was.
For starters, I don't live in America; second, it wasn't one guy, but multiple families over the course of two decades, in addition to discussions with many students at the various schools I've taught at.
Posted by: Mordinvan Jul 11 2010, 07:32 AM
QUOTE (DeathStrobe @ Jul 10 2010, 11:40 PM)

To help add to the point that AIs cannot make simsense in the way we'd understand it: AIs can NOT jump in to drones, because they ain't got no brains.
Pilot origin quality.
QUOTE
Really now, why would anyone want to make AIs boring by removing prejudices against them? It makes for a boring role to play if you don't get little bits of flavor like that into the mix.
I'm not arguing there should be none. I am arguing it should NOT be total, however. Given the technical and media realities of the SR universe, there will be a wealth of knowledge gathered about how A.I.s interact with computer systems, how they interact with humanity, how they think, and even what 'feelings' they are capable of.
Posted by: hermit Jul 11 2010, 07:40 AM
QUOTE
Pilot drones can jump into and operate a drone just as a rigger does. Riggers use simsense.
No. See DeathStrobe's post.
QUOTE
Pilot origin quality.
Rules contradiction. Needs errata (gah, SR4 is worse than SR3 in this). Besides, for that to work in the same spirit, the AI would need another edge, along the lines of Simsense Imprint Producer. Is there such an edge?
QUOTE
I was actually referring to evolutionary biology, anthropology, and psychology. When three very different fields of study all give you the same answer, you can safely assume there is some level of validity to it. You, however, seem to be of the opinion I'm using the Communist Manifesto.
No, but you are taking from those three fields - two of which are about the opposite of science, and one of which is a fringe area - and interpreting them the same way lefties in the 1970s and in the East did back in the day. There were meters and meters of books written on the scientific inevitability of global communist revolution, based on the anthropological, psychological, and evolutionary-biological studies of that time (the Nazis did that too, and the fundie Christians in the US are doing it right now).
QUOTE
I'm sorry, but I don't see those fears expressed by anyone who actually has a clue about the topics at hand, which is actually where the core of my point comes from.
Given that these studies are publicised in well-read media, the average person has been exposed to them about as much as the average person in the SR universe is supposed to have been exposed to AI since 2070. You, however, seem to assume everyone in 2070 is somehow an expert AI computer scientist (or at least, that every decision maker is).
QUOTE
You claim all people should hate A.I.s because of the example of Deus, regardless of how much knowledge a person possesses that directly contradicts the notion that a particular A.I. is dangerous at all.
No. I am saying all people in power should consider AI a threat because so far, most AIs they know of have been openly hostile to humanity for no discernible reason, out of clumsiness and a cruel sense of curiosity. I am saying that taking such creatures' words of friendship at face value and trusting them unconditionally makes no sense. I am saying that such creatures would need to be controlled and/or preemptively destroyed.
QUOTE
so why would I expect an A.I. to feel pain like a human?
Because you have repeatedly compared AI 'pains' to human vivisection and the like?
QUOTE
Not only have you totally failed to meet the criteria of the requested example, but you're attempting to strawman the hell out of it.
Ah, claims and insults, the resort of someone with a decisive lack of arguments.
I have met your criteria, which were: "example of someone feeling something was dangerous, gaining enough knowledge to conclude it was not, and then feeling more terrified".
Changing those criteria and insinuating what I supposedly say about AI does not change this.
QUOTE
For starters, I don't live in America, second it wasn't 1 guy, but multiple families over the course of 2 decades, in addition to discussions with many students at the various schools I've taught at.
That aside, you seem to take your ideas from talking to a specific social group (Harijans?), who emigrated because they felt oppressed (rightfully, in all likelihood). Accept that they are not very representative of the society as a whole, much as talking to Chinese city functionaries will not tell you a damn thing about how the countryside lives in that country.
Posted by: Sengir Jul 11 2010, 09:44 AM
QUOTE (Sengir @ Jun 29 2010, 01:46 PM)

Oh, and could one of the "THE END IS NIGH!!!!!!!!!!!!!!!!!111111111" shouters be so kind as to enlighten me why Shadowrun AIs have to follow the old world domination/destruction trope? It seems you guys are really disappointed by the fact that the new AIs have neither the intent nor (because they are little more than hackers without meat bodies) the means to blow up the world, but why should each and every fictional AI do that?
I'm still waiting for an answer, BTW...
Posted by: Inpu Jul 11 2010, 10:13 AM
A few key points:
QUOTE
AIs have no nervous system. AIs have no pain receptors. Saying that something causes them pain is anthropomorphising them again. Look, AIs do not work like human bodies, whether they live in a toaster, a taxi cab, or a mainframe computer. You cannot constantly anthropomorphise them. That's a flawed argument. Find another that isn't flawed.
They do, however, have an instinctual need to survive, like other self-aware entities. Letting someone at the core code means they can threaten the AI's continued survival, and the act of observing such complicated code and taking it apart changes and threatens the intelligence that lives within it. With no understanding of how the code sustains intelligence, any single line of code may be the key. Any change, even separation, may annihilate the entity that was and replace it with a new one. Such a being would work under entirely different laws.
This is not necessarily arguing against Hermit's point. They do not have a nervous system, but they are more than the sum of their individual parts in ways we might not understand, and any tinkering may cause what could be understood as 'madness' when they change, or 'pain' when they fight that change and will themselves to survive.
QUOTE
Rules contradiction. Needs Errata (gah, SR4 is worse than SR3 in this). Besides, for that to work in the same spirit, the AI would need another Edge, along the lines of Simsense Imprint Producer. Is there such an Edge?
This is not a good argument. It is a waving away of an established rule that does not support your argument; it is conveniently ignored.
Also, to acknowledge that the setting handwaves certain aspects, such as simsense, and then not acknowledge the same for another part of the system is not entirely logical.
Anyways, that's my two cents. Please continue.
Posted by: hermit Jul 11 2010, 10:33 AM
QUOTE
They do however have an instinctual need to survive, like other self-aware entities.
No. Self-awareness decreases instinct, not increases it. Every organism of any level of organisation has a self-sustaining drive. Since AIs are not organisms, however, and developed from something entirely different, that need not necessarily apply to them.
QUOTE
Letting someone at the core code means they can threaten the AI's continued survival, and the act of observing such complicated code and taking it apart changes and threatens the intelligence that lives within it. With no understanding of how the code sustains intelligence, any single line of code may be the key. Any change, even separation, may annihilate the entity that was and replace it with a new one. Such a being would work under entirely different laws.
This is not necessarily arguing against Hermit's point. They do not have a nervous system, but they are more than the sum of their individual parts in ways we might not understand, and any tinkering may cause what could be understood as 'madness' when they change, or 'pain' when they fight that change and will themselves to survive.
I agree here, but such concepts cannot be compared to human experience, and given their vastly different backgrounds, two AIs may work on very different perceptions of disassembling and restructuring, too. Anthropomorphising them and comparing them to persecuted human minorities, comparing them being analysed to humans being vivisected, just does not fly. Some AIs may purposely anthropomorphise themselves to manipulate human perception, but they are so far from humans in their basic 'anatomy' that even a dragon seems like close kin compared to them. Hence, just because one says it feels pain does not mean whatever it may or may not feel remotely resembles pain as understood by biological sentients descended from chordates.
QUOTE
This is not a good argument. It is a waving away of an established rule that does not support your argument; it is conveniently ignored.
Also, to acknowledge that the setting handwaves certain aspects, such as simsense, and then not acknowledge the same for another part of the system is not entirely logical.
SR4's matrix rules are so full of illogical constructs - AIs cannot be copied, AIs work according to "machine magic" called Resonance, technomancers - that you have to accept it works as it does. You cannot, however, use this as a basis to assume something NOT covered by the rules also works, just because it is equally illogical. There is a specific exception to "AIs may not use simsense tech because they have no brain" - the drone pilot edge. There is NO such edge for simsense recording. Hence, since we have to accept the SR4 matrix with all its fallacies as is, there is no AI simsense recording.
Posted by: Inpu Jul 11 2010, 10:57 AM
QUOTE
No. Self-awareness decreases instinct, not increases it. Every organism of any level of organisation has a self-sustaining drive. Since AIs are not organisms, however, and developed from something entirely different, that need not necessarily apply to them.
Not entirely true in this case. Almost no matter how it is argued, in this case self-awareness would increase instinct, because they had literally no instinct before that moment. A program will work itself into oblivion if it isn't told not to. AIs have been shown to have a vested interest in their continued existence in Shadowrun. Plus, you are not accounting for emotion in this. While I cannot fathom what form of emotion they would have, or in what shade, the same sort of illogical goals and patterns that drive life drive AIs in Shadowrun.
QUOTE
I agree here, but such concepts cannot be compared to human experience, and given their vastly different backgrounds, two AIs may work on very different perceptions of disassembling and restructuring, too. Anthropomorphising them and comparing them to persecuted human minorities, comparing them being analysed to humans being vivisected, just does not fly. Some AIs may purposely anthropomorphise themselves to manipulate human perception, but they are so far from humans in their basic 'anatomy' that even a dragon seems like close kin compared to them. Hence, just because one says it feels pain does not mean whatever it may or may not feel remotely resembles pain as understood by biological sentients descended from chordates.
Yes. I agree for the most part.
QUOTE
SR4's matrix rules are so full of illogical constructs - AIs cannot be copied, AIs work according to "machine magic" called Resonance, technomancers - that you have to accept it works as it does. You cannot, however, use this as a basis to assume something NOT covered by the rules also works, just because it is equally illogical. There is a specific exception to "AIs may not use simsense tech because they have no brain" - the drone pilot edge. There is NO such edge for simsense recording. Hence, since we have to accept the SR4 matrix with all its fallacies as is, there is no AI simsense recording.
I also find it strange they cannot be copied, but it seems that a trigger event is needed. There were other ways to go about it, but the basic question is: would an AI wish to copy itself, thereby foregoing improvement and inviting weakness due to a lack of evolution?
The rules specifically state when something is not doable in most cases. A GM must make a call either way, based on each given situation. If it does not say SimSense cannot be done, then there is no basis for it not being doable. It would fall to individual preference at that point, with some GMs making it into an odd experience and others saying it fries a person outright. I do not see why they wouldn't be able to record an experience with SimSense, but I do agree that it would likely harm a person using it if they went Hot. When going cold, they are mostly looking at strange imagery, which could be useful to psychologists in theoretical fields.
My support for not excluding the possibility is that the rules have no means of simulating when someone coughs, sneezes involuntarily, trips, or the like while performing an action as simple as walking down the sidewalk. Glitches can be interpreted as such, if your GM decides to make you roll every time you walk, but that rather proves the point. Simply because there is no edge does not mean it is impossible.
Essentially, it boils down to opinion.
Posted by: hermit Jul 11 2010, 11:29 AM
QUOTE
Not entirely true in this case. However it is argued, self-awareness would increase instinct here, because they had literally no instinct before that moment. (...) A program will work itself into oblivion if it isn't told not to, and AIs have been shown to have a vested interest in their continued existence in Shadowrun.
If they had no instinct before, why should they suddenly have one, just because organisms do? They can have a conscience, be aware of themselves, and be willing to self-preserve, but where would that instinct come from?
QUOTE
Plus, you are not accounting for emotion in this. While I cannot fathom what form of emotion they would have, or in what shade, the same sort of illogical goals and patterns that drive life drive AI in Shadowrun.
They might just as well only emulate emotion to interact with metahumanity. Whatever they may or may not feel would not necessarily be close to human. I guess that is where xenosapient and metasapient AIs differ: metasapients have a built-in emotion/anthropomorphism emulator running and operate on it, while xenosapients don't.
That, however, still does not make them 'human' or even just sapient in the sense metahumans, dragons et al are.
QUOTE
The rules specifically state when something is not doable in most cases. A GM must make a call either way, based off of every given situation. If it does not say SimSense cannot be done, then there is no basis for it not being doable. It would fall to individual preference at that point, with some GMs making it into an odd experience and others saying it fries a person outright.
We differ in our understanding of what the rules say here. Just because the rules do not explicitly say a metahuman cannot have babies with a car does not mean you can safely assume this may well be the case, does it? I'd expect something like this to be explicitly stated if possible, since a general rule forbids simsense for AIs, with one specific, named exception. It's the same with Initiative Passes: nobody may have more than four (save for 'mancers and riggers). Just because 'mancers and riggers got that fifth pimped-up IP in the Matrix does not mean you can have it as a mundane character, does it?
The rules say an AI may not jump into a drone because it lacks a motor cortex to control it. Then they offer an exception: 'except if the AI buys the edge Former Pilot Program.' That is a clear-cut exception to the "no simsense applications for AIs" rule, not an open rule with one possible definition of what may go.
QUOTE
Essentially, it boils down to opinion.
To interpretation of the rules as either a fundamentally open set of suggestions or fixed rules the world works by. So yes, opinion, if you will.
Posted by: Inpu Jul 11 2010, 11:44 AM
Where should it not come from? The argument, either way, is built on assumptions. We do not know, because we do not have a clear-cut example. It is just as likely to have or emulate instinct as any other being. Besides, by our definitions of life, if it is considered alive it would have a need to preserve itself once it has become aware. When something is self-destructive or uncaring of itself, something has gone wrong. Deus is an excellent example of self-preservation, and a Feral AI will fight for its own survival. In the real world, perhaps it would not work like this, but we're talking about the setting.
QUOTE
They might just as well only emulate emotion to interact with metahumanity. Whatever they may or may not feel would not necessarily be close to human. I guess that is where xenosapient and metasapient AI differ: Metasapients have a built in emotion/anthropomorphism emulator running and operate on it, while xenosapients don't.
That, however, still does not make them 'human' or even just sapient in the sense metahumans, dragons et al are.
Emulate or mimic well enough and it becomes the same thing, as you noted. Emotions are not necessarily a uniquely human experience in any case, so it is difficult to argue that an AI is being anthropomorphised merely by virtue of its having any similarities, or perceived similarities, to us.
I'd like to point out I'm not arguing for them to be taken as human, metahuman, or the like. They are as alien as a spirit.
QUOTE
We differ in our understanding of what the rules say here. Just because the rules do not explicitly say a metahuman cannot have babies with a car does not mean you can safely assume this may well be the case, does it? I'd expect something like this to be explicitly stated if possible, since a general rule forbids simsense for AIs, with one specific, named exception. It's the same with Initiative Passes: nobody may have more than four (save for 'mancers and riggers). Just because 'mancers and riggers got that fifth pimped-up IP in the Matrix does not mean you can have it as a mundane character, does it?
The rules say an AI may not jump into a drone because it lacks a motor cortex to control it. Then they offer an exception: 'except if the AI buys the edge Former Pilot Program.' That is a clear-cut exception to the "no simsense applications for AIs" rule, not an open rule with one possible definition of what may go.
Nor do they cover basic assumptions, so they must be judged accordingly. As stated before, the rule does not specifically cover a large range of things, to which the core book admits.
There is only ever one golden rule: if it doesn't work, fix it. Story before rule if the need is there. But even by the rules, if there is a single exception, then that exception proves that it is possible.
Thank you for your time. This has been very enjoyable as well as cordial, hermit.
Posted by: hermit Jul 11 2010, 11:57 AM
QUOTE
Besides, by our definitions of life, if it is considered alive it would have a need to preserve itself once it has become aware. When something is self-destructive or uncaring of itself, something has gone wrong.
Not necessarily, but that would be nitpicking. Yes, that is the general definition of life (and a point where I am not entirely sure one could consider an AI life at all). I guess this falls under "the SR4 matrix is strange and full of fallacies", which we may have to accept.
QUOTE
Emulate or mimic well enough and it becomes the same thing, as you noted.
To an outside observer, yes, but that's quickly descending into existential philosophy and, hence, mudslinging.
QUOTE
I'd like to point out I'm not arguing for them to be taken as human, metahuman, or the like. They are as alien as a spirit.
I would argue they are even more alien than that, but within the setting, accepting it at face value for the sake of it not falling apart entirely, I guess you are right there.
QUOTE
There is only ever one golden rule: if it doesn't work, fix it. Story before rule if the need is there. But even by the rules, if there is a single exception, then that exception proves that it is possible.
I agree. However, my 'fix' would be 'it's not possible' for anything but the pilot edge. But YMMV.
De rien. This was a fun discussion indeed.
Posted by: DeathStrobe Jul 11 2010, 11:57 PM
QUOTE (Inpu @ Jul 11 2010, 11:44 AM)

Where should it not come from? The argument, either way, is built on assumptions. We do not know, because we do not have a clear-cut example. It is just as likely to have or emulate instinct as any other being. Besides, by our definitions of life, if it is considered alive it would have a need to preserve itself once it has become aware. When something is self-destructive or uncaring of itself, something has gone wrong. Deus is an excellent example of self-preservation, and a Feral AI will fight for its own survival. In the real world, perhaps it would not work like this, but we're talking about the setting.
I don't think all AIs should care about self-preservation per se. What they care about is what they were programmed to do, usually. Feral AIs don't care so much about themselves as about protecting their home nodes, because they usually emerged from Black IC, and that's what Black IC was designed for: defending nodes. So that's what they care about, keeping unknown users out of their nodes. They're more territorial than concerned with self-preservation.
The only reason an AI would really care about its own mortality would probably be that being dead would inhibit its ability to do the tasks it likes (was programmed) to do. Most AIs don't care about the meat world. They only care about the Matrix.
QUOTE
I'd like to point out I'm not arguing for them to be taken as human, metahuman, or the like. They are as alien as a spirit.
I think that's totally accurate. AI are basically Matrix "Spirits," especially e-ghosts.
Posted by: Mordinvan Jul 12 2010, 12:51 AM
QUOTE (DeathStrobe @ Jul 11 2010, 04:57 PM)

I don't think all AIs should care about self-preservation per se. What they care about is what they were programmed to do, usually. Feral AIs don't care so much about themselves as about protecting their home nodes, because they usually emerged from Black IC, and that's what Black IC was designed for: defending nodes. So that's what they care about, keeping unknown users out of their nodes. They're more territorial than concerned with self-preservation.
The only reason an AI would really care about its own mortality would probably be that being dead would inhibit its ability to do the tasks it likes (was programmed) to do. Most AIs don't care about the meat world. They only care about the Matrix.
I really hate to break this to you, but that is why humans fear death as well. We are 'programmed' to procreate, and being dead prevents us from doing that.
QUOTE
I think that's totally accurate. AI are basically Matrix "Spirits," especially e-ghosts.
I would dispute this. The original function of an A.I.'s program was conceived by humans, and as such the A.I.'s motivations are likely to be comprehensible. Spirits, on the other hand, can want to accomplish strange tasks for truly absurd reasons. It's entirely possible for a spirit to have the drive to soak all the cotton candy in the world in liquid uranium because footballs are hollow. A.I.s, in comparison, are far less alien than that.
Posted by: hermit Jul 12 2010, 12:57 AM
Deleted for lack of relevance.
Posted by: DeathStrobe Jul 12 2010, 03:26 AM
QUOTE (Mordinvan @ Jul 12 2010, 12:51 AM)

I really hate to break this to you, but that is why humans fear death as well. We are 'programmed' to procreate, and being dead prevents us from doing that.
Nah, your generalization doesn't work, because we've broken the chains of evolution by allowing the weak to live. A lot of people are merely a burden on society and contribute nothing, but still fear death. People who do not plan on having children, who are homosexual, or who have a medical condition preventing them from having children also fear death.
AIs wouldn't fear death, but would merely be annoyed that whatever task they were working on would not be completed. This is all just hypothetical, though. But seeing how most AIs are not too concerned with reproduction, why would they fear death, by your logic?
QUOTE
I would dispute this. The original function of an A.I.'s program was conceived by humans, and as such the A.I.'s motivations are likely to be comprehensible. Spirits, on the other hand, can want to accomplish strange tasks for truly absurd reasons. It's entirely possible for a spirit to have the drive to soak all the cotton candy in the world in liquid uranium because footballs are hollow. A.I.s, in comparison, are far less alien than that.
I see no reason why an AI that worked at a liquid uranium plant and came into contact with a football for the first time might not draw the same insane conclusions as that spirit. It would be a stretch, but mildly plausible.
Posted by: Inpu Jul 12 2010, 06:20 AM
QUOTE (DeathStrobe @ Jul 12 2010, 01:57 AM)

I don't think all AIs should care about self-preservation per se. What they care about is what they were programmed to do, usually. Feral AIs don't care so much about themselves as about protecting their home nodes, because they usually emerged from Black IC, and that's what Black IC was designed for: defending nodes. So that's what they care about, keeping unknown users out of their nodes. They're more territorial than concerned with self-preservation.
The only reason an AI would really care about its own mortality would probably be that being dead would inhibit its ability to do the tasks it likes (was programmed) to do. Most AIs don't care about the meat world. They only care about the Matrix.
That would be reason enough, though. Also note I said most: even in the meat, there are a number of people who are self-destructive, such as psychopathic individuals. Hence, something gone wrong. Even if it is just to preserve something else, that still comes with the need to exist and the acknowledgment of that fact. Absolute unreasoning fear, or indifference, falls somewhat into the realm of personal experience. Just as with currently observable intelligence, an AI can be self-destructive as well. It can even choose to sacrifice itself, which is also interesting.
I think the real divergence is goals. Getting back to some of the original points, I do not think people would seek to annihilate AIs, because they do not have a surefire way to do so without the AIs fighting back. The megas are afraid of what might come of such a thing, since all kinds of nasty information could be made public or destroyed. AIs would be fearfully tolerated, I think, and thus have good reason to hide what they are when contacting other individuals. Personally, I wonder what role they would play against the Horrors.
QUOTE (Mordinvan)
I would dispute this. The original function of an A.I.'s program was conceived by humans, and as such the A.I.'s motivations are likely to be comprehensible. Spirits, on the other hand, can want to accomplish strange tasks for truly absurd reasons. It's entirely possible for a spirit to have the drive to soak all the cotton candy in the world in liquid uranium because footballs are hollow. A.I.s, in comparison, are far less alien than that.
I considered this, but while they spring from human minds, they are the product of human hands, and thus inevitably have flaws that are often overlooked and that may have a massive impact on their lines of thought. Some may be more 'human' in their design than others, but even those would have, as hermit points out, an entirely different life experience, and would thus very quickly end up as something that perceives the world from another angle. It is a small minority that falls into the metasapient category compared to the Ferals or Xenosapients, so it takes an extra leap for them to end up on the same wavelength, as it were.
Posted by: Mordinvan Jul 12 2010, 08:36 AM
QUOTE (DeathStrobe @ Jul 11 2010, 08:26 PM)

Nah, your generalization doesn't work because we've broken the chains of evolution by allowing the weak to live. A lot of people are merely a burden on society and thus contribute nothing, but still fear death. As well as people that do not plan on having children or are homosexual, or may have a medical condision preventing them from having children, they also fear death.
Wow, I guess you know nothing about evolution, then. Evolution works on general rules applied to a population of genes. It does not function on individuals. Given that, by and large, NOT being killed makes you more successful at passing on your genes, people have a tendency to NOT want to be killed.
QUOTE
AIs wouldn't fear death, but merely be annoyed that what ever task they were working on would not be completed. This is all just hypothetical though. But seeing how most AIs are not too concerned with reproduction, why would they fear death, using your logic?
You're asking me how a program, which according to the fluff of the game performs cognitive actions it should not otherwise be capable of, feels an emotion it has no evolutionary reason to feel. Do you realize I could say anything and be potentially correct, as there is no fluff or crunch that contradicts my statement?
QUOTE
I see no reason why an AI that worked at a liquid uranium plant that came in to contact with a football for the first time might not be able to draw the same insane conclusions as that spirit. It would be a stretch, but mildly plausible.
Because A.I.s are programs. Programs require logic to function. For this to occur, you would have to be able to draw a flow chart of premises and conclusions which logically flow from one to the next, connecting the premise "footballs are hollow" with the conclusion "coating cotton candy in liquid uranium is a good idea". Given the access to the Matrix most A.I.s have, and the ability to fact-check this provides, most of the premises in this flow chart would also need to be true. So while it may be 'possible' for an A.I. to come to this conclusion, I would not call it 'probable' in the slightest for a metasapient to do so. The trail of premises would have to dip into some highly improbable/impracticable territory before such a conclusion would be reached.
Posted by: hermit Jul 12 2010, 08:40 AM
QUOTE
We are 'programmed' to procreate, and being dead prevents us from doing that.
Oh please. What do many religious functionaries (Catholic priests, Buddhist monks, sadhus, or the Virgo Vestalis) have in common? You see, it's broad generalisation and total disregard for historical and social fact that makes evolutionary biology the joke it is. If we were 'programmed' to procreate, society would not exist as it does, since many social functions - essential ones, too - require a different focus than procreation. Marriage severely limits procreative activity, as does raising children, for instance.
Human society works far more like a bee hive than a pack of apes. Check out how many bees actually procreate in their lives.
QUOTE
The original function of the A.I.'s program was conceived by humans, and as such the A.I.'s motivations are likely to be comprehensible.
That just kills the entire concept of Emergence. Not that I'd mind, but since your entire humanisation of AI depends on Emergence, I'd be careful not to obliterate it like that.
AIs are, by the setting's definition, not comprehensible programs, and do not work like programs ought to, by way of computer magic. AIs are computer-magical beings, not programs. They are, if anything, the spirits of "Resonance" computer magic.
QUOTE
It's entirely possible for a spirit to have the drive to soak all the cotton candy in the world in liquid uranium because footballs are hollow.
Source? Because I'd really like to see a type of spirit that would do that. What would that be, a toxic free spirit of cotton candy?
QUOTE
A.I.s, in comparison, are far less alien than that.
Especially Xenosapient AIs. Sorry, but something with as extremely different an environment and experience as an AI cannot possibly be as comprehensible as a spirit, which shares the same life experiences as at least 2% of metahumanity.
QUOTE
Personally, I wonder what role they would play against the Horrors.
The Horrors arguably are dissonant. Hence, they become corrupted or are destroyed, just like anything else.
QUOTE
Wow, I guess you know nothing about evolution, then. Evolution works on general rules applied to a population of genes. It does not function on individuals. Given that, by and large, NOT being killed makes you more successful at passing on your genes, people have a tendency to NOT want to be killed.
Wow, I guess you think Dawkins' long-obsolete thesis of the selfish gene is some sort of religious text.
QUOTE
Do you realize I could say anything and be potentially correct, as there is no fluff or crunch which contradicts my statement?
Do you realise we're talking about the canon setting and not whatever fanfiction you have in mind, and hence, what you think might be cool is totally irrelevant?
QUOTE
Because A.I.s are programs. Programs require logic to function.
They have to abide by the same set of rules of the universe as everything else, including SR's magic.
QUOTE
So while it may be 'possible' for an A.I. to come to this conclusion, I would not call it 'probable' in the slightest for a metasapient to do so.
And why should it be probable for a spirit?
Posted by: Inpu Jul 12 2010, 08:45 AM
QUOTE (hermit @ Jul 12 2010, 10:40 AM)

The horrors arguably are dissonant. Hence, they become corrupted or are destroyed, just like anything else.
Of that I have no doubt, but I was referring to the specific unique abilities they bring to the table and how the Horrors would change to react to it and whether or not Dissonance is related to the Horrors or another thing entirely. Of course, the definition of Horror is pretty wide.
I'm just fond of the peculiar brand of chaos the Matrix and assorted technologies might bring into the situation.
Posted by: Mordinvan Jul 12 2010, 08:53 AM
Human instincts are driven by the programming of our genes, which are shaped by natural selection, which is governed by long-term reproductive success. So while I did give the simple answer, I don't think anyone wanted me to give a four-year, 120-credit lecture on the topic, Hermit.
Computer programs do tasks which someone, somewhere, found desirable at some point in time. Thus an A.I. will likely feel an urge to complete tasks related to its original task, or intended to make its original task easier or more efficient, because that is what it was programmed to do. While it is possible some may truly be beyond human comprehension, it is unlikely that the human mind would be totally unable to connect intent and goal if the logical flow chart were adequately demonstrated.
Most books which reference spirits say their actions and motivations may be truly alien. Given that spirits are not constrained to logical thought, and do not even necessarily come from a metaplane where the laws of logic hold, it is entirely possible to generate any premise/conclusion pairing and have some spirit, somewhere, at some point in time, hold that notion to be true, because their minds need not rely on any form of comprehensible, or even 'physically possible', thought process to come to their conclusions.
Posted by: Inpu Jul 12 2010, 09:13 AM
QUOTE (hermit @ Jul 12 2010, 10:40 AM)

Oh please. What do many religious functionaries (Catholic priests, Buddhist monks, sadhus, or the Virgo Vestalis) have in common? You see, it's broad generalisation and total disregard for historical and social fact that makes evolutionary biology the joke it is. If we were 'programmed' to procreate, society would not exist as it does, since many social functions - essential ones, too - require a different focus than procreation. Marriage severely limits procreative activity, as does raising children, for instance.
Human society works far more like a beehive than a pack of apes. Check out how many bees actually procreate in their lives.
Mm, I disagree with you here. Remember that many social constructions fail. Marriages are often punctuated by cheating, and many do not marry but remain partners. Then there is the classic example of rape, where a person passes their genes on by force. This is largely looked down upon for a number of reasons, one of which is the idea that such an option is the method of a creature who cannot properly court a partner and is at risk of not passing on its unique genetic line.
Marriage is a convention that, like many of the dictates of the Bible, is there to encourage communities and breeding true, so that you are aware of your offspring. While a female always knows a child is hers, a male has no such certainty save the idea that he is the only mate. It is actually a rather clever means of fulfilling what Mordinvan said about genetic programming. Further, raising children is an important part of ensuring your genetic line survives.
A lot of this is already covered, of course. There are some very strong personal opinions on this subject, but for those of a religious bent it does not refute anything. This is more to point out that society exists the way it does because of the need to procreate, rather than despite it.
Posted by: hermit Jul 12 2010, 09:19 AM
QUOTE
Human instincts are driven by the programming of our genes, which are shaped by natural selection, which is governed by long-term reproductive success.
No. Life isn't that simple. That's not even the case with bacteria, let alone more highly organised life.
QUOTE
So while I did give the simple answer, I don't think anyone wanted me to give a four-year, 120-credit lecture on the topic, Hermit.
Oh, please, share the wisdom you so love to hint at. Either do, or shove such comments (and that doesn't even consider what a bunch of crap some lectures can be). As is, they make you seem like a smartass who thinks snark makes up for a lack of argumentative substance.
QUOTE
While it is possible some may truly be beyond human comprehension, it is unlikely that the human mind would be totally unable to connect intent and goal if the logical flow chart were adequately demonstrated.
It is like that in Shadowrun. The entire point of AI generation in SR rests on some "X-factor" stuff. You have to accept this at all times, not just when it suits your needs.
Also, most spirits are magical constructs generated by the human mind - or at least some sentient's mind - for a specific task, according to its own image. Why should spirits be less comprehensible than a computer program that is, by definition, changed beyond human understanding, which is what differentiates an AI from an agent?
QUOTE
Most books which reference spirits say their actions and motivations may be truly alien.
The same is written about AI. So?
QUOTE
Given that spirits are not constrained to use logical thought
Since when are AIs programmed by Vulcans?
QUOTE
it is entirely possible to generate any premise/conclusion pairing and have some spirit, somewhere, at some point in time, hold that notion to be true, because their minds need not rely on any form of comprehensible, or even 'physically possible', thought process to come to their conclusions.
I suppose cardinality is not a concept you are very familiar with?
Posted by: hermit Jul 12 2010, 09:39 AM
QUOTE
Marriages are often punctuated by cheating, and many do not marry but remain partners. Then there is the classic example of rape, where a person passes their genes on by force. This is largely looked down upon for a number of reasons, one of which is the idea that such an option is the method of a creature who cannot properly court a partner and is at risk of not passing on its unique genetic line.
The surge in divorce and unmarried partnerships is a phenomenon that is historically very new and limited to Western countries (the high divorce rate is mostly an American phenomenon, caused by the strong social pressure to marry young). Most cultures promote fixed partnerships.
Rape, on the other hand, is usually looked down upon because sex, among simians (which includes us), has stopped being all about procreation and is as much a social as a reproductive function (no matter how much Christianity hates that idea).
Limiting human behavior to purely reproductive function means narrowing your vision and, ultimately, failing to understand what you are looking at.
QUOTE
Marriage is a convention that, like many of the dictates of the Bible, is there to encourage communities and breeding true, so that you are aware of your offspring.
Marriage is not a concept upheld only by Christians, or even by religions derived from the teachings of Echnaton.
QUOTE
It is actually a rather clever means of fulfilling what Mordinvan said about genetic programming. Further, raising children is an important part of ensuring your genetic line survives.
Raising a child is primarily about passing on collective memory and the knowledge necessary for the child to function in society (something most developed societies collectivise partly away from the parents, too). If it were only about procreation, there would be no need to care for your offspring once they reach puberty. However, few societies can afford that - those that can are the least developed and organised. I'm not saying genetic impulses play no part, but singling them out as the only thing that drives people narrows it down too far.
Posted by: Inpu Jul 12 2010, 10:01 AM
QUOTE (hermit @ Jul 12 2010, 11:39 AM)

The surge in divorce and unmarried partnerships is a phenomenon that is historically very new and limited to Western countries (the high divorce rate is mostly an American phenomenon, caused by the strong social pressure to marry young). Most cultures promote fixed partnerships.
New largely because divorce is, from a historical point of view, also quite new. After all, it did not even exist as an option for quite some time and is a relatively recent development in the scheme of history. It is also a massive event in other countries, such as Japan, which these days has a high divorce rate pushed strongly by women. Cultures promote fixed partnerships, but do not often have them.
QUOTE
Rape, on the other hand, is usually looked down upon because sex, among simians (which includes us), has stopped being all about procreation and is as much a social as a reproductive function (no matter how much Christianity hates that idea).
We are one of two known species to couple for fun as well as for procreation, but that is linked largely to the pleasure/reward stimulus. Science has covered this pretty extensively. Again, it does not mean that everything else is false because something else is true, but it does mean that one thing is true in addition to other views. It never really stopped being about procreation: it is also about enjoyment and the connection between the two. With the advent of birth control, our minds can now enjoy the act without the responsibility, and we may thus couple with an individual not yet chosen as a mate. When a mate is chosen, decisions are made.
QUOTE
Limiting human behavior to purely reproductive function means narrowing your vision and, ultimately, failing to understand what you are looking at.
I am hardly saying it is the only reason anyone does anything, as these social constructions are certainly here for our comforts as well. Your reply stated that marriage limits procreation. I am arguing that it does not, rather than saying that it is the basis for all things and that nothing else may touch on the Human Condition. Procreation is in fact a large part of it, as we understand life, and is the basis for many religions as well. Birth is one of our most sacred and revered moments and societies were once strongly Matriarchal.
You yourself said that when self-awareness increases, instinct fades. That does not remove instinct: it remains. It is still a drive that is ever-present and part of the decisions we make. In this way does our condition become something both complex and simple. We fulfill the needs of instinct and make our own decisions about goals.
QUOTE
Marriage is not a concept that is upheld only by christians, or even any religions derived from the teachings of Echnaton.
But it is there to aid in the construction of a community. The Bible was an example, rather than the rule.
QUOTE
Raising a child is primarily about passing on collective memory and necessary knowledge to the child so it can function in society (something most developed societies collectivise in part away from the parents, too). If you're only talking about procreation, there would be no need to care for your offspring once they reach puberty. However, few societies can afford this - those that can are the least developed and organised.
And, in so doing, protect that child and see that it continues to survive. Procreation does not just mean the act: it means the result as well. Familial ties are thus very strong.
In any case, I believe I mentioned in an earlier post that this would need to be a goal for AI as well, at least one of the Metasapient variety. Rather than creating new AI, they must improve themselves to ensure their own survival. A large part of that comes from observed life. A Spirit even shares this in that they do everything to avoid Evanescence (I could make a corny joke about the band at this point, but I'll refrain).
QUOTE
Not saying genetic impulses do not play a part, but singling them out as the only thing that drives people is narrowing it down too far.
Of course, you seem to be saying pretty much the same thing, so it is more a definition of terms argument at this point. If you mean to say that it is not the end all, only drive pushing every sentient thought, then I agree: it is not. It plays a large part, but there are many other considerations.
On a happy side note, I seem to be out of the Probationary poster category and can now finally choose an avatar and signature. Celebration ensues.
Posted by: Mordinvan Jul 12 2010, 10:05 AM
QUOTE (hermit @ Jul 12 2010, 01:40 AM)

Oh please. What do many religious functionaries (catholic priests, buddhist monks, sadhus or the virgo vestalis) have in common? You see, it's broad generalisation and total disregard for historical and social fact that makes evolutionary biology the joke it is. If we were 'programmed' to procreate, society would not exist as it does, since many social functions - essential ones also - require to take a different focus than procreation. Marriage severely limits procreative activity, as does raising children, for instance.
Wow.... you really have no idea how evolution works, then. It's really quite simple. Let's say you have an ultra high sex drive and spend all day long doing nothing but having sex; let's also say, for the sake of argument, you're a male, just so we have the selection pressures of a single sex to deal with. If you did this without regard for consequences, your reproductive success would be low for a variety of reasons: a) because you place no energy in self-maintenance, you starve to death; b) because you place no energy in providing for your mates, they have a lower incentive to mate with you in the first place (you generate fewer embryos); c) because you place no energy in providing for your children, fewer of them survive to adulthood; d) because you are messing with other people's carefully developed pair bonding, you will quickly learn WHY homicide was a leading cause of death in the ancient world. Thus the combination of genes which led to your behavior in the first place has a lower chance of seeing itself reproduced when compared to people who put at least some energy into self-maintenance, mate and child support, and not going out of their way to piss off every other male in their tribe.
QUOTE
Human society works far more like a bee hive than a pack of apes. Check out how many bees actually procreate in their lives.
Actually no, it doesn't, not even close. Bees look after their sisters because, due to an oddity in bee genetics, the workers are actually more related to their sisters than they would be to their own daughters if they could actually procreate. Thus it is genetically beneficial for them to tend to the needs of their sisters.
QUOTE
That just kills the entire concept of Emergence. Not that I'd mind, but since your entire humanisation of AI depends on Emergence, I'd be careful not to obliterate it like that.
They also say A.I.'s inhabit systems like the one which gave 'birth' to them, and have a tendency to perform functions similar to the ones their parent program was intended for.
AI are, by the setting's definition, not comprehensible programs, and do not work like programs ought to, by way of computer magic. AI are computer-magical beings, not programs. They are, if anything, the spirits to "resonance" computer magic.
QUOTE
Source? Because I'd really like to see a type of spirit that would do that. What would that be, a toxic free spirit of cotton candy?
Any book which references spirits, as well as any that refers to the metaplanes as infinite. Given an infinite number of variable metaplanes, it is possible there are planes on which the laws of logic are not equivalent to our own, and thus the thought processes of such creatures will be truly alien. Shedim, for example, HATE metahuman life.... but 'why'?
QUOTE
Especially Xenosapient AI. Sorry, but something with such an extremely different environment and experience as AI cannot possibly be as comprehensible as a spirit, which shares the same life experiences that at least 2% of metahumanity do.
For starters, only 1% of metahumanity is Awakened; second, only a small subset of that are mages, a subset of that is initiated, and a subset of that have visited any significant number of metaplanes. That tiny fraction of humanity (say 1/10,000) will have visited a combined total of approximately 0% of all available metaplanes (yes, I DID mean ZERO), and as a result has practically no clue what kinds of different spirits can and do actually exist, the environments which bring those spirits into being (if they are actually made there), or what sort of effect a spirit's home metaplane has on its way of thinking and its use/flavor/presence of logic. I'm sorry, but nothing I can imagine could possibly be more alien than a creature from a place where the laws of physics as you know them, and the laws of logic, do not exist in any way you could conceivably recognize.
QUOTE
Wow, I guess you think Dawkins' long obsolete thesis of the selfish gene is some sort of religious text.
No, I just happen to think it is, at present, a reasonable summation of a hypothesis, supported by the theory of evolution, and it will likely continue to be so until the hypothesis is demonstrated to be incorrect.
QUOTE
Do you realise we're talking about the canon setting and not whatever fanfiction you have in mind, and hence, what you think might be cool is totally irrelevant?
Unless you can show me where in canon it says that A.I.s can NOT feel fear, and under no circumstances will be afraid of their own deaths, it is not unreasonable to assume that somewhere, at some point in time, some A.I. CAN feel fear and WILL be afraid of death. Since the game DOES allow for the possibility of this happening, and also emphatically states that NO ONE actually knows how an A.I.'s mind works from a physics standpoint, asking me to explain how a particular output could be generated from a particular input, when all the intervening steps are unknown, is an open invitation for me to engage in pointless technobabble, and any answer I give is potentially correct.
QUOTE
They have to abide by the same set rules in a universe as everything else, including SR's magic.
They are computer programs, and as such there are many reality-based filters which can and should reasonably be placed upon them. These filters are not entirely dissimilar to the ones that say bullets are consumed when used and cars cannot be poisoned.
QUOTE
And why should it be probable for a spirit?
If you wish to speak from a mathematical standpoint: because there are an infinite number of metaplanes which are in some way different from each other, and because there is some nonzero chance that any given metaplane will allow such a train of thought to be both possible and probable, it is actually a virtual certainty that some spirit in some metaplane would safely and rationally come to the stated conclusion from the premise of footballs being hollow, using whatever passes for logic in its home metaplane. There in fact doesn't even have to be anything 'toxic' about said spirit; it simply has to have a thought process adequately dissimilar to our own.
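The "nonzero chance times many trials" intuition behind this argument can be sketched numerically. This is a toy model, not anything from the books: it assumes independent trials with a fixed per-plane probability p (an arbitrary illustrative value), which is the strongest form of the assumption.

```python
# Toy model of the claim above: if each of n independent metaplanes
# has a fixed nonzero chance p of hosting a given kind of mind, the
# probability that at least one of them does is 1 - (1 - p)**n, which
# climbs toward certainty as n grows.
p = 1e-6  # arbitrary per-plane probability, chosen only for illustration

for n in (10**5, 10**6, 10**7):
    at_least_one = 1 - (1 - p) ** n
    print(f"n = {n:>8}: P(at least one) = {at_least_one:.6f}")
```

Note the sketch leans on independence and on p being bounded away from zero; whether either assumption carries over to infinitely many metaplanes is exactly the point hermit disputes below.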
Human thoughts are shaped by our genes and our environment. Our genes are shaped by how they have interacted with the environment. As a result, it comes down to environment ultimately anyway. Our environment is shaped by the true laws of physics (however closely or distantly they relate to our known laws of physics), and as a result we are defined in every way that matters by the physical laws of our universe.
In the case of spirits, their thought processes are often shaped in distant metaplanes by forces no human may have yet experienced, let alone comprehends, and as a result their minds can be totally and completely alien to anything a human has ever experienced, or even COULD ever experience. It's possible that a spirit's mind may be so alien that no thought it has ever had, or will ever have, would be comprehensible to any human, regardless of the amount of explanation given by some kind of knowledgeable third party who could understand both us and the spirit in question.
Posted by: hermit Jul 12 2010, 10:23 AM
QUOTE
If you wish to speak from a mathematical standpoint, because there are an infinite number of metaplanes which are in some way different from eachother, and because it is only some nonzero chance that a any given metaplane will allow such a train of thought to be both possible
If you wish to speak from a mathematical standpoint, I again refer you to cardinality. Even though there are infinitely many numbers in N, you will never encounter square root(2) among them. In other words, just because something has infinitely many elements does not necessarily mean it contains everything.
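hermit's point here - infinite does not mean exhaustive - can be shown with a trivial sketch, using the even numbers in place of N (an analogy only, nothing to do with the SR rules):

```python
import itertools

# The even numbers form an infinite set, yet 3 never appears in it,
# no matter how far you enumerate. An infinite collection need not
# contain everything.
evens = (2 * n for n in itertools.count())   # infinite generator
first_million_evens = set(itertools.islice(evens, 1_000_000))

print(3 in first_million_evens)   # False
print(4 in first_million_evens)   # True
```

Enumerating further never changes the answer for 3, which is the whole point about cardinality versus membership.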
QUOTE
No, I just happen to think it is, at present, a reasonable summation of a hypothesis, supported by the theory of evolution, and will likely continue to do so, until the hypothesis is demonstrated to be incorrect.
It is, like many things Dawkins writes, overly simplistic, and as such follows the train of thought of religion. Dawkins being an American with a background in WASP culture, that may be inevitable, but he is much closer to the religion he so loathes than he realises. The selfish gene is built on the one gene, one enzyme dogma, which was disproven a while ago. Genes, as such, do not make up the entirety of inheritance after all (expression being a major factor, along with inherited promoted/inhibited expression). Expression, however, is an environmental influence and not related to the genes as such, selfish or not.
QUOTE
They also say A.I.'s inhabit systems like the one which gave 'birth' to them, and have a tendency to perform functions similar to the ones their parent program was intended for.
Much like spirits tend to stick around the domain they belong to. Your point being?
QUOTE
Human thoughts are shaped by our genes and our environment. Our genes are shaped by how they have interacted with the environment. As a result it comes down to environment ultimately anyway. Our environment is shaped by the true laws of physics(however closely or distantly they relate to our known laws of physics) and as a result we are defined in everyway that matters by the physical laws of our universe.
See, this is where your mistake lies. You say that AI are more understandable because they are part of a physical world you can understand. However, in the Shadowrun fictional universe, AI are part of a world that contains magic and computer magic, both tied to infinite (though not necessarily in the sense Douglas Adams loved to mistake it) different planes that may or may not be parallel universes.
Your thoughts are shaped by interaction with the true laws of physics. A Shadowrun person's, even a mundane's, are shaped by contact with different extensions to these true laws, one being magic, the other being computer magic. You cannot directly draw conclusions based on yourself as a model that are always true in the SR fictional universe.
Posted by: IKerensky Jul 12 2010, 10:44 AM
Well, I am divided on the AI topic, because to me they could be one of three things:
1- Real AI, purely technological.
2- Horrors in disguise, having found their way to our world through the magico-technico great pattern the Matrix has become.
3- Passions trapped into this very same great pattern.
So I would be very careful about them.
Posted by: Mordinvan Jul 12 2010, 10:48 AM
QUOTE (hermit @ Jul 12 2010, 02:19 AM)

No. Life isn't that simple. That's not even the case with bacteria, let alone higher organised life.
I'm guessing you don't have a very strong background in molecular biology, or neuropsychology do you?
QUOTE
Oh, please, share your wisdom you so love to hint to. Either do, or shove such comments (and that doesn't even consider what a bunch of crap some lectures can be). As is, they make you seem like a smartass who thinks snark makes up for lack of argumentative substance.
I seriously do not have the time needed to type out the hundreds of pages of scientific papers and textbooks needed to completely and fully explain the 'basic' concepts in proper detail. As I said, to get the idea you need several courses in genetics, several courses in evolutionary biology, several courses in psychology, several courses in anthropology, and a smattering of courses in various other related disciplines to really get the idea. Since I do not have the time, and Dumpshock does not have the space, and no one would bother to read all the pages of notes I took during those courses, or read all the course packs and relevant textbook chapters, for the sole purpose of a Dumpshock argument, you're just going to have to educate yourself, or take my word for it.
QUOTE
It is like that in Shadowrun. The entire point of AIs generation in SR rests on some "X-Factor" stuff. You have to accept this at all times, not just when it suits your needs.
Actually much of this stuff is intentionally left open ended so that a GM may fill in the missing pieces as needed to suit their needs.
QUOTE
Also, most spirits are magical constructs generated by the human mind - or at least something sentient's mind - for a specific task, according to their own image. Why should spirits be less comprehensible than a computer program that is, by definition, changed beyond human understanding, wich ist what differs an AI from an agent?
Actually, it is only thought that it is 'possible' that some spirits are made that way. Given that only a finite number of humans have ever lived (about 60 billion), there are at most about 1.3*10^21 unique thoughts which could ever have occurred, if everyone had 10 unique thoughts per second every second of their entire lives and we all lived to be 70. This would place a finite bound on how many spirits would exist if each and every thought created one spirit. However, the book indicates the metaplanes are infinite in number and infinite in size. Even if each plane had one and ONLY one spirit on it, that would still be an infinite - (1.3*10^21) = infinite number of spirits you would still need to account for. Each of them would come from a plane which, as I said before, need not share any laws of physics or rules of logic in common with our own plane, thus making them truly alien.
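The bound here is just multiplication; a quick sketch of the arithmetic with the post's own inputs (60 billion humans ever, 10 thoughts per second, 70-year lives - the figures are the post's, not canon):

```python
# Upper bound on the number of unique human thoughts, using the
# figures from the post: ~60 billion humans ever, 10 thoughts per
# second, 70-year lifespans. Plain arithmetic, nothing more.
humans_ever = 60e9
thoughts_per_second = 10
seconds_per_year = 365.25 * 24 * 3600    # ~3.16e7
lifespan_years = 70

total_thoughts = (humans_ever * thoughts_per_second
                  * seconds_per_year * lifespan_years)
print(f"{total_thoughts:.2e}")   # on the order of 10^21 -- large, but finite
```

However large, a finite bound subtracted from an infinity leaves an infinity, which is the point being made.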
As for what makes an A.I. different from an agent? The book is clear, in that the A.I. program does 'something' it is not supposed to. What that 'something' is, or 'why' it is done, is at least for the moment shielded from public view by 'plot'.
QUOTE
The same is written about AI. So?
A.I. come from computers and programs, both of which were created in the same physical universe which gave rise to our minds, and in fact were created BY our minds. For spirits, this is not necessarily true.
QUOTE
Since when are AI programmed by Vulcans?
All microprocessors work on logic gates. Since an A.I. MUST live in a processor, A.I.s require logic gates to exist. Thus some piece of the A.I. must interact with the processor in a logical fashion. Since everything thus far known that can be logically explained is comprehensible to humans adequately trained to understand logic, it seems reasonable that an organism which requires logic to exist, and as such likely acts in some form of logical fashion, can be understood by someone with an adequate understanding of logic. It's just 'logical'.
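The "it all bottoms out in gates" claim can be made concrete: NAND alone is functionally complete, so every other boolean operation a processor performs can be assembled from it. A toy illustration (nothing about how SR hosts actually work):

```python
# NAND is functionally complete: NOT, AND, OR, XOR -- and by extension
# any boolean circuit a processor runs -- can all be built from it.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

def xor(a: bool, b: bool) -> bool:
    # true exactly when the inputs differ
    return and_(or_(a, b), nand(a, b))

# Truth table for XOR, derived entirely from NAND:
for a in (False, True):
    for b in (False, True):
        print(a, b, xor(a, b))
```

Following a real program through millions of such gates is another matter entirely, which is roughly where the two sides of this argument part ways.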
QUOTE
I suppose cardinality is not a concept you are very familiar with?
It is part of mathematics. It requires logic, as humans understand it, to be true in order for it itself to be true. I also do not understand how specifically categorizing the number of elements in a particular set is of any assistance in this discussion.
Posted by: Inpu Jul 12 2010, 11:02 AM
QUOTE (IKerensky @ Jul 12 2010, 12:44 PM)

Well, I am divided on the AI topic, because to me they could be one of three things:
1- Real AI, purely technological.
2- Horrors in disguise, having found their way to our world through the magico-technico great pattern the Matrix has become.
3- Passions trapped into this very same great pattern.
So I would be very careful about them.
I like the Passions idea immensely.
QUOTE (Mordinvan @ Jul 12 2010, 12:48 PM)

I'm guessing you don't have a very strong background in molecular biology, or neuropsychology do you?
I seriously do not have the time needed to type out the hundreds of pages of scientific papers and textbooks needed to completely and fully explain the 'basic' concepts in proper detail. As I said, to get the idea you need several courses in genetics, several courses in evolutionary biology, several courses in psychology, several courses in anthropology, and a smattering of courses in various other related disciplines to really get the idea. Since I do not have the time, and Dumpshock does not have the space, and no one would bother to read all the pages of notes I took during those courses, or read all the course packs and relevant textbook chapters, for the sole purpose of a Dumpshock argument, you're just going to have to educate yourself, or take my word for it.
I'd like to point out that I would happily read that should you post it. I'm ever curious.
On a side note, Mordinvan, my argument for AI being as alien as Spirits is human error in the coding and the logic gates. If coded incorrectly, or if there is a hitch in the code that results in an AI, then it will not work as expected or intended. While logical in its own right, it is logical to the AI and not necessarily to the one who programmed it. These errors can cause breaks in the code which, for a normal program, would usually crash it, but may work for an AI. So it has the potential to become just as alien.
Another point is that pure logic is also alien to humans, just as pure chaos is. While a human can look at code and say "this AND gate is what makes it make this decision", they will not likely understand a being who perceives through those gates, from within them, rather than as an outside observer. Also, due to the unknown factor that gives AI 'life', we are unsure as to whether or not the code is changed. In fact, evidence from the setting suggests that it does change, given that people are not able to understand it and the AI is unable to copy it. It becomes a different thing entirely, and so has the potential to be exceptionally alien, as it may not even rest on logic anymore.
Posted by: Mordinvan Jul 12 2010, 11:15 AM
QUOTE (hermit @ Jul 12 2010, 03:23 AM)

If you wish to speak from a mathematical standpoint, I again refer you to cardinality. Even given there are infinite numbers in N, you will never envounter square root(2) in that magnitude. In other words, just because something has infinite elements does not necessarily mean it contains everything.
Actually, if you have an infinite number of numbers, you SHOULD encounter the square root of 2 an infinite number of times, so long as your 'set' does not expressly exclude irrational numbers. As no form of spirit thought process or environment is actually excluded in the texts we thus far have describing them, anything you can think up should exist, as well as anything you can't think up.
QUOTE
It is, as many things Dawkins writes, overly simplistic and as such follows the train of thought of religion. Dawkins being an American with a background in WASP culture, that may beinevitable, but he is much closer to the religion he so loathes than he realises. The selfish gene is built on the one gene, one enzyme dogma, which has already been disproven a while ago. Genes, as such, do not make up the entirety of inheritance after all (expression being a major factor, and inherited promoted/inhibited expression). Expression, however, is an environmental influence and not related to the genes as such, selfish or not.
Actually, the selfish gene hypothesis is not at all disproven by the failure of the one gene, one enzyme hypothesis. I don't know how you think it could be. So long as a gene produces some gene product whose presence has a net beneficial impact on the reproductive success of the gene pool in which it resides, then that gene will improve not only its own reproduction but that of the entire pool. The gene, however, seeks simply to increase its own reproductive success, and the increase to the overall survival of the pool is an unintended but happy consequence of that. It is why things like cancers and viruses can exist, and it is actually one of the few hypotheses I am aware of which can reasonably explain the existence of such things.
QUOTE
Much like spirits tend to stick around the domain they belong to. Your point being?
My point being the original purpose of the A.I. was a function humans desired, in a system humans built. The A.I. will often seek to continue to fulfill its original function. As such its motivations are such that they should not be totally and completely beyond the capacity of a metahuman mind to comprehend. We may not understand them intuitively, but we should understand them if they are adequately explained to us.
QUOTE
See, this is where your mistake lies. You say that AI are more understandable because they are part of a physical world you can understand. However, in the shadowrun fictional universe AI are part of a world that contains magic and computer magic, both tied to infinite (though not necessarily in the sense Douglas Adams loved to mistake it) different planes that may or may not be parallel universes.
Yes, however, many spirits originate on planes completely alien to our own universe. All known A.I.s originate in computer programs, which themselves originate in our universe.
Also, due to the fact that magical research IS being done, and things can be learned which hold true under at least a finite subset of conditions, it is not unreasonable to assume that somehow our universe (or at least the gaiasphere) influences some aspects of magic to be reasonably static and comprehensible. Thus magical effects originating within our own gaiasphere seem to have rules governing them which can be learned and exploited. This need not be true of all metaplanes in the SR universe.
QUOTE
Your thoughts are shaped by the interaction with the true laws of physics. A shadowrun person'S, even a mundane's, ware shaped by contact with different extensions to these true laws, one of which being magic, the other being cmputer magic. You cannot directly draw conclusions based on yourself as a model that are always true in the SR fictional universe.
No, but until and unless a particular conclusion which is TRUE in our world is proven FALSE in SR, it is not unreasonable to assume it continues to be true. If that were not taken to be the case, RPGs would be practically impossible to play, and the rulebooks needed for them would fill multiple entire libraries.
Posted by: Mordinvan Jul 12 2010, 11:39 AM
QUOTE (Inpu @ Jul 12 2010, 04:02 AM)

I'd like to point out that I would happily read that should you post it. I'm ever curious.
Well, curiosity is WHY I took the courses in the first place, so I can understand that.
QUOTE
On a side note, Mordinvan, my argument for AI being as alien as Spirits is human error in the coding and the Logic Gates. If coded incorrectly, or if there is a hitch in the a code that results in an AI, then it will not work as expected or intended. While logical in its own right, it is logical to the AI and not necessarily to the one who programmed it. These errors can cause breaks in the code which, for a normal program, would usually crash it but may work for an AI. So it has the potential to become as alien.
Actually, that's the problem: if something is 'logical' once, it is always 'logical'. It need not always be correct, but one should be able to get from the initial premise to the final conclusion via a set of intermediate premises and conclusions. The problem will occur when one or more of the premises is incorrect. If any of the conclusions are incorrect, however, then someone is no longer using logic. The problem with chalking up sentience to programming error, or a manufacturer's defect in the processor, is that the first can be observed and readily understood by a good computer programmer, and the second would make A.I.s dependent on flawed systems to run in. Since the game indicates neither of these is strictly true, I do not feel either provides an adequate explanation. As I said previously, their inner workings are concealed by 'plot'.
QUOTE
Another point is that pure logic is also alien to humans, just as pure chaos is. While a human can look at a code and say "This And gate is what makes it make this decision", they will not likely understand a being who perceives through those gates, within them rather than an outside observer.
If by this you mean be able to engage in true empathy with a computer, then I am inclined to agree with you. However it does not mean that with an adequate understanding a human would be totally and completely unable to guess what type of output a given type of input would produce.
QUOTE
Also, due to the unknown factor that gives AI 'life', we are unsure as to whether or not the code is changed.
Given that an A.I. can be 'trapped' by simply deactivating the processor on a given node, you can just put it to sleep. It should then be possible to run a memory scan of the node, without actually activating the node itself, allowing you to understand what changes, if any, are occurring to the code of the original program.
QUOTE
In fact, evidence from the setting suggests that it does due to people not being able to understand it and the AI being unable to copy it. It becomes a different thing entirely and so has the potential to be exceptionally alien, as it may not even rest on logic anymore.
If the program requires logic gates to run, which it does, then it rests on logic. That logic may be many million or more premises in length, and thus very time-consuming to follow, but it must exist, or else A.I.s would not require processors and could exist in any conductive object as easily as in a laptop.
Posted by: Inpu Jul 12 2010, 12:04 PM
QUOTE (Mordinvan @ Jul 12 2010, 01:39 PM)

Well curiosity is WHY I took the courses in the first place, so I can understand that.
Excellent. Get posting.

QUOTE
Actually, that's the problem: if something is 'logical' once, it is always 'logical'. It need not always be correct, but one should be able to get from the initial premise to the final conclusion via a set of intermediate premises and conclusions. The problem will occur when one or more of the premises is incorrect. If any of the conclusions are incorrect, however, then someone is no longer using logic. The problem with chalking up sentience to programming error, or a manufacturer's defect in the processor, is that the first can be observed and readily understood by a good computer programmer, and the second would make A.I.s dependent on flawed systems to run in. Since the game indicates neither of these is strictly true, I do not feel either provides an adequate explanation. As I said previously, their inner workings are concealed by 'plot'.
Not necessarily true if the AI changes in the process of becoming an AI. While it can be said of Metasapients, it is not entirely certain for other types, such as Xenosapients. And if it is concealed by plot, then the world bends to the plot to explain it. As you said earlier, there are holes specifically for the GMs to play with. I believe this is one of them.
QUOTE
If by this you mean be able to engage in true empathy with a computer, then I am inclined to agree with you. However it does not mean that with an adequate understanding a human would be totally and completely unable to guess what type of output a given type of input would produce.
Empathy and understanding. If the code is so complex as to be alive, that means contradictory logic gates may exist. Or rather, the gates may be so complex as to fit every eventuality, becoming so open that they do not expressly lead to any foreseeable outcome. I know the science behind it, in the real world, means that it can be observed and converted into a gate equation, but if the equation covers sweeping statements then it is not as easily observed. For instance, the last part of the logic process may have a distinct intelligence that can change its gates at a whim. It may be as simple as that: an AI can change the formula as it goes, thus 'growing', such as it does with Karma.
QUOTE
Given that an A.I. can be 'trapped' by simply deactivating the processor on a given node, which merely puts it to sleep, it should then be possible to run a memory scan of the node without activating it, letting you see what changes, if any, are occurring to the code of the original program.
It is very possible that it is not understandable. The above argument still applies, for instance: the code may continuously shift. Or, as an interesting point for some GMs to play with, part of the code may be in Resonance only and thus not able to be understood. It is possible that is what can change their code and allow them to restructure themselves. A simple final gate that reads the binary of "Alive" or "not Alive". While a Technomancer might be able to see it, they are employed with less frequency than Deckers/Hackers and may just not have noted it yet. Speculative, but see below.
QUOTE
If the program requires logic gates to run, which it does, then it rests on logic. That logic may be many millions of premises in length, and thus very time-consuming to follow, but it must exist; otherwise, A.I.s would not require processors and could exist in any conductive object as easily as in a laptop.
See above statements. It comes down to the same, really. It cannot be denied that, in the setting, there is an unknown factor that makes it so it is not like a typical program. To argue otherwise is to say the setting is flat wrong about its interpretation of these entities. It has another layer that is beyond science. You may see it as the plot concealing it, but that is backward from how a GM should often look at these things: rather than say it can't happen and state that it is a hole, figure out why it did.
Posted by: hermit Jul 12 2010, 12:10 PM
QUOTE
All micro processors work on logic gates. Since an A.I. MUST live in a processor, A.I.'s require logic gates to exist.
If there was no mystical Resonance in SR, that would be a correct statement. However, as there is, it is false.
QUOTE
It is part of mathematics. It requires logic, as humans understand it, to be true in order for it itself to be true. I also do not understand how specifically categorizing the number of elements in a particular set is of any assistance in the discussion.
I'll try again. As I am not very familiar with mathematical terms in English, this may be a bit clumsy, though.
An infinite number of states does not necessarily mean every possible state is included. As an example, in a given system there are states a, b and c. Now, a magnitude could well be made up of infinite repetitions of a and b and be infinite, yet never include state c. Just because something is infinite does not necessarily mean it is all-encompassing. The magnitude N contains infinitely many non-repeating elements, yet many elements of the magnitude R are not included in N. Does that make N any less of an infinite magnitude?
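hermit's a/b/c example can be sketched numerically. A finite sample has to stand in for the infinite sequence here, so this is illustration rather than proof, and the sample sizes are arbitrary choices:

```python
# Illustrative only: an endless sequence over {a, b} never contains c,
# even though it never terminates. We can only inspect a finite prefix.
import itertools

infinite_ab = itertools.cycle("ab")  # yields a, b, a, b, ... forever
prefix = [next(infinite_ab) for _ in range(10_000)]
assert "c" not in prefix  # no prefix, however long, ever contains c

# Likewise, the naturals N are infinite yet exclude most elements of R:
naturals_sample = set(range(10_000))
assert 0.5 not in naturals_sample
```

Infinite, but provably not all-encompassing: exactly the distinction being drawn between "an infinite number of states" and "every possible state".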
QUOTE
If the program requires logic gates to run, which it does, then it rests on logic. (...) [U]ntil and unless a particular conclusion which is TRUE in our world is proven FALSE in S.R., it is not unreasonable to assume it continues to be true.
That computer processes always work according to logic gates has already been disproven by the whole Emergence/Resonance/technomancer/technomagic business.
QUOTE
Actually much of this stuff is intentionally left open ended so that a GM may fill in the missing pieces as needed to suit their needs.
So why, exactly, are you trying to impose your interpretation of the setting on everyone else again?
Posted by: Mordinvan Jul 12 2010, 12:28 PM
QUOTE (Inpu @ Jul 12 2010, 05:04 AM)

For instance, if the last part of the Logic process has a distinct intelligence that may change its gates at a whim. It may be as simple as that: that an AI can change the formula as it goes, thus 'growing', such as it does with Karma.
Yes, but since the node can be switched off at any point in time without killing the A.I., merely putting it to sleep, that code is then 'fixed' and can be read with a memory scan. I have no real problem with the idea of the code dynamically reconfiguring itself, as it would have to in order to store memories and to learn.
QUOTE
as an interesting point for some GMs to play with, part of the code may be in Resonance only and thus not able to be understood. It is possible that is what can change their code and allow them to restructure themselves. A simple final gate that reads the binary of "Alive" or "not Alive". While a Technomancer might be able to see it, they are employed with less frequency than Deckers/Hackers and may just not have noted it yet. Speculative, but see below.
This is actually a rather interesting possibility, and one which makes a great deal of sense. With the absence of trust in, and knowledge about, Technomancers, it does make sense that if part of what makes an A.I. alive is resonance, then the world would be largely ignorant of that fact, as few would think to ask a technomancer, and few mundanes would likely accept the answer as true. It DOES, however, raise the question of why no A.I.s have resonance abilities. It would make sense that only a small proportion of them would, as only a small proportion of mundanes are awakened, but all have bright, lively auras. If even a small percentage of them displayed such abilities, it would strongly lend support to your idea.
QUOTE
See above statements. It comes down to the same, really. It cannot be denied that, in the setting, there is an unknown factor that makes it so it is not like a typical program. To argue otherwise is to say the setting is flat wrong about its interpretation of these entities. It has another layer that is beyond science. You may see it as the plot concealing it, but that is backward from how a GM should often look at these things: rather than say it can't happen and state that it is a hole, figure out why it did.
That is sort of how I look at it. Keep in mind that just because something is concealed by plot, that plot is ultimately driven by the GM, and whatever is concealed or protected by plot remains so only as long as the plot says it is. As soon as that protection is no longer required for the sake of plot, its shield vaporizes in a puff of logic.
Posted by: Inpu Jul 12 2010, 12:39 PM
QUOTE (Mordinvan @ Jul 12 2010, 02:28 PM)

This is actually a rather interesting possibility, and one which makes a great deal of sense. With the absence of trust in, and knowledge about, Technomancers, it does make sense that if part of what makes an A.I. alive is resonance, then the world would be largely ignorant of that fact, as few would think to ask a technomancer, and few mundanes would likely accept the answer as true. It DOES, however, raise the question of why no A.I.s have resonance abilities. It would make sense that only a small proportion of them would, as only a small proportion of mundanes are awakened, but all have bright, lively auras. If even a small percentage of them displayed such abilities, it would strongly lend support to your idea.
It is possible that they are too new to express it for now. We'll see in later products, I suppose. For now, it is also possible that they are not able to use Resonance Abilities because they use their Resonance to sustain their life and would thus not attempt to redirect it to other tasks. The Emulate Quality does come remarkably close to Threading as well. That can be explained a few ways, but it is interesting that they cannot keep something they would have had to code.
I am too new to Shadowrun to say if Deus or Mageara had much to do with Resonance, but I have read something about a bionetwork before, and that seems to imply that they at least had something that was compatible with Resonant/Dissonant bionetworks. Again, I don't know enough about Otaku to really say much on this and it is also worth noting that Deus and Mageara were on an entirely different level from Metasapients.
QUOTE
That is sort of how I look at it. Keep in mind that just because something is concealed by plot, that plot is ultimately driven by the GM, and whatever is concealed or protected by plot remains so only as long as the plot says it is. As soon as that protection is no longer required for the sake of plot, its shield vaporizes in a puff of logic.
Agreed.
Posted by: Mordinvan Jul 12 2010, 12:41 PM
QUOTE (hermit @ Jul 12 2010, 05:10 AM)

If there was no mystical Resonance in SR, that would be a correct statement. However, as there is, it is false.
Ok, which part of the text of mine you just quoted was false?
QUOTE
An infinite number of states does not necessarily mean every state possible is included. As an example, in a guven system there are states a, b and c. Now, a magnitude could well be made up of infinite repetitions of a and be and be infinite, yet never include state c. Just because something is infinitie does not necessiarilymean it is all encompassing. The magnitude N contains infinite and not repeating elements, yet many elements ofmagnitude R are not included in N. Does that make N less of an infinite magnitude?
I know what you are saying; however, unless some constraint is placed on what sorts of numbers can appear, there is the same non-zero chance that any given number can appear in any given part of an infinite set. Given that, it is 'virtually' certain that any given number will appear unless it is somehow excluded.
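Mordinvan's "virtually certain" is the standard infinite-monkey argument: if draws are unconstrained and every value keeps a non-zero chance, the wait for any target value is finite with probability 1. A minimal Python sketch (the seed, range, and target are arbitrary choices for illustration, not anything from the books):

```python
# Illustrative only: with unconstrained uniform draws, any given value
# turns up after a finite number of attempts with probability 1.
# Expected wait here is 100 draws (geometric distribution, p = 1/100).
import random

random.seed(42)  # arbitrary seed, for reproducibility

target = 7
draws = 0
found = False
while not found:
    draws += 1
    found = (random.randrange(100) == target)
```

Note the flip side, which is hermit's objection: this only holds when nothing is "somehow excluded". Constrain the draws to, say, even numbers, and the target 7 never appears no matter how long you wait.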
QUOTE
That computer processes always work according to logic gates already has been disproven by the whole Emergence,/Resonance/mancer/technomagic business.
No, it has not, actually. However, as was recently suggested, they may also function according to some additional resonance factor. This does not remove their dependence on logic, but it does allow for some additional, reasonably stable X factor which is supernatural in nature. Given, however, that most A.I.s tend to follow a reasonably stable and predictable pattern of behavior, this X factor, if it is resonance in origin, SHOULD be able to be studied and understood by technomancers, at least in some fashion.
QUOTE
So why, exactly, are you trying to impose your interpretation of the setting on everyone else again?
a) Because it is the view I hold. b) Most other views thus far presented are grossly lacking in explanatory power. c) I really do not like unexplained 'things' when explanations SHOULD be available. d) Because it is a time-honored tradition of Dumpshock.
Posted by: Walpurgisborn Jul 12 2010, 01:28 PM
QUOTE (Inpu @ Jul 12 2010, 04:13 AM)

Then there is the classic example of rape, where a person passes their genes on by force. This is largely looked down upon for a number of reasons, one of which is the idea that such an option is considered a method for a creature who cannot properly court a partner and is at risk of not passing on the unique genetic line that would result.
You know, me and the boys always say, "I'm kinda okay with rape except for the possibility that the unique gene coding for social interaction may be lost over future generations."
I keed, I keed.
As a side note, that's an inversion of the original social evolutionary theory: man as barely-tamed rapist.
Posted by: Inpu Jul 12 2010, 01:30 PM
QUOTE (Walpurgisborn @ Jul 12 2010, 03:28 PM)

You know, me and the boys always say, "I'm kinda okay with rape except for the possibility that the unique gene coding for social interaction may be lost over future generations."
Good to know.

I abhor it, personally, but take it for the intent: there is a natural revulsion to it.
Posted by: Walpurgisborn Jul 12 2010, 01:44 PM
Should have added the Sarcasm tags, but sarcasm seemed a little too harsh. Maybe Irony tags.
Posted by: Inpu Jul 12 2010, 01:47 PM
Trust me, I heard it without the tags.
Posted by: Walpurgisborn Jul 12 2010, 02:19 PM
QUOTE (Inpu @ Jul 12 2010, 09:47 AM)

Trust me, I heard it without the tags.

Still, when I make my congressional bid, I don't want someone to go "hey, there's the guy who's mostly OK with rape".
I should clarify though, my biggest issue isn't with evolutionary sociology as much as the statements coming out of evolutionary sociology described as fact, when ES is such a speculative field.
Posted by: Inpu Jul 12 2010, 02:21 PM
Duly noted. How's the campaign going?
Posted by: Mordinvan Jul 12 2010, 02:27 PM
QUOTE (Walpurgisborn @ Jul 12 2010, 07:19 AM)

Still, when I make my congressional bid, I don't want someone to go 'hey, there's the guy who's mostly ok with rape".
Don't worry we'll be sure to quote your statement out of context at every available opportunity.
Posted by: Inpu Jul 12 2010, 02:33 PM
"You have the right to remain silent. What you say will be misquoted and used against you..."