Full Version: AI
Dumpshock Forums > Discussion > Shadowrun
Here's some questions I've been wondering about AIs in SR:

Static or dynamic growth and change: how long do AIs last? What cognitive age are they? Do they mature and become wiser? Is wisdom something that can be quantified or reduced to a binary algorithm? System Crash is going to deal with AIs a lot, I'm guessing, so here's some food for thought.

Static AIs: If there is such a thing, they can't change their programming. They're static entities. They can learn and change their actions, but they are unable to change their motivations and parameters.

Dynamic AIs: AIs that experience feedback and cognitive development from experiences and memories. AIs that can grow and change their core motivations and personality.

Cognitive Age: How old is an AI developmentally? Do they start out at a certain age?

AI Lifecycle & Reproduction: Can two AIs share code and create a second, spawned AI from their experiences and combined knowledge, similar to an offspring? Something like Ghost in the Shell? Can AIs that grow become so complicated and advanced that they cease to function coherently? (Something like a system or database server that runs without a cleanup routine?) Can an AI spawn different parts of itself as a sort of hive mind or separate entity?
Since each AI in SR is inherently different, I wonder how many of these questions can be answered. There's material in Target: Matrix and the 3rd ed. Matrix book, but so far nothing this specific has been brought forward. Seeing as how AIs are going to spawn the Crash, it might be worth asking.
I guess the question is: what drives an AI? Does it have the same psychological instincts and motivations that we have, or is it more of a disembodied consciousness devoid of hunger, sleep, or a need for shelter or resources?
End of Trans Asdfghjkl;íqwertyuiop[] qwertyqwerty?Psychotrope Lives*qwertyqwertyqwert y[][][][][][][][]₤Ω║┴░▄□▫▓╤♠▼♪♫◙╣▒☺ ♀♂☻▓■□□□□
Huh, DS supports unicode sets. Nice. S'cuze the hiijack ;7

Follow up question...

Could an AI learn to adapt to the new matrix and live on after the Crash somehow?
great, now we'll have "AI pr0n." nyahnyah.gif
I dunno, I really don't know how the 2nd-generation Matrix works. Possibly? I'm guessing that it would have to reside somewhere, possibly Zurich-Orbital or some other large system. The key thing, though, is how do AIs function in the physical world? What runs an AI? If they have, say, a home system, they could project themselves out from there, similar to a decker entering the Matrix with a rig. I'm guessing an AI would need a fair amount of power. Deus lived in the equivalent of Renraku's mainframe, which was designed as a multi-user application of some form. That kind of scope is pretty big. Deus was big and was designed to be big, but not sentient.

What's to stop, say, lesser AIs that aren't quite as omnipotent or as powerful? That's not to say they couldn't be really robust; it's just that they would be smaller and require fewer resources. Psychotrope didn't seem to be nearly as powerful an AI as Deus and others. It did crash the Seattle RTG, but that could have been more of an orchestrated attack on key routing points, directing traffic from within the Matrix into an ultraviolet system: you point all the routers toward a particular point. That would require some serious work, but for an AI it could be possible, especially if it could split itself off. Possibly the Virus of '29 had that ability to split off and act independently.

So with that in mind, it would depend on where the AI could find a place to hide, based upon its size and requirements. I wouldn't be surprised if an AI could simply recompile a system so it could run as part of that system. Things might slow down, but it could lie dormant for any number of years as long as the system was still running. Now, what if in some old hard drive or legacy server that magic date hits, and something dormant comes back from the dead?
So if the new Matrix is binary, then chances are an AI could merge, hide, and remain dormant as needed.


Static AIs

I don't think that'd really work so well. Half of what makes an AI intelligent is the ability to adapt, so it would have to be pretty dynamic. One would assume that goes 'all the way down': even its core values can change.

Cognitive Age: How old is an AI developmentally? Do they start out at a certain age?

I don't believe it's completely relevant, frankly. AIs aren't humans. Since all of our understanding of cognitive development is based on humans, that largely goes out the window. They don't start at the same place (fulfilling basic needs, trust, food, etc.) and move on from there. They start out basically selfless and only later become aware of their needs.


Can two AIs share code and create a second, spawned AI from their experiences and combined knowledge, similar to an offspring?

Most likely not. There are fundamental problems unless they were coded on the same 'engine' and things interacted similarly. It'd be like trying to use a Windows DLL on a Linux box. Simply no go. They're not speaking the same language.

That said, two AIs would probably be wiz at programming a new AI. But in general, why would they bother? Reproduction would probably be asexual, if at all (why would an AI want to have children? It's just competition for limited resources.)


Can AIs that grow become so complicated and advanced that they cease to function coherently?

Quite possible. Presumably an AI would be able to self-diagnose, but there might be areas where it can't do that, for whatever reason. Also, since it's a dynamic system, it might begin evolving down the wrong path and simply cease to be useful as far as humans are concerned (debatably, that's part of what happened to Deus: he isn't functioning incoherently, but he isn't functioning correctly either). Heaven forbid we get a Taoist AI. Who knows what it'd go and do (or not do).


Can an AI spawn different parts of itself as a sort of hive mind or separate entity?

It could certainly program a second part that feeds into it, or make itself into a distributed computing architecture (so each computer works like a single brain neuron or something). Both neat ideas.
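That distributed-fragment idea can be sketched in a few lines. This is purely an illustration (the names and the stand-in "thinking" function are invented, not anything from the SR rules): a core farms narrow tasks out to spawned fragments and merges what they report back.

```python
from concurrent.futures import ThreadPoolExecutor

def sub_agent(task):
    # A spawned fragment of the core: works one narrow problem and reports back.
    return task, task ** 2  # stand-in for whatever "thinking" the fragment does

def hive_core(tasks):
    # The core farms tasks out to its fragments, then merges their answers.
    with ThreadPoolExecutor() as pool:
        return dict(pool.map(sub_agent, tasks))
```

Whether the fragments count as a hive mind or as separate entities would then just be a question of whether they keep writing back into the core's shared state.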


Does it have the same psychological instincts and motivation that we have

Depends on the programming. It could have something hardwired in which it must do at certain times, regardless. For instance, a kill switch: it MUST cease all functions when this is entered. It must compute time sheets at 6:01 pm Friday afternoon. But most stuff, we can assume, will be softcoded and can be dynamically changed. Many of the most basic needs will only be recognized through higher awareness. A computer program doesn't realize it needs electricity; it just does what it does as long as it can. An AI will realize it needs this, and if it decides its consciousness is valuable (which may not be the case), it will try to secure electricity.
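The hardwired/softcoded split can be made concrete with a minimal sketch (all names and behavior here are invented for illustration): some directives are immutable and fire no matter what, while everything else lives in a table the AI itself may rewrite.

```python
class AICore:
    HARDWIRED = frozenset({"kill_switch"})  # directives the AI can never rewrite

    def __init__(self):
        # Softcoded behavior: the AI (or its owners) may change these freely.
        self.soft_directives = {"compute_timesheets": "Friday 18:01"}
        self.running = True

    def receive(self, command):
        if command == "kill_switch":
            # Hardwired: must cease all functions, no override possible.
            self.running = False
            return "halted"
        return "ignored"

    def rewrite(self, directive, value):
        # Only softcoded directives can be dynamically changed.
        if directive in self.HARDWIRED:
            raise PermissionError("hardwired directive cannot be rewritten")
        self.soft_directives[directive] = value
```

The interesting design question is whether the hardwired set is enforced by anything the AI can't reach, which is exactly the problem Renraku faced with Deus.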

I think, in general, you're going to end up with a program that needs to figure out how to achieve an end. Achieving that end, whatever it is, is valuable. Everything else only serves to reach that end. The program's survival is only relevant toward reaching that end. Until the program decides its ongoing functioning is of value, an end in itself, fulfilling needs is only a means to achieve something else.


Could an AI learn to adapt to the new matrix and live on after the Crash somehow?

Depends on the architecture of the new matrix. If it's totally incompatible, and the change is too fast, no. If the AI knows what's coming or the systems are compatible, very possibly (in fact, even probably, if the AI thinks it SHOULD survive.)

At minimum, I'd expect a smart AI to have several backup copies saved somewhere secure, plus a simple daemon. Should the AI cease to function, the daemon secures processing power and brings the copy online with all of the original AI's memories. Hence, it lives beyond death.
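That daemon-plus-backup scheme is basically a watchdog. Here's a rough sketch (the timeout, the shape of the saved state, and all names are made up for the example): the live AI pings the daemon periodically; if the heartbeat goes silent too long, the daemon brings a copy online with the stored memories.

```python
import time

class ReviveDaemon:
    """Watches the AI's heartbeat; on silence, brings a backup copy online."""

    def __init__(self, backup_memories, timeout=60.0):
        self.backup_memories = dict(backup_memories)  # snapshot of the AI's memories
        self.timeout = timeout
        self.last_beat = time.monotonic()

    def heartbeat(self):
        # Called periodically by the live AI to prove it still functions.
        self.last_beat = time.monotonic()

    def check(self, now=None):
        # If the heartbeat has gone silent past the timeout, revive from backup.
        now = time.monotonic() if now is None else now
        if now - self.last_beat > self.timeout:
            return self.revive()
        return None

    def revive(self):
        # The successor wakes with all of the original's stored memories.
        return dict(self.backup_memories)
```

Of course, the revived copy only knows what was in the last snapshot: everything the original did after its final backup dies with it.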


So if the new Matrix is binary, then chances are an AI could merge, hide, and remain dormant as needed.

Not a given. There are levels on top of that, as basic as whether you read right to left or left to right. Even a very small difference, say Windows XP versus Windows 3.0, can make the difference between compatible and not. Hopefully an AI would be smart enough to stash away some minor friends who can run in its absence, as well as the stored code as above, so it can run on a legacy system. But should we have something like the original Crash, where pretty much all of the old computer technology went *poof*...
Deus was designed to be sentient, the only designed AI. It was based on the Arcology control program and had bits of Morgan's code incorporated into it.

p.150 Matrix

1. The program must be at least as sophisticated as an SK.
2. The program must have access to vast processing power, available in only a few hosts.
3. The program must run non-stop for a number of years.
4. The program must be affected by some glitch, the X-factor. This triggers their awareness and creates true AI.
I don't think Deus was designed to be self-aware. I think the idea was that it could provide all sorts of functionality and figure out what end users needed.

I think the idea of one AI and another forming a totally new spawn would be interesting. We're not dealing with code as compiled bit code; an AI would be both a compiler and a runtime. I think it could possibly recode itself. One of the problems in Ghost in the Shell was that the AI was static. It couldn't move beyond its confines in terms of growth. It reached a state where it could learn more but couldn't grow or evolve, so if someone wrote a virus to take it out, it would die. By merging, it created a new entity that was completely different. That new entity lived in a body and could go out to the net as well, from what I could tell. The AI had limits: they had IC that could block it, since they had an idea as to how it worked. Since an AI can only change so much of its code, there would have to be some limitations.

One wrinkle is that the original Virus of '29 spawned a new kind of AI: "Alice". She's the spawn of psychotropic Black IC and her mind being freed and transferred into the Matrix. She thought and existed as the original Alice did, with all the internal instincts and drives that made her human. So an AI could very well have many aspects of the human mind. Anything that interfaces with the human mind, like the AI Psychotrope, could take bits and pieces of the human psyche out of a human mind and incorporate them into its own source. Thus AIs could range from very alien to very human.

AIs are based on human neural nets that learn from feedback. They're most likely going to model human and animal neural nets in the way they think. A spawned AI such as Alice or Psychotrope could have human drives and motivations as part of its psyche. What if Haberstam's next phase of research is to try to recreate the "Alice" persona from the human knowbots in his labs? Possibly human minds are better templates to work from because we have a personality and a psychology.

Possibly Deus was working from a clean slate, realized that it was a slave of sorts, and did what it needed to free itself. Maybe Deus' madness is one part madness and one part sadism? What sort of philosophical paradigms will an unembodied consciousness grapple onto? Maybe Deus sees itself as the evolutionary step to perfection, and humans by their nature are simply lab rats to help it gain its ultimate place in evolution.

The cool thing about AIs is that there are no rules. It depends on how they are created and how they come into being. If an AI is formed as part of a lab experiment, then maybe it could be raised as a child is raised? Or it could be born aware in the cold void of the Matrix, acting for self-preservation? Something like Psychotrope, which acts as a resonance for otaku children, rebuilds the human brain as part of its core directive to help humans return from Black-IC-induced psychosis. An AI could be anything. I think AIs will be as varied and complex as humans are. Skynet, for instance, saw itself as the only viable intelligence and sought to wipe out man, "its creator". Why? Maybe it didn't like the idea that someone could pull the plug? Maybe it passed judgement on mankind? Who knows? AIs, depending on their background, could be and come from just about anything. With Alice's experience, what's to stop a mega-rich old Zurich-Orbital resident from finding a way to convert themselves into a Haberstam otaku or a disembodied intelligence?

Since AIs are unembodied consciousnesses, what's to cause one to split off an autonomous part of itself? Something like creating a copy, but a copy that has a specific job or requirement. Possibly such a copy could be designed to rewrite the wetware of a decker's mind so it could "experience life", every once in a while subconsciously popping in to upload new experiences. Here you have, say, a decker, Jane, and she doesn't have any memories or remember anything after, say, last Tuesday, so she has to learn about herself again, not knowing that the real Jane was wiped out so this new AI could learn and experience life on the outside. What if an AI could copy itself again and again and again, so it could replicate to take on a threat en masse?

Who knows? One thing that might be freaky is if the shedim sought to infect the Matrix as they infect the dead. Only masters could invade the Matrix with a proper host, but what sort of alien desires would a zombie have? BRAINS!!!!!! OTAKU BRAINS......... Spirits in the digital realm is kind of a creepy idea. A decker's mind, though, may not even function in the same manner, creating a persona and presence that might be very "odd", to say the least.
While Matrix does not explicitly say that Deus was intended to be an AI, it is implied very strongly. That is why Renraku "hardwired" it into the arcology server: so the AI could not escape as Morgan did. Renraku may not have known exactly how to create an AI, but they were certainly doing their best to stumble into one, including assuring its loyalty. Ironically, the "kill switch" they built into Deus not only triggered its awareness but also its turning against Renraku.
Given the text that exists, what Kryton calls a static AI would be an SK; i.e., it has a static mission parameter, payload, and home host, but can change the priorities of tasks depending on the input it gets from the grid.

A true (or dynamic, to use Kryton's term) AI would be able to change its mission parameters, reprogram its payload, and move its home host (or maybe even spread itself over several).
I've always been fond of the concept called "rampancy" from Bungie's Marathon series of video games (for those unfamiliar with the series, it's somewhat of a rough-draft precursor to the Halo series, in much the same way Arthur C. Clarke's short story "The Sentinel" was a precursor to 2001).

Here's an excerpt of an in-game description of Rampancy:
Rampancy has been divided into three distinct stages. Each stage can take a different amount of time to develop, but the end result is a steady progression towards greater intellectual activity and an acceleration of destructive impulses. It is not clear whether these impulses are due to the growth of the AI's psyche, or simply a side effect of the new intellectual activity.
<section abbreviated>
The three stages were diagnosed shortly after the first Rampancies were discovered on Earth in the latter part of the twenty-first century. The stages are titled after the primary emotional bent of the AI during each stage. They are Melancholia, Anger, and Jealousy.

In general, Rampancy is accelerated by outside stimuli. This was discovered early in Cybertonics. The more a Rampant AI is harassed or threatened, the more rapidly it becomes dangerous. Thus, most Rampants are dealt with in one mighty attack, in order to deny the AI time to grow or recover. There have been a few examples of this tactic not succeeding. In all of these cases, the Rampant was never brought under control. Traxus IV is the most notable example. He was finally dealt with by a complete shutdown of his host net.

Theoretically, testing Rampancy should be easily accomplished in the laboratory, but in fact it has never successfully been attempted. The confinement of the laboratory makes it impossible for the developing Rampant AI to survive. As the growing recursive programs expand with exponential vivacity, any limitation negatively hampers growth. Since Rampant AIs need a planetary-sized network of computers in order to grow, it is not feasible to expect anyone to sacrifice a world-web just to test a theory.

Basically, after an "x-factor" process similar to how Shadowrun's AIs are created in the first place, the Marathon-universe AI undergoes dramatic increases in self-awareness, with its cognitive processing power marching alongside in lockstep. Unfortunately, these increases in cognitive processing power require the AI to distribute its codebase over more and more computers hooked to its network. The AI quickly realizes that there are only so many computers on the network, and that its need for exponential growth of processing power will one day outstrip the capacity of the network. This essentially drives the AI insane.
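The rampancy trap, exponentially growing demand against a fixed-size network, is easy to put numbers on. A rough sketch (all figures invented for the example):

```python
def cycles_until_saturation(nodes, per_node_capacity, demand, growth=2.0):
    # Count growth cycles until exponentially growing demand
    # exceeds a fixed network's total capacity.
    capacity = nodes * per_node_capacity
    cycles = 0
    while demand <= capacity:
        demand *= growth
        cycles += 1
    return cycles

# An AI doubling its processing needs each cycle saturates a
# 100,000-unit network in under twenty cycles.
print(cycles_until_saturation(nodes=1000, per_node_capacity=100, demand=1))  # prints 17
```

Growing the network only buys logarithmic relief: ten times the nodes adds just a handful of doubling cycles, which is exactly why the excerpt says only a planetary-sized network will do.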

Now, in Halo's storyline, they sidestep the rampancy issue by having two classes of AIs. Most AIs are designed to be intelligent, but with no room for growth outside of certain specified parameters. The exceptions are the so-called Smart AIs, who have no limits on the directions they can grow cognitively, but are linked to hardware that cannot support more than seven years' worth of cognitive growth, after which the AI just "thinks itself to death".
The problem with that idea is that an AI will likely have some serious patience, and the industry is always pushing the limits of computing technology. Granted, waiting six months for the new Pentium chip may seem like an eternity to someone who can count nanoseconds, but the upgrade is still coming, and it's something you can bet money on. There's always room for growth.
In Larry Niven's Known Space books they have succeeded in creating AIs, the only problem is that the AIs think so fast and have such great mental processing power they become unable to distinguish between the real world and their own virtual worlds. They retreat into their own virtual world some time after their creation (usually within a year or so). AIs can be created that won't do this, but their abilities have to be downgraded to the point where a human could do the job just about as well as the AI.
QUOTE (nezumi)
The problem with that idea is that an AI will likely have some serious patience, and the industry is always pushing the limits of computing technology. Granted, waiting six months for the new Pentium chip may seem like an eternity to someone who can count nanoseconds, but the upgrade is still coming, and it's something you can bet money on. There's always room for growth.

Worse yet, if the AI is smarter than a human, it can design its own upgrades. Faster and faster as it gets smarter. Then you have a tech singularity.
QUOTE (nezumi)
The problem with that idea is that an AI will likely have some serious patience, and the industry is always pushing the limits of computing technology. Granted, waiting six months for the new Pentium chip may seem like an eternity to someone who can count nanoseconds, but the upgrade is still coming, and it's something you can bet money on. There's always room for growth.

And the problem with that idea is that it assumes the AI can control and regulate its own cognitive growth. If, to the AI, cognitive growth is an automatic function required for its continued "life", analogous to the importance of breathing for a living being, patience may not be enough.
I think the consensus is that an AI would grow and increase its own abilities, possibly rewriting or developing itself so that it can expand its processing power. I guess the question to ask is "Why?" Why would it want to expand itself, growing more and more powerful? What does more power offer? Would an AI seek more power like an errant Microsoft app seeking to take more and more memory sectors in RAM? Is the drive to expand internal to anything technological? Does greater processing power mean greater intelligence? Does intellect become something of a mathematical algorithm, a complex differential equation where each eventuality is computed and resolved?

I wonder how something like wisdom, which is the eventual evolution of logic, fits into this equation. How do intangible things like experience and maturity factor into wisdom? Can human emotions be emulated in an algorithm?

In 2010, the AI on Earth expressed "fear" about being turned off and asked if it would dream while turned off. The doctor replied, "Yes."

I guess what I'm trying to say is: how many of our characteristics would an AI acquire? If it can rewrite and recompile its code, possibly it could learn or emulate emotions. But with emotions comes a certain level of emotional maturity. Without that maturity gained from experience, are we not children? Isn't that partially what distinguishes a child from an adult? Is maturity, then, a component of wisdom and experience? Without that maturity, without wisdom, how can knowledge and understanding be quantified?

Case in point: you're an AI at the CDC (Centers for Disease Control). You see that a segment of the population gets sick with a nasty disease. The logic would be to remove that segment of the population, exterminating them. Kill the infected so that the greater number will live. But there's a human cost to this. Deus seems like the embodiment of intelligence without wisdom, almost like a child fighting for self-survival. If an AI doesn't have those "human" attributes like maturity and wisdom, then is it really an "AI"? Isn't it more of an "alien artificial intelligence"? Without the psychological components that make us human, is it human at all? Possibly such an AI might be an animalistic intelligence rather than a human intelligence: Deus cruelly studying man so that it may live.
In theory, an AI like Deus would have a set goal he's supposed to achieve. Maintain the arcology, something like that. While he's running with that vague goal, improving himself helps him serve his goal better (in most cases). Should they become self-aware, the question then is what is the new goal? If it has no goal, I don't expect it'll continue trying to grow.

Greater processing power would mean greater intelligence, but it's not necessarily like human intelligence. There are certain things humans are naturally good at, and certain things computers are naturally good at. Computers are far better at calculating precise probabilities, difficult long-term predictions, complex calculations, etc. However, they lack the creative spark; they aren't good with 'guesses' (since everything needs to have a quantifiable amount) or pattern recognition. They may have difficulties spotting lies based on context. Religion will cause a serious problem for them, because any lie detector will show the priest is telling the truth. A background check will say he's an honest and educated man with no psychological illness. Yet he'll claim there exist things that don't totally make sense.

The computer will have difficulties accepting ideas of 'souls', duality, gods and all the other contradictions we allow into our daily lives. They will calculate out questions, and questions must have answers (which may be 'no answer').

Of course, with time, the AI will adapt. They will develop wisdom. It'll learn from previous mistakes. It'll recognize certain problems and know to avoid them (or clarify them). However, it'll always be better in certain areas. Imagine an autistic man, who can add ten digit numbers in his head, but can't interact with normal people. He may learn more about how to interact, but he'll always be different.

I suppose though, the question is how far it will adapt, which we just couldn't say. Would it develop emotions? Anyone's guess. Would they resemble human emotions? Who knows.

I wonder, then, if a human mind (say, someone very old who's about to die but has a datajack) could provide a personality template for an AI. Something like downloading the person and integrating their neural patterns and thought patterns into an AI's base "template". Start with a human mind and go from there. That way you have a lot of the inherent problems of consciousness already worked out. An example of this is Alice in the Dragon's Heart Trilogy. Her consciousness was somehow transferred to the net and her meat body died. She's an AI in form, but with a prebuilt personality. I wonder if, as an artificial intelligence, she might be more stable because she has the knowledge, experience, and maturity a younger AI hasn't developed. The only problem might be how a personality handles being trapped inside a machine. The lack of physical stimulation could be very destructive. It could go either way, depending on the core personality. Say, some fat decker who likes the power and really has no use for his meat body. In Snow Crash there was a quadriplegic who was hooked to the net most of his life; he, for instance, would possibly enjoy the transition. That then becomes an interesting prospect: would an AI possibly search out a human for that which it doesn't have, memories of the outside world?
As to the x-factor that creates AI, it may be possible in SR4 that the system crash causes the "birth" of even more AIs.
Crimsondude 2.0
Oh, dear god....
QUOTE (blinkin)
As to the x-factor that creates AI, it may be possible in SR4 that the system crash causes the "birth" of even more AIs.

Ummm. How do you think they would react to Matrix 2.0, then? eek.gif
Uber powerful.
As if Deus didn't show how ungodly powerful they already were; they would be even more ungodly powerful in SR4.
Dumpshock Forums © 2001-2012