Can a decker make an "AI"?
Cynic project
I am so sorry, my internet went stupid... please forget this post. And if someone can delete this thread, please do.
Walknuki
Maybe.
Cray74
Agent ratings roughly correspond to the robotic Pilot ratings in Rigger 3. A rating 4-5 Agent is therefore comparable to a well-trained human in some intellectual respects.

However, it's relatively easy for a decker to program a rating 6, 8, or even 10 agent. With a good program plan and programming suite, it won't take years to code, either, and the resulting Agent should be (using the Rigger 3 robotic pilot scale) pretty darn smart.

Maybe a high level Agent doesn't sit back and say, "I think, therefore I am," and thus isn't a True AI, but it's smart in a brute force problem solving fashion, enough to be considered intelligent by other standards.

In other words, above a certain level, whether the answer is "yes" or "no" depends on how you define "artificially intelligent."
Cynic project
So using the parameters that I set up, the answer is yes, a shadowrunner can make an AI.

" By "AI" I mean a program that is capable of learning from outside stimuli. "

It is arguable whether the makers of Shadowrun mean an UBER AI with god-like abilities, i.e. Deus, when they say AI.
mfb
well, sorta. i believe you can make a frame or agent that incorporates the Cascading option; that's basically learning from outside stimuli. beyond that, no; what you're talking about is effectively an S-K, and lone deckers can't make those.
sidartha
Mr Woodchuck and I just had this conversation a few days ago, weird eh?
What we came up with was that an AI in Shadowrun, by the demigod definition, has to be a VERY high rating SK, pushing twenty on a scale from one to ten.
It has to be unique in its creation or duties, compared to your run-of-the-mill go-get-the-info SK.
It has to display an emotion as its X-factor.
For instance, Deus displayed pride, Morgan displayed love, and Mirage displayed compassion.


So far eleven megacorps working for two years haven't been able to reproduce AIs beyond the first three.
If you want to give your players that kind of power, be my guest. Just remember the Arcology. ;)
Cray74
QUOTE (Cynic project)
So using the parameters that I set up, the answer is yes, a shadowrunner can make an AI.

" By "AI" I mean a program that is capable of learning from outside stimuli. "


By that definition, quite a few real life programs are already AI.

QUOTE
It is arguable whether the makers of Shadowrun mean an UBER AI with god-like abilities, i.e. Deus, when they say AI.


QUOTE
So far eleven megacorps working for two years haven't been able to reproduce AIs beyond the first three.


Those are goofy cinematic AIs with Super Powers. S-Ks and AIs in Shadowrun are marked by super control of the Matrix beyond the ken of deckers, and even the ability to manipulate human brains to produce Otaku. Bleh.

If your interest is NOT in god-like beings, but rather simply thinking programs, I think the bar is set much lower. A high-rating Agent should be able to make that leap to, "I think, therefore I am," with a little experience and polish.

Putting emotions on some high pedestal beyond the ken of normal machines is Hollywood-influenced thinking. The Agent software that experiences a "priority shift in tasking due to threatened self-dissolution by IC" has just experienced fear of being killed and is responding by getting ready to fight. Programs with positive feedback loops to encourage certain learning behaviors (like an Agent learning a creator's habits) experience what amounts to pleasure in their success.
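
If you want to see how little machinery that takes, here's a toy sketch of such a feedback loop. This is purely my own illustration, not anything out of the SR books, and all the names in it are made up:

CODE
# Toy positive-feedback learner: guesses about the creator's habits that
# pay off get reinforced, so successful predictions become more likely.
import random

class HabitLearner:
    def __init__(self, options):
        # start every candidate habit at equal weight
        self.weights = {option: 1.0 for option in options}

    def predict(self):
        # pick an option with probability proportional to its weight
        total = sum(self.weights.values())
        pick = random.uniform(0, total)
        for option, weight in self.weights.items():
            pick -= weight
            if pick <= 0:
                break
        return option

    def reinforce(self, option, success):
        # the "pleasure" loop: reward correct guesses, mildly decay wrong ones
        self.weights[option] *= 1.5 if success else 0.9

agent = HabitLearner(["coffee", "check_mail", "news_feed"])
for _ in range(100):
    guess = agent.predict()
    agent.reinforce(guess, success=(guess == "coffee"))  # creator always wants coffee
print(max(agent.weights, key=agent.weights.get))  # almost certainly "coffee"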
Toptomcat
With high skill, luck, Karma, a drek-hot deck, a talented programming team, a spark of genius, and plenty of player motivation, yes, if only a minor or flawed one.
That's my philosophy, anyway, when GMing- 'don't say no, say how hard.'
Cray74
QUOTE (Toptomcat)
With high skill, luck, Karma, a drek-hot deck, a talented programming team, a spark of genius, and plenty of player motivation, yes, if only a minor or flawed one.
That's my philosophy, anyway, when GMing- 'don't say no, say how hard.'

So... what would there be to dread about an AI, assuming it has no extra powers beyond those of an Agent or Smart Frame?

It'd kind of be like a Free Spirit or Ally, but without the Matrix sorcery powers, right?

Or maybe just a contact/ally?
mfb
i dunno. if all of the megas can't get one working on purpose, i don't see a single decker doing it. multiple tests at TN 25-30--that's not a "no", technically.
BitBasher
I don't see a single decker ever having access to an ultraviolet host for a few years to leave code running, which is a requirement. Much less someone SINless and without millions and millions of nuyen to rent the processing power of that billion-nuyen mainframe.
Cray74
QUOTE (mfb)
i dunno. if all of the megas can't get one working on purpose, i don't see a single decker doing it.

Well, yeah, but look at the deities-in-a-box the megacorps try to make.

What if the decker's goal is a human-in-a-box?
mfb
the corps haven't managed that, either.
Cray74
QUOTE (mfb)
the corps haven't managed that, either.

Meh. By the time you get to a rating 10 Agent, you've got a program smarter than most humans. What's the difference if it also asks a few existential questions?
Moon-Hawk
QUOTE (Cray74)
QUOTE (mfb @ Dec 23 2004, 07:53 PM)
the corps haven't managed that, either.

Meh. By the time you get to a rating 10 Agent, you've got a program smarter than most humans. What's the difference if it also asks a few existential questions?

Well, maybe a very, very high rated agent is capable of going "AI". The point is, it would need to run for years before it ever thought to ask an existential question, and even then would need some sort of X-factor to get it thinking along those lines.
mfb
well, for one, it's not going to act like a program that's under the decker's control. it stops being an agent that the character uploads, and becomes a contact.

also, AI is one of the Big Mysteries in SR. if you allow anyone with a high programming skill to put one together anytime they want, it diminishes the mystery.

now, if you're giving a high-end agent or S-K a 'personality' of sorts, i'm all for that. one of my otaku has a daemon, rating 8, named Furious George. i run him like a separate character; the otaku gives him a job, and he's smart enough to go out and do it himself. but he's not actually intelligent--just programmed to seem that way.
Moon-Hawk
So can a decker create an AI? I think that depends on the degree of intention.
Let me explain; no, there's too much; let me sum up:
Could a decker set out to program an AI, write a program, then have it be an AI? No way.
Could a decker write a sophisticated, adaptive program like an Agent, SK, or Daemon that could, someday, after months or years of run time and a mysterious X-factor become an AI? Sure, why not? It'd be fun.
Could a decker manipulate the environment of said program to increase the likeliness of becoming AI? I dunno, maybe.

But as far as just making an adaptive program that learns from experience goes, that's easy; that's just an Agent.
Kagetenshi
QUOTE (sidartha)
It has to display an emotion as its X-factor.

Ugh. The X-factor, whatever it is, is a cause rather than an effect, so displaying an emotion could be a sign that the X-factor has been triggered, but I can't imagine how it would be the cause. Also agreed with Cray on the topic of emotions and their "specialness".

~J
mfb
i disagree. for one, re-prioritizing in order to maximize self-preservation isn't necessarily fear--more likely, the program simply has a standing order to avoid destruction. now, a program that doesn't have a standing order to preserve itself, that changes its actions in order to avoid danger? that's fear, and that is something special and cool.
SirKodiak
QUOTE
i disagree. for one, re-prioritizing in order to maximize self-preservation isn't necessarily fear--more likely, the program simply has a standing order to avoid destruction.


What you're getting into here is the claim that there is a difference between something that acts exactly like fear, and fear itself. This is where you get into the big philosophical question behind things like the Turing Test. If it quacks like a duck, walks like a duck, and acts like a duck, if it is indistinguishable from a duck, does that make it a duck?

Anyways, the main question here is a little vague because the definition of AI in the real world is a huge argument among researchers, and the definition of AI in Shadowrun is stupid. So, to answer a couple of more specific questions (all this being in my own opinion):

Can a Shadowrunner make a Shadowrun-style AI, an online god? No. The resources required are way beyond anything a Shadowrunner should ever see, unless you let your Shadowrunners own megacorps.

Can a Shadowrunner make an adaptive, learning program that can compete with a human for very specific tasks? Yes, these already exist now, and also exist in the Shadowrun books.

Can a Shadowrunner make a Virtual Personality, which interacts like a human being? These aren't really gone into in Shadowrun, but given the level of technology they have, this should be possible. I'd let them have it, though I'd do it by just making them a technology that exists in the world. You'd find these used instead of voice mail and phone menus, for all the things we've stopped using people for but which currently require you to hit buttons on your phone. These are easy to add because they don't really break too many things, and they make the computer scientist in me less crazy about the way computers work in Shadowrun.
SirKodiak
Board went wonky on me, resulting in double post. Please ignore. Sorry!
Zeel De Mort
Quotes from Matrix:

QUOTE
Agents are roughly equivalent to robots, and are capable of learning and adapting their behavior to suit new conditions.

p88

- That's not AI by any means. A high-rating Agent would be very advanced and could even be better than a human decker if its rating was REALLY high, but it's still nothing like an AI. Interestingly though, Agents have no ceiling on their rating, so you could theoretically have one at rating 20 or something if your Computer (Programming) skill was also 20.

QUOTE
They [SKs] are the most complex programs written...  In game terms, programming SKs requires the use of, at minimum, a Red-10 host and programming resources equal to a half-dozen top programmers.

The frame-core rating of an SK can be any rating, with a maximum of 14.

both p147

The SK is a far better platform to develop your AI from. If it requires half a dozen top programmers (I think VR 2.0 said Computer(Programming) skill of 12 each?), then I guess, perhaps, a REALLY hot decker could do it on his own if he was good enough, spent ALL his time working on it, and had a wicked programming suite to work in. Again, if you were amazing, you could continually boost programming time on a Red-10 host to create your SK. I'm sure they'd notice something very weird was going on as you drained all their resources but, again, possible. You might even be able to get away with it if you were a bit of a matrix legend.

The last part is a bit more fuzzy. But basically you have your rating 14 SK online for months or years, experiencing new things, roaming around extremely high level hosts, hopefully getting pulled into UV environments now and then, just waiting for that final spark (i.e. GM say-so) to launch it to AI status.


In summary, in my opinion, it'd be possible for a lone decker to create an AI, but it would be ridiculously unlikely and would require extreme dedication, resources, and good fortune.
mfb
QUOTE (SirKodiak)
What you're getting into here is the claim that there is a difference between something that acts exactly like fear, and fear itself.

yes, but a program designed to alter priorities to ensure its own continued existence is not at all indistinguishable from a duck. you can't dissect a duck and find the piece that makes it do things like fight for its life--with a program designed to ensure its own survival, you can.
SirKodiak
QUOTE
yes, but a program designed to alter priorities to ensure its own continued existence is not at all indistinguishable from a duck. you can't dissect a duck and find the piece that makes it do things like fight for its life--with a program designed to ensure its own survival, you can.


Well, if I use a compiler that obfuscates the code so you can't decompile it, and I trash the source, then you can't point to that piece in the program either. Similarly, if our understanding of how ducks work increases, then someday I may be able to do that for ducks or even people. If I can show the complex network of electrical and chemical signals that correspond to the self-survival behavior of ducks, does that mean they no longer feel fear?

What I think you're saying is that things which are designed can't have emotions or be aware. That's a perfectly acceptable moral or philosophical position if it's the way you want to go, but it's not one computer scientists tend to use, because it puts computers forever out of that realm, so it isn't very useful for examining computers. It also means genetically engineered organisms are in a very different class, morally, from "natural" ones. It also completely rejects the idea of a creator god.

The other basis for that way of thinking is that you've probably never actually dissected all the ducks or people in the world. Unless your way of life is very different from mine, you accept that the people around you are self-aware because they act like it, not because of their internals. If I could show you two universes, and in one a person is a normal person, driven by electrical and chemical behavior in their brain, and in the other there is a computer running the show inside their skull, and their behavior is always identical, then is one alive and the other not?
mfb
what i'm saying is that a program that is designed to mimic fear isn't showing fear when it follows its programming. at best, it's a representation of fear--a painting; it has no self-awareness. in the parallel universes question, the robot is 'alive' only if it is self-aware; if it is simply following its programming, no. it's not alive. if the robot can fool people into thinking it's alive, that's because its programmer did a good job.
Johnny Reb
I see. So, basically, if it had something like:

"When you see a gun, cringe and cower and shiver" in the programming, that's *emulating* fear without actually *having* fear.

-- Johnny Reb
Johnny Reb
D'oh. And my first double-up. Grf!
mfb
if it has no self-awareness? no knowledge of itself? yeah. if you don't exist, you really can't be afraid.
Edward
A moderate SK beats every objective definition we have for an AI today. In SR the definition is different.

It can't be based on observed emotion, because emotion is easily programmed. In fact, any attempt to write an artificial intelligence would by definition fail, as the only definition I can work out is that the result does not behave as programmed.

Alternatively, it may be that the program behaves in a manner not in keeping with its original program (e.g. Deus was programmed to protect Arcology residents and then killed them).

Differentiating this from a bug is difficult but not impossible (a bug can theoretically be found in the starting source code; intelligence cannot).

As to whether a PC decker can write one: well, they can try, but an AI is like dragons, meetings with corporate CEOs, immortal elves, or the acquisition of a unique and obscenely powerful magical item that does not require bonding. It is a plot device and should only be in the game if the GM has a plot to go around it. A PC or NPC decker successfully triggering the creation of an AI may be an interesting plot, but if it isn't a plot, I don't think it should be any more available than a rigger getting an aircraft carrier, or a mage binding a force 12 free spirit, or a sammy getting cyberware above the max rating normally available (rating 10 muscle toner).

Edward
BitBasher
Also, even according to SR canon, an AI does not necessarily have to have any emotion or even have a thought process similar to a human. Intelligence != mimics human behavior.
Crusher Bob
QUOTE (mfb @ Dec 26 2004, 02:01 PM)
what i'm saying is that a program that is designed to mimic fear isn't showing fear when it follows its programming. at best, it's a representation of fear--a painting; it has no self-awareness. in the parallel universes question, the robot is 'alive' only if it is self-aware; if it is simply following its programming, no. it's not alive. if the robot can fool people into thinking it's alive, that's because its programmer did a good job.

There is a bit of a problem here in that it cannot be proven that you, mfb, are not acting via 'programming' when you are afraid as well. The Turing test is an 'engineer's' test rather than a 'mathematician's': it only provides a close approximation, rather than absolute proof, of self-awareness.

i.e.:
If we assume that humans are self aware, then we can further assume that anything that can act 'perfectly' like a human must also be self aware.

So when we, the experimenter, shove a gun in someone's face and say, 'gimme all your money', then later ask them how they felt, they say 'I was afraid.'

If we wave a cattle prod at the computer and say, 'gimme all your p()rn', then later ask it how it felt, and it says 'I was afraid', how are we to tell the difference between the computer and the man?

Assuming we had perfect knowledge of both the computer and the man, we would be able to pinpoint the 'fear code' embedded in the brain of the man and in the software of the computer, so 'the computer acts because of its programming, while the man doesn't' is a very weak argument.
mfb
true. self-awareness isn't something that can be communicated. and i suppose that if a robot can fool everyone into thinking it's alive, it should be treated as if it were. but if it lacks self-awareness, i don't think it's an AI as the term is used in SR and most sci-fi. it's just a very complex program that's better at a certain thing than its creators.
Kagetenshi
mfb raises an interesting point. While our characters would not be able to differentiate, the GM certainly would (though he or she might choose not to), and the players might be able to if the GM were willing to share that information. I suppose it is therefore still relevant.

~J
mfb
AI in SR raises some really big issues and has some very far-reaching ramifications. for instance:

-does an AI have an aura? if not, is it really definable as being alive? (it's worth pointing out that AIs arguably lack several real-life emergent properties that are commonly-accepted criteria for determining the difference between life and not-life, including reproduction and energy utilization)

-if an aura is not a definitive sign of the presence of life, is it possible for non-electronic organisms to live without an aura?

-if an AI is living, can it be hit with mind-affecting spells (assuming you can figure out where to aim them)?
Kagetenshi
QUOTE (Wikipedia.org)
In biology, an entity has traditionally been considered to be alive if it exhibits all the following phenomena at least once during its existence:

1. Growth

2. Metabolism, consuming, transforming and storing energy/mass; growing by absorbing and reorganizing mass; excreting waste

3. Motion, either moving itself, or having internal motion

4. Reproduction, the ability to create entities that are similar to itself

5. Response to stimuli - the ability to measure properties of its surrounding environment, and act upon certain conditions.


I think the only difficult one is metabolism (Deus has created Semiautonomous Knowbots, which can be considered unfertilized AI). As the entry quoted above goes on to talk about, though, the definition is not formal; for instance, sterile people and animals are still considered alive. The entry goes on to list the following criteria sometimes added:

QUOTE (Wikipedia.org)
1. Living organisms contain molecular components such as: carbohydrates, lipids, nucleic acids, and proteins.

2. Living organisms require both energy and matter in order to continue living.

3. Living organisms are composed of at least one cell.

4. Living organisms maintain homeostasis.

5. Species of living organisms will evolve.


The first is easy if we assume biocomputing takes off. Four is also easy. The others are much more difficult; however, that raises the question of whether these criteria are even relevant.

The other questions I'll have to think more on.

~J
BitBasher
This is getting overly complicated; a sentient being and a living being do not have to be related at all in the context of this argument. There is no reason to use one to define the other.
mfb
it's an important point, though--not specifically in SR, at its current level of technology, but certainly in SR's future. for instance, there's legal ramifications: if you say an AI isn't life, then is it murder when you kill one? and that's one of the easier questions to resolve. wait until it comes to intellectual property rights, that's where the real fun begins.

though i guess SR already has a precedent for this sort of thing, with spirits. some spirits, at least, are sentient (to the extent that they act self-aware, though we've already discussed how that sort of thing's hard to prove), but they certainly aren't alive according to our current definitions.
Kagetenshi
QUOTE (BitBasher)
This is getting overly complicated

I take it you've forgotten where you are? ;)

~J
SirKodiak
QUOTE
in the parallel universes question, the robot is 'alive' only if it is self-aware; if it is simply following its programming, no. it's not alive. if the robot can fool people into thinking it's alive, that's because its programmer did a good job.


Prove to me that you're self-aware, and not simply pretending to be.

Basically, your definition of sentience does not allow for a test to determine if something is sentient or not. I know I'm sentient, but I can only know for other people through observable phenomena, so any usable definition should really be based on that.

QUOTE
-does an AI have an aura? if not, is it really definable as being alive?


As has been pointed out, sentience and living are two different topics, which brings up the moral dilemma of how one should treat non-living sentient creatures. Also, the answer to that question could have much more to do with the nature of magic and auras than the nature of sentience and life. If a child was born without an aura, would it not be alive, or would it mean there were unanswered questions about auras?

QUOTE
(it's worth pointing out that AIs arguably lack several real-life emergent properties that are commonly-accepted criteria for determining the difference between life and not-life, including reproduction and energy utilization)


I'm pretty sure computers use energy. I'm pretty sure that it's possible to make more computers. I'm pretty sure that programs can be copied.

QUOTE
if you say an AI isn't life, then is it murder when you kill one?


No, it's not murder unless the law is rewritten. I can kill as many dogs as I want so long as I own them and I avoid animal cruelty. Killing living things is perfectly legal. Only killing people is illegal.
mfb
QUOTE (SirKodiak)
Basically, your definition of sentience does not allow for a test to determine if something is sentient or not.

i've acknowledged this, and also allowed that if a non-self-aware AI can fool tests designed to test for self-awareness, it should be (and will be) treated as sentient. but the question was whether or not a robot designed to mimic emotions actually has those emotions; the answer is no, if the robot lacks self-awareness, even if a lack of self-awareness can't be proven.

QUOTE (SirKodiak)
I'm pretty sure computers use energy. I'm pretty sure that it's possible to make more computers. I'm pretty sure that programs can be copied.


the computer is not the AI. the AI is information, and information doesn't require energy. you could print out the AI's code and delete all online copies; the AI still exists (albeit in stasis). living beings die without energy; an AI remains viable as long as someone, somewhere, can put its code into a computer and run it.

i'm not sure i'd agree that replication and reproduction are the same thing. the informational nature of AI makes reproduction weird; given that it should easily be possible to collate replicated AIs, you'd end up with a Tachikoma effect. each replicated AI might think it's a separate and unique entity; but collate them and they become simple copies or reflections.
Crusher Bob
Biologic fallacy again...

If I gave you a very complex computer and claimed it was running a self aware algorithm, then revealed that 'computer' to be a human brain... Is the human 'self' then just the pattern of electrical impulses? If an exact duplicate of your brain could be made and an exact 'copy' of the electro-chemical impulses that your brain was running at some given moment, then that would be 'you' but in stasis. Would this mean you are alive as long as this information is encoded somewhere?

Bacteria replicate by roughly cloning themselves; are all genetically identical bacteria then the same being? Identical twins are genetically identical, yet most people have no trouble at all wrapping their heads around the idea that they are separate people.

Assuming you introduced the code for 'AI sex', where some number of AIs combine 'parts' of code to create new AIs (see genetic programming experiments), then you also get around the 'a copy is merely a copy' argument as well. (Or you can just mutate an exact copy slightly.)
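
A rough sketch of that idea, since genetic programming sounds fancier than it is. This is just my own illustration; the function names and the list-of-numbers 'genome' are assumptions of mine, not anything from the books:

CODE
# Toy "AI sex": two parent parameter sets recombine and mutate, so the
# child is genuinely distinct from either parent rather than a straight copy.
import random

def crossover(parent_a, parent_b):
    # take each "gene" from one parent or the other at random
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(genome, rate=0.1):
    # small random perturbations keep children from being exact copies
    return [gene + random.gauss(0, 1) if random.random() < rate else gene
            for gene in genome]

parent_a = [random.random() for _ in range(8)]
parent_b = [random.random() for _ in range(8)]
child = mutate(crossover(parent_a, parent_b))
print(child != parent_a and child != parent_b)  # almost certainly True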
lorthazar
Well, my definition of AI has always been: can it do something it was not specifically programmed to do? If the answer is yes, then it is AI. Emotions, humor, self-awareness, and everything else doesn't count for squat.
SirKodiak
QUOTE
but the question was whether or not a robot designed to mimic emotions actually has those emotions; the answer is no, if the robot lacks self-awareness, even if a lack of self-awareness can't be proven.


How can you tell the difference between something that perfectly mimics emotion and something that has emotion? You keep coming back to the claim that humans are really sentient and really self-aware and really feel emotions because we don't know the mechanism that controls them. Does a definition of these things that depends on obfuscation really make sense? Would God knowing how human fear works in the brain mean that we don't really feel emotions either?

QUOTE
the computer is not the AI. the AI is information, and information doesn't require energy.


Human intelligence is not the body, it is the pattern of electrical and chemical signals stored in the brain. It's just information, and information doesn't require energy.

All you've done is define what the AI is in such a way that it won't fit your definition of sentience.

QUOTE
i'm not sure i'd agree that replication and reproduction are the same thing. the informational nature of AI makes reproduction weird; given that it should easily be possible to collate replicated AIs, you'd end up with a Tachikoma effect. each replicated AI might think it's a separate and unique entity; but collate them and they become simple copies or reflections.


Given the ability to learn, they will diverge as soon as they are copied. Would the technology to make an identical copy of a human being mean that human beings are no longer sentient?

QUOTE
you could print out the AI's code and delete all online copies; the AI still exists (albeit in stasis).


The AI is not just the code, it's the current memory contents as well. In fact, any real AI would most likely be able to modify its own code, so printing out the current code for an AI and doing a memory dump would be identical to mapping the exact physical state of the human brain. Also, if the AI were to run on hardware that can modify itself by building new circuits (technology that we have now), then you can't even just dump the program and memory, you have to examine the current physical structure of the processor. Of course, those self-modifying circuits can be simulated in software, so that's more of a philosophical distinction than a computational one.

Much of this comes down to the philosophical question of whether you think human intelligence is anything more than the result of a special-purpose biological computer. Can humans compute things beyond what a limited-memory Turing machine could? If not, then we can be simulated by any other Turing machine with enough memory. If so, then where does that ability come from? Note that no one has a model of computation that a Turing machine can't simulate.
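
Just to make that last point concrete, a Turing machine is a tiny thing to simulate. Here's a toy simulator; the bit-flipping machine at the bottom is a made-up example of mine, not any claim about how real AI works:

CODE
# Minimal Turing machine simulator: rules map (state, symbol) to
# (new symbol, head move, new state); the machine runs until it halts.
def run_tm(rules, tape, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        if head >= len(tape):
            tape.append(blank)  # extend the tape with blanks as needed
        symbol = tape[head]
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape)

# Example machine: flip every bit, then halt on the first blank cell.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(flip_bits, "10110"))  # prints "01001_"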

QUOTE
Well, my definition of AI has always been: can it do something it was not specifically programmed to do? If the answer is yes, then it is AI. Emotions, humor, self-awareness, and everything else doesn't count for squat.


Back when I ran Windows 98, my computer crashed daily, something it was not specifically programmed to do. So apparently I was reading email on an intelligent being. Makes me feel kinda bad about disassembling parts of it to make my current computer.
lorthazar
QUOTE (SirKodiak)

QUOTE
Well, my definition of AI has always been: can it do something it was not specifically programmed to do? If the answer is yes, then it is AI. Emotions, humor, self-awareness, and everything else doesn't count for squat.


Back when I ran Windows 98, my computer crashed daily, something it was not specifically programmed to do. So apparently I was reading email on an intelligent being. Makes me feel kinda bad about disassembling parts of it to make my current computer.

How do you know it wasn't programmed to crash daily?
Kagetenshi
QUOTE (SirKodiak)
QUOTE
i'm not sure i'd agree that replication and reproduction are the same thing. the informational nature of AI makes reproduction weird; given that it should easily be possible to collate replicated AIs, you'd end up with a Tachikoma effect. each replicated AI might think it's a separate and unique entity; but collate them and they become simple copies or reflections.


Given the ability to learn, they will diverge as soon as they are copied. Would the technology to make an identical copy of a human being mean that human beings are no longer sentient?

Which, on a tangent, is a theme in Ghost in the Shell. All of the fuchikoma (tachikoma in the SAC) are synchronized regularly except for Batou's, and as a result, his fuchikoma begins displaying a clearly distinct personality from the others.

~J
Crusher Bob
I got the impression that Batou's was synched regularly too; that's how the rest of them caught the 'natural oil virus'.
Kagetenshi
Did they, though? The rogue was the only one to run away, and a single fuchikoma preaching revolution or somesuch to a disinterested mass is a recurring theme…

~J
BitBasher
QUOTE (Kagetenshi)
Did they, though? The rogue was the only one to run away, and a single fuchikoma preaching revolution or somesuch to a disinterested mass is a recurring theme…

~J

Because his was the only one to use natural oils, which caused something similar to genetic memory, allowing it to partially escape the nightly data wipes. Organic oil was then banned.
Kagetenshi
But that did not occur in the manga unless I am very much mistaken, while the differentiation of the single fuchikoma did. In the manga, Batou's fuchikoma froze up during a mission (I think it was pursuit of Koil Krasnov, though I can't say for sure) due to the natural oil rather than it forming a sort of additional memory device.

~J
SirKodiak
QUOTE
How do you know it wasn't programmed to crash daily?


Well, to give a more general example, I once wrote a program for a class project that used simulated annealing to schedule classes, and the early versions worked so badly that they scheduled classes based on the completely wrong criteria. Is this artificial intelligence?
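
For anyone curious, the annealing loop itself is only a few lines; the cleverness (or lack thereof, in my early versions) is all in the cost function. This is just an illustrative sketch with a toy objective, not my actual scheduler, and all the names in it are made up:

CODE
# Minimal simulated annealing: always accept improvements, and accept worse
# moves with a probability that shrinks as the temperature cools.
import math, random

def anneal(cost, state, neighbor, temp=10.0, cooling=0.995, steps=5000):
    best = state
    for _ in range(steps):
        candidate = neighbor(state)
        delta = cost(candidate) - cost(state)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            state = candidate
            if cost(state) < cost(best):
                best = state
        temp *= cooling
    return best

cost = lambda x: (x - 3.0) ** 2               # toy objective: minimum at x = 3
neighbor = lambda x: x + random.uniform(-1, 1)
print(round(anneal(cost, 0.0, neighbor), 2))  # converges near 3.0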

I think what you're getting at is that an artificial intelligence should be capable of novel and useful emergent behavior. This is like looking at a stick and realizing that it can be used as a tool without ever having seen someone do that before. What's nice about this quality is that it isn't really based on the human condition: it ignores emotions, wetware versus hardware, and all those issues. The problem is that it can be very hard to test for, but not impossible.

I don't necessarily agree it is the appropriate definition for intelligence, but I haven't yet thought of a reason that it's an unreasonable or unfair one. Is this what you were going for?
This is a "lo-fi" version of our main content. To view the full version with more information, formatting and images, please click here.
Dumpshock Forums © 2001-2012