> Someone's claimed that they've created an SK, not a true AI... yet.
SpasticTeapot
post Dec 6 2005, 05:18 AM
Post #1


Moving Target
**

Group: Members
Posts: 560
Joined: 21-December 04
Member No.: 6,893



http://biz.yahoo.com/prnews/051202/clf017.html?.v=33

Cool, no?
Drace
post Dec 6 2005, 06:49 AM
Post #2


Moving Target
**

Group: Members
Posts: 504
Joined: 8-November 05
From: North Vancouver, BC
Member No.: 7,936



It's basically a very advanced know-bot/expert system. Some hospitals use something much akin to this, known as E.L.I.Z.A., which is a medical diagnostic bot.
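
For a sense of how little machinery sits behind an ELIZA-style keyword bot, here is a minimal sketch in Python; the rules and responses are invented for illustration (the original ELIZA script was far larger):

```python
import re

# ELIZA-style responder: match keyword patterns with regular expressions
# and reflect a fragment of the user's input back in a canned template.
# These rules are a tiny invented sample, not the real ELIZA script.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.+)", re.I), "Is that the real reason?"),
]
DEFAULT = "Please tell me more."

def respond(text: str) -> str:
    """Return the first matching rule's response, or a stock fallback."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT
```

The illusion of understanding comes entirely from the echoed fragment; there is no model of the conversation at all.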

The closest thing to a true AI is still only a hypothetical quantum computer that would use trinary rather than binary (the AI from Stealth was based on this hypothetical computer), but even then, it would still only be semi-autonomous, for it would still need input instructions; it wouldn't work of its own accord.
Chibu
post Dec 6 2005, 07:11 AM
Post #3


Moving Target
**

Group: Members
Posts: 494
Joined: 19-February 05
From: Amazonia
Member No.: 7,102



Even if, eventually, you didn't need more input, it would still use algorithms to determine the best course of action.

The reason I (as a computer programmer) don't think that an AI is possible is that for it to be a full AI, it has to be able to make mistakes. Humans do use a lot of information-based input to produce an output, but, with current computers, it's not really possible. What defines sentience, in my oh-so-humble opinion, is the ability to invent and to create beauty. A computer cannot create something new; it can, however, take what is already invented and put it into the most efficient pattern to create something new.

To create an Intelligence, it would have to be able to do the following things (well, for me to recognize it as such, anyway):

#1. Create Beauty: Any form is alright. Take programming. When asked "Why did you choose that piece of code?" I have answered, "Because it makes it beautiful. It's inventive. It makes the code LOOK good." It's not always "because it is the most efficient way to do it."

#2. Infer: To be able to come up with an answer without a complete set of data. Such as, when reading a new word, I sometimes understand it because of the way it's used. Though this applies to all situations.

#3. Make Mistakes: Because sometimes it makes sense for two plus two to equal five.

#4. Emotion: I believe that emotion and sentience go hand in hand. If you can't be sorry for something, you are not sentient. If you can't worry about anything, you're probably not sentient.

#5. Self-Awareness: You have to know that you exist.

#6. Belief: You have to be able to believe in something that you don't know. For instance, I believe in a God that created everything, and I believe that my girlfriend is not cheating on me. There is no proof of these, because they are things that are not possible to prove. Nevertheless, I still know them to be true.

I think that's all. I don't really know what I started talking about, but that's not important. (Maybe I should add "Ability to ramble" to that list.)

But an SK is still pretty cool :P I'm glad that tech is progressing as fast as it is.

(EDIT: Wow, I just read that, and I'm not sure what train of thought led me to ramble about how AI can't exist. But it pretty much proves that I'm sentient :P)
Oracle
post Dec 6 2005, 07:20 AM
Post #4


Moving Target
**

Group: Members
Posts: 934
Joined: 26-August 05
From: Earth - Europe - AGS - Norddeutscher Bund - Hannover
Member No.: 7,624



According to your definition, I am not an intelligence / alive. :(
Arethusa
post Dec 6 2005, 07:55 AM
Post #5


Runner
******

Group: Members
Posts: 2,901
Joined: 19-June 03
Member No.: 4,775



I'm not even going to touch belief. Suffice it to say I am agnostic, and apparently therefore not alive and/or intelligent.

QUOTE (Chibu)
#4. Emotion: I believe that emotion and sentience go hand in hand. If you can't be sorry for something, you are not sentient. If you can't worry about anything, you're probably not sentient.

You would consider a sociopath, then, to be not alive/intelligent?
Critias
post Dec 6 2005, 08:02 AM
Post #6


Freelance Elf
*********

Group: Dumpshocked
Posts: 7,324
Joined: 30-September 04
From: Texas
Member No.: 6,714



That wasn't me who said that. I'm not sure why your quote-thingie says Critias.
Arethusa
post Dec 6 2005, 08:27 AM
Post #7


Runner
******

Group: Members
Posts: 2,901
Joined: 19-June 03
Member No.: 4,775



Crap, sorry about that. I'm not sure either.
Critias
post Dec 6 2005, 08:31 AM
Post #8


Freelance Elf
*********

Group: Dumpshocked
Posts: 7,324
Joined: 30-September 04
From: Texas
Member No.: 6,714



I mean, it's no big deal, or anything. I wasn't offended, or feeling as though I was being illegally misrepresented, or whatnot. Just confused.
Arethusa
post Dec 6 2005, 08:37 AM
Post #9


Runner
******

Group: Members
Posts: 2,901
Joined: 19-June 03
Member No.: 4,775



It was most likely a combination of extreme sleep deprivation, distraction, misreading Chibu, and then forgetting to correct it afterwards.

I guess this means you have to kill me now in accordance with forum law.
Chibu
post Dec 6 2005, 09:26 AM
Post #10


Moving Target
**

Group: Members
Posts: 494
Joined: 19-February 05
From: Amazonia
Member No.: 7,102



Belief doesn't have to be in a god, like I said in the example about the girlfriend.

And, yes, I've since realized that emotion is not necessarily a defining term. That is more of a list of "if someone has these things, then they are sentient"; otherwise, it's up to you to determine. And, also, with belief comes disbelief. So, in questioning me, you prove your sentience. :P
Arethusa
post Dec 6 2005, 09:41 AM
Post #11


Runner
******

Group: Members
Posts: 2,901
Joined: 19-June 03
Member No.: 4,775



Yeah, but agnosticism isn't necessarily limited to beliefs about religion. One can take an agnostic stance on all of reality, at which point, it is quite broad enough to disqualify that criterion. Hell, if we've created an artificial intelligence that doubts its own existence and questions the nature of reality, I'm damn well ready to call it truly intelligent.
Adarael
post Dec 6 2005, 09:44 AM
Post #12


Deus Absconditus
******

Group: Dumpshocked
Posts: 2,742
Joined: 1-September 03
From: Downtown Seattle, UCAS
Member No.: 5,566



By definition, agnosticism taken to an extreme constitutes a belief. Or even not to an extreme. Even if that agnosticism says, "There's a buncha crap out there that I'm never going to know about," it constitutes a general faith in the structures of your life and a willingness to act on the belief that they are consistent, *yet* by virtue of adaptability, not totally shut down when a pattern changes.

If you're so agnostic you care about nothing, that's just an apathy problem.
Oracle
post Dec 6 2005, 09:45 AM
Post #13


Moving Target
**

Group: Members
Posts: 934
Joined: 26-August 05
From: Earth - Europe - AGS - Norddeutscher Bund - Hannover
Member No.: 7,624



QUOTE (Chibu)
Belief doesn't have to be in a god, like I said in the example about the girlfriend.

That example is an assumption made on the basis of the facts that you know. It has nothing to do with belief. If you had several facts indicating that your girlfriend was cheating on you and you still trusted her not to, then it would be belief. It is a better example for #2 than for #6.
hobgoblin
post Dec 6 2005, 10:11 AM
Post #14


panda!
**********

Group: Members
Posts: 10,331
Joined: 8-March 02
From: north of central europe
Member No.: 2,242



my main requirement for an AI is for the program to be able to add new code to itself to handle a new situation. to do that, it needs to be able to evaluate what is required of it in reference to the actions attempted.

the rest is just social requirements for a "normal" person. an AI is by definition not a normal person...

if we ever manage to create a true AI it will be cold and calculating. and maybe even easy to confuse, like, say, HAL was.

so let me take those points apart:

1. beauty is in the eye of the beholder. basically it's not a universal constant. beauty is more about reproduction than anything else. therefore an AI may well find a string of binary beautiful, as it represents a set of perfect code :P

2. this can happen. what you need is a large/complex enough neural pattern, the ability to run a simulation on top of said neural pattern where you basically try to randomly fill in the blanks, and, in the end, the ability to save said simulations and present them as theories. the big thing is that it needs to be able to do the theorize, compare, evaluate loop without the need for human interaction. creating a theory is simple; comparing it against the known facts is a different story. evaluating how correct the theory is compared to the real deal is the hardest of all.

3. mistakes happen. already today a computer can make mistakes (the "do what i wanted you to do, not what i told you to do" kind of mistake). the trick isn't making mistakes but learning that a mistake is not a useful way of doing what one is trying to do, and remembering that. kids learn this way from an early age by putting shaped blocks into similarly shaped holes.

4. emotions are just levels of hormones in our brain. a computer can nicely simulate that by increasing or decreasing some values based on different input. so a computer can have emotions, but they don't have to be similar to what we see as emotions.
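
The "emotions as adjusted values" idea in point 4 can be sketched directly; the stimuli, emotion names, and weights below are invented for illustration:

```python
# Toy "hormone level" emotions: each stimulus nudges named state values,
# which are clamped to [0, 1]. Stimuli and weights are invented examples.
EFFECTS = {
    "praise": {"contentment": +0.2, "stress": -0.1},
    "threat": {"contentment": -0.1, "stress": +0.3},
}

class Mood:
    def __init__(self):
        self.state = {"contentment": 0.5, "stress": 0.0}

    def feel(self, stimulus: str) -> None:
        """Apply a stimulus's effects; unknown stimuli change nothing."""
        for emotion, delta in EFFECTS.get(stimulus, {}).items():
            self.state[emotion] = min(1.0, max(0.0, self.state[emotion] + delta))
```

Whether such values count as real emotions is exactly the philosophical question in this thread; mechanically, though, this is all that "increasing or decreasing some values based on different input" requires.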

5. this is the big trick, isn't it? what does it really mean to be self-aware? to be able to look at one limb and say "this is me," look at another being's limb and say "this is him or her; we are two different entities"? if so, this will be damned hard to do for an intelligence with no extremities.

6. the moment an AI starts believing in stuff is the moment i turn it off. believing without questioning is what has fueled the worst actions on this planet so far. belief can be a dangerous thing.
nick012000
post Dec 6 2005, 10:23 AM
Post #15


Running Target
***

Group: Members
Posts: 1,283
Joined: 17-May 05
Member No.: 7,398



QUOTE (hobgoblin)
...
6. the moment an AI starts believing in stuff is the moment i turn it off. believing without questioning is what has fueled the worst actions on this planet so far. belief can be a dangerous thing.

Murderer! :P
Critias
post Dec 6 2005, 10:23 AM
Post #16


Freelance Elf
*********

Group: Dumpshocked
Posts: 7,324
Joined: 30-September 04
From: Texas
Member No.: 6,714



Personally, I don't know why we weak, fleshy humans are still trying to make an AI. All it's going to do is find us obsolete, take over the electronics we use to survive in our day-to-day lives, and kill and/or enslave us.

If a bunch of pencil-necked programmer/scientist geeks really, really need to feel like a god by creating a new intelligent life form, they can hump 'til someone gets pregnant, like the rest of us poor schlubs who want to steal a sliver of immortality by causing a thinking creature to come into being and calling it "ours." There's no need to doom us all to extinction or a life of servitude to our robotic masters over the whole thing, when "insert tab A into slot B, repeat" has been working for people forever and ever.
Sicarius
post Dec 6 2005, 10:26 AM
Post #17


Moving Target
**

Group: Members
Posts: 908
Joined: 31-March 05
From: Georgia
Member No.: 7,270



We believe in nothing Lebowski... NOTHING!
Crisp
post Dec 6 2005, 10:45 AM
Post #18


Target
*

Group: Members
Posts: 62
Joined: 26-September 02
Member No.: 3,323



QUOTE (Sicarius)
We believe in nothing Lebowski... NOTHING!


Hahahahah!

Why do I suddenly have this image in my head of a bowling-ball-wielding troll adept?

Thank you, Sicarius, for that hilarious mental picture.

[lurk mode on]
nick012000
post Dec 6 2005, 10:58 AM
Post #19


Running Target
***

Group: Members
Posts: 1,283
Joined: 17-May 05
Member No.: 7,398



QUOTE (Critias)
Personally, I don't know why we weak, fleshy, humans, are still trying to make an AI. All it's going to do is find us obsolete, take over the electronics we use to survive in our day-to-day lives, and kill and/or enslave us.

...

Why do you say that? We don't get rid of our parents once they retire just because they're weak and obsolete. What makes you think robots would do anything differently?
Oracle
post Dec 6 2005, 11:04 AM
Post #20


Moving Target
**

Group: Members
Posts: 934
Joined: 26-August 05
From: Earth - Europe - AGS - Norddeutscher Bund - Hannover
Member No.: 7,624



There is an interesting analogy in there. Let us assume for a moment that we were created by a powerful entity, whatever the name of that entity may be. Today a very large part of society no longer believes in creationism. And they do not believe in the existence of a creator.

Possibly, some day in the far future, long after the last human has died in the robot wars, the AI species created by human scientists will stop believing in our existence. There has never been anything like a creator.

Has any sci-fi author ever written about something like that?

I should add that I am not religious.
hobgoblin
post Dec 6 2005, 02:04 PM
Post #21


panda!
**********

Group: Members
Posts: 10,331
Joined: 8-March 02
From: north of central europe
Member No.: 2,242



QUOTE (nick012000 @ Dec 6 2005, 11:23 AM)
QUOTE (hobgoblin @ Dec 6 2005, 05:11 AM)
...
6. the moment an AI starts believing in stuff is the moment i turn it off. believing without questioning is what has fueled the worst actions on this planet so far. belief can be a dangerous thing.

Murderer! :P

maybe so, but better that i kill it than it kills me :smokin:

still, i kind of recall a story about a research system where people would sit down and chat with the computer via a keyboard. over time it learned how to talk back by analyzing the way words were strung together and so on.

point is that one time, when one of the people doing the chatting got annoyed by the replies, he typed something like: i'm going to turn you off now, i don't like the way you're talking to me.

the reply from the machine was: how would you like it if every time you annoyed someone, they turned you off?

stuff like that makes one wonder...
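
A learn-by-chatting system like the one in that story can be approximated with a simple chain that records which word follows which in everything it is told, then walks those transitions to reply. Everything here, including the training line, is a toy illustration, not the actual research system:

```python
import random
from collections import defaultdict

class ChatterBot:
    """Toy Markov-style chatterbot that learns word-to-word transitions."""

    def __init__(self, seed=None):
        self.next_words = defaultdict(list)  # word -> observed successors
        self.rng = random.Random(seed)

    def learn(self, sentence: str) -> None:
        """Record each adjacent word pair in the sentence."""
        words = sentence.lower().split()
        for current, following in zip(words, words[1:]):
            self.next_words[current].append(following)

    def reply(self, start: str, max_len: int = 10) -> str:
        """Walk learned transitions from a starting word."""
        word = start.lower()
        out = [word]
        while len(out) < max_len and self.next_words[word]:
            word = self.rng.choice(self.next_words[word])
            out.append(word)
        return " ".join(out)
```

Fed enough chat logs, a bot like this produces eerily apt lines purely by recombination, which is probably all the "how would you like it" reply was.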

some bayesian algorithms, some neural nets. the only thing missing is the ability to fully rewrite its own code...

then things start to become interesting.

rather than requiring a driver, you would just insert the new hardware and the os would learn how to talk to it on its own. it may take a bit more than just installing the driver, though...

edit:

btw, can one really kill an AI by turning off the machine it's running on?
would it not be more similar to being put into a coma?

a person can die basically because the moment his brain no longer gets oxygen, the cells start to die. when a big enough percentage of those are dead, there is no way to recover the personality.

however, a computer can already store its memory onto a hard drive. should an AI not be able to do just that? or at least the long-term memory it's done processing?

so at worst, what the AI will lose on a shutdown is its short-term memory. how much that will be is hard to say, but i'm guessing it should not be much more than a few hours' worth...

murder would become much less of an issue if our brains could be backed up ;)
hyzmarca
post Dec 6 2005, 02:17 PM
Post #22


Midnight Toker
**********

Group: Members
Posts: 7,686
Joined: 4-July 04
From: Zombie Drop Bear Santa's Workshop
Member No.: 6,456



QUOTE (hobgoblin)
...
murder would become much less of an issue if our brains could be backed up ;)

The question of backing up brains becomes much more murky if someone activates your stored personality while you are still alive.
hobgoblin
post Dec 6 2005, 02:22 PM
Post #23


panda!
**********

Group: Members
Posts: 10,331
Joined: 8-March 02
From: north of central europe
Member No.: 2,242



true, it would be something like the worst case of split personality known :silly:

however, this would either require a brain "emulator" running on some machine, which is not very practical, or uploading the backup to a clone.

in either case, one starts to reevaluate the value of life...
ShadowDragon8685
post Dec 6 2005, 04:22 PM
Post #24


Horror
*********

Group: Members
Posts: 5,322
Joined: 15-June 05
From: BumFuck, New Jersey
Member No.: 7,445



QUOTE
Why do you say that? We don't get rid of our parents once they retire just because they're weak and obsolete. What makes you think robots would do anything differently?


No... But we DO have a tendency to shove them in a nursing home and pretend they don't exist except for their birthdays and Christmas.

And we also have a tendency to heave our old computers out, usually accompanied by the sound of a loogie being hocked in the direction of the garbage cans...
Kyuhan
post Dec 6 2005, 04:39 PM
Post #25


Moving Target
**

Group: Members
Posts: 276
Joined: 4-September 04
Member No.: 6,628



QUOTE
btw, can one really kill an AI by turning off the machine it's running on?
would it not be more similar to being put into a coma?

a person can die basically because the moment his brain no longer gets oxygen, the cells start to die. when a big enough percentage of those are dead, there is no way to recover the personality.

however, a computer can already store its memory onto a hard drive. should an AI not be able to do just that? or at least the long-term memory it's done processing?
Haha, just like Brainiac.

QUOTE
murder would become much less of an issue if our brains could be backed up
The backup still wouldn't be you. It'd think it was you; it could do an amazing impression of you, even down to the molecular memory-encoding level, but it wouldn't BE you. Some people are okay with that. Not me, though.


Topps, Inc has sole ownership of the names, logo, artwork, marks, photographs, sounds, audio, video and/or any proprietary material used in connection with the game Shadowrun. Topps, Inc has granted permission to the Dumpshock Forums to use such names, logos, artwork, marks and/or any proprietary materials for promotional and informational purposes on its website but does not endorse, and is not affiliated with the Dumpshock Forums in any official capacity whatsoever.