Full Version: Veterans
Dumpshock Forums > Discussion > Shadowrun
Smokeskin
QUOTE (Rystefn @ Feb 14 2010, 06:44 AM) *
In fairness, they've been saying this for about thirty years now. I'll believe it when I see it and not a second before.


It doesn't follow logically at all that someone has been wrong about it in the past, so everyone is wrong about it now. You're also ignoring all the scientists in the past who said strong AI was a long way off.

To be concrete, the BlueBrain project has clearly shown they can simulate the behavior of neural tissue accurately. In the latter part of the decade, supercomputer processing power should be enough to simulate a full human brain.

There's a pretty big difference between past claims and having a working proof of concept that essentially just needs 7-8 more years of Moore's Law to go full scale.
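That "7-8 years" figure is a straightforward Moore's-Law extrapolation. As a rough sanity check (the shortfall factor and doubling period below are illustrative assumptions, not numbers from the post), the required wait is just the base-2 log of the compute gap times the doubling period:

```python
import math

def years_to_scale(shortfall_factor, doubling_period_years=1.5):
    """Years of exponential compute growth needed to close a given
    shortfall, assuming capacity doubles every doubling_period_years."""
    return math.log2(shortfall_factor) * doubling_period_years

# Hypothetical example: if today's supercomputers fall short of a
# full-brain simulation by a factor of ~30, and compute doubles
# every 18 months, the wait is in the 7-8 year range claimed above.
print(round(years_to_scale(30), 1))  # → 7.4
```

With a slower doubling period or a larger shortfall the estimate stretches accordingly, which is exactly where the disagreement in this thread lies.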

overcannon
QUOTE (Smokeskin @ Feb 14 2010, 08:26 AM) *
It doesn't follow logically at all that someone has been wrong about it in the past, so everyone is wrong about it now. You're also ignoring all the scientists in the past who said strong AI was a long way off.

To be concrete, the BlueBrain project has clearly shown they can simulate the behavior of neural tissue accurately. In the latter part of the decade, supercomputer processing power should be enough to simulate a full human brain.

There's a pretty big difference between past claims and having a working proof of concept that essentially just needs 7-8 more years of Moore's Law to go full scale.


So let's get your logical path cleared up. The fact that people who said in the past that AI would happen soon were wrong does not mean that people saying it now are also wrong. In addition, other scientists predicted that it would not happen soon. In addition, "citation of example," "citation of Moore's law." Therefore, AI will occur soon.

Umm, yeah. Here are the logical implications of Rystefn's comments. Futurists are remarkably well known for being wrong, often on a very grand scale, especially because terrifying predictions have a strong tendency to bring their authors an otherwise unachievable modicum of fame. Scientists are also known for misrepresenting their capabilities and falsifying research with the purpose of getting more grants. As such, the "I'll believe it when I see it" comment is quite reasonable considering the indeterminable trustworthiness of any given researcher.

Furthermore, all these "we can simulate..." claims are well and good, and quite possibly a sign of progress toward the AI goal, but we are also continuously discovering new problems and limitations. What really piques my curiosity is how they are going to go about writing the program for human mentality, and what that is actually going to do.
Smokeskin
You don't need to write "a program for human mentality", any more than such a program is found in your neurons and synapses. You just need to model what the neural tissue does. There's no requirement for us to understand how intelligence works - we have a working intelligence, the human brain, and reverse engineering it is enough.

However you look at this, you need to come to terms with the fact that we can understand and model how neural tissue functions and is connected, and we're approaching the point where supercomputers have the processing power to model the entire brain. This is drastically different from the past, where successfully building an AI would depend on doing something A LOT smarter than the human brain does it.
Dumpshock Forums © 2001-2012