QUOTE (Smokeskin @ Feb 14 2010, 08:26 AM)

It doesn't follow logically that because someone was wrong about this in the past, everyone is wrong about it now. You're also ignoring all the scientists in the past who said strong AI was a long way off.
To be concrete, the BlueBrain project has clearly shown it can simulate the behavior of neural tissue accurately. By the latter part of the decade, supercomputer processing power should be sufficient to simulate a full human brain.
There's a pretty big difference between past claims and having a working proof of concept that essentially just needs 7-8 more years of Moore's Law to go full scale.
So let's get your logical path cleared up: if people who said in the past that AI would happen soon were wrong, that does not mean that people saying it now are also wrong. In addition, other scientists predicted that it would not happen soon. In addition, "citation of example," "citation of Moore's Law." Therefore, AI will occur soon.
Umm, yeah. Here are the logical implications of Rystefn's comments: futurists are notoriously prone to being wrong, often on a very grand scale, especially because terrifying predictions have a strong tendency to bring their authors an otherwise unachievable modicum of fame. Scientists are also known to misrepresent their capabilities and falsify research for the purpose of securing more grants. As such, the "I'll believe it when I see it" comment is quite reasonable given the indeterminable trustworthiness of any given researcher.
Furthermore, all these "we can simulate..." claims are well and good, and quite possibly a sign of progress toward the AI goal, but we are also continuously discovering new problems and limitations. What really piques my curiosity is how they are going to go about writing the program for human mentality, and what it is actually going to do.
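For what it's worth, the "7-8 more years of Moore's Law" claim is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below is mine, not Smokeskin's: the doubling period, the circa-2010 supercomputer baseline, and the brain-simulation FLOPS figure are all assumptions picked for illustration (published estimates for the last one vary by orders of magnitude).

```python
# Back-of-the-envelope check of the "7-8 more years of Moore's Law" claim.
# Every number here is an assumption chosen for illustration, not an established value.

DOUBLING_PERIOD_YEARS = 1.5    # assumed performance doubling period (~1.5-2 years is typical)
BASELINE_FLOPS = 2e15          # assumed ~2 petaFLOPS for a top supercomputer circa 2010
BRAIN_FLOPS_ESTIMATE = 1e18    # assumed ~1 exaFLOPS for a full-brain simulation (highly uncertain)

for years in (7, 8):
    doublings = years / DOUBLING_PERIOD_YEARS
    multiplier = 2 ** doublings
    projected = BASELINE_FLOPS * multiplier
    verdict = "meets" if projected >= BRAIN_FLOPS_ESTIMATE else "falls short of"
    print(f"{years} years: x{multiplier:.0f} -> {projected:.2e} FLOPS "
          f"({verdict} the {BRAIN_FLOPS_ESTIMATE:.0e} FLOPS estimate)")
```

Under those inputs, 7-8 years of doubling buys a factor of roughly 25-40, which closes the gap only if the brain turns out to need closer to 10^17 FLOPS; swap in a 2-year doubling period or a higher brain estimate and the timeline stretches considerably. The point isn't the specific numbers, it's that the conclusion hinges entirely on which estimates you plug in, which is exactly why "I'll believe it when I see it" is a defensible stance.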