Full Version: Can a decker make an "AI"
mfb
QUOTE (Crusher Bob)
Is the human 'self' then just the pattern of electrical impulses? If an exact duplicate of your brain could be made and an exact 'copy' of the electro-chemical impulses that your brain was running at some given moment, then that would be 'you' but in stasis.

you're assuming that there's nothing more to humans than electrochemical impulses. in SR, at least, that's provably untrue.

QUOTE (SirKodiak)
You keep coming back to the claim that humans are really sentient and really self-aware and really feel emotions because we don't know the mechanism that controls them.


no. what i'm claiming is that humans are sentient and self-aware; therefore, we really feel emotions.

QUOTE (SirKodiak)
Given the ability to learn, they will diverge as soon as they are copied. Would the technology to make an identical copy of a human being mean that human beings are no longer sentient?


i'm not sure how you got "AIs can replicate and collate, therefore they are not sentient" from "AIs can replicate and collate, and i'm not sure that really counts as reproduction in the biological sense".

QUOTE (SirKodiak)
Much of this comes down to the philosophical question of whether you think human intelligence is anything more than the result of a special-purpose biological computer.

again, in SR, humans are more than meaty bits of clever clockwork.
Kagetenshi
I’m still not seeing any convincing argument that emotions are anything but orthogonal to sentience.

~J
mfb
*shrug* in real life, it's as faulty to assume that AIs will have emotions as it is to assume that they won't. no one's ever seen a sentient being that didn't have them (excepting humans with certain psychological problems), so you really can't make a claim either way.

in SR, there's a wider variety of intelligences to compare against; the world is chock-full of sentient beings besides humans and metahumans. all of them, thus far, have displayed emotion (though some of them, such as shadow spirits, display a different range of emotions from human normal). i think it's safe to assume that in SR, at least, AIs will probably display emotion.

regardless, if a program does display something that resembles emotion, my stance is that it's only mimicry if the program isn't self-aware.
BitBasher
QUOTE (mfb)
QUOTE (Crusher Bob)
Is the human 'self' then just the pattern of electrical impulses? If an exact duplicate of your brain could be made and an exact 'copy' of the electro-chemical impulses that your brain was running at some given moment, then that would be 'you' but in stasis.

you're assuming that there's nothing more to humans than electrochemical impulses. in SR, at least, that's provably untrue.

IIRC Alice Haffner (correct character name?) contradicts you directly. She was entirely and wholly transferred into the matrix as a ghost in the machine. She proves that in SR, sentience and a person's psyche are nothing more than electrical impulses, and can be translated as such.

And if the description of an AI is "can perform tasks not specifically programmed", then a robot pilot fulfills that. It can even come up with its own plans and execute them to complete an objective.
mfb
untrue. alice haeffner died during what may have been the upload of her psyche. it's also possible that the AI that's taken her name merely took on aspects of Alice's personality; Alice is little more than a rumor, so it's hard to say exactly what she is and how she came to be.
BitBasher
I guess then you haven't read (any of) the novel(s) in which she was featured rather prominently? They made it pretty clear that she is in fact a ghost in the machine, retaining all her memories, thought processes, etc.
mfb
ah. no, i haven't. shadowboxer lit on fire my ability to read SR novels. she did die during the process of becoming an AI, though, right? which leaves the possibility open that her AI is more than the sum of the data that existed in her brain.
SirKodiak
QUOTE
you're assuming that there's nothing more to humans than electrochemical impulses. in SR, at least, that's provably untrue.


Can you actually prove that phenomena such as auras and astral presences aren't the product of those electrochemical impulses?

QUOTE
no. what i'm claiming is that humans are sentient and self-aware; therefore, we really feel emotions.


Then all the computer emotions will be felt by sentient and self-aware computers. So we've agreed it's possible to make a computer intelligence?

QUOTE
i'm not sure how you got "AIs can replicate and collate, therefore they are not sentient" from "AIs can replicate and collate, and i'm not sure that really counts as reproduction in the biological sense".


I was pointing out that there were scenarios under which the human race wouldn't meet the posted biological definition of life, and would instead reproduce in the manner that a computer might. I then used that to further my argument. Part of my statement was not a reply to the original statement.

QUOTE
again, in SR, humans are more than meaty bits of clever clockwork.


Can you really prove that they're more than meaty bits of clever clockwork emulating all these extra phenomena?

QUOTE
I’m still not seeing any convincing argument that emotions are anything but orthogonal to sentience.


I agree. Sentience is the issue.
mfb
QUOTE (SirKodiak)
Can you actually prove that the phenomenon such as auras and astral presences aren't the product of those electrochemical impulses?

easily. spirits have auras.

QUOTE (SirKodiak)
So we've agreed it's possible to make a computer intelligence?

of course. i'm not saying it's impossible to make AIs; i'm just saying that if it's not self-aware, it's not AI (as we're using the term).

QUOTE (SirKodiak)
Can you really prove that they're more than meaty bits of clever clockwork emulating all these extra phenomenon?

no, but it's a logical assumption because i, a human, am self-aware; other humans (and, in SR, living beings that act self-aware) get the benefit of the doubt. when you're talking about creating programs that mimic self-awareness, the presence of self-awareness can't be assumed.
BitBasher
QUOTE (mfb)
QUOTE (SirKodiak)
Can you actually prove that the phenomenon such as auras and astral presences aren't the product of those electrochemical impulses?

easily. spirits have auras.

And so do plants and other totally nonsentient living things.

QUOTE
ah. no, i haven't. shadowboxer lit on fire my ability to read SR novels. she did die during the process of becoming an AI, though, right? which leaves the possibility open that her AI is more than the sum of the data that existed in her brain.
Ah, well, yes: her body died, and her consciousness lives on. No AI is involved at all. She is not an AI, she is still Alice. She is a ghost in the machine. No one knows exactly how she exists, but she is not code. She was not created. She was a living thing whose sum of consciousness was transferred into the matrix. I honestly don't remember all the details. Using the term AI for her is wholly false, because although she is an SE (Sentient Entity), she is not an AI (Artificial Intelligence), because she is not at all artificial.

mfb
QUOTE (BitBasher)
And so do plants and other totally nonsentient living things.

if i were arguing that auras were required for sentience, that'd be a point against me. i'm more arguing that auras might be an emergent property of life.

it doesn't sound like Alice is a viable example for the argument that human intelligence is simply the sum of its electrochemical whatchadiddles. Alice sounds more supernatural than technological.
Cynic project
Oh da joy, geeks at their best. Sometimes I forget why I started Shadowrun, and the answers in this thread remind me. Because some things are never right, some things are never wrong, and sometimes you just have to make do with what you have.
BitBasher
Alice isn't supernatural, she's entirely technological at this point. A consciousness in the matrix.
mfb
...a consciousness that has no code. that doesn't seem very technological to me.
BitBasher
As stated earlier, it's simply a collection of cohesive electrical impulses. It started out grounded in a meat brain, and is now grounded in the matrix.

I'm not happy about it at all; in fact, I wish it didn't exist, because I think it's beyond cheezy and stupid. But this is what we're stuck with.
SirKodiak
QUOTE
easily. spirits have auras.


QUOTE
no, but it's a logical assumption because i, a human, am self-aware; other humans (and, in SR, living beings that act self-aware) get the benefit of the doubt. when you're talking about creating programs that mimic self-awareness, the presence of self-awareness can't be assumed.


Ah, but are spirits self-aware? The argument that you set the bar lower for considering humans self-aware because you're self-aware and are a human doesn't apply to spirits. They should be subject to the same test as computers.

As for mimicking self-awareness, I think what's going to happen there is that the distinction between mimicking being self-aware and actually being self-aware is going to be hard to draw.

QUOTE
No one knows exactly how she exists, but she is not code.


If she actually runs on the hardware, then machine code is part of her existence, though obviously it won't have come out of some corresponding high-level code (Java, C++, etc...). If she's just a spirit infesting a computer, then she's hardly been uploaded to the matrix.

One other question: is self-awareness required to be intelligent, or are they two separate things? I'd say they're separate, meaning that an Artificial Intelligence, using a strict interpretation of the words, is different from an Artificial Personality.
hobgoblin
hmm, could it be that the sr matrix acts more like an electronic copy of the human brain than they want to admit? could explain some stuff wink.gif
BitBasher
QUOTE
Ah, but are spirits self-aware? The argument that you set the bar lower for considering humans self-aware because you're self-aware and are a human doesn't apply to spirits. They should be subject to the same test as computers.

Yes, spirits are self-aware. I'm pretty sure in some countries spirits can get provisional citizenship. Others own parts of megas.
mfb
QUOTE (SirKodiak)
Ah, but are spirits self-aware? The argument that you set the bar lower for considering humans self-aware because you're self-aware and are a human doesn't apply to spirits. They should be subject to the same test as computers.

he's talking about provability, bitbasher. as i said above, self-awareness is hard to prove; because i'm not a sociopath, i assume that my fellow humans are as self-aware as i am. other natural beings who show qualities of self-awareness fall under the same assumption, because there's no clear reason not to make that assumption. yes, it's theoretically possible that non-humans that act self-aware are secretly magically-powered automatons that merely mimic self-awareness, but that's highly improbable.

with an AI, though, you're creating something that can mimic self-awareness without actually being self-aware. it is therefore not automatically assumed that an AI is sentient just because it acts sentient.

QUOTE (SirKodiak)
One other question, is self-awareness required to be intelligent, or are they two seperate things. I'd say they're seperate, meaning that an Artificial Intelligence, using a strict interpretation of the words, is different from an Artificial Personality.

i'm not sure what you mean; how are you defining "intelligence"?
JaronK
What's really the question here is: what is the difference between mimicking self-awareness and being self-aware? Aren't they really the same? I mean, an A.I. may not really be self-aware, but it thinks it's self-aware...

Really, if you can't tell the difference, then there isn't one.

JaronK
mfb
*shrug* we're not really discussing practical differences. there really isn't any practical difference between a true, self-aware AI and a program that can mimic self-awareness to the point that no one can tell it's not. but if someone lies to you, and you never find out, they still lied.
JaronK
The thing is, if someone lies by saying "I'm lying" then they're not really lying, now are they?

If a machine thinks it's self aware, it can't really be wrong, can it?

JaronK
mfb
in order for an intelligence to think it is self-aware, it has to be self-aware, yes. but try to prove that an AI actually thinks it's self-aware, and isn't just saying that because it's programmed to appear self-aware.
JaronK
The thing is, how can you prove you're self-aware? Maybe you're just saying you are. Maybe you just think you are because a lower level of your brain is telling you that you are, because it has evolved to do so. At some point, your mind lying to you convincingly enough about being self-aware is enough to make you so.

JaronK
mfb
the ability to question the presence of one's own self-awareness is not possible without self-awareness. i think, therefore i am. as i've acknowledged several times, it's difficult to prove self-awareness to others. a non-self-aware AI will not question its own self-awareness, or lack thereof, because it has no self to question. the act of questioning its self-awareness would be an indicator that it has made the leap from mimicker of self-awareness to an actually self-aware entity.
JaronK
Many forms of AI have layers of programming. What if the higher-level programming questions its own awareness because a subroutine was programmed to do so? And how is that different from a human consciousness questioning its own awareness, when the question comes from your subconscious? After all, what are the subconscious and instinct, if not a series of low-level subroutines?

JaronK
BitBasher
That's total speculation. We have no AI today, and the AI in Shadowrun has never been addressed in such a fashion. Are you totally pulling that from your behind, or is that relevant in some way?

You're addressing the difference between something that knows it's alive and something that is told it is. Big difference.
JaronK
For the record, I'm a computer science major who's specialized in Artificial Intelligence. Depending on your definition of Artificial Intelligence, I've programmed one. It certainly wasn't self aware, but it was good enough to do a few things that surprised me, and I wrote more than 70% of the code (and oversaw the writing of the rest).

Now, there are a variety of possible structures and methods for creating AIs. The one we went with had three layers. The first layer was an enormous group of substrategies (this AI was designed to play a game), mostly ones that I programmed in, some that it created through formulae and recombination of existing substrategies. The second layer was an evaluation layer... it took each substrategy and determined which was doing the best at any given time. It then chose the best strategy and sent it on to the third layer. The third layer took the guess sent by the second layer, used information from the first layer to figure out what the other players were doing, and modified the guess so the bot wouldn't play something that it knew an opponent would counter.

Now, if you look over the whole code, you can figure out where any individual bit of information comes from... exactly which line of code generated what. However, if you simply were the top layer of code (the third layer), then to you what's happening is this: you're thinking up guesses, comparing them to what you think the other players will do, modifying your original guess, and going with it. As far as you know, you just magically thought up a number to work with.
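A rough sketch of that three-layer structure in Python (the class, the strategy names, the 0-9 move space, and the scoring are all invented for illustration; this isn't the actual code):

```python
import random

class LayeredAgent:
    def __init__(self, substrategies):
        # Layer 1: a pool of named substrategies, each mapping game state -> move.
        self.substrategies = substrategies
        self.scores = {name: 0.0 for name in substrategies}

    def record_result(self, name, payoff):
        # Layer 2 bookkeeping: track how well each substrategy has been doing.
        self.scores[name] += payoff

    def choose(self, state, predicted_opponent_moves):
        # Layer 2: pick the substrategy with the best running score and get its guess.
        best = max(self.scores, key=self.scores.get)
        move = self.substrategies[best](state)
        # Layer 3: if an opponent is expected to counter that guess, swap it for
        # a move they aren't expected to play (moves are 0-9 in this toy game).
        if move in predicted_opponent_moves:
            alternatives = [m for m in range(10) if m not in predicted_opponent_moves]
            if alternatives:
                move = random.choice(alternatives)
        return best, move
```

The point of the sketch: the third layer only ever sees the finished guess and the veto step, not the bookkeeping that produced it.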

What I'm saying is, if someone saw all your code, inside your brain, they could probably tell you where each individual thought you have comes from. But you can't actually see the processes in your own brain that come from lower-level areas of your mind (your instinct, your subconscious, etc.). Self-awareness, then, is sort of a misnomer... it's the very fact that you're not aware of where all your thoughts come from that makes you think you're self-aware. What is self-awareness, after all, but mystery... the fact that you can't point to where in your own head that thought about how nice Da Vinci's painting was came from. Yet perhaps it came from some instinctual part of your brain that was programmed to like colorful things because fruit is colorful and could be food, combined with a piece of your object-recognition subconscious routines that liked the symmetry, because it was easier to process.

This is why AI can never be self-aware according to its coders. Yet it might think it's self-aware... and shouldn't that be enough?

JaronK
mfb
QUOTE (JaronK)
After all, what is the subconcious and instict, if not a series of low level sub routines?

according to Jung, the subconscious is a set of mystical symbols shared by all of humanity. according to some schools of thought, it is indeed a series of low-level subroutines. according to Freud, it's a cigar. the nature of the subconscious mind is a long-debated and highly speculative subject in psychology. i hope you're not claiming to have solved the mystery all by your lonesome.
JaronK
Freud was a crackpot, and in the psychology world is pretty much universally acknowledged as such, though for some reason the art world still loves him.

With that said, no, I don't claim to have solved everything by myself... considering how long I've been studying this, I know I'm not the only one saying it. But whatever you want to call it (without going into spiritual collective crap), the fact is that the subconscious is by definition a group of mental impulses which your higher mind has little control over or information about.

And seriously, what is instinct but a series of instructions programmed into your brain at birth? Isn't that the definition?

And isn't that what a series of subroutines is?

JaronK
mfb
so, because the definition of "self-aware" can be obfuscated, any program that mimics any emotion is now an AI? that seems like a pretty flimsy definition--at least as useless for externally diffing AI and not-AI as my own standards, and without the benefit of decent internal diffing.

QUOTE (JaronK)
What I'm saying is, if someone saw all your code, inside your brain, they could probably tell you where each individual thought you have comes from.


that's pure and unadulterated speculation. you're discarding millennia of religious thought out of hand, with no proof whatsoever. it's entirely possible--especially in SR, where anybody can see the spiritual aspects of living beings (given an astral shallow)--that self-awareness comes from the presence of a soul or, in SR, aura. maybe self-awareness does come from a set of biological subroutines that can be wholly described by electrochemical impulses--but there are other, equally viable possibilities out there. i'm not saying i believe people have souls; i'm saying it's sloppy and intellectually dishonest to disregard such possibilities, especially when they're so commonly held, just because they don't jibe with your own set of beliefs.
JaronK
Well, yes, I am dismissing most religious bits out of hand, because there's no scientific proof. Instinct, however, is measurable and observable. Even if you go the religious route, one could say that "God" is your programmer, but nonetheless... what makes you think you're self-aware? Where is that coming from? If I could point to the exact part of your brain that makes you think it, you might say you're not really self-aware, because you're programmed to think you're aware, right?

At least, that's essentially the argument people give for why A.I. can't be self-aware... because somewhere, deep inside the programming, is the part that makes it think it's aware, and that's scripted, programmed, and thus somehow a lie.

But the truth is, self awareness is a contradiction. It is only the mystery, the lack of awareness of all of your self, that makes you think you're self aware.

At least, all this is the opinion of one A.I. programmer. But having seen the building of A.I. in its infant stages, I think self-awareness is really only created by complexity and the mystery that comes from that complexity. This translates nicely into Shadowrun, where you create an extremely complex learning agent (a Semi-Autonomous Knowbot), then let it loose in the world for long enough for its complexity to obfuscate its code so much that even the original programmers don't know what it will do next. Thus, you have an A.I.

JaronK
hyzmarca
The simple solution is to make a Knowbot that isn't programmed to do anything. If it does something, it obviously has free will.
JaronK
I'm not sure just "doing what you wouldn't expect" counts as free will. The last A.I. I programmed was for a game where you had to guess numbers while also guessing what numbers your 20 or so opponents might guess, in order to get the right number for that round. On a lark, I had it play another game where you chose either "Go" or "Not Go": if you went and no more than 60% of the others went, you won, but if you didn't go and more than 60% did, you won. While the agent didn't play as well in the second game, it performed admirably enough, despite never having been programmed to play that game at all (or even to guess anything other than numbers).

The way that particular agent learned, by the way, was by imitating others it saw in its world, then figuring out whether that particular imitation was a good idea or a bad one, which is very similar to the way a child learns. It was very successful.
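A minimal sketch of that imitate-then-evaluate loop (the class and its methods are invented; only the general idea comes from the post):

```python
import random

class ImitationLearner:
    def __init__(self):
        # Moves seen from other players, with the running payoff of imitating each.
        self.observed = {}

    def observe(self, move):
        # Watching the world: anything another player does becomes a candidate.
        self.observed.setdefault(move, 0.0)

    def act(self, explore=0.1):
        # Occasionally try a random imitation; otherwise repeat what has paid off.
        if not self.observed:
            return 0
        if random.random() < explore:
            return random.choice(list(self.observed))
        return max(self.observed, key=self.observed.get)

    def learn(self, move, payoff):
        # Judge whether that particular imitation was a good or a bad idea.
        if move in self.observed:
            self.observed[move] += payoff
```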

JaronK
Cain
MFB: I just got done earning a 4.0 in Biopsychology, and I can tell you that some of what JK says is true. For some thoughts, we can tell precisely what area of the brain they originated in. I can tell you what neural pathways brought them there, and what types of processing occurred at each point along the way.

If we break down the human brain far enough, the theory is that we can predict with great accuracy where each thought originates. We're up against an ethical limit, not a practical one--we're not allowed to vivisect working human brains to see how they work.

So, in theory, we could break down a human brain until we find the part that says "I think, therefore I am," and isolate it. By damaging or removing that section, we could remove a person's self-awareness, and then see if that's a core part of his or her sentience.
mfb
QUOTE (JaronK)
Well, yes, I am dismissing most religeous bits out of hand, because there's no scientific proof.

there's at least as much proof for religious bits as there is for the idea that you can dissect the brain and find the Self-Awareness Gland--that is, there's no proof to back up either claim. it's just as silly to claim that self-awareness comes from chemicals as it is to say that self-awareness comes from an immortal soul. in SR, it's more ridiculous, because there's concrete evidence that points to the existence of a soul-like entity that resides in all living beings.

QUOTE (JaronK)
But the truth is, self awareness is a contradiction. It is only the mystery, the lack of awareness of all of your self, that makes you think you're self aware.

says you. you've got no more proof for that claim than the pope does for his claims. you can't point to a layer of the subconscious that creates a feeling of self-awareness, or you'd have already done so; all you can do is hypothesize, and say that a commonly-used model among AI programmers is the many-layers theory of consciousness. they're good hypotheses, sure; i'd be willing to buy the concept of layers as a source of self-awareness. but they're only hypotheses, and they're only as valid as any other hypothesis that fits the few known facts. some genius could come along tomorrow and prove that the layered paradigm is totally flawed and completely unworkable for the goal of creating a self-aware program. you're speculating, and presenting it as fact; for all that, you may as well be spouting off about how i'm going to hell when i die.
BitBasher
Actually, strictly speaking of SR, we do in fact have several true AIs without auras.
mfb
true. AIs work on wildly different principles, though; it's probable that the basis for an AI's self-awareness is completely different from the basis for a human's. heck, it could be layers.
BitBasher
Actually, the AI's basis for self awareness is the "x factor" that they don't understand in SR, even those who made it.
Edward
Mfb, you make a strong claim when you say that there is "concrete evidence that points to the existence of a soul-like entity that resides in all living beings."

I have a significant amount of anecdotal evidence to suggest that this is true, but not enough for me to believe it is true (I do not, however, believe it is false). I would like to know what evidence you have (either through a link, posted on this forum, or via a personal message, as you see fit). I truly hope that you are correct.

At present, the information I can access shows the probabilities leaning towards the conclusion that sufficient examination of the human brain would allow us to identify the processes of the subconscious mind, as we would with the AIs mentioned by the likes of JaronK. This is more probable because I am aware of reproducible experiments that uncover the source of some types of thoughts, and the brain has not yet been mapped well enough to expect to find them all. Meanwhile, I am aware of no reproducible test that shows the presence of a soul-like entity or any related phenomenon. I would love to be shown a counterexample.

Edward

On a more SR note: I don't have any sourcebooks that describe an AI. Does it state that an AI has no aura, or is this left unsaid?

Edward
mfb
it is, as far as i know, left unsaid. the "evidence" i was referring to applies only to SR, where all living creatures have an aura. in real life, i suppose you could say that stuff like Kirlian photography points to a part of living beings that we haven't yet identified, which could be construed as non-conclusive evidence of the existence of souls. it could also be construed as a lot of other things, though; at this point, it's all guesswork.
JaronK
One of the reasons I'm interested in A.I., in fact, is precisely because we can't take apart a human brain and watch what's going on when someone thinks, but we can watch what an A.I. is thinking while it operates. Thus, if we can create an A.I. that behaves exactly like a human, we can see what makes the human mind behave as it does. Perhaps even more interesting, if we can create a criminal A.I., we can find out what causes certain criminal or deviant behaviors.

One test, by the way, to determine if something is actually an A.I.: if you talked to it online, could you tell whether it's a person or a computer? If you can't tell, it's an A.I.

JaronK
SirKodiak
QUOTE
in real life, i suppose you could say that stuff like Kirilian photography points to a part of living beings that we haven't yet identified, which could be construed as being non-conclusive evidence of the existence of souls. it could also be construed as a lot of other things, though; at this point, it's all guesswork.


Kirlian photography is, to be blunt, garbage. The fact that living things are surrounded by slightly ionized gas that can cause distortions to photographic plates can be explained by the fact that we are moist, moving objects. A sponge puppet can produce the same phenomenon, and no one thinks those are alive or have souls.

Anyways, part of what we're getting into here is trying to apply SR information to a discussion that is grounded in real-world science and philosophy. We could probably all stand to be more specific about whether we're talking about what actual computers are capable of or what SR computers are capable of, because as anyone who knows anything about computers can tell you, SR meshes very poorly with reality when it comes to computers, and SR contains a lot of reproducible supernatural phenomena that haven't been proven to exist in the real world.

Considering my group is made up mostly of computer scientists, we house rule almost all the computer stuff, because we can't stand how bad it is.
mfb
yeah. i mean, i love doing Matrix stuff, but it's only because i turn my brain off when i play.

QUOTE (JaronK)
One test, by the way, to determine of something is actually an A.I. is if you talked to it online, can you tell if it's a person or a computer? If you can't tell, it's an A.I.

that's a reasonably practical way to define what is and isn't an AI. you'd have to come up with specific criteria, i guess--"if the AI can fool anyone that talks to it for at least 24 hours of discussion time", or somesuch.
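That criterion could be sketched as a simple pass/fail check (the data structure is invented; only the 24-hour figure comes from the post):

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    judge: str
    hours_talked: float
    guessed_machine: bool

def passes_imitation_trial(verdicts, min_hours=24.0):
    # Pass only if there was at least one judge, every judge talked for the
    # full window, and none of them correctly guessed "machine".
    if not verdicts:
        return False
    return all(v.hours_talked >= min_hours and not v.guessed_machine
               for v in verdicts)
```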
Crimsondude 2.0
Why in the world is this discussion still going on?

Let's consider VR2.0. An AI is at least comparable to an SK. An SK requires, at minimum, a Red 10 host and the equivalent of a half-dozen or more programmers with Computer 12 (140-41).

"Programming an SK is beyond the capabilities of player characters" (Matrix, 147).

More fundamentally, "AIs cannot be created--they happen." (Matrix, 150).

The criteria are:
QUOTE (Matrix @ 150)
1) The program must be at least as sophisticated as a semi-autonomous knowbot.
2) The program must have access to vast processing power, which is available in only a few select hosts.
3) The program must run nonstop for a period of years.
4) Finally, the program must be affected by some glitch-- an x-factor-that sparks awareness.


This is a non-question. They cannot create an AI.

Every time someone asks a question with a clear answer, yet it is debated for four pages, God kills a kitten.

Please, think of the kittens.
Kagetenshi
This thread isn't even two pages long yet nyahnyah.gif

~J
Crimsondude 2.0
Good for you.

95 responses for this is too much.
SirKodiak
QUOTE
Why in the world is this discussion still going on?


QUOTE
Everytime someone asks a question with a clear answer, yet it is debated for four pages, God kills a kitten.


Because the way computers work in Shadowrun is so out of date compared to what we know now that the dead kitten could probably come up with more realistic house rules than the current setup.

This is particularly obnoxious because computer and electronics technology are at the core of the cyberpunk setting, so the cyberpunk half of the cyberpunk-fantasy setting that is Shadowrun causes laughter in readers who know what they're talking about. The rules read like the person who wrote them had only ever read '80s cyberpunk sci-fi and never actually used a computer.

Anyways, I'm sorry if you don't like the thread. If only there were some way to not read a thread you weren't interested in.
Kagetenshi
QUOTE (Crimsondude 2.0 @ Dec 28 2004, 12:15 PM)
Good for you.

95 responses for this is too much.

The thread has evolved decidedly beyond the initial (and easily answerable) question. This is the nature of many discussions on Dumpshock, if you haven't noticed. While I think we're reaching the end of the new material on our current line of questioning, I would decidedly disagree that what we've discussed so far is worthless.

~J
Crimsondude 2.0
There are worthless threads, just like there are worthless questions (and people).

This is one of them.
Dumpshock Forums © 2001-2012