QUOTE (Draco18s @ Jul 11 2013, 08:17 PM)

That isn't the question at all. Conclusion drawn from a false premise.

It isn't? Isn't this whole discussion spawned from the root question of: How can task X be performed more efficiently through matrix-aware gear than through non-matrix-aware gear?

Okay. Well. My mistake. That's what I've been discussing, at least.

Feel free to disregard my rambling.

QUOTE (Nezumi)
Since we're pressing for resumes, I suddenly feel qualified to jump in again (I have 11 years in computer security and a degree in CS from one of the top 10 universities in the field. However, I don't play SR4 and haven't been tracking SR5 too closely, so I've felt content to be quiet.)
I think that definitely gives you a lot of room to weigh in on a number of topics here, and the security background should particularly help tie in numerous areas where 'overhead' can eat its way into otherwise simple processes.
Let me make some targeted responses:
QUOTE
1) Advantages of Predictive Computing
Correct. However, the context of the topic is accelerating a simple action into a free action. Obviously, no improvement exists to make free actions into even more free actions. (Unless, I suppose, you were able to take an additional free action. Regardless, that's conjecture, and not proposed.)
QUOTE
2) Transmission speed
The actual processing power required for most of these activities, most especially predictive computing, is pretty minor. You have something watching the brain activity. If it detects the brain activity in a certain area, it triggers an event. That doesn't require a server farm. The major bottleneck for speed won't be processing power, but transmission speed.
Mmmm. My stock answer has been: that really depends, with 'minor' being relative. While I respect that you have provided a thoughtful response, I'm going to stick with that stock answer.
Consider the following:
The overhead of creating new data vs. just grabbing data that's already been computed can indeed yield significant savings. Passing a hash of 'brain activity' out into my limitless cloud of data and examining a small subset of matched replies could be an order of magnitude simpler than computing the results locally. So we aren't strictly considering distributed computing (though that plays a role in a number of the 'under the hood' features that pull this all together); we're also considering the implications of Big Data in 2070.
Therefore, combining highly effective hash algorithms (possible) + near limitless data (possible), we find yet another cost savings afforded by an implementation favoring non-local capabilities: Fingerprint criteria. Pass fingerprint. Collect results. Simple Analysis. Loop.
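To make that loop concrete, here's a minimal Python sketch. Everything in it is my own illustration: the SHA-256 choice, the index contents, and the function names are assumptions, not anything out of the books.

CODE
import hashlib

# Hypothetical precomputed index standing in for the 'limitless cloud
# of data'; keys are fingerprints, values are already-computed results.
CLOUD_INDEX = {
    hashlib.sha256(b"motor-cortex:open").hexdigest(): "open",
    hashlib.sha256(b"motor-cortex:close").hexdigest(): "close",
}

def fingerprint(sample: bytes) -> str:
    # Reduce a raw signal sample to a small, fixed-size digest.
    # SHA-256 is a stand-in for whatever hash 2070-era gear uses.
    return hashlib.sha256(sample).hexdigest()

def cloud_lookup(digest: str):
    # Simulated remote call: only the digest crosses the wire,
    # never the raw sample.
    return CLOUD_INDEX.get(digest)

def predictive_loop(sample_stream):
    # Fingerprint criteria. Pass fingerprint. Collect results.
    # Simple analysis. Loop.
    for sample in sample_stream:
        match = cloud_lookup(fingerprint(sample))
        if match is not None:
            yield match  # act on the precomputed result

print(list(predictive_loop([b"motor-cortex:open", b"noise"])))
# -> ['open']

Note the payload: a 64-character digest goes out regardless of how large the underlying sample is, which is the whole cost-savings argument in miniature.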
Additionally, your security background likely tells you that the actual implementation needs to be biased towards rejecting valid signals rather than accepting invalid ones. So the subsequent problem isn't strictly 'transmitting open', but confirming that 'open' is indeed 'open', not 'fish' or 'apple pie'. This overhead of additional checks taxes a local process that executes serially in a way that it does not tax parallel computations.
Such an implementation could provide rapid predictive capabilities in conjunction with nearly 'free' error checking, since in the above example the error checking is baked into the pre-existing data set.
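A minimal sketch of that rejection bias, again with made-up names and an illustrative threshold rather than anything canonical:

CODE
# Deliberately high bar: the system drops a marginal 'open' rather
# than risk firing on 'fish' or 'apple pie'. The 0.95 cutoff and the
# scores below are illustrative assumptions.
ACCEPT_THRESHOLD = 0.95

def confirm(candidate, scores):
    # Accept only when the candidate both wins outright and clears
    # the bar; everything else is rejected, because here a false
    # negative is the cheap failure mode.
    best = max(scores, key=scores.get)
    return best == candidate and scores[best] >= ACCEPT_THRESHOLD

# A marginal match is rejected even though 'open' is the best guess.
print(confirm("open", {"open": 0.80, "fish": 0.15, "apple pie": 0.05}))
# -> False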
QUOTE
3) Processing restrictions
This is the only area where I think the cloud may have a clear advantage over implants, but it's only for a small subset of cases. Specifically, it's for the cases where the processing requirements are so onerous that the processor can't be carried/worn/implanted, but that the transmission requirements are light enough for it to be functional over a limited wireless connection. So let's take a moment and break this into two separate questions.
I think this is a fair treatment of the topic: expense incurred by transmission vs. expense saved by local processing.
QUOTE
If you're calculating the weather for the next five minutes, the time spent uploading data will be greater than the time saved by calculating it on a supercomputer. In this case, I don't see most cyberware requiring (during combat) predictions more than thirty seconds in the future. Anything further ahead than that will require newer data, and newer data means you lose that speed advantage again. Processing the data locally is in this case ideal.
Really, this can be extended to anything computationally complex, anywhere from bullet dynamics to weather calculations. The key is that the data being processed does not necessarily need to be transmitted from the local cyberware for processing; merely references to that data, which may be minute in comparison, need to cross the link.
And so the transmission overhead need not include, in all cases, the full set of data. That data may, or may not, enjoy much lower transmission latency between the point of storage and the point of computation. And the final resulting data, likewise, need not be as massive as the set of data operated upon.
The received answer may be as simple as a transmitted "YES" or "NO".
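One last sketch of that reference passing, with every name here hypothetical: the heavy data stays where it's stored, and only identifiers plus a one-word verdict cross the wireless link.

CODE
# Simulated grid-side evaluation: a real host would run the heavy
# computation against the referenced datasets; the stand-in logic
# here just makes the shape of the round trip visible.
def remote_query(dataset_refs, question):
    return "YES" if dataset_refs and question else "NO"

# Kilobytes of references and a short question instead of the full
# weather model; the answer that comes back is one word.
print(remote_query(["weather/seattle/2070-06-01"], "clear_shot_in_30s"))
# -> YES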
The end result simply goes back to a question of unknowns. There is a resounding chorus of voices that says: It must not operate this way! I think your expertise may instead reinforce a more rational conclusion of: It may, or may not, operate this way.
-Wired_SR_AEGIS