Full Version: Video editing and CyberEyes
Dumpshock Forums > Discussion > Shadowrun
Tarko
Do you guys think that a decker (I won't use hacker) could, technicly, edit a cybereye's video feed to remove all instances of a person or an object, making them 'invisible' to the target?

I know that you could just crash the cybereyes and make that person effectively blind altogether, but that's not the issue here.

(crashing the cybereyes... funny thing. I would like to see the streetsam physically rebooting his eyes, popping them off to do so)
Tarko
I'm asking the question since editing is usually just for a small task... unless I totally missed something. But video editing might be different.
Jaid
i believe you would have to edit repeatedly (ie not just one action), but i think it would be possible.
Tarko
so technicly you could upload an agent into the cybereyes with the Edit program and make it dedicated to that purpose?


(how do you spell 'technicly' ?)
Darkness
You spell it "technically".
Squinky
Someone's been watching/reading Ghost in the Shell....
Tarko
actually, no.

I think I heard someone say there was a second one.
I couldn't even watch the first one; I would fall asleep each time.
Chandon
I'd think it would work exactly the same as editing a security camera feed. You need to get to the cybereye node through the users PAN and then perform an edit action - I don't remember if you need a control program or not.

On the other hand, I'd give a gigantic bonus to notice errors in the editing for the owner of the eyes - it's not like he'll be looking at something other than his visual feed from his cybereyes.
Azralon
Run an agent loaded with Analyze and Edit. You'll get a program doing something similar to the virtual weather, virtual person, virtual pet, and so forth found in the sourcebook.

(It's been talked about before.) smile.gif
Mr. Unpronounceable
QUOTE (Tarko)
actually, no.

I think I heard someone say there was a second one.
I couldn't even watch the first one; I would fall asleep each time.

In that case:

His reference was to a scene where one of the protagonists notices that a dying man is looking at someone else - someone he can't see - prompting him to accuse the unseen person of having 'hacked his eyes'.

It was actually the first thing I thought of when I heard about cyber being hackable.
Squinky
Me too. I thought it was a cool concept though.

Too bad we don't all have cyber-brains...
MaxMahem
Visual interceptors = cool.gif, so I'd allow it. This might not be as difficult as you might think. What the brain perceives is not always the same as what the eyes SEE; you only have to exploit this fact to fool the brain. Considering the percentage of people who use augmented reality in 2070, this idea has a lot of potential.

As I see it, it would be a pretty easy Analyze test and then a more difficult Edit test. Since the Edit test is harder (and I'm a big believer in simplifying the rules) I would just have them perform an Edit test, though they would need an Analyze program. This would have to be done fairly continuously, so a hacker would probably have to set up an agent inside the device/node to do it. Tests would be required when the effect started, and whenever someone tried to observe in detail.
I would set the thresholds as follows.
  • 1 hit - To obscure/replace the features of something. For example, to put the Laughing Man logo on top of your face.
  • 2 hits - To make some non-obvious object invisible. For example, to make some small object disappear in a semi-cluttered room.
  • 3 hits - To make something that is not actively being looked for invisible. For example, a runner trying to hide from a patrolling security guard.
  • 4 hits - To make something that is being actively looked at/for invisible. For example, that same runner after attracting the guard's attention.
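A minimal sketch of how these thresholds could be resolved at the table, assuming standard SR4 dice mechanics (roll a pool of d6s, each 5 or 6 counts as a hit, compare total hits to the threshold); the pool size and threshold labels are illustrative, not canon:

```python
import random

# House-rule thresholds from the list above (labels are mine, not canon)
THRESHOLDS = {
    "obscure features": 1,
    "hide non-obvious object": 2,
    "hide unsought person": 3,
    "hide watched person": 4,
}

def edit_test(dice_pool, threshold, rng=random):
    """SR4-style success test: each d6 showing 5 or 6 is a hit.

    Returns (success, hits) so the GM can also read off extra hits."""
    hits = sum(1 for _ in range(dice_pool) if rng.randint(1, 6) >= 5)
    return hits >= threshold, hits

# e.g. an agent rolling 10 dice to hide a runner from a patrolling guard:
# success, hits = edit_test(10, THRESHOLDS["hide unsought person"])
```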
-----

Again, I REALLY like this idea. It fits the augmented-reality/wired/cyberpunk theme of Shadowrun for me (which is pretty obvious considering how it draws from Ghost in the Shell: SAC, which is also cyberpunk). There should be some kind of sprite that would perform this action for technomancers. I'll get working on it.
fistandantilus4.0
my only issue with this is that the eyes would have to be wirelessly enabled as well. They'd have to be keyed into the commlink. Most likely they would be, since it's probably a fair assumption that if they have a set of cybereyes they'd link them up to AR so that they don't have to have the glasses or contacts too. But it won't be the case every time. If this happens, and the decker is the one doing the hacking, then they should just be able to turn off the commlink. But if you send in an agent, then turning it off doesn't matter; the 'virus' is already in.
just my 2 nuyen.gif
mfb
they wouldn't, though. the target doesn't really need cybereyes at all--as long as your target has some sort of AR interface running, you should be able to hack their visual feed. the question, though, is whether or not you can come up with a believable video file to insert into their vision. personally, unless the hacker has explicitly sunk phat sacks of cash into a high-end animation utility, and has a respectable amount of skill in video editing/animation, i'm not going to let them create anything that will fool someone. at best, i'd allow them to insert new AR data into the target's vision.
MaxMahem
QUOTE
my only issue with this is that the eyes would have to be wirelessly enabled as well. They'd have to be keyed into the commlink. Most likely they would be, since it's probably a fair assumption that if they have a set of cybereyes they'd link them up to AR so that they don't have to have the glasses or contacts too. But it won't be the case every time. If this happens, and the decker is the one doing the hacking, then they should just be able to turn off the commlink. But if you send in an agent, then turning it off doesn't matter; the 'virus' is already in.
just my 2 nuyen.gif

I think it highly likely that the VAST majority of cybereyes are linked into their user's PAN. In SR4 every cybereye includes a camera by default. And while the eye can be assumed to have some storage, linking it up to the user's commlink, with the potential to store vast amounts of visual data, is only logical. Not to mention the utility of sharing your visual data with others. People who are downplaying the cybereye in favor of shades/contacts just haven't realised the power and utility of this (my players haven't). And as you say, any sort of AR use of a cybereye would require wireless connectivity.

As for getting rid of the agent that has possessed your cybereye, that's just life in the 7th world, Chummer. As I see it, if you load an agent onto someone's eye/VR gear, their options are limited. As you say, cutting the eye out of your PAN doesn't help you. I would rule that such an agent could easily run on your eye, since editing/filtering data is something it does naturally anyway. Rebooting/System Reset doesn't help, since the active program is "saved" and presumably starts up again on restart. SR4 isn't clear on this, but it's certainly how I would program my agent to work, if it were possible, and I don't see why it shouldn't be (gameplay-wise or realistically).

So what you need is some IC or combat skills to actively destroy/terminate the program. If you don't have either of those things, then you're just stuck. You'll have to pay your friendly neighborhood decker/technomancer to remove it for you. Sucks to be you, chummer; watch what networks you interface with next time.

QUOTE
they wouldn't, though. the target doesn't really need cybereyes at all--as long as your target has some sort of AR interface running, you should be able to hack their visual feed. the question, though, is whether or not you can come up with a believable video file to insert into their vision. personally, unless the hacker has explicitly sunk phat sacks of cash into a high-end animation utility, and has a respectable amount of skill in video editing/animation, i'm not going to let them create anything that will fool someone. at best, i'd allow them to insert new AR data into the target's vision.

I can see where you're coming from here, but you have to consider the possible advances in programming by Shadowrun's time. Simply pasting a fake picture over someone's face is possible in real time today, though it wouldn't be simple. In 2070 this stuff is old hat, and the foundation of some of the popular AR MMORPGs played in real life. These AR RPGs and other AR 'themes' consistently replace/alter the appearance of the background, surroundings, and people a person perceives to fit the setting of the AR program. This is the foundation of what I see AR to be. Eliminating a person or object from perception is just a mild step up from there. It is also helped by the fact that people are growing accustomed to what they see via AR being different from what really exists. By 2070, VR, including ultraviolet (UV) VR that is nearly indistinguishable from reality, has been around for quite some time as well. I think it's safe to say that such levels of computational ability are within reasonable grasp at this point.

But I do see your point: AR devices such as glasses/contacts/goggles could be harder to fool than cybereyes, because their mode of operation is different. They (generally) provide an overlay rather than replacing what a user sees. If using my system, you could increase the threshold or subtract dice to reflect this. I'm not going to, because I think the cool factor overrides this. Dice should be added or subtracted anyway, as some editing tasks are going to be more difficult than others depending on the situation. Here are some example modifiers:
  • +4 - User heavily using AR (playing a game)
  • +2 - User moderately using AR (mild theme to vision)
  • +0 - User mildly using AR (few information pop-ups)
  • -2 - User not actively using AR (but still has device)
  • +2 - Many AR objects on screen (lots of AR ads and whatnot)
  • +0 - Simple scene (few moving objects, alleyway)
  • -2 - Moderately complex scene (4-5 moving objects, side street)
  • -4 - Very complex scene (many moving objects, main street)
  • -8 - Extremely complex scene (very many moving objects, Times Square)
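The list above is just dice-pool arithmetic, and can be sketched directly (the modifier numbers are from the list; the category names and the floor-at-zero rule are my assumptions):

```python
# House-rule situational modifiers to the hacker's/agent's Edit dice pool
AR_USE = {"heavy": +4, "moderate": +2, "mild": 0, "inactive": -2}
SCENE = {"simple": 0, "moderate": -2, "complex": -4, "extreme": -8}

def modified_pool(base_pool, ar_use, scene, many_ar_objects=False):
    """Apply the situational modifiers; a dice pool can't drop below zero."""
    pool = base_pool + AR_USE[ar_use] + SCENE[scene]
    if many_ar_objects:
        pool += 2
    return max(pool, 0)

# Rating 6 agent + rating 6 Edit (12 dice) in Times Square,
# target heavily using AR: 12 + 4 - 8 = 8 dice
print(modified_pool(12, "heavy", "extreme"))  # → 8
```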

That's all I could think of right now, but others could apply.

Another way of doing this might be to have the Edit test opposed by a Perception test. This is more appropriate in a crowded situation, where an edit might fail slightly but still not get noticed. I'm still thinking of rules for this, but you could just note the failure and have the person roll Perception to see if they notice it. Nice and simple.
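That opposed variant could be sketched the same way: the agent's Edit roll against the observer's Perception roll, with the comparison deciding whether the tampering is noticed (the tie-goes-to-the-editor rule here is my assumption, not anything from the book):

```python
import random

def roll_hits(pool, rng=random):
    """Count hits (5s and 6s) on a pool of d6s."""
    return sum(1 for _ in range(pool) if rng.randint(1, 6) >= 5)

def edit_vs_perception(edit_pool, perception_pool, rng=random):
    """Opposed test: the doctored feed holds up if the editor scores at
    least as many hits as the observer (ties favor the editor here)."""
    return roll_hits(edit_pool, rng) >= roll_hits(perception_pool, rng)
```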
Orb
The only problem I see with this is that the hacker would have to perform the same procedure on each eye. Each individual cybereye has its own image sensor and image processor - otherwise the stereoscopic vision that we rely on would be ruined. Since the two agents would then be running independently of each other, the results could be different for each eye. That would most likely result in mismatched information heading to the brain, which should be easy to notice.

Cybereyes are normally wireless-enabled and slaved to the user's PAN. They all have an image link - this is how AR information is sent to the eyes. I would assume that a single image link handles the wireless connection for both eyes, so you don't need to slave each eye individually to your PAN.

mfb
QUOTE (MaxMayhem)
I can see where you're coming from here, but you have to consider the possible advances in programming by Shadowrun's time. Simply pasting a fake picture over someone's face is possible in real time today, though it wouldn't be simple.

yes, and that takes a) a high-end program, and b) lots of training and practice to do right, despite how it's portrayed in the movies. besides which, that assumes you've got a picture of someone else's face to paste on. i don't see art--and this is art, same as any other attempt to affect a viewer with artificial imagery--becoming any easier in 2070 than it is today.
RunnerPaul
QUOTE (MaxMahem)
that's just life in the 7th 6th world Chummer

Please do not refer to SR4 as the Seventh World. "World" nomenclature is based on the mana cycle, not on other advances.
SL James
QUOTE (mfb @ Dec 31 2005, 01:02 PM)
QUOTE (MaxMayhem)
I can see where you're coming from here, but you have to consider the possible advances in programming by Shadowrun's time. Simply pasting a fake picture over someone's face is possible in real time today, though it wouldn't be simple.

yes, and that takes a) a high-end program, and b) lots of training and practice to do right, despite how it's portrayed in the movies. besides which, that assumes you've got a picture of someone else's face to paste on. i don't see art--and this is art, same as any other attempt to affect a viewer with artificial imagery--becoming any easier in 2070 than it is today.

The hell you say? You mean all of the visual artists who happen to be programmers weren't wasting their time learning how to make art and integrate tech advances into their art, using tech to catch up to their own abilities? They couldn't just whip off a perfect simulacrum in 10 seconds with Photoshop? Dammit, the media lied to me again! Noooooooooo!

Damn, Max. That's some ignorant shit.
MaxMahem
Virtual reality simulations that are entirely realistic and virtually indistinguishable from reality have existed since at least 2057. See "Dry Run" in Super Tuesday. The VR simulation in that run was near perfect, and everything the character experienced was in fact virtual, including both the astral plane and the Matrix. People, places, food, explosions, EVERYTHING had to be dynamically generated (as they had no way of knowing what the runners would actually do). UV hosts on the Matrix have also existed for quite some time. Some are completely realistic, some are fantastical, but all are perceived as being as real as real life.

Given that the technology to do that has existed for at least 13 years in the Shadowrun world, I don't see why it should be so difficult to utilise this level of technology to perform the significantly less difficult task of simulating a portion of a character's visual reality.

I guess this is a blow to present-day visual artists. But the fact is that in 2070 reality can be simulated, and you may never be the wiser.
RunnerPaul
QUOTE (MaxMahem)
including both the Astral Plane and Matrix.

Simsense recreating the astral plane? Can you give any more detail on this, because right now my bullshit detector has its needle pegged in the red.

First off, every other product that I've seen with details about ASIST technology has flatly stated that they've never been able to record the sensations of an astrally perceiving/projecting mage. Without pre-recorded reference material, it'd be fairly hard to put together a 100% convincing simulation.

Second, the portion of the ASIST playback system that's responsible for shutting off the body while the user is immersed in the simsense has no way of preventing a mage from astrally projecting/perceiving. The RAS Override shuts down muscle control and your body's organic senses. Going astral is not controlled by any muscle, and is not an organic sense. There's no mechanism for the ASIST playback unit to detect when the mage is attempting to go astral, and no way for it to intercept that attempt so that the simulation can be substituted.

If that's really how that particular section of Super Tuesday is written, then my opinion of that book just dropped like a rock.
hyzmarca
It is in Super Tuesday, as Max stated. Dry Run is Dunkie's run from that book. The Big D owned a technology company that was building an early UV host. Their work was generally superior to the UV hosts that would come later, and was able to simulate everything except death and the metaplanes. Everything includes magic and astral. The beauty of this unique host was that certain values were influenced by user expectations to make the simulation seem more real. As a result, a character may get some interesting modifiers based on his expectations. This system almost certainly contributed to the realism of the magic simulation.

For more information, look at this thread: http://forums.dumpshock.com/index.php?show...9522&hl=dry+run
RunnerPaul
QUOTE (hyzmarca)
Everything includes magic and astral.

Damn, I hate having to disregard canon books, but I'm afraid I'm left with no choice. Well, at least it saves me the cost of a book. I'm sort of glad I never got around to buying it.

You would think that something as important as a company developing a method of using ASIST technology and RAS Cutout to suppress a mage's ability to cast spells and to astrally project/perceive would have at least gotten a mention in subsequent sourcebooks on the subject. There have been a lot of books that have come out since Super Tuesday, and I've not seen any mention of such a technology. I doubt VisionQuest would simply sit on such a useful technique, and there's certainly a market for it.
SL James
QUOTE (MaxMahem @ Jan 1 2006, 10:47 PM)
Given that the technology to do that has existed for at least 13 years in the Shadowrun world, I don't see why it should be so difficult to utilise this level of technology to perform the signifigantly less dificult task of simulating a portion of a characters visual reality.

Well, let's see. First off, Dry Run was set in a shit-hot SOTA simsense lab where the simulation was already programmed and designed by simsense artists, and run on God only knows what kind of systems. Not one thing about it was "on the fly," which is what you're trying to tell me can be done by anyone with Adobe Photoshop 2070, instantaneously, in real time, because computers are so good they can compensate for the unknown variables and uncontrolled environment we call Real Life.

Somehow in 2070 a commlink and program are so aware and good that they can create perfectly lifelike simulacra instantly and update them perfectly in real time using insufficient or null information. Sure. There's nothing on-the-fly about simsense, and especially not the simsense used in that run.

Given that, I, like RunnerPaul, am going to have to call major bullshit on that.

There are plenty of stupid or crazy things I've read on this forum, but the idea that anyone can whip up perfectly lifelike realtime animations instantly on a commlink is by far the pinnacle of both.
RunnerPaul
QUOTE (SL James)
Yeah, like RunnerPaul said, I'm gonna have to call bullshit on that.

Well, to tell the truth, I was only calling bullshit on using simsense to simulate astral space and using RAS Override to keep a mage from astrally projecting/perceiving. That's all. I agree with the rest of it.

After all, if we have canon examples of AR overlay programs such as Virtual Weather (p. 322), and rules that say the Edit program can manipulate a turn's worth of video feed, then as long as you can actually gain access to the goggles/cybereyes, you can change what someone is seeing. (I'd certainly set a high threshold for some of the tricks in this thread, but I'd still say they're within the realm of possibility.)
SL James
Fine, then. You didn't. I'm still calling bullshit, because this is just completely absurd. Apparently AR can perform magic, because that's the only way it's possible to render a realistic metahuman in an uncontrolled environment in under 3s. The person and program rendering them would also have to be remarkably adept at combining the considerable amount of physics, kinesiology, and sheer artistic ability needed to pull that off. Oh, and that's assuming they already have a good three-dimensional model.

In other words, to paraphrase a great Far Side comic, "... And then a miracle happens."

Removing an object is even better, because it requires the user and program to be able to convincingly recreate three-dimensional space with incomplete information (i.e., information on what is behind the object in their field of vision) onto the cybereye of a third person from a different, shifting point of view, and do so in real time. Oh, and of course let's not forget altering shadows, altering anything that may be moved, touched, or altered by the person or object being "cloaked," and syncing all of that up with sensory information from all the other senses.

I know that Lisa Smedman did this in The Lucifer Deck, but with 2-D CCTV cameras - and she also had a light elemental that manifested in the Matrix. I'm not going to lend too much credence to what the devs think technology can or can't do when one of the SR4 devs (Szeto) couldn't even explain correctly how a diesel engine works in R3, and technomancers use magical unobtainium rays shooting out of their brains to connect to the Matrix.
MaxMahem
QUOTE
Well, let's see. First off Dry Run was set in a shit-hot SOTA simsense lab where the simulation was already programmed and designed by the simsense artists, and run on god only knows what kind of systems.

Yeah, it was a shit-hot SOTA simsense lab... 13 YEARS AGO. Today that level of simsense is probably in play all over the place. And it's not even necessarily that "shit hot." In Ivy & Chrome, a Shadowrun adventure for first edition set in 2050, a similar level of VR simsense (although in a more limited setting) was provided by VMI (Virtual Meetings Incorporated). That makes this level of tech over 20 years old. And VMI provided it to anyone who could pay for a meeting.

QUOTE
Not one thing about it was "on the fly" which is what you're trying to tell me can be done by anyone with Adobe Photoshop 2070 instantaneously in realtime because computers are so good they can compensate for the unknown variables and uncontrolled environment which we call Real Life.

Somehow in 2070 a commlink and program are so aware and good that they can create perfeclty lifelike simulacra that are created instantly and updated perfectly in real time using insufficient or null information. Sure. There's nothing on-the-fly about Simsense and especially not the simsense used in that run.

It was "on the fly" in the sense that the content had to be dynamically generated in real time, not programmed and left to render over a longer period like modern high-def 3D images are. Heck, some UV hosts which offer similar levels of realism run significantly faster than real time. These hosts (especially the one in Super Tuesday) had to have either a very large library to draw their data from, or the ability to generate it on the fly. Probably a combination of both, IMO.

The variables are not necessarily totally unknown to the Edit program. It has the cybereye's real image to play with, and its past image history as well. A small library of common images/textures could also be included - and in 2070 a "small" library could be very large indeed. In any case, there are canon examples of programs in the SR4 sourcebook (Virtual Weather, Virtual Person, Miracle Shooter) which all do a similar amount of work. I'm not just spouting BS, just extending what is written in the book.

I'm not saying it would be totally easy. I think hiding a person would probably be a hard task, with a threshold of 3, making a rating 6 agent and program necessary to achieve average success in a normal situation. I'm working on a complete set of rules for this, and I'll post them when I'm done.
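That "average success" claim checks out arithmetically. Assuming the agent rolls rating + program = 12 dice (an SR4 dice-pool assumption, not stated in the post) and each d6 hits on a 5 or 6 (probability 1/3), the chance of reaching threshold 3 is a straightforward binomial sum:

```python
from math import comb

def p_at_least(pool, threshold, p_hit=1/3):
    """Exact probability of scoring at least `threshold` hits on `pool` dice."""
    return sum(comb(pool, k) * p_hit**k * (1 - p_hit)**(pool - k)
               for k in range(threshold, pool + 1))

# 12 dice vs. threshold 3: about an 82% chance per test
print(f"{p_at_least(12, 3):.2f}")  # → 0.82
```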
Ranneko
QUOTE (SL James)
Fine, then. You didn't. I'm still calling bullshit, because this is just completely absurd. Apparently AR can perform magic, because that's the only way it's possible to render a realistic metahuman in an uncontrolled environment in under 3s. The person and program rendering them would also have to be remarkably adept at combining the considerable amount of physics, kinesiology, and sheer artistic ability needed to pull that off. Oh, and that's assuming they already have a good three-dimensional model.

In other words, to paraphrase a great Far Side comic, "... And then a miracle happens."

Removing an object is even better, because it requires the user and program to be able to convincingly recreate three-dimensional space with incomplete information (i.e., information on what is behind the object in their field of vision) onto the cybereye of a third person from a different, shifting point of view, and do so in real time. Oh, and of course let's not forget altering shadows, altering anything that may be moved, touched, or altered by the person or object being "cloaked," and syncing all of that up with sensory information from all the other senses.

The aim is not to make it completely perfect; the aim is to make it convincing enough. It'd have a high threshold, and your opponent could then make Perception checks to see if they spot clues to where the tampering is occurring.

It would of course, take a highly sophisticated edit tool, which would also need a good image and modelling library.
mfb
MaxMayhem, every example you have come up with so far involves pre-generated content. not completed content, necessarily, but a program that is specifically designed to create a certain set of experiences. there is a massive difference between creating a video game--which is basically what all your examples are--and creating on-the-fly content from scratch. yeah, they were able to generate a realistic VR experience 13 years ago. it involved massive sensory data libraries and fantastic amounts of computing power to assemble them. the only thing that's going to be different, 13 years later, is the availability and price of the computing power.

if you want to insert a simulacrum of yourself in someone else's AR, you need to somehow generate that simulacrum. you need to draw it and animate it. it doesn't just magically appear. no amount of technological advancement is going to change that--and no example you've so far shown has broken with that.
MaxMahem
QUOTE (SR4 @ pg. 322)
Virtual Person: Simulate your favorite person!  Whether it’s your ex-boyfriend or your favorite sim starlet, just access or upload their personal data, modify it as you see fit, and project the person into your life just like the real deal. This program only simulates one person at a time, and the realism in behavior depends on the amount of data given as well as the processing power of your commlink; best results are achieved with a growing assortment of downloadable sim-persons (including sim stars like Tracy Monroe and Neko-Katz).

I don't really see how it can get more explicit than that. In 2070, simsense has advanced to the point where you CAN simply insert realistic, real-time, dynamic images into someone's AR. It only costs 150¥. Heck, programs exist to overlay virtual game imagery on your reality (Miracle Shooter) or change the weather, turn night into day, day into night, etc. (Virtual Weather).

The Virtual Weather program is especially impressive. It allows you to change the apparent position of the sun, add or remove rain, and turn night into day and vice versa. All this would require incredibly massive editing and recalculation of visual data. Changing the position of the sun demands recalculating the lighting level of everything you can see, and more. Not to mention the filtering and dynamic editing (in addition to re-lighting) that removing rain would require.

Given that this is possible, I see no reason that removing an object from view should not be. Certainly it shouldn't be much more difficult. Heck, deleting some of the unpleasant parts of life in 2070 (metahumans, garbage, homeless people, whatever) would probably be a pretty popular application of AR technology.

-- Also it's Mahem, the lack of Y is intentional.
hyzmarca
To address RunnerPaul's objections:

The astral is fairly simple to explain: it could be hand-coded by magically active programmers from memory. Because the astral plane is identical to the physical plane in everything except physics, this should be difficult to do but not impossible. The holes are handled by the simulation's expectation-based self-correction, which adjusts the simulated world according to the user's expectations.

As for the RAS override problem - Jack be nimble. Jack be quick. Jack jumped over a candlestick. Candlestick jumping is a difficult, dangerous, and imprecise art. However, if one were to study Jack, one could learn how to jump candlesticks in a half-assed way. Half-assed candlestick jumping does have certain applications.

Dunkie's sim didn't use a RAS override; a RAS override still leaves the character with the ability to feel pain and to perform physical actions, albeit at a +8 TN.
Dunkie's sim completely severed the mind from the real world, such that the immersed characters may as well have been ghosts in the machine from their own points of view.
That isn't RAS Override; that is something far more serious. Considering that the Big D had JackBNimble and was having his people do their darndest to understand it, VisionQuest's override is probably derivative technology.
mfb
jesus, MaxMahem, pay attention. i'm not saying you can't insert realistic video into someone's simsense. i'm saying that video has to come from somewhere. images do not appear from the ether fully-formed. someone makes them, or writes a program--such as Virtual Person--to make them. you keep referencing programs that are specifically designed to generate specific types of sensory input, and then claiming that means that anybody can slap together any sensory input they want, any time they want. there's a video game out there called RPG Maker, which allows anyone who uses it to create an RPG game. that doesn't mean that anyone, at any time, can simply generate playable game content on the fly.
Lagomorph
My impression of Virtual Weather was that it just replaced, with its own weather, whatever it couldn't place with the rangefinder in the goggles or cybereyes. So looking at the sky in a mirror would show the drab of Seattle, but looking up directly at the sky would show a beautiful Waikiki sunset.
Rotbart van Dainig
QUOTE (Tarko)
Do you guys think that a decker (I won't use hacker) could, technicly, edit a cybereye's video feed to remove all instances of a person or an object, making them 'invisible' to the target?

Sure.

QUOTE (Tarko)
(crashing the cybereyes... funny thing.. I would like to see the streetsam physicly re-booting his eyes, poping them off to do so)

Crashing the OS automatically makes them reboot... even if not, the DNI is better than physical access.
RunnerPaul
QUOTE (hyzmarca)
As for the RAS override problem - Jack be nimble. Jack be quick. Jack jumped over a candlestick. Candlestick jumping is a difficult, dangerous, and imprecise art. However, if one were to study Jack, one could learn how to jump candlesticks in a half-assed way. Half-assed candlestick jumping does have certain applications.

I'd buy that, but it's curious that VisionQuest hasn't ever taken that tech and advanced it to where it's available in the marketplace. It'd certainly be a more humane alternative to some of the current methods of incarcerating the magically active.

RunnerPaul
QUOTE (mfb @ Jan 2 2006, 04:00 PM)
jesus, MaxMahem, pay attention. i'm not saying you can't insert realistic video into someone's simsense. i'm saying that video has to come from somewhere. images do not appear from the ether fully-formed.

And we're saying that the Edit utility is powerful enough to take other portions of the image and extrapolate from them with the same ease as using Photoshop's clone tool today, as well as draw on a library of stock images, rendered shapes, and textures included in the program, and create the final desired image.

It would have been very easy for the folks at FanPro to say "video editing is a task with an interval of 10 minutes". Sure, it might be an oversight on their part, but looking at how the rules are laid out, it seems to me that they wanted video editing to be possible in real time.
mfb
*shrug* okay. i guess it's no less insane than a lot of the other rules.
PlatonicPimp
QUOTE (Ranneko)


It would of course, take a highly sophisticated edit tool, which would also need a good image and modelling library.

Like, say, The EDIT UTILITY.....
Ranneko
QUOTE (PlatonicPimp)
QUOTE (Ranneko @ Jan 2 2006, 10:26 AM)


It would of course, take a highly sophisticated edit tool, which would also need a good image and modelling library.

Like, say, The EDIT UTILITY.....

Indeed that was kind of my point.

And that you would need a fairly good one to be likely to succeed.
Dumpshock Forums © 2001-2012