
History and development of motion capture technology



Hello all,

 

I want to dig deep into the history and development of motion capture technology (perhaps not starting as early as Muybridge, but somewhere in the '60s/'70s up through to today).

 

I know I can go as deep as Google, Wikipedia, patents, Cinefex and technical/engineering library paper databases will take me, but there is usually someone on this board with insight beyond those means.

 

Any links, anecdotes, information or otherwise greatly appreciated.

Edited by Chris Millar

Yes good point - I see the connection...

 

To provide a small bit of background, I'm interested in developing a real-time application of motion capture - hence the interest in the past: not only is it interesting in itself, it can often steer you away from re-inventing square wheels.

 

Roto animation like in LOTR (original) - hrrrm ... ok, it doesn't exactly inform the current methodology, so although interesting: no.



It's just that you mentioned Muybridge... animators have been filming live-action subjects to study, or even to trace via rotoscoping as a crude form of motion capture, from the Prince in "Snow White" to the elephant footage used as the basis for the stop-motion Imperial Walkers in "The Empire Strikes Back".

 

But if you mean modern computer-driven motion capture, I don't know when that began. I seem to recall that for "Jurassic Park", ILM made some interfaces that allowed stop-motion animators to move a model and have that translated into a key-frame animation on screen, for those who weren't used to animating directly on a computer.


A lot of the techie history is very well documented at www.smpte.org . All the old documents are now stored on the site and go back to 1916. I would say in the old days they thought of "moving pictures" as motion capture, so there was a lot of research in that space (e.g. motion blur, movement capture, film rates, camera movement vs. actor movement, moving 3D capture, etc.).


Hello chaps,

 

Thanks much for the info so far - once I'm out of work this week I'll have a chance to follow through - SMPTE sounds promising!

 

Going to hustle up all the old Cinefex mags for the films that used or made progressive steps in the technology - I imagine they'll talk about it.

 

Empire Strikes Back (have already)

Jurassic Park

Final Fantasy

 

uuum...

 

and - jeez ... Arri working on it, I imagine that'll be a pretty damn nice piece of kit



The fact that Arri is currently working on spatial capture only means that the technological ideas behind it are reaching a particular form of recognition.

 

The technological ideas behind this have been undergoing research and development for decades.

 

Another example is the dinosaurs in Jurassic Park, which have their origin in a research project quite independent of Spielberg, involving the computer animation of dinosaurs. Spielberg was originally going to use animatronics but came across this project and made it mainstream.

 

So the history of certain ideas is not necessarily to be found where the ideas enter mainstream technology or the public imagination. By that time the ideas are really quite old, having spent decades in incubation, in low-budget experimental contexts, in the dark somewhere.

 

However, this is not to diminish in any way the mainstream activity, as it brings something new to the ideas: better funding, and a resulting flowering of attention and creative work.

 

C

Edited by Carl Looper

Motion capture involves two historically opposed parents: empiricism and rationalism.

 

Muybridge is definitely a key figure here, because it wasn't just the capture of otherwise moving objects (empiricism) but also a corresponding analysis of the captures (rationalism). The cinema does not begin with Muybridge because, while the encoding is part of what would become the cinematic apparatus, the decoding is not: the captures were not projected (decoded) in time but projected in space, laid out side by side. Time (and motion) is thought of in terms of space rather than in terms of time - or, to put it another way, time remains theoretical (rational) rather than visible (empirical). An Einsteinian (or Minkowskian) concept of time: time as a fourth dimension of space. Hyperspace. In this context time is perceived as some sort of eternal thing, or even an illusion! The concept of time as something one might actually experience - i.e. as a duration, or a waiting, as in the waiting room of a hospital, or in an Ozu scene, or in the interminable stretches of time aboard the Jupiter mission spaceship in 2001 - remains yet to be understood.

 

But keeping time (and motion) arranged in space does give analysis a way to progress. By assigning time the structure of space, it lends itself to mathematical treatment. The conversion back into actual time (cinema) can be deferred.

 

One of the interesting points in the history (in relation to cinema) might be the work done by the MPEG group on video compression: the employment of motion vectors.
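
As a toy illustration of the motion-vector idea (this is not MPEG's actual encoder, just a minimal sketch assuming greyscale frames held as NumPy arrays), exhaustive block matching looks roughly like this:

```python
import numpy as np

def match_block(prev, curr, y, x, block=16, search=8):
    """Find a motion vector for one block by exhaustive search.

    prev, curr : 2D greyscale frames (NumPy arrays)
    (y, x)     : top-left corner of the block in `curr`
    Returns (dy, dx) minimising the sum of absolute differences.
    """
    target = curr[y:y + block, x:x + block]
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            # Skip candidate positions that fall outside the previous frame
            if yy < 0 or xx < 0 or yy + block > prev.shape[0] or xx + block > prev.shape[1]:
                continue
            candidate = prev[yy:yy + block, xx:xx + block]
            sad = np.abs(target.astype(int) - candidate.astype(int)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best
```

Real encoders are far smarter about the search, but the principle - describe motion as spatial displacement between frames - is the same.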

 

And another key point is the work behind MatchMove technology.

 

Another area of interest is "Structure From Motion".

 

And of course the background behind technology such as XBox Kinect.

 

In each case we see analytic methods being used to make computational sense of time/motion (time > space > analysis) before transforming such information back into time again.
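
To make that loop a little more concrete with the Structure From Motion case: a bare-bones two-view sketch (assuming OpenCV, a known camera matrix K, and already-matched feature points between two frames - illustrative only, not any particular product's pipeline) recovers the relative camera motion and then triangulates the matches into 3D structure:

```python
import cv2
import numpy as np

def two_view_structure(pts1, pts2, K):
    """Recover relative camera pose and sparse 3D points from two views.

    pts1, pts2 : Nx2 arrays of matched image points (pixel coordinates)
    K          : 3x3 camera intrinsic matrix
    """
    # Essential matrix from the matches, with RANSAC rejecting outliers
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Projection matrices: first camera at the origin, second displaced by (R, t)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])

    # Triangulate to homogeneous 4D points, then dehomogenise
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T.astype(float), pts2.T.astype(float))
    return R, t, (pts4d[:3] / pts4d[3]).T
```

Motion over time becomes geometry in space, which can then be re-projected and played back - time > space > analysis > time again.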

 

Carl


Seems like we have similar interests - hope we don't scare people off :D

 

Right now I'm reading up on DSP, Fourier transforms and what not, laying the fundamentals for the image processing component of (some) motion capture algorithms. What captures my interest is the tenuous connections between all the different processes, particularly attempting to decouple time from the other dimensions and seeing what can come from not treating it as if it were special. Of course, it is different in that most (if not all?) originating sampling devices, by their physical mechanism, will treat it differently, but you know ... who knows what might come from it.
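
For what it's worth, one place the Fourier material connects directly to motion estimation is phase correlation: a global shift between two frames shows up as a pure phase term in the cross-power spectrum, and its inverse FFT peaks at the shift. A rough NumPy sketch (assuming two same-sized greyscale frames):

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer (dy, dx) translation between two greyscale frames."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame into negative offsets
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```

It only handles pure translation, but it's a nice self-contained example of motion being read off in the frequency domain.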

 

I'm studying an engineering masters so I'm at the level where some technical books found in an engineering library are tractable (used 'tractable' in a sentence today >> tick), and yes, some papers and theses are also within my grasp - but so, so many ... and of course each with its own screed of references ... and then, for instance, the Kinect patent ... it could win the Turner Prize for its legalese and technical jargon :rolleyes:

 

Anyhoo, work finished a week ago for an indeterminate amount of time - (the joys of being what the folk call 'freelance') - so I may as well make the most of the new summer sun, close the blinds and get into bed with some heavy math.


Seems like we have similar interests - hope we don't scare people off :D

 

 

 

The Kinect stuff is interesting. I have been programming the Kinect for a number of projects. There's a new one coming out soon (apparently) with a better camera. The old one (the existing one) is god-awful: very noisy, and it needs a lot of filtering to get a decent signal. The depth camera isn't much better. But despite that it does the job it needs to do. I found Intel's Perceptual Computing SDK a lot cleaner than the Kinect, but the Kinect has that brand-recognition factor, which works well when pitching work. You'd think it wouldn't matter, but it does. And the Kinect is also quite an innovative thing for Microsoft to have funded - historically they have been trend followers, but in this case they were definitely the trend setter. I understand that Apple has now bought PrimeSense, the company behind the original Kinect's depth sensor.
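
On the noise: a dirt-simple thing that already helps (just a sketch over raw depth frames as NumPy arrays, not tied to any particular Kinect SDK) is an exponential moving average that skips the zero/invalid pixels the sensor reports:

```python
import numpy as np

class DepthSmoother:
    """Exponential moving average over depth frames, ignoring invalid (0) pixels."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha      # higher = trust the newest frame more
        self.state = None

    def update(self, depth):
        depth = depth.astype(np.float32)
        valid = depth > 0       # the sensor reports 0 where depth is unknown
        if self.state is None:
            self.state = depth.copy()
            return self.state
        # Blend new valid samples into the running estimate; hold the old
        # value where the new frame has no reading.
        self.state[valid] = (self.alpha * depth[valid]
                             + (1 - self.alpha) * self.state[valid])
        return self.state
```

It smears fast motion a little, so in practice you'd gate it on how much the depth changes per pixel, but it takes a lot of the shimmer out.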

 

 

C


 

What captures my interest is the tenuous connections between all the different processes, particularly attempting to decouple time from the other dimensions and seeing what can come from not treating it as if it were special. Of course, it is different in that most (if not all?) originating sampling devices, by their physical mechanism, will treat it differently, but you know ... who knows what might come from it.

 

In terms of conventional theoretical physics (i.e. mathematics), time and space are bound together in a single spacetime continuum - a four-dimensional frame of reference in which Relativity Theory governs the structural relationship between the parts. Newer frameworks (such as String Theory) extend this idea into more dimensions, but it's still basically the same idea - a homogeneous, consistent mathematical (or geometrical) structure.

 

In other words, time is already treated as not special. It's effectively been converted into another dimension of space (or of geometry). Homogeneity. Consistency. A Theory of Everything approach.

 

However, in terms of cinema, this is actually the problem. Cinematic time differs from abstract time insofar as cinema time is something an audience experiences. It takes actual time (experiential time) to watch a film. But in abstract, spatialised time this otherwise experiential time is expressed geometrically, and as a result there is no sense of experiential time - it's as if the past, present and future all co-existed at once - all at the same time.

 

And indeed, when one looks at a roll of film, or a hard drive, the experiential time of any cinema encoded in it has been spatialised. The past, present and future are arranged along the length of the filmstrip, or across spatially separate bytes of memory on disc, or in solid-state memory. Co-existing.

 

This is not necessarily the best way to understand time. In this state the time is locked up. Encrypted, giving rise to fantasies of Time Travel.

 

An alternative conception of time, which the cinema understands, is to de-spatialise this time - to restore or otherwise decrypt this locked up time. To create time. Or recreate it - the time it takes to watch a film. Or the micro-time between one frame and another. But as experienced rather than as it is arranged in a film strip or in bytes, or along the timeline of an NLE.

 

This is one of cinema's secrets - that it provides us with a concept of time very different from mathematical time. In other words it gives us a concept of time as something that is special rather than not so.

 

The hard part is that this special time doesn't lend itself so well to mathematical treatment. It's as if we have no choice, from a computational, mathematical point of view, but to use a spatialised version of time. However, the solution is to ensure that while one might work in terms of spatialised time, and all that entails, one can transform the results of such work back into time proper through the simple act of doing what the cinema does (or music): perform the work - or run the program.

 

It is what we otherwise call "real time" or "run time" in programmer's speak.

 

Special time.

 

C


Slitscan has always interested me - and perhaps a more generalised n-dimensional understanding of it...

 

n-1 >> audio spectrograms played sideways

n+1 >> a bit harder to visualise

 

Also gotta try 3D slitscan, once you're playing sideways, it'll actually be temporal parallax (i.e. a delay) that will provide depth.
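
A basic slitscan is almost a one-liner once the frames are stacked into a volume; a sketch (assuming a greyscale video loaded as a (time, height, width) NumPy array) that also covers the "played sideways" idea:

```python
import numpy as np

def slitscan(video, column):
    """Classic slitscan: one pixel column from every frame, stacked over time.

    video  : greyscale volume of shape (frames, height, width)
    column : x position of the slit
    Returns an image of shape (height, frames): time has become the
    horizontal axis of the output.
    """
    return video[:, :, column].T

# "Playing the volume sideways" is just re-slicing the same (t, y, x) cube:
# every x position yields one slitscan image, and stepping through x plays
# the spatial axis as if it were time.
def sideways_frames(video):
    for x in range(video.shape[2]):
        yield slitscan(video, x)
```

The 3D/temporal-parallax version would then come from offsetting the slit position (or the frame index) between the left and right eye views.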

 

Time for a cup of tea ;)

 

You're in Melbourne also ?


You're in Melbourne also ?

 

 

Yes I am. I didn't realise you were also. I'm out in Spotswood (near Williamstown). I work from home most of the time - at least when on programming duties (or otherwise goofing off on film forums). But otherwise I'm working (or rather playing) in Carlton, at a studio called the Artist Film Workshop - open to membership if you are interested - but note that by "film" it literally means film of the photochemical variety: 16mm and Super 8, etc. Wet labs and darkrooms. Mechanical cameras and projectors. Workshops and art gallery exhibitions.

 

http://artistfilmworkshop.org/

 

C


Also gotta try 3D slitscan, once you're playing sideways, it'll actually be temporal parallax (i.e. a delay) that will provide depth.

 

 

Re. 3D slitscan - I'm guessing you mean something like this:

 

 

I have been working (on and off) on film transfer systems and processing algorithms where I use temporal information (derived from optical flow estimations) to recompute and otherwise enhance transferred film - the development of super-resolution algorithms, although it's not just about extracting a sharper signal from the film, but dynamic range as well.

 

Part of the research work involves using Structure From Motion to segment those aspects of the image that are rigid bodies - constructing a 3D model of those aspects of a scene which vary only in terms of their 3D orientation and position with respect to the camera - and then integrating their projections along the temporal axis, before redistributing the projections back into time (from whence they came). And ensuring the result is at least as good as the unprocessed signal - i.e. not making it worse. And how to computationally judge that!
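
As a very rough illustration of the "integrate along the temporal axis" step (not the actual pipeline described above - just a toy, assuming OpenCV's Farnebäck flow and greyscale uint8 frames): warp neighbouring frames onto a reference frame along the estimated flow, then average them.

```python
import cv2
import numpy as np

def temporal_average(frames, centre):
    """Average neighbouring frames onto frames[centre] along estimated optical flow.

    frames : list of same-sized greyscale uint8 frames
    centre : index of the reference frame
    A crude stand-in for integrating projections along the temporal axis.
    """
    ref = frames[centre]
    h, w = ref.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    acc = ref.astype(np.float32)
    count = 1
    for i, frame in enumerate(frames):
        if i == centre:
            continue
        # Dense flow from the reference frame to its neighbour
        flow = cv2.calcOpticalFlowFarneback(ref, frame, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        # Pull the neighbour's pixels back onto the reference frame's grid
        warped = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
        acc += warped.astype(np.float32)
        count += 1
    return (acc / count).astype(np.uint8)
```

Averaging like this suppresses grain and noise where the flow is good, and falls apart exactly where you'd expect - occlusions, reflections, transparency - which is where the "is it at least as good as the unprocessed signal?" question bites.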

 

Reflective and transparent surfaces are quite challenging.

 

C

Edited by Carl Looper
