How do you place visual effects into a shot when doing a photochemical finish?


Reuel Gomez


From what I understand, digital compositing never really became commonplace until the digital intermediate process became popular in the early 2000s. So how were visual effects (not special effects) placed into a shot if you weren't doing a D.I.?


Optical compositing, multiple exposure, motion control, stop-motion, go-motion, travelling matte, foreground miniatures- this is a huge field. There's no simple answer to your question.

Edited by Mark Dunn

Optical compositing, multiple exposure, motion control, stop-motion, go-motion, travelling matte, foreground miniatures- this is a huge field. There's no simple answer to your question.

I'm referring to CGI, not anything that can be shot practically. Sorry I didn't specify


I don't think there have been more than a few movies that rendered CGI and then composited it optically, and any that did would have been very early examples. The Kodak Cineon system was invented in the early '90s to get around exactly this sort of situation, and it was obsolete by '97. So the sections of the movie that had CG effects in them were scanned, composited and filmed out, although the purpose of doing so was to add effects, not to do grading. Something like "Jurassic Park" would have been done this way.

 

Before that, Tron famously did CGI optical compositing, using a horrendously manual technique: originating on 65mm, then shooting every frame out to large Kodalith sheet film and rotoscoping manually to produce mattes with which to drop in the backgrounds. However, most of what's in Tron is actually compositing of airbrushed backdrops; I'm not sure how much actual live-action/CGI integration there is. There was certainly no moving-camera integration: it was all locked off. The DVD extras famously include one of the people involved saying "it had never been done that way before and it will never be done that way again", and I would hazard the opinion that he's right.

 

Some CGI was done for The Last Starfighter. I'm not sure if there was any live action integration.

 

Theoretically you could still render your CG, film it out, and drop it in using an optical printer, and the process wouldn't be any different to what was done for, say, Star Wars, just using CG elements as opposed to models. The practical complexities of making things look properly integrated and part of the same scene in an optical printer are notorious, involving complex exposure, filtration and timing concerns. I'm not sure this was ever done.

 

P



We had a similar question back in 2004, and this was my response then:

 

http://www.cinematography.com/index.php?showtopic=1067&page=4

 

 

 

 

"Terminator 2" was blown up conventionally using an optical printer. What helps his blow-ups is that Cameron asks his cinematographers to go for a very dense negative. Also, Adam Greenberg did a great job of lighting for contrast, which gave the image a nice snap, which always helps a blow-up.

 

However, it was one of the first films to have its digital efx shots composited digitally and then laser record the finished shot back to film (Super-35.) ILM previously had recorded out the separate digital efx elements to film and then did the compositing with the background plates in an optical printer (as they did for the water snake in "The Abyss.") At a lecture I attended, Dennis Muren said he pushed ILM to get digital compositing and outputting to film ready in time for "Terminator 2."

 

 

"Terminator 2" came out in 1991. So the 1990's were the age of digital effects composites recorded out to film (usually an I.N.) and then cut into the film negative, and that was the norm all the way past "Lord of the Rings" (2001) until D.I.'s had become commonplace by the mid to late 2000's. It's still done today, as in the case of "The Dark Knight" films, which did not go through a D.I.

 

Before that, elements (models, CGI, matte paintings, live-action, etc.) were composited in an optical printer.


ILM previously had recorded out the separate digital efx elements to film and then did the compositing with the background plates in an optical printer

 

Yikes, I stand corrected. That sounds like a pain. The period during which that was the standard approach must have been fairly short, though.



CGI elements were rare in the 1980's other than "The Last Starfighter" -- the stained glass knight in "Young Sherlock Holmes" was 1985 and the water snake in "The Abyss" was 1989.

 

There was some attempt at digital compositing of a few shots even back then, even "Flash Gordon" (1980) tried some electronic compositing which had to be recorded back to film. See:

 

http://www.theasc.com/magazine/april05/conundrum2/page1.html

 

 

The DI process was born out of a rather recent marriage between visual effects and motion-picture film scanner- and telecine-based color grading. Of course, digital imaging began impacting the motion-picture industry a long time ago. While at Information International, Inc. (Triple-I), John Whitney Jr. and Gary Demos (who now chairs the ASC Technology Committee’s Advanced Imaging subcommittee) created special computer imaging effects for the science-fiction thriller Westworld (1973) and its sequel, Futureworld (1976). The duo subsequently left to form Digital Productions, the backbone of which was a couch-sized Cray X-MP supercomputer that cost $6.5 million. With that enormous hunk of electronics (and an additional, newer supercomputer that the company later acquired), Whitney and Demos also produced high-resolution, computer-generated outer-space sequences for the 1984 feature The Last Starfighter. The substantial computer-generated imagery (CGI) in that film was impressive and owed a debt of gratitude to a groundbreaking predecessor: Tron. That 1982 film, to which Triple-I contributed the solar sailer and the villainous Sark’s ship, featured the first significant CGI in a motion picture — 15 minutes worth — and showed studios that digitally created images were a viable option for motion pictures.
During that era, computers and their encompassing “digital” aspects became the basis of experiments within the usually time-consuming realm of optical printing. Over 17 years, Barry Nolan and Frank Van Der Veer (of Van Der Veer Photo) built a hybrid electronic printer that, in 1979, composited six two-element scenes in the campy sci-fi classic Flash Gordon. Using both analog video and digital signals, the printer output a color frame in 9 seconds at 3,300 lines of resolution. If optical printing seemed time-consuming, the new methods weren’t exactly lightning-fast, either, and the look couldn’t yet compete with the traditional methods.
In 1989, Eastman Kodak began research and development on the Electronic Intermediate System. The project involved several stages: assessing the closed-loop film chain; developing CCD-based scanning technology with Industrial Light & Magic; and, finally, constructing a laser-based recording technology and investigating the software file formats that were available at the time. The following year, Kodak focused on the color space into which film would be scanned. The project’s leaders determined that if footage were encoded in linear bits, upwards of 12-16 bits would be necessary to cover film’s dynamic range. Few file formats of the day could function at that high a bit depth. Logarithmic bit encoding was a better match for film’s print density, and Kodak found that 10-bit log could do a decent job (more on this later). The TIFF file format could handle 10-bit log, but was a bit too “flexible” for imaging purposes (meaning there was more room for confusion and error).
Taking all of this into account, Kodak proposed a new format: the 10-bit log Cineon file format. The resulting Cineon system — comprising a fast 2K scanner capable of 4K (which was too slow and expensive to work in at the time), the Kodak Lightning laser recorder and the manageable Cineon file format — caused a radical shift in the visual-effects industry. In just a few short years, the traditional, labor-intensive optical process died out. Though Kodak exited the scanner/recorder market in 1997, the Cineon file format is still used today.
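To make the "10-bit log" idea concrete, here is a rough sketch of encoding a scene-linear value as a Cineon-style 10-bit log code. It uses the commonly quoted convention (reference black at code 95, reference white at 685, roughly 300 code values per decade of exposure, from the 0.002-density-per-code step and a negative gamma near 0.6); this is an illustration of the principle, not Kodak's exact internal math.

```python
import math

# Commonly quoted Cineon constants (assumed convention, not Kodak's exact
# implementation): 10-bit codes, reference black 95, reference white 685,
# ~300 code values per decade of exposure.
BLACK_CV, WHITE_CV, CV_PER_DECADE = 95, 685, 300

def lin_to_cineon10(lin: float) -> int:
    """Encode a scene-linear value (1.0 = reference white) as a 10-bit log code."""
    # Soft black offset so that a linear value of 0.0 lands on code 95
    offset = 10 ** ((BLACK_CV - WHITE_CV) / CV_PER_DECADE)
    cv = WHITE_CV + CV_PER_DECADE * math.log10(lin * (1 - offset) + offset)
    return max(0, min(1023, round(cv)))

print(lin_to_cineon10(1.0))  # reference white -> 685
print(lin_to_cineon10(0.0))  # reference black -> 95
```

The point of the log curve is visible in the numbers: the whole usable range of the negative fits between codes 95 and 685, with headroom above 685 for highlights, which a 10-bit linear encoding could not represent without banding in the shadows.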
---
I seem to recall reading in Cinefex that the early CGI shots, like the Genesis Planet demo in "Star Trek II", were basically photographed frame by frame off of a hi-res B&W CRT monitor using color filters to build up RGB, which was how a lot of the early video-to-film transfer devices worked.


There's an interesting coda to this, about that filtered-CRT trick.

 

From what I read, one of the issues they had on Tron was that one of the companies involved (which included Information International, coincidentally) was doing vector graphics, and filled in areas with repeated raster passes. The vectors are visible in the "journey to the computer world" sequence, where objects are clearly hashed in with lines as the camera flies close.

 

The other guys were doing filled polygons. The vector stuff included Sark's carrier, but the carrier was required to appear in other scenes which were being polygon-rendered. The polygon people were therefore required to emulate the look of the vector graphics using very thin poly objects...

 

All of it was, as far as I know, shot off mono CRTs with RGB filters. Since there was no form of preview, nor any way to store the images, this was the only way to do it: render to a framebuffer and shoot it out to film. I don't think the vector stuff was even simultaneously rendered, just drawn onto a CRT and integrated as a photographic exposure.
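Digitally, that three-pass filtered-CRT trick amounts to nothing more than stacking three monochrome exposures into the colour channels of one frame. A toy sketch (the pixel values here are invented):

```python
import numpy as np

# Three monochrome "exposures" of the same frame, as if shot off a B&W CRT
# through red, green and blue filters in turn (values are made up).
red_pass   = np.array([[0.9, 0.1], [0.4, 0.0]])
green_pass = np.array([[0.2, 0.8], [0.4, 0.0]])
blue_pass  = np.array([[0.1, 0.1], [0.9, 0.0]])

# On the film, the three exposures accumulate on one colour negative frame;
# the digital equivalent is simply stacking them as the R, G and B channels.
frame = np.stack([red_pass, green_pass, blue_pass], axis=-1)
print(frame.shape)  # (2, 2, 3)
```

The film itself does the "stacking" for free, since each filtered exposure only affects one dye layer of the colour negative.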

 

And I thought Imagine 3.0 on the Amiga circa 1994 was painful. That solar sailer was beautiful.

 

P



Modern CRT film recorders from Celco and Lasergraphics work the same way, with a monochrome CRT and color filters; they just work a lot faster. I think the elements for "The Dark Knight", etc. were recorded on a Celco at 4-5 seconds per frame.

 

-Rob-



The Dark Knight trilogy did not go through a D.I., but any visual-effects shot still had to be scanned, composited with efx, and recorded back to a 35mm anamorphic I.N. to be intercut with the non-efx live-action footage shot in 35mm anamorphic... and it got even more complicated with the second two films because of the IMAX footage.

 

The first one, "Batman Begins", had a photochemical contact-printed post (other than the efx shots) but was then also blown up to IMAX by taking a color-timed 35mm I.P., scanning that, putting it through the IMAX DMR process (basically a secret sauce of digital grain reduction and sharpening so that it looks decent on a large IMAX screen), and then recording the results to 15-perf 65mm negative.

 

But for the other two, the filmmakers did not want to do a D.I. and then just record out 35mm, DCP, and IMAX versions from one digital master; they wanted a film master using original negative for each release format (similar to what had to be done with "Little Buddha", which mixed 35mm anamorphic and 5-perf 65mm). So the IMAX footage had to be reduced to 35mm anamorphic and cut into the 35mm original negative, and the 35mm anamorphic footage had to be blown up to IMAX and cut into the IMAX negative. Two original-negative masters therefore exist to contact-print from, one in 35mm anamorphic and one in IMAX. In other words, chunks of the movie went through a D.I.-type process for both the 35mm and IMAX versions, but overall, whenever original negative could be contact-printed from, that's what they did.

 

It's one reason the IMAX sections stand out, as compared to the IMAX sections in the second "Transformers" movie or the last "Star Trek" movie. In those movies, the IMAX footage went through a D.I. and had a lot of efx added to it (and in the case of "Star Trek Into Darkness" it also got converted into 3D), so you don't get to see contact-printed IMAX negative for any non-efx moment as you can in the second two "Dark Knight" movies.



I'm referring to CGI, not anything that can be shot practically. Sorry I didn't specify

 

A CGI element like a monster could be recorded out to a piece of film, probably against a black background, along with the two hold-out mattes needed to be able to composite the element into a background element in an optical printer.

 

Basically positive versions of the elements are loaded into the projector side of the optical printer, along with any hold-out mattes, and are exposed onto a dupe negative in the camera side of the optical printer.

 

See:

http://en.wikipedia.org/wiki/Compositing
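The two-pass printer logic described above (background exposed through the hold-out matte, element exposed through its complement) maps directly onto the standard matte-compositing arithmetic. A minimal digital sketch, with arrays standing in for film frames and invented values:

```python
import numpy as np

def matte_composite(background, element, matte):
    """Digital analogue of a two-pass optical-printer composite:
    pass 1 exposes the background through the hold-out matte
    (element area held black), pass 2 exposes the element through
    the complementary matte."""
    return background * (1.0 - matte) + element * matte

bg      = np.full((2, 3), 0.5)          # background plate, mid grey
monster = np.full((2, 3), 1.0)          # CGI element filmed against black
matte   = np.array([[0, 1, 0],
                    [0, 1, 0]], float)  # 1 where the element should appear

comp = matte_composite(bg, monster, matte)
print(comp)  # middle column is the element, the rest is background
```

In the optical printer the "addition" happens photochemically, as two exposures onto the same dupe negative; the mattes exist precisely so the two exposures never overlap.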


A lot of interesting background, but I think the simple response is that the OP's original understanding was incorrect. Digital compositing was in wide use well before D.I.'s. It was standard for CGI elements through most if not all of the '90s.


A lot of interesting background, but I think the simple response is that the OP's original understanding was incorrect. Digital compositing was in wide use well before D.I.'s. It was standard for CGI elements through most if not all of the '90s.

What happened was I was watching a featurette on the T2 DVD, and I was confused when I heard them talking about the VFX shots and optical compositing; I took it as if Jim Cameron were saying that they were compositing optically.


So anything that contained CGI was scanned/recorded back onto 35/65mm but everything else was contact printed, did I get that right?

 

Basically positive versions of the elements are loaded into the projector side of the optical printer, along with any hold-out mattes, and are exposed onto a dupe negative in the camera side of the optical printer.

What's a dupe negative?


What happened was I was watching a featurette on the T2 DVD and I was confused when I heard them talking about the VFX shots and optical compositing and took it as if Jim Cameron were saying that they were optically compositing.

 

For one thing, even today people loosely use the word "opticals" to refer to certain transitional effects such as dissolves, even though they are now all done digitally. Second, even though ILM tried to do all the efx compositing digitally for T2, the film itself was shot in Super-35 and optically blown up to anamorphic, and there were probably some optical effects added to the negative: occasional shots get farmed out to other efx companies, little stuff, but it may involve optical printing. Also, perhaps ILM's decision to do all the compositing digitally hadn't been made while they were shooting, so a comment made on set might refer to "optical compositing". And perhaps at crunch time, ILM had to push some shots to their optical printer department to finish because they just couldn't get it all done in their digital system.

 

"Dupe negative" ("dupe" being short for "duplicate") is used interchangeably with "internegative" (though technically I think "internegative" refers to a negative made from a reversal positive original). Either way, what I really mean is intermediate duplication stock: a low-contrast film stock with a color mask (that brick-orange color that negatives have), used for duplicating a piece of film with minimal increase in contrast and grain. If you copy a camera negative onto this stock, you end up with a positive image, which is called an interpositive (I.P.); if you then copy that interpositive onto the same dupe stock, you end up with a negative image, which is called a "dupe negative" or "internegative" (I.N.).



So anything that contained CGI was scanned/recorded back onto 35/65mm but everything else was contact printed, did I get that right?

The effects shots needed to be scanned and recorded back, but they also had to digitally reduce the IMAX footage down for the 35mm negative cut, and digitally blow up the 35mm footage to IMAX for the 15-perf 65mm negative cut.


The effects shots needed to be scanned and recorded back, but they also had to digitally reduce the IMAX footage down for the 35mm negative cut, and digitally blow up the 35mm footage to IMAX for the 15-perf 65mm negative cut.

When the 35mm footage is blown up, isn't there a loss in quality?


It's a digital "blow-up", meaning it's more of a format conversion; there's no generational loss as would happen in an optical printer working with dupes. It's just that the original 35mm image is going to be magnified on a larger IMAX screen. You're not going to get "loss" (unless the film element was scanned or recorded at a lower pixel resolution than the detail the image actually contains), but you will be seeing the limitations of the original a little more clearly. However, part of the IMAX DMR process is to reduce grain and sharpen the image so that it holds up a bit better on a large IMAX screen.


No generational loss, but certainly an effect on quality. There is always concern when digitally manipulating images, as certain operations alter the pixel arrangement, among them scaling and rotation. Obviously, when blowing up, you're interpolating in pixels that weren't there before. Bigger image, yes, but not the same as the original. Conversely, when reducing, you're throwing pixels away.
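A quick numeric illustration of that point (sample values invented): linear interpolation manufactures the in-between samples from their neighbours, and reduction simply discards samples.

```python
import numpy as np

row = np.array([10.0, 20.0, 30.0])   # three "real" samples from a scan

# Blow up 2x by linear interpolation: the new samples (15.0, 25.0) are
# computed averages of their neighbours, not detail from the original.
up = np.interp(np.linspace(0, 2, 5), np.arange(3), row)
print(up)    # [10. 15. 20. 25. 30.]

# Reduce back down by keeping every other sample: the interpolated values
# are thrown away (and fresh detail would be thrown away the same way).
down = up[::2]
print(down)  # [10. 20. 30.]
```

Real resampling filters are more sophisticated than this, but the principle stands: upscaling cannot add photographic detail, and downscaling discards it.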



If the original 35mm film element didn't have pixels and you scan it at 4K RGB, for example, and then record that 4K RGB file onto a piece of 65mm film, the "enlargement" is only happening because the image will be projected larger than it normally would be; there is no resizing of the file necessarily. That's why digital conversions from Super-35 to anamorphic, for example, are not really "blow-ups" in the traditional optical sense of the word: you're just more or less outputting the file to a different film format. Yes, some rescaling might happen, but just because your film was originally "x" size and you scan it and then record it onto a piece of film that is four times as large, that doesn't mean you had to make the pixel dimensions of the file four times as large. Besides, interpolating additional pixels when making a file larger does not necessarily mean you are losing information from the original.

 

And we're talking about a relatively small loss (if done at high enough resolution) compared to optical printing using dupe film elements, where you have generational loss.

 

My point is that "blow-up" is an optical term that is not really an accurate way to describe scanning a piece of film and then recording that digital file onto a larger piece of film.


Well, David, I'm not sure I follow you. You do a 4K scan off a 35mm original, and it will record back onto 35mm exactly the same way. Record that same 4K onto 65mm and you're either going to have an image that doesn't fill the larger area, or you're recording with bigger pixels that do fill the larger area. Don't we always laud bigger film for its higher resolution capability?

If we keep our pixel size constant, then any alteration of image size entails adding or subtracting pixels. The pixels you add are computational products, not actual photographic detail, and thus not original. The pixels you subtract, of course, would constitute lost detail from the original.



I'm just saying that enlarging the file doesn't necessarily count as loss per se; you aren't throwing out information to make it larger, you're just seeing its original lack of information more clearly. Whereas if you have to do a blow-up through dupe elements in an optical printer, you have some inevitable generational loss, unless you want to make each IMAX print directly from a 35mm negative in some sort of optical printer that allows it. I once saw "Howards End" printed directly from the original Super-35 negative to 70mm print stock using an optical printer, and it looked quite lovely, but that isn't very practical.

