
Analytical breakdown of the DI or telecine process with digital or film source footage


Stelios Contos

Recommended Posts

35mm and digital S35 imager cameras are 4x3 or 1.33:1 aspect ratio.

 

So with a 2x squeeze, the imager captures twice the width.

 

With a 1.33:1 aspect ratio frame size, you're talking about 2.39:1 aspect ratio.

 

Actually, it would be a 2.66:1 AR with a 2x squeeze, so you'd have to crop left and right to obtain a 2.40:1 AR. Anamorphic 35mm is usually shot with a 1.20:1 mask in the gate.


  • Premium Member

Hey all,

 

I wanted to know if anyone has found a site that gives a written breakdown, as well as footage, showing the DI or telecine processes?

 

Also, are film "blow-ups" (e.g. for IMAX) done in the DI or telecine stage? And could someone expand on film blow-ups and when/how they are used?

 

My third question is partially tailored to the film section, but: generally speaking, when shooting for a production company that intends a DCP (theatrical) release with a final aspect ratio of 2.35:1, what's the better option: 4-perf 35mm, 16mm film, or a digital Super 35 sensor? If the production company wanted to save money, would principal photography benefit from shooting with anamorphic lenses, or from just doing the cropping and ratio changes at the end?

 

Said more simply: what are the resolution, sharpness, grain, etc. gains or losses when using anamorphic versus spherical lenses on 4-perf 35mm, 16mm, or a digital Super 35 sensor?

Also, I'm still trying to wrap my head around the costs associated with shooting on a given film size and finishing in the released aspect ratio (i.e. how the frame is cropped) that suits the story at hand.

 

Thanks.

 

Hope that makes sense.

 

Sites, video links, charts are all helpful.

 

 

Telecine just means a machine that transfers film to video in real time. Generally a film scanner is used for D.I. work -- film scanners don't work in real time, they scan one frame at a time. But there is some grey area between telecines and scanners. Some telecines transfer to data files instead of a video codec, and some run slower than real time when doing higher resolutions, making them pseudo scanners.

 

For the most part, telecines are used to transfer film to video for offline editing, and for making a home video master from a film that didn't go through a D.I.

 

More and more often though, scanners are used even for home video mastering.

 

Technically, "blow up" is misnomer to describe recording a digital file (made by scanning a smaller film format) onto a larger film format since there is no optical enlargement going on, though people use the term all the time. But basically the film recorder puts the digital file onto the piece of film. Now in the case of putting the file onto IMAX film (15-perf 65mm), there is some work done in terms of grain/noise reduction and sharpening to make sure the image holds up on a very large screen.

 

Anyway, "D.I. stage" versus "telecine stage" doesn't quite make sense.

 

Typically (as I said above) a film would use a telecine to make a transfer to video for offline editing. Then, for a D.I., once an EDL is created, the original camera rolls are scanned to RGB data files (usually DPX files) and the shots used in the edit are pulled out and conformed to match the offline edit. VFX shots, titles, and transitions are also cut in to create a master. Some shots might get cleaned up for dirt, dust, and scratches. Then the files are color-corrected (sometimes some of these things, like building transitions or fixing a scratch, happen simultaneously and are dropped in as they are finished to be color-corrected). So you have a color-corrected master, and from that you make versions like a 2K or 4K DCP, 1080p HD home video masters, etc. The digital master might also get sent to a film recorder and the image put onto film (often as a negative so that prints can be struck).

 

In terms of shooting for a 2.40 DCP, shooting with spherical lenses and cropping is more common than using anamorphic lenses, just because it's easier to shoot that way, plus it will be easier to make the 16x9 full-frame home video master if the original isn't 2.40 already.

 

In terms of image quality, whether anamorphic lenses help you get a higher resolution 2.40 version all depends on whether the anamorphic lens saves you from cropping the original, or how much cropping is needed, and the resolution of the original format. In the case of 4-perf 35mm, anamorphic would get you a better image because it uses most of the 4-perf negative area unlike cropping 35mm to 2.40, though spherical lenses are often sharper than anamorphic lenses so it's not a clear-cut issue.


  • Premium Member

Actually, it would be a 2.66:1 AR with a 2x squeeze, so you'd have to crop left and right to obtain a 2.40:1 AR. Anamorphic 35mm is usually shot with a 1.20:1 mask in the gate.

If you shoot Academy and make a print, you aren't cropping to get 2.39:1?


If you shoot Academy and make a print, you aren't cropping to get 2.39:1?

I don't understand the question.

 

If you shoot with 2x Anamorphics on a 1.33:1 sensor, you'll end up with 2.66:1. That's just math. So you'll need to crop to 2.40:1. That's why anamorphic 35mm is done with a 1.20:1 aperture, so that there's no cropping necessary.
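Since this really is just arithmetic, here's a quick Python sketch of it (purely illustrative, no particular camera assumed):

# Unsqueezed aspect ratio = capture aspect ratio x anamorphic squeeze factor
def unsqueezed_ar(capture_ar, squeeze):
    return capture_ar * squeeze

print(round(unsqueezed_ar(4 / 3, 2.0), 2))   # 1.33:1 sensor with a 2x squeeze -> ~2.67:1, so you must crop to 2.40
print(round(unsqueezed_ar(1.20, 2.0), 2))    # 1.20:1 anamorphic gate with a 2x squeeze -> 2.4:1, no crop needed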


  • Premium Member

Not sure I understand the question.

 

The standard 4-perf 35mm anamorphic aperture is more or less the width of 1.37 Academy with the same offset to the right to make room for a soundtrack on the left, but the height of Full Aperture, taller than Academy Aperture.

 

It's rather similar to the first sound film format, Fox Movietone. Basically you had the 1.33 Silent Era / Full Aperture but the left side was shaved off for an optical track, making the projected area 1.20 : 1 instead of 1.33 : 1.

 

The industry felt this was too square-ish and in 1932, the AMPAS proposed that the top & bottom of a projector mask crop the image further, to trim 1.20 : 1 to 1.37 : 1, creating the Academy Aperture.

 

The original idea with CinemaScope (2X anamorphic) was to use 4-perf 35mm Full Aperture prints and run the sound in interlock using a separate 35mm mag roll, as Cinerama did. So the aspect ratio would have been 2.66 : 1 (2X 1.33).

 

But even before release of the first CinemaScope movie, it was decided that Kodak should make a special print stock with smaller sprocket holes to make room for tiny magnetic stripes on each side of the image on the print. This shaved the projected area down from 2.66 : 1 to 2.55 : 1.

 

Then later it was decided to use regular print stock with a regular optical track on the left side, shifting the projected image to the right side of the print and shaving the width down even further to 2.35 : 1.


  • Premium Member

A rented 35mm camera set up for anamorphic photography would traditionally be 4-perf 35mm with the optical center of the lens offset for sound-print projection (Academy / 1.85 / scope) instead of centered between the perfs as with Super-35.

 

Usually the camera is still exposing Full Aperture unless you put a hard matte in the gate, but that extra image area on the left side is cropped off when making the sound print, and the scope projector mask also covers the optical track area.

 

However in this day and age of D.I.'s, some people use a 4-perf Super-35 camera so that a 2.66 (when unsqueezed) image is centered on the negative, but framed for 2.40 with the excess image equally on both sides rather than all on the left side. This just negates being able to make contact prints with optical soundtracks for screenings.


  • Premium Member

Then later it was decided to use regular print stock with a regular optical track on the left side, shifting the projected image to the right side of the print and shaving the width down even further to 2.35 : 1.

Right, so when you shoot 2x anamorphic and project it, you are getting 2.39:1.

 

I guess Stewart's point is that with a digital imager the aspect ratio would be 2.66:1.


  • Premium Member

If your digital sensor and recording off of it is 4:3 / 1.33 : 1, then yes, you get a 2.66 : 1 image with a 2X anamorphic lens. But since a scope image for a DCP is 2048×858 or 4096×1716 / 2.39 : 1, then at some point in the D.I. the image is trimmed.

 

It's interesting to look at the new ProRes anamorphic recording options for the Alexa Mini:

https://www.arri.com/camera/alexa_mini/camera_details/alexa-mini/subsection/license_options/

 

ProRes 4:3 2.8K records the full 4:3 sensor area (2880 x 2160) and can be used with anamorphic or spherical lenses. It offers frame rates up to 50 fps and an optional 2x anamorphic de-squeeze for the EVF and SDI monitoring paths.

ProRes 2.39:1 2K Ana. records a 2.39:1 standard 2K format (2048 x 858), which does not require any cropping or scaling in post. Due to the in-camera scaling the data rate is reduced, allowing recording speeds up to 120 fps.
ProRes 16:9 HD Ana. is for situations where the look of anamorphic lenses is desired but the end product is full 16:9 HD without letterboxing. This mode also supports recording speeds up to 120 fps.
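For a sense of the geometry behind that 2.39:1 2K mode, here is a back-of-the-envelope Python sketch; the actual in-camera de-squeeze and scaling pipeline is ARRI's own, so these numbers only approximate what it has to accomplish:

# 4:3 photosite area of the sensor, de-squeezed and fitted into a 2K scope container
sensor_w, sensor_h = 2880, 2160      # ProRes 4:3 2.8K recording area
squeeze = 2.0
desqueezed_w = sensor_w * squeeze    # 5760 "virtual" pixels of width once unsqueezed
print(round(desqueezed_w / sensor_h, 2))   # ~2.67:1, wider than the 2.39:1 target

target_w, target_h = 2048, 858       # 2K scope container in a DCP
used_w = sensor_h * (target_w / target_h)  # de-squeezed width actually kept for 2.39:1
print(round(used_w))                 # ~5156 of the 5760 de-squeezed pixels, then scaled down to 2048 x 858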

 

 


 

So what are the resolution limits of expanding (for simplicity) 4-perf 35mm film?

I always wondered how the process goes from one frame size of film stock to a 2K DCP digital master. Originally I looked at it the way you'd look at a digital image: if you just resize it in the most general way, you start to lose quality relative to the original.

 

Also, are you saying that in the most common case, if a movie is shot on film and slated for a DCP, the movie goes through a telecine process? Telecine and then a D.I.? Or just one of those processes, since doing both would be redundant, costly, and degrading to the source footage?


  • Premium Member

Why are you surprised that theaters had to pick some standards for digital file sizes for projection? TV broadcast has standards for sizes -- 720 x 480 pixels for U.S. standard def, 1920 x 1080 pixels for HD, etc. Same goes for movie theaters using digital projectors, someone had to come up with some standards for sizes and frame rates, etc.

 

And film and digital are different systems completely, so you aren't "resizing" film for digital, you are converting film to digital, it's a transformative process, not a case of optical enlarging or shrinking.

 

When you talk about "resolution limits of expanding 4-perf 35mm film", what do you mean? Are you asking what the difference in quality is between 4-perf 35mm anamorphic vs. cropping Super-35 spherical to 2.40? Basically you are using a bigger negative area with 4-perf 35mm anamorphic compared to cropped Super-35 -- bigger negatives tend to have less grain because compared to the cropped Super-35 image, there is less enlargement to the viewer's eye. Bigger negatives also tend to capture more image detail. But on the flip side, anamorphic lenses in general are a little less sharp than spherical lenses, but that really depends on the individual lens and how it is used.

 

But then you start to get into the issue of "perceived" sharpness with the shallower focus of much anamorphic photography. Even if, in low light at wide apertures, the anamorphic image technically gets softer than a spherical-lens image, the shallower focus makes the object in focus stand out more, which gives the illusion that it has more detail: since so much of the surrounding frame is soft, what's sharp feels sharper than it really is.

 

Now outdoors in sunlight with the lens stopped down a bit, I usually feel that most anamorphic movies look more detailed than Super-35 movies, but in low-light at wide apertures, I've seen plenty of anamorphic movies that seemed softer than Super-35 movies. But other times, like I said, in close-ups, the anamorphic movie even in low-light might feel more detailed because of how everything else drops off so much into blurred focus.


  • Premium Member

Wow. That ARRI info David provided is making sense. As is everything else now.

 

Is 4:3/1.33 : 1 full aperture?

For 4-perf 35mm, full aperture (the largest area you can expose an image between the perf rows) is 4:3 / 1.33 : 1 beginning with the Silent Era.

 

But for a digital sensor, there is whatever the max recorded area of the sensor might be. Even with the 4:3 sensors in the Alexa, the max recorded area is called Open Gate and is actually around 1.55 : 1. Not sure you'd call that "full aperture", which is more of a film camera term.

 

https://www.arri.com/camera/alexa/workflow/working_with_arriraw/arriraw/format/

 

Sensor raster:
2880 x 1620 active pixels (16:9)
2880 x 2160 active pixels (4:3)
3414 x 2198 active pixels (Open Gate)

Image aperture:
23.76 x 13.37 mm / 0.935 x 0.526" (16:9)
23.76 x 17.82 mm / 0.935 x 0.702" (4:3)
28.17 x 18.13 mm / 1.109 x 0.714" (Open Gate)

Output resolution:
up to 2880 x 1620 (16:9)
up to 2880 x 2160 (4:3)
up to 3414 x 2198 (Open Gate)
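One quick sanity check on those numbers: each mode is a crop of the same sensor, so dividing aperture width by pixel count should give the same photosite pitch in every case. A small Python check:

modes = {
    "16:9":      (2880, 23.76),   # pixels across, aperture width in mm
    "4:3":       (2880, 23.76),
    "Open Gate": (3414, 28.17),
}
for name, (pixels, width_mm) in modes.items():
    print(name, round(width_mm / pixels * 1000, 2), "micron photosite pitch")
# all three come out at roughly 8.25 microns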


 

David, I suppose I shouldn't have grouped the blow-up process for film with the resizing or enlarging of digital images. I also previously didn't know that celluloid has such a range when it comes to resolution.

 

That's what I meant when I said "resolution limits": at what point would one notice an unsharp picture at a given resolution when blowing up a 4-perf 35mm frame? But your response to my question works too. Are the qualities of 4-perf 35mm shot with anamorphic lenses that different from cropping Super 35 shot with spherical lenses?

 

Also, a little more on Super 35 on film (I'll keep it on film to make things simpler). Super 35 initially gave me the impression that it's bigger than 35mm. Do you mean 4-perf 35mm has a bigger usable surface area than Super 35 right off the bat? Or is the Super 35 surface area smaller because spherical lenses are used during production and, when everything is said and done, the image is cropped to 2.40?


  • Premium Member

 


 

 

Again, what do you mean by a "blow-up process"? Super-35 generally goes through a D.I. so there is no optical enlargement happening to get a 2.40 image.

 

Here is an old chart I drew back when it was an optical blow-up process:

anamorphic3.jpg

 

You can see that Super-35 uses the extra area on the left where the soundtrack goes, but that's just basically giving you roughly 3mm of extra picture width, but you are using a lot less picture height compared to 2X anamorphic photography.

 

It's just math, look at the negative area used to create a 2.40 image in either case:

 

The 2.40 area extracted from 4-perf 35mm or 3-perf 35mm is 24mm x 10mm

 

The 2.40 area used for 4-perf 35mm anamorphic projection is 20.96mm x 17.53mm, but let's round that off to 21mm x 17.5mm (a 1.20 : 1 ratio, expanded by 2X horizontally to 2.40 : 1).

 

That's 240 square millimeters vs. 367.5 square millimeters in terms of total negative area used for a 2.40 image. Bigger negative = less grain if both images are shown the same size. It also means more detail recorded but you have to factor in the softer quality of anamorphic lenses with the typically shallower depth of field. But outdoors in daytime with the lens stopped down, generally the anamorphic image will look higher in resolution than Super-35 unless older, softer, lower-contrast anamorphic lenses are used.
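The same comparison written out as arithmetic (identical numbers, just spelled out in Python):

super35_extraction = 24.0 * 10.0     # 2.40 crop from spherical 4-perf / 3-perf 35mm, in sq. mm
anamorphic_area    = 21.0 * 17.5     # rounded 1.20:1 anamorphic projection area, in sq. mm
print(super35_extraction, anamorphic_area)              # 240.0 vs 367.5
print(round(anamorphic_area / super35_extraction, 2))   # ~1.53x more negative used for the same 2.40 frame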


  • Premium Member

With a 2K DCP, which uses spherical projection lenses for a 2.40 image, you aren't going to see much resolution difference between Super-35 and anamorphic, though you still might see some graininess difference. This is partly because instead of cropping and stretching the Super-35 image to anamorphic as in the optical printer process, you are digitally squishing the anamorphic image down vertically to become normal to create the 2K DCP file (2048 x 858 pixels). In other words, you may scan the anamorphic frame at 2K and have a taller file size because the image is 1.20 : 1, but at some point you have to get rid of the squeeze by shrinking the vertical dimension down. Now with a 4K DCP, you may see the difference between Super-35 and anamorphic a little bit more clearly.
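Here is a rough sketch of that vertical squish, assuming the 1.20:1 camera aperture is scanned 2048 pixels wide (actual scan dimensions vary by facility):

scan_w = 2048
scan_h = round(scan_w / 1.20)    # ~1707 lines for a 1.20:1 squeezed frame scanned 2048 wide
dcp_w, dcp_h = 2048, 858         # 2.39:1 scope container in a 2K DCP
print(scan_h, round(scan_h / dcp_h, 2))   # ~1707 scanned lines scaled down to 858, about a 2x vertical shrink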


 

 


 

I suppose when I said "blow-up" I meant: what is the standard resolution of 35mm film, and what are the possibilities of up-resing it (e.g. to 2K, 4K, or 6K, or onto a bigger piece of film like IMAX)?

 

David, the chart is informative. Is the chart describing the Super 35 optical blow-up process with a spherical lens, and the bottom display describing an anamorphic lens being used to achieve the same 2.35:1 AR?

 

If my question above is right, couldn't one just use an anamorphic lens on the Super 35 negative to use the same surface area as the 35mm anamorphic frame, since the grayed-out area is still usable on the Super 35 film?

 

I just have to keep looking over the math.

The used area of the anamorphic 35mm film is smaller because the anamorphic lens squeezes vertically, right?



Ooh, so it's the horizontal captured area of the S35 film... I thought I was on to something. I'm looking at the wiki pages Tyler provided, which I have read numerous times before; thanks again. I'm getting a lot of it, but I probably need some hands-on experience too.


  • Premium Member

The used area of 35mm anamorphic for 2.40 images is larger, not smaller, than cropped Super-35 because of the horizontal squeezing of the anamorphic lens.

 

There are 1.3X anamorphic lenses for 3-perf 35mm to squeeze a 2.40 image onto a negative that is roughly 1.78 : 1 naturally.

 

But as for why not use an anamorphic lens on 4-perf Super-35, it's because the standard anamorphic lens has a 2X squeeze, so if the end goal is a 2.40 : 1 image, that means the actual area used on the negative is 1.20 : 1, which on 4-perf 35mm, is roughly the Full Aperture height but only the Academy (sound) Aperture width.

 

Since 4-perf Super-35 Full Aperture is a 1.33 : 1 aspect ratio, if you put a 2X anamorphic lens on it, you'd have a 2.66 : 1 image once unsqueezed, so you just end up trimming the width back to the standard 35mm anamorphic area anyway to get a 2.40 image. Someone would have to make an anamorphic lens with a 1.8X squeeze in order to fit a 2.40 image onto a 1.33 negative.
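The arithmetic behind those two statements, as a short Python sketch:

gate_ar   = 4 / 3     # 4-perf Super-35 full aperture
target_ar = 2.40
print(round(gate_ar * 2.0, 2))        # 2x squeeze on 1.33:1 -> ~2.67:1, so the width gets trimmed
print(round(target_ar / gate_ar, 2))  # 1.8 -> the squeeze factor that would fill 1.33:1 with no trim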

 

In terms of resolution, first keep in mind that film does not have any pixels, so describing the resolution of film in terms of horizontal pixel counts like 2K, 4K, etc. is somewhat inaccurate.

 

Second, the Nyquist Limit states that you need to sample at twice the frequency or resolution to avoid aliasing, so in an ideal world, film would be scanned at twice the resolution that the image actually contains. However, most real world objects and scenes do not have a lot of fine grid-like details that create moire problems so twice is probably not necessary.

 

Third, this is a controversial topic and all I can give you is my personal opinion based on shooting and putting film through a D.I.

 

Fourth, "2K" and "4K" refer to horizontal pixel count so it gets a bit odd to compare formats with a different vertical measurement but a similar horizontal one.

 

Fifth, there is the issue of resolution on the original negative, resolution on a print made off of that negative and/or dupes, and resolution on screen when that print is projected.

 

Anyway, I think that 35mm film captures detail in the 3K range, and it's definitely a range in real photography -- shot by shot, resolution can vary quite a bit depending on the lens, focus, movement in the frame, etc. Some people shooting line resolution charts come up with figures like 3.5K for color negative film, for example. Now if this is true, Nyquist would then suggest that the film be scanned at 7K if the detail measures out at 3.5K. However, for most 35mm movie photography, scanning at 4K or 6K works fine.
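The Nyquist point boiled down to one multiplication, using the 3.5K figure quoted above (a rule of thumb, not a measurement):

negative_detail_k = 3.5          # detail some people measure off color negative, in "K"
ideal_scan_k = 2 * negative_detail_k
print(ideal_scan_k)              # 7.0 -> in theory a ~7K scan, though 4K or 6K is typical in practice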

 

And I think a good rule of thumb, not scientific, is that the projected image is more like half the resolution of the original negative. So to be very crude and thus not too accurate, one could say, for example, that 35mm negative is 4K and 35mm print projection is 2K, and therefore 5-perf 65mm negative is 8K and 5-perf 70mm print projection is 4K, and 15-perf 65mm IMAX (which is 3X the horizontal width of Super-35) is 12K and 15-perf 70mm IMAX projection is 6K. Like I said, this is just an opinion and fairly roughly calculated but to me it suggests targets, for example, that a digital IMAX projection system should be 6K.
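And the "projection is roughly half the negative" chain written out as numbers (again, this is the rough opinion stated above, not a standard):

formats = {
    "4-perf 35mm":        4,    # approximate negative detail in "K"
    "5-perf 65mm":        8,
    "15-perf 65mm IMAX": 12,    # roughly 3x the width of Super-35
}
for name, neg_k in formats.items():
    print(f"{name}: negative ~{neg_k}K, projected print ~{neg_k // 2}K")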

 

Now with digital acquisition and distribution, you don't lose half the resolution getting onto the screen, so for me, if 6K is a goal for IMAX digital projection, probably 8K should be fine for acquisition, you don't need to build a 12K camera.

 

And I'm not getting into the whole issue of single-sensor Bayer-filtered digital camera having to create RGB versus RGB scans of film (the brief recap is that a 4K single-sensor with a color filter array where 50% of the photosites are filtered green, and 25% are filtered for red and blue, i.e. 2K worth of green, and 1K for blue and red each, is not "true" 4K for red, green, and blue).
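Counting out that Bayer point the same loose way (treating the "4K", "2K", and "1K" figures as shares of the nominal photosite count, which is how it's phrased above):

nominal_k = 4
green_k = nominal_k * 0.50    # half the photosites are green-filtered -> "2K" of green
red_k   = nominal_k * 0.25    # "1K" of red
blue_k  = nominal_k * 0.25    # "1K" of blue
print(green_k, red_k, blue_k) # 2.0 1.0 1.0 -> not "true" 4K per color channel before demosaicing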

 

Things have gotten put onto IMAX film from all sorts of formats with all sorts of resolutions, IMAX Corp. has a process called DMR that basically reduces noise & grain and then adds sharpening. Now whether it always looks good or "holds up" is debatable, but certainly nothing beats true 15-perf 65mm photography, it's more a question of what looks acceptable on an IMAX screen, especially if there is no true IMAX photography to compare to for the viewer. I've seen a number of 2K D.I.'s that were released in IMAX prints (some Harry Potter films, for example), they looked fine, they just didn't look like IMAX photography.

