Perry Paolantonio

Basic Member
  • Content Count

    607
  • Joined

  • Last visited

  • Days Won

    21

Perry Paolantonio last won the day on November 28 2018

Perry Paolantonio had the most liked content!

Community Reputation

83 Excellent

About Perry Paolantonio

  • Rank

Profile Information

  • Occupation
    Other
  • Location
    Boston, MA
  • My Gear
    Eclair ACL II, Pro8mm modded Max8 Beaulieu 4008
  • Specialties
    5k, 4k, UHD, 2k Film Scanning, Film Restoration, Blu-ray and DVD Authoring

Contact Methods

  • Website URL
    http://www.gammaraydigital.com

Recent Profile Visitors

12427 profile views
  1. At least in versions up to 14, Resolve doesn't like mixed file types when replacing media. So if your proxy is a QuickTime and you're conforming to DPX, you may have issues with relinking media. Not sure if that's different in v15 or v16, though.
  2. Compression-wise, yes. ProRes uses a similar intra-frame compression scheme with a pretty low compression ratio. However, ProRes is 10-bit (or 12-bit), unlike JPEG's 8-bit, so it stands up to a lot more before you start to see problems.
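To put rough numbers on the bit-depth point, here's a small Python sketch (toy numbers, not a ProRes or JPEG implementation): it counts how many distinct code values 8-bit and 10-bit encodings leave inside a narrow 5% tonal window, the kind of range a grade might stretch across the whole screen.

```python
# Sketch: why 10-bit holds up better than 8-bit under grading.
# We quantize a subtle luminance ramp (a 5% exposure window) at both
# bit depths and count how many distinct code values survive. Fewer
# values means visible banding once the window is stretched in a grade.

def distinct_levels(lo, hi, bits, samples=10000):
    """Quantize `samples` values spanning [lo, hi] (0..1) to `bits` depth."""
    maxcode = (1 << bits) - 1
    codes = {round((lo + (hi - lo) * i / (samples - 1)) * maxcode)
             for i in range(samples)}
    return len(codes)

# A 5% tonal window, e.g. shadow detail you later lift in a grade.
eight = distinct_levels(0.40, 0.45, 8)    # 8-bit, like JPEG
ten   = distinct_levels(0.40, 0.45, 10)   # 10-bit, like ProRes

print(eight, ten)
```

Roughly a dozen levels survive at 8-bit versus about fifty at 10-bit over the same window; stretch that window in a grade and the 8-bit version bands visibly while the 10-bit version still has headroom.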
  3. Understood, but my point was that this is not a really useful test in the first place. Why would one open a file and save it to a new JPEG of exactly the same size, crop, color, etc.? Why not just make a copy of the file? A more realistic scenario would involve adjustments to the master image, saved to a new file, then further adjustments to that.
Translating that to the motion picture world, you'd make a master, then make adjustments to derivative files made off that master. For example: you have a 2k copy of a flat scan of a film (Generation 1). This gets color corrected and rendered to a new copy (Gen 2), and that gets rendered out and brought into a restoration system (Gen 3). That's then put into a final deliverable master and rendered out as the Restored Master (Gen 4). Now you need an HD file, so you'd render a downconvert to a new file (Gen 5). Then someone asks for a texted version with lower thirds, logos and other stuff, and you're at Generation 6.
If you did this in a format like uncompressed, a high-end ProRes (4444, XQ or HQ), Avid DNxHR or similar, your footage will hold up through many generations without appreciable loss. If you did this with a lossy format like JPEG, it definitely will not. This is not an unusual workflow either. We literally do this 5-6 times per month on feature-length films.
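The generational-loss argument can be sketched numerically. This is a toy model (my simplification, not real JPEG or ProRes math): each "generation" applies a tiny grade tweak and then re-quantizes to the capture bit depth, the way saving a new master would, and we measure how far a pixel ramp drifts from the mathematically exact result after six generations at 8-bit versus 16-bit precision.

```python
# Toy model of generational loss: a small gain tweak per generation,
# followed by re-quantization on every "save". Low bit depth bakes a
# rounding error into every generation; high bit depth barely drifts.

def generations(value, gens, bits):
    """Apply `gens` rounds of (tweak, re-quantize) at a given bit depth."""
    maxcode = (1 << bits) - 1
    v = value
    for _ in range(gens):
        v = min(1.0, v * 1.02)              # small grade tweak per gen
        v = round(v * maxcode) / maxcode    # re-quantize on every save
    return v

def exact(value, gens):
    """The same tweaks applied with no quantization at all."""
    return min(1.0, value * 1.02 ** gens)

ramp = [i / 255 for i in range(256)]
err8  = max(abs(generations(v, 6, 8)  - exact(v, 6)) for v in ramp)
err16 = max(abs(generations(v, 6, 16) - exact(v, 6)) for v in ramp)
print(err8, err16)
```

Even this gentle six-generation chain leaves the 8-bit pipeline measurably off the true values while the 16-bit pipeline is essentially exact; real JPEG adds DCT quantization losses on top of this.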
  4. I don't know what to say here. Hard drives are cheap. You're definitely missing out on a lot of data at 8-bit. This is objectively, empirically provable. An 8-bit image may look OK to the human eye, but the minute you start manipulating the images, you will see it fall apart. It's a scaling artifact: the 1st and 31st generations are probably subtly different because of all those generations of compression.
  5. In the real world, nobody would simply export the same photo to a new copy, then do the same to that copy and to subsequent copies through 31 generations, without altering the photo in some way. Scaling might be one thing, but any color adjustments, cropping, added text, watermarking, or changes in compression level would alter the image in subtle ways, and that will ripple through all subsequent generations. JPEG is a highly lossy compression scheme; it's just how it is. But there's another problem with JPEG: it's only 8-bit, so it's not really suitable for much other than viewing on a computer screen. You wouldn't get as good a print from a color scan of a still photo to JPEG as you would from a 16-bit TIFF, for example. 10-bit is sufficient color depth for scanning most (still or motion) film. Anything less is missing a ton of color information; anything more than 12-bit is probably overkill.
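Here's the "falls apart the minute you manipulate it" point as a sketch (toy numbers, not a real grading pipeline): quantize a dark ramp at 8-bit and at 12-bit, apply a 4x shadow lift in post, and count how many distinct tones are left to fill the lifted range.

```python
# Sketch: shadow lift on an 8-bit vs a 12-bit capture. The lift
# stretches the bottom quarter of the range across the full range,
# so whatever levels the capture quantization kept are all you get.

def lifted_levels(bits, gain=4.0, top=0.25, samples=4096):
    """Count distinct output tones after a `gain`x lift of a dark ramp."""
    maxcode = (1 << bits) - 1
    out = set()
    for i in range(samples):
        v = top * i / (samples - 1)               # dark ramp, 0..0.25
        code = round(v * maxcode)                 # capture quantization
        out.add(min(1.0, code / maxcode * gain))  # shadow lift in post
    return len(out)

coarse = lifted_levels(8)     # 8-bit capture
fine   = lifted_levels(12)    # 12-bit capture
print(coarse, fine)
```

A few dozen tones have to cover the whole lifted range at 8-bit, which is visible posterization; the 12-bit capture keeps over a thousand, so the same lift still looks continuous.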
  6. Paint tools only work in certain situations (such as when the background is mostly solid). The issue with painting out things like this, where you're not incorporating pixels from surrounding frames, is that the paint operation introduces its own artifacts, which can actually be even more distracting than the defect itself. It's really easy to paint out dust or scratches using clone tools in Photoshop or similar applications, and it'll look great on a single frame. But when you put it in motion, you get an effect called "boiling," where the image is the right color and perhaps even texture, but appears to be roiling like a pot of boiling water. There's a reason restoration systems are both expensive and require serious horsepower: they rely on analysis of surrounding frames to create seamless fixes. Painting out persistent gate hairs in such a way that you can't see them is insanely time-consuming work, if it works at all. The tools in Resolve will not solve this problem.
  7. Just to answer the question about a software fix, gate hairs are virtually impossible for software restoration systems to effectively remove. This is because most of them work by pulling pixels from surrounding frames to conceal the defect. This works great when there's one or two or even three sequential frames, but when it's constant, there's nothing for a cleanup tool to lock onto except more dirt in the same spot in other frames. Really the only solution here is to crop it out.
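To illustrate why frame-borrowing tools handle dust but not gate hairs, here's a minimal sketch of a 3-frame temporal median, a simplified stand-in for what real restoration systems do (actual systems also use motion compensation and more frames):

```python
# A 3-frame temporal median pulls each pixel from neighboring frames.
# It conceals a defect that appears on only one frame, but a defect
# present on every frame (a gate hair) survives untouched.

def temporal_median(prev, cur, nxt):
    """Per-pixel median across three consecutive frames (1-D toy frames)."""
    return [sorted(t)[1] for t in zip(prev, cur, nxt)]

clean = [0.5] * 8                           # flat gray "frame"

dusty = clean[:]; dusty[3] = 1.0            # dust: on one frame only
fixed = temporal_median(clean, dusty, clean)

hairy = clean[:]; hairy[3] = 0.0            # gate hair: on every frame
still = temporal_median(hairy, hairy, hairy)

print(fixed, still)
```

The dust speck at index 3 is voted out by the clean neighbors, but the gate hair is "confirmed" by every frame the tool looks at, so it stays. That's exactly why cropping ends up being the practical fix.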
  8. Again, this misses the point. If you project Super 8 on a giant screen, of course it's going to look fuzzy. You're blowing up an 8mm image to thousands of times its original size. But it's going to look a lot fuzzier if he scans at 2k or even 4k and uses an 8k projection system or huge screen to digitally scale it up to the display res. This is what I'm getting at: scanning at a higher resolution than you need, or at least at the native resolution of the intended display, is always smarter than scaling upward, because you're starting with more real picture information. Someone in this thread did suggest exactly that, with 16mm. That said, in the very post where you say this, you seem to be advocating that upscaling is a good thing.
What does the Internet Archive have to do with this? Doing any test and then putting it up on *any* streaming service is not a good way to evaluate image quality. By the time you're viewing it, it's been through hell and back with all the compression necessary to stream an image. The only, and I truly mean the *only*, way to compare is to put the uncompressed scan files against one another. You can't do this on any streaming service.
I'm sorry, but the RetroScan is a toy and really shouldn't be used as an example in a discussion of image quality. It's not a serious scanner. It's 8-bit, it's got a cheap camera and probably not a great lens. It's basically a simple projector with a video camera grafted onto it. It doesn't work the way a high-end scanner works, and doesn't use the same class of imaging hardware, optics or lighting. Also, if it's scanning to a JPEG sequence, then it's immediately crippled, because JPEG is a lossy compression scheme that works in part by throwing out high-frequency information. JPEG will destroy film grain, because part of the way the compression works is to minimize randomness (and grain is inherently random). It does this by smoothing out (that is, decimating) the grain.
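The grain-destruction point can be sketched too. This is a crude stand-in for what lossy coding does, not real JPEG: I model "compression" as 8-pixel block averaging (JPEG actually quantizes DCT coefficients, but the effect on high-frequency randomness is similar in kind) and measure how much grain variance survives.

```python
import random

# Film grain is high-frequency randomness, and lossy coding throws away
# exactly that. Here "compression" is modeled as 8-pixel block averaging
# (a toy, not real DCT quantization); grain variance collapses.

random.seed(7)
grain = [random.gauss(0.5, 0.05) for _ in range(4096)]   # noisy flat field

smoothed = []
for b in range(0, len(grain), 8):
    block = grain[b:b + 8]
    smoothed += [sum(block) / len(block)] * len(block)   # decimate detail

def var(xs):
    """Population variance of a list of samples."""
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

print(var(grain), var(smoothed))
```

Most of the grain's variance, which is to say the grain itself, is gone after one pass; the result is the smeared, waxy texture you see in heavily compressed film transfers.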
  9. With all due respect, this is the logical problem with the direction this thread has headed. It completely misses the point of scanning at high resolutions. Nobody is saying that the plastic lenses, unstable film and old film stocks can resolve details the way one could with, say, 16mm or 35mm using pin-registered cameras, fine-grained stocks and high-end lenses. But scanning at a lower resolution means it will eventually need to be scaled up, digitally, to fit the higher-res screens that will be the standard in a few years. When you scale an image up, you always have to make up pixels where there weren't any before. There is simply no way to do this digitally as sharply as you could with a scan done directly at the higher resolution. Yes, scaling algorithms have improved, and you can get away with a higher scaling factor now than you used to, but you can only scale upward to a point before you start seeing softening. Then you have to add artificial sharpening to compensate, and now you have an image that doesn't really represent what's on the film, and won't stand up as well to compression or multi-generation copies. From an archival perspective, this is a major no-no.
Think of it this way: would you digitize your entire music collection at 22kHz? It *might* sound OK on some sound systems, a computer's tiny speakers for instance, but plug a player into a normal stereo or a pair of decent headphones and you'll immediately hear the difference. Why are CDs double that sample rate, at 44.1kHz? Why is the recording industry standard in 2019 to capture sound at 96kHz or higher? Because more digital samples of an analog stream means a more faithful representation of the sound. It allows you to do more with the audio without causing artifacts. Will everybody hear the difference between a 48k and a 96k recording? I'd actually argue that most really can't.
But again, that's not the point: capturing at 96k buys you flexibility. Capturing a film image to digital is really no different. You're taking an infinitely resolvable analog object and slicing it up into pixels (the same as samples in audio). The more pixels you use to represent that film, the closer to the original your digital representation will get. The point is NOT about making the film sharper than it is; it's about buying the flexibility to present it properly on changing display technology, and to be able to manipulate it without artifacting.
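The sampling argument above can be sketched in one dimension (toy numbers; real scaling uses fancier kernels than linear interpolation, but the principle is the same): sample a fine sinusoidal detail at "2k", linearly upscale to "4k", and compare against sampling at "4k" directly. The upscale can only guess at what lies between its samples.

```python
import math

# "Scan low then upscale" vs "scan at target res", in 1-D.

def detail(x):
    """Fine detail (a stand-in for grain and sharp edges), x in [0, 1]."""
    return math.sin(2 * math.pi * 700 * x)

native = [detail(i / 4095) for i in range(4096)]     # direct "4k" scan

low = [detail(i / 2047) for i in range(2048)]        # "2k" scan
upscaled = []
for i in range(4096):                                # linear 2x upscale
    t = i / 4095 * 2047
    j = min(int(t), 2046)
    f = t - j
    upscaled.append(low[j] * (1 - f) + low[j + 1] * f)

err = max(abs(a - b) for a, b in zip(native, upscaled))
print(err)
```

The upscaled version misses the fine detail badly at frequencies the "2k" sampling barely captured, which is exactly the detail the native-resolution scan records cleanly; no scaler can invent samples it never had.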
  10. It doesn't. It's an 8/16/35mm scanner. There is a 65mm Scanstation, though.
  11. I'm not so sure about this. The fact is, UHD screens are cheap now, and as 1080p screens die (which they do because they're cheap), they're going to be replaced with 4k just because there aren't as many 1080p screens on offer now. I mean, take a look at Best Buy and the number of cheap 4k screens is pretty high. 1080p screens aren't that much less and there's a limited selection. To me, 4k in the home makes some sense. In our living room, you wouldn't be able to tell the difference between it and 1080p, simply because we're far enough from the screen that it won't matter. But if we had a bigger screen, you'd definitely notice. As for transmission - well, it's just a matter of time. I used to have this argument with someone I worked with who insisted that streaming would never happen so we had nothing to worry about with our DVD and Blu-ray business... Compression keeps getting better, and the way it's been going for 20+ years now has been either the same quality at half the bit rate, or better quality at similar bit rates, with each new generation of codecs. With ATSC's digital bandwidth at around 20Mbps for MPEG2, a better compression format would run rings around it, without requiring more data. The cable boxes and transmission stuff just has to catch up, which it will, in time.
  12. The unfortunate thing is that they're targeting 8k towards the consumer. It really makes no sense in a consumer context -- to see a difference between 4k and 8k screens, your screen would need to be massive and you'd need to be sitting far too close to it. That said, 4k+ acquisition (cameras and scanners) makes a lot of sense to me because it opens up framing flexibility as well as the option to oversample and get better results. We see this all the time with Super 2k scans - same concepts apply at higher resolutions, just on a larger scale: https://www.gammaraydigital.com/blog/case-super2k
  13. This is correct, though the frame area is really about 4k for Super 8, not 3k.
  14. I think there's a fundamental misunderstanding here, and also some marketing that's probably confusing matters. The scanner has a 10k sensor. That means if you use every pixel on the sensor in the output image, your file is 10k. However, the scanner is taking a picture of more than just the film frame. It's capturing the perfs, frame lines, and everything out to the film edge (in the case of 8mm). With 35mm, it's going into the perfs, but I don't think it goes all the way to the film edge. This is an optics thing, and it's by design.
With 35mm, there are 8 perfs to work with, four on each side. So the scanner doesn't need to capture more than about half of each perf, and it has plenty to work with in order to align the frame to where it needs to be (digital/optical pin registration). But in order to capture the perfs, the film frame itself has to be less than 10k. In the case of the Director 10K, the scanned frame size is more like 9k, within that 10k overscan file.
With 8mm, the camera and lens have to physically move closer to the film. In order to get an acceptable image of both extremes (8mm and 35mm), the camera has to move a fair bit. If you notice, the Director's camera module is about 18" long; the ScanStation's is about 36" long. They use different lenses, which is part of the reason why. But the Director is primarily geared towards 16mm and 35mm. That it does 8mm at all is kind of a new thing, actually, added with the 10k version.
There's another issue with Super 8: in order to deal with the sloppy perf positions from the factory, the scanner has to cover the whole *film* from edge to edge, not just some of the perfs, as with 35mm. This is because the scanner uses a combination of the perf and the film edge to stabilize the image. That means you need to pull the camera farther back, so that you get the film edge plus some white to the right of it, where there is no film, in order to align that edge to a known position. This is the digital equivalent of a spring-loaded edge guide in a camera. The more you pull back, the smaller the frame gets on the 10k sensor.
So while you can certainly output a 10k file with Super 8 film, the area of the film frame within that 10k scan is definitely not 10k. Nobody at the show could give me an exact dimension, but my guess would be that it's in the 5k+ range for 8mm/S8mm; 16mm is something like 8k on this machine. It's worth noting that any scanner that uses optical registration (Kinetta, Lasergraphics, MWA, and others) will have the same limitation. Also, from what I gathered at the show, there is only one Director 10K with an 8mm gate out there, and it's at an archive in the Czech Republic.
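A back-of-envelope version of the overscan math, with every dimension an assumption for illustration (the captured width and margin are my guesses, not Lasergraphics specs; only the Super 8 aperture width is the nominal standard):

```python
# If a "10k" sensor must overscan the full width of a Super 8 strip
# plus an edge margin, how many of those pixels land on the picture?

SENSOR_PX = 10240      # "10k" sensor width in pixels (assumed)
CAPTURED_MM = 9.6      # 8mm film width (~7.9mm) + edge margin (assumed)
FRAME_MM = 5.79        # Super 8 camera aperture width (nominal standard)

px_per_mm = SENSOR_PX / CAPTURED_MM
frame_px = FRAME_MM * px_per_mm
print(round(frame_px))  # effective horizontal resolution across the frame
```

With these assumed numbers the picture area comes out around 6k wide, which is at least in the same ballpark as the 5k+ guess above; the exact figure depends entirely on how much margin the scanner actually captures.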
  15. I will check with Lasergraphics at NAB, but my understanding is that really small-gauge film (8/S8, for example) uses a crop of the sensor, and that limits its max resolution. The Director doesn't have the same optical path as the ScanStation, which can move the camera and lens over a range of a couple of feet. The Director is significantly more compact, and the form factor hasn't changed. My understanding is that there is no change to the Director or ScanStation at this year's NAB, other than the addition of some software features. I'll know more on Monday.