Everything posted by Perry Paolantonio

  1. It's an interesting idea, and that would definitely solve the problem. However, it would still require buy-in from scanner manufacturers, because the basic transport in most scanners and telecines relies on the perforations to know where it is. It'd be easier to do with scanners like the ScanStation, Xena, Kinetta and FlashTransfer, since they do what they do in software, but it'd still require a fair bit of engineering, and I think it'd be a tough sell to the manufacturers. BTW - I love the idea of a cheap 2-perf 35mm camera too. We can scan 2-perf on our Northlight and it looks amazing. I'd love to shoot some myself at some point! -perry
  2. That's essentially impossible - you need some way to know where you are, and perfs are the obvious choice. I'm not sure what Carl is working on exactly, but the frame edges are not a good way to either count frames or register the film, since it's entirely possible for the edges to disappear into the area outside the frame boundary. That is, on a positive image, a dark frame will have no apparent frame edge because it blends in with the unexposed film around the frame. Perfs are the only logical choice for knowing where you are in the film.
  3. It's not that simple. You're talking about scanning a frame that spans two standard frames. Firmware/software in the scanner would need to be modified so that the count of frames uses every other perf instead of every perf, and gates would need to be widened to accommodate a wider frame. Some scanners may not be capable of this at all, depending on their design and the coverage area of the sensor/lens. Theoretically, the ScanStation should be able to do it, and I'm sure the Xena and FlashTransfer machines could probably also be modified, but it's not trivial, and frankly, I think it'd be a tough sell to the makers of these scanners. There's serious engineering time that has to go into that kind of a design change, and you're talking about a one-camera format that may or may not have legs. It's a neat idea, though I'm not sure it even makes sense unless it also involves a 200' mag, since you're halving the duration of the film by doubling the number of perfs per frame. -perry
  4. According to the manual for the Cintel scanner, the resolutions for 16mm are ...odd. 1903x1143 for Super 16, 1581x1154 for Standard 16. These resolutions betray an old-school way of looking at scanning, and that makes sense since the basic design is based on an older Cintel model. This is something that's baked into the hardware - I think they'd need to do some major work to be able to do 4k scanning of 16mm on this machine, or even 2k. Not saying they can't, but it'd require significant mechanical changes to the scanner, and that's a bigger deal than tweaking some software. To my mind, the Cintel is a non-starter for 16mm. Lately we've been doing a ton of 4k 16mm scanning of freshly shot film on the ScanStation and it looks amazing.
  5. Depends on what you're doing. Certainly for constant motion, and for fast shuttling, servos are better. Steppers are really slow in most cases. But a good microstepping motor/driver can get you some pretty insane precision. The ones in our Imagica, connected to a microstepping driver and geared appropriately, are able to get the film where I want it almost every time. It's not quick, but for the project being described, steppers seem like they'd be easier to work with.
  6. You need a method for counting frames based on perforations. You can't rely on steps of your stepper motor because there's no feedback from the motor that a step was successful. That is, your controller could say "go forward one step" and the motor might not, for a variety of reasons. So relying on a count of steps isn't going to work.

     Instead, you need a through-beam style photosensor to count perfs. Basically the film is threaded through this device, which acts as an optical switch. When there's nothing blocking the beam it's on (or off, depending on the switch), and when there's film blocking the light, it's the opposite state. Your microcontroller tests this to see where it is and your code keeps a running count of perfs. The trick is, you probably need to use more than one, because the switch may change state before the frame is exactly where you want it. With two of these, you can dial in the exact location of the frame (well, not exact, but pretty damned close). They look something like this (not necessarily recommending this one, but it's similar to what's in our rebuilt Imagica scanner): http://www.mouser.com/ProductDetail/Omron-Automation-and-Safety/EE-SX672A/?qs=sGAEpiMZZMugITGdVIKd7hsRAa3gItZsZV%2fnoou8%2fFI%3d

     An alternative is to put a rotary encoder on your stepper motor, and count how many rotations the motor actually makes. It's kind of the same idea as above, only you're testing the drive shaft of the motor, not the film itself. In the Imagica we use both, to cross-check that the film is in the right spot before engaging the registration pins (so the film isn't damaged). The resolution of the rotary encoder is important here, if you're using a big motor that isn't going to make a complete rotation when moving the film. You can rig something like this up yourself using a disc with notches on it and some light-sensing diodes. You'd probably want the number of notches on the disc to match the number of steps the motor can do, and that might not be possible to 3D print. I'd opt for buying the parts - it'll be more precise, reliable and will eliminate a lot of variables when debugging. Unfortunately, they'll probably blow your $100 budget...
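If it helps, here's a very rough sketch of what that perf-counting loop might look like on an Arduino-class microcontroller. The pin assignments, sensor polarity and step timing are made-up placeholders, and a real transport would add the second sensor (or encoder) for fine positioning - but it shows the basic edge-detection idea:

```cpp
// Rough sketch only: count perforations with a through-beam photosensor
// while pulsing a stepper driver. Pin numbers, sensor polarity and step
// timing are hypothetical placeholders - adjust for your actual hardware.

const int PERF_SENSOR_PIN = 2;   // through-beam sensor output (e.g. an EE-SX672A)
const int STEP_PIN        = 3;   // STEP input on the stepper driver
const int DIR_PIN         = 4;   // DIR input on the stepper driver

long perfCount = 0;              // running count of perforations seen
int  lastState = LOW;            // previous sensor reading, for edge detection

void setup() {
  pinMode(PERF_SENSOR_PIN, INPUT);
  pinMode(STEP_PIN, OUTPUT);
  pinMode(DIR_PIN, OUTPUT);
  digitalWrite(DIR_PIN, HIGH);   // run the transport forward
  Serial.begin(9600);
}

void loop() {
  // advance the film one microstep
  digitalWrite(STEP_PIN, HIGH);
  delayMicroseconds(500);
  digitalWrite(STEP_PIN, LOW);
  delayMicroseconds(500);

  // the sensor changes state when a perf opens the light path;
  // counting only the rising edge means each perf is counted once
  int state = digitalRead(PERF_SENSOR_PIN);
  if (state == HIGH && lastState == LOW) {
    perfCount++;
    Serial.println(perfCount);
    // a real scanner would now creep forward using a second sensor
    // (or a rotary encoder on the motor) to fine-position the frame
  }
  lastState = state;
}
```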
  7. Honestly, it'd be fun to build, and if you have time, I would encourage you to do it. But it's not as simple as you think. Trust me, I've been working on a rebuilt 35mm scanner for well over 2 years now. It's on hold at the moment because we're too busy, but things that seem like they should be simple often wind up dragging out for a long time while you figure out what's not working correctly and get it fixed. It's definitely a fun rabbit hole to go down, but don't expect it to be as quick or cheap as you think it will be. I'm fairly certain that anyone else who has done this would agree with me. That's expensive. What gauge are you talking about, and what resolution? PM me and I can give you a quote for scanning 8-35mm at up to 6k. -perry
  8. It can be done, but as others have said, it's a ton of effort (much more than you may think) and time, and the results probably won't be as good as what you get with a purpose-built scanner. Even on a budget, 2k and higher resolution scanning has become very affordable, so it's worth asking around for pricing. If you're a student, most facilities will also offer student discounts. We do tons of work on student films for people all over the country, many on very tight budgets... -perry
  9. That points to something either wrong in the scanner or in the camera, though - if the camera is using pin registration, every frame should be in exactly the same place when scanned with a pin-registered scanner (regardless of whether that scanner is using a mechanical pin or a machine-vision optical registration system like the one in the ScanStation and others). In what way was the registration off? Can you describe it or show a sample somewhere? The problem with using the image's frame line as a reference is that very often it's too close to the density of the space between frames, and effectively disappears. For a frame or two this may not be a big deal, but for a long sequence you'd lose your reference points entirely.
  10. This is kind of impractical. Cameras register using the pins, so should the scanner. Relying on frame lines gets dicey when the frameline disappears against the edge of the frame (due to the exposure). That is, a dark frame and dark frameline make it impossible to find. The perfs, however, are in known positions and can be easily tracked optically. -perry
  11. I've heard the picture quality is ok - not great, but not terrible. Also, 16mm is a window of the sensor, so it's limited in resolution (compared to a scanner like the ScanStation, which can physically reposition the camera/lens so that the full sensor can be used for smaller gauges). I've also heard that it's a little bit funky in terms of actual usage. Also, no audio scanning as of right now (you need to overscan a soundtrack and then post-process the image of the audio. Not a bad thing, just slow). It also needs a Mac with Thunderbolt that's capable of running Resolve. I don't know that it works on Windows, which is a much cheaper way to go when building a Resolve system.
  12. As Tyler says, you need to add 3:2 pulldown. The resulting file is interlaced with that cadence (since you're repeating fields, not frames). It can be done in a lot of applications, such as After Effects.
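Just to illustrate the cadence (this isn't how any particular application implements it, just the arithmetic of the pattern): four film frames get spread across ten fields, so two of every five video frames end up carrying fields from two different film frames, which is why the result is interlaced.

```cpp
#include <cstdio>

// Toy illustration of 2:3 ("3:2") pulldown: 4 film frames (A B C D)
// become 10 fields, i.e. 5 interlaced video frames. Video frames 3
// and 4 mix fields from two different film frames.
int main() {
    const char filmFrames[4]     = {'A', 'B', 'C', 'D'};
    const int  fieldsPerFrame[4] = {2, 3, 2, 3};   // the 2:3 cadence

    char fields[10];
    int n = 0;
    for (int f = 0; f < 4; ++f)
        for (int i = 0; i < fieldsPerFrame[f]; ++i)
            fields[n++] = filmFrames[f];

    // pair consecutive fields into interlaced video frames
    for (int v = 0; v < 5; ++v)
        printf("video frame %d: fields %c + %c\n", v + 1,
               fields[2 * v], fields[2 * v + 1]);
}
```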
  13. They're different. Gamut refers to the range of colors a device is capable of displaying, which is limited by the display technology. If the display, for example, can't properly show deep blacks without crushing them all into the same value, or it can't display particular shades of a given color, that's a limitation of the display and that in turn limits the display's gamut. But the file you're looking at on that display could hold color values well outside of the limits of that screen. Bit depth doesn't really care about the gamut of the display - it's just the number of possible color values each pixel in the file can hold. -perry
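To put rough numbers on the bit depth side of that (this is just the arithmetic of bits per channel, nothing specific to any format or display):

```cpp
#include <cstdio>
#include <cmath>

// Bit depth sets how many values each channel can hold per pixel;
// whether a given display can actually show all of them is a separate
// question - that's where gamut and display capability come in.
int main() {
    const int depths[] = {8, 10, 12};
    for (int bits : depths) {
        double perChannel = std::pow(2.0, bits);               // values per channel
        double total = perChannel * perChannel * perChannel;   // R x G x B combinations
        printf("%2d-bit: %5.0f values per channel, ~%.2e possible colors\n",
               bits, perChannel, total);
    }
}
```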
  14. It's worth noting that HDR scanning is mostly useful with film that has the extreme ends of the range represented - that is, lots of shadow and lots of highlights, at the same time, in the same frame. What HDR scanning does is let you pull that shadow detail out, while preventing the highlights from blowing out. Modern print stock, and even older print stock, is typically fine on non-HDR scanners, and you can pull tons of detail out of the shadows with a good sensor. Reversal tends towards the extremes, with deeper shadows, so it's harder to get that detail out. That's why the Director does a good job on reversal that has areas of deep shadow as well as areas with nearly clipped highlights in the same frame. That said, scanners like the ScanStation have approximately 13 stops of dynamic range, and don't typically have an issue with workprint or release prints, because those tend to be less extreme than OCR. -perry
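For a sense of scale on those dynamic range figures, each stop is a doubling of light, so N stops corresponds to a contrast ratio of roughly 2^N:1 (the 13-stop number is the approximate ScanStation figure mentioned above; the others are just for comparison):

```cpp
#include <cstdio>
#include <cmath>

// Each stop is a doubling of light, so N stops of dynamic range
// corresponds to a contrast ratio of roughly 2^N : 1.
int main() {
    const int stopsList[] = {10, 13, 15};
    for (int stops : stopsList)
        printf("%2d stops ~ about %6.0f:1\n", stops, std::pow(2.0, stops));
}
```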
  15. The bit rate of a file simply refers to the amount of data the file contains per unit of time, typically a second. So a standard definition DV file (a highly compressed acquisition format) is 25 Mbps, or megabits per second (note the small 'b' - that means megabits, not megaBytes). A typical HD Blu-ray AVC encode is also somewhere around 25 Mbps, but it's a different compression scheme (AVC, a variant of H.264).

      Bit rate is not a universal measure of picture quality. Hell, it really shouldn't be used as a measure of picture quality at all, because it's only relevant when you're talking about the same type of file (for example, two MPEG-2 files, or two AVC files), and when you're talking about the same encoding tools. And there are many other factors that affect the final quality of a compressed file besides the bit rate (including the quality of the source footage, the quality of the encoding algorithm, the quality of any scaling algorithms used, whether or not there's low-pass filtering happening, whether or not there's artificial sharpening, and on and on).

      Back in the early days of DVD, one couldn't expect to get a decent looking picture at a bit rate below about 6 Mbps (DVD MPEG-2). By 2005 or so, encoders had come out that were capable of encoding at sub-6 Mbps rates without significant quality loss. By 2015, we can encode a pretty good looking MPEG-2 file at around 4.5 Mbps, if the source is clean. But this is 100% dependent upon the encoder. You can't do that in Apple Compressor. You can do it in Cinemacraft.

      With Blu-ray, you're talking about files that have roughly six times the pixels of standard definition, but with a codec like AVC and a proper encoder to do the compression, you can easily make a really good looking encode at an average bit rate of 12-15 Mbps. That's only about double the bit rate of DVD, but for roughly six times the data. This is because AVC and MPEG-2 are totally different, and can't be directly compared. The bit rate of an MPEG-2 file has absolutely no relevance when talking about the bit rate of an AVC file, if the intention is to compare the quality. People do this all the time, though.

      All that being said, a lower bit rate file typically shows more blocking, or quantization artifacts, than a higher bit rate file of the same compression type. See this Wikipedia article for an example: https://en.wikipedia.org/wiki/Compression_artifact Some of these artifacts can result in banding, as David Mullen mentioned. But banding is also a result of poor (or no) dithering from 10 bit sources to the typical 8 bit files used for display in formats like DVD.

      Bit rate is one parameter of many within an encoder. In many cases, all you need to do is set this higher to get a better image. But with formats like DVD, Blu-ray, or even web streaming, where you have caps on how much data can be streamed or how much space the files can take up, lower bit rates are required. And this is where good encoders shine, because they can handle it. -perry
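One way to see why raw bit rate numbers don't transfer between formats is to look at how many bits each format actually gets to spend per pixel. The frame sizes and rates below are illustrative assumptions (NTSC-ish SD and 1080 HD at 29.97), not specs for any particular disc:

```cpp
#include <cstdio>

// Bit rate only means something relative to how many pixels per second
// it has to cover. Frame sizes, frame rates and bit rates here are
// illustrative assumptions, not specs for any particular disc.
int main() {
    struct Format { const char* name; double mbps, width, height, fps; };
    const Format formats[] = {
        {"DVD MPEG-2",   6.0,  720.0,  480.0, 29.97},
        {"Blu-ray AVC", 15.0, 1920.0, 1080.0, 29.97},
    };
    for (const Format& f : formats) {
        double bitsPerPixel = (f.mbps * 1e6) / (f.width * f.height * f.fps);
        printf("%-12s %4.1f Mbps -> %.2f bits per pixel\n",
               f.name, f.mbps, bitsPerPixel);
    }
}
```

The AVC encode gets by on well under half the bits per pixel, which only works because the codec (and a good encoder) is that much more efficient - which is exactly why comparing bit rates across compression types tells you nothing about quality.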
  16. Does this work with the Windows version of Resolve? I can't find anything but Mac software on the JL Cooper site.
  17. We do a lot of this kind of thing - as Will points out, use the guides in your viewfinder, if you've got them, so that when it's cropped you don't lose anything. That said, you do get more flexibility by scanning at 2k with no or minimal overscan. That way you have a tiny bit of left/right wiggle room when you crop (~120px), but much more up/down (about 500px). That means you can crop it yourself shot by shot and do it exactly the way you want, without having to use a one-size-fits-all approach when scanning and doing the crop at that stage. It's easy enough to crop in most edit systems, Resolve, etc. -perry
  18. There is no post-scan stabilization feature in the ScanStation. It's done in real time as the scan is being made, and once the end frame of the film is reached, the scan is 100% complete. Any post-scan stabilization is done in different software (pick your poison), and not by the scanner or its control software. -perry
  19. Moises says "It has not been edited or processed in any form other than the color change to B&W." Presumably Moises did that on his end, but it could be done in the scanner as well. There are basic primary color grading tools in the ScanStation software that allow you to control Lift, Gamma, Gain and Saturation.

      When scanning color negative, the scanner is calibrated to the film base, and that process removes the orange mask. A proper flat color negative scan of correctly exposed film should require no grading in-scanner - you literally choose "no grade" from a popup menu to turn off all color correction and use just the base calibration. But in some cases, such as when dealing with a dense negative, you might have blown out highlights (base calibration sets the bottom end of the scale, putting black at a code value of 95, the standard for DPX log files). In that case, highlights might get clipped, so you'd turn grading on and pull the gain setting down from its default a bit, just to keep the whites from clipping in the highlights. Of course, you could try to do a real grade in the scanner, but because it's not designed for that, it's not really advisable. For one thing, you have no proper reference monitor - just histograms or an RGB parade to go by to ensure you're not crushing or clipping.

      The scanner is aligning the film on the left edge to a fixed point on the Y axis. If there is some slight rotation on the left edge (like if the film is going through the gate at a slight angle), it will correct for that rotation to make the left edge perfectly vertical. The entire image is rotated in that case. It does no scaling in the process of registering the frame, as far as I'm aware. While you can set crop and scale values in the scanner, you're talking about frame-by-frame changes, not overall changes. The scanner only sets overall (whole-scan) crop and scale values; they do not vary from frame to frame. This is the point I've been trying to make.

      We've had our ScanStation longer than just about anyone else with one of these machines, and I can tell you that we have never seen any rocking in our scanner with any film, ever. None. Never. The only time it has appeared was when the film was post-scan stabilized, as in Friedemann's footage. But the original scan had no rocking; it was an artifact of the post-scan stabilization process. We will be scanning the film Moises posted as soon as we get it, so we'll know more then. -perry
  20. ...Except, that's not what it's doing. The scanner ONLY looks at the left edge (perf-side) and aligns that to a fixed point on the Y axis. The right edge falls wherever it may. The scanner does not care about that, and doesn't align anything on the right edge. Indeed, if the film is slit inconsistently, then aligning both left and right edges would warp the picture, because a right edge that's at a slight angle, and that's then made vertical, would stretch part of the image in the process of making it vertical. But that's not what's happening here.
  21. The large white rounded-corner edge you're referring to is the gate. When the film is passing through the scanner, by design, it's allowed to "float" inside the larger gate area. That is, the gate does not determine the edges of the scanned frame, because it's bigger than the film is wide. The reason for this is that the scanner is designed to handle film that's shrunken or damaged. There is no pressure plate. There is no mechanical registration pin. It's a curved gate with a V-groove channel that keeps the film in position (plus a couple of roller bearings on each side of the film at each edge of the gate - again, to keep the film in the correct position for the feed and takeup sides of the transport, but NOT to provide edge guidance within the gate itself).

      Once the image is made, the horizontal edges (top and/or bottom) of the perforations are used to register the frame vertically. The entire frame is moved on the Y axis until the perf is lined up vertically to where it should be. The left (perf-side) edge of the film is used as a horizontal reference point, and the film is aligned on the X axis until the left edge of the film is where it should be. Everything else falls where it may.

      Because the overall image includes the gate, and the gate is absolutely fixed in space relative to the sensor in the scanner (that is, it doesn't float like the film does), when you fix the floating object (the film) to a point on the X and Y axes, the previously fixed object (the gate) appears to move relative to the film. Of course, nobody scans with this much overscan so you would never really see that. On the OP's scan, at about 20 seconds, on the right edge of the overall frame (not the camera gate, but the whole scan), you'll briefly see the white creep into the right edge of the overall scan frame. This is the same thing we're seeing here.

      That said, the end result is cropped to eliminate all this, so it's moot. It's just how the scanner does its registration, and it is this way by design. -perry
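In pseudocode terms, the registration step amounts to nothing more than a whole-frame translation - no scaling, no per-frame rotation. This is just my rough sketch of that logic, not Lasergraphics' actual code; detectPerfTopY() and detectFilmLeftX() are hypothetical stand-ins for the machine-vision measurements, fed with fake numbers here:

```cpp
#include <cstdio>

// Rough sketch of perf/edge registration as described above: measure
// where the perf's top edge and the film's left edge landed in this
// frame, then shift the whole frame so they sit on fixed references.

struct Offset { double dx, dy; };

// fixed reference positions (in pixels) that every frame is aligned to
const double REF_PERF_TOP_Y  = 310.0;
const double REF_FILM_LEFT_X = 128.0;

// hypothetical stand-ins for the machine-vision detection step
double detectPerfTopY(int frame)  { return 310.0 + (frame % 5) - 2; } // fake data
double detectFilmLeftX(int frame) { return 128.0 + (frame % 3) - 1; } // fake data

Offset registrationOffset(int frame) {
    // translation that puts this frame's perf and left edge on the references;
    // the right edge and the gate simply land wherever that puts them
    Offset o;
    o.dx = REF_FILM_LEFT_X - detectFilmLeftX(frame);  // horizontal: film's left edge
    o.dy = REF_PERF_TOP_Y  - detectPerfTopY(frame);   // vertical: perf's top edge
    return o;
}

int main() {
    for (int frame = 0; frame < 5; ++frame) {
        Offset o = registrationOffset(frame);
        printf("frame %d: shift whole image by dx=%+.1f px, dy=%+.1f px\n",
               frame, o.dx, o.dy);
    }
}
```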
  22. I understand the logic. I used Friedemann's film as an example, simply because you keep saying there is rocking happening in the scanner. Our experience (and that scan) says otherwise. Where is this rocking coming from? We don't know yet. I'm not convinced it's the scanner because we can demonstrate that footage shot on the Logmar does not exhibit this problem. I've reached out to Moises, and will be getting his film here to test on our scanner, but it will probably be several weeks as he's on location. Stay tuned. -perry
  23. The distance between perfs *in the film* varies from frame to frame, in a 5-frame pattern (like the horizontal position variations). The scanner is aligning the perfs vertically, but because the distance between perfs varies, the frameline shrinks and grows by an amount that matches that variation. -perry
  24. I still have Friedemann's film here from May - the film we've discussed earlier in this thread. I just put it up on the scanner and scanned a few hundred frames at 4k, with the scanner set to the widest overscan possible. This captures a little bit of the scanner gate (the white rounded-corner area), the entire film including both left and right edges, the perfs, and the image, as well as a bit of the surrounding frames. I brought this into After Effects and placed 3px red reference lines along the two film edges, and near the top and bottom of the frame. These are in fixed positions, so it's easy to spot any movement in the scan. Here is what you will see:

      1) Left edge of film: There is some horizontal variance in the position, +/- 2 pixels or so at 4k. This amounts to about 0.05% (2/4096). This is an inconsequential amount.

      2) Right edge of film: There is a bit of variance here, parallel to the left edge. When the left edge moves 1px, the right edge correspondingly moves 1px. Therefore, we know that the film is horizontally positioned within the film gate to a tolerance of about 0.05%. We also know (from looking at the film edge reference lines) that there is no rotation of the physical film in the scanner gate. When you see the film edge move a pixel or two, the opposing edge is moving in sync with it. The opposing edge's opposite corner is also moving in sync with it; therefore, there is no rotation in the scanner.

      3) Top and bottom edges of frame: There's a variance of several pixels up and down, which corresponds to the varying distance between perforations. This repeats in a 5-frame pattern, which makes sense since it's a 5-perf punch used to make the perforations at the factory. THERE IS NO ROTATION in the scanner. What you see is the entire frameline, perfectly horizontal, moving up and down in a uniform way.

      4) The perforations weave left and right, indicating the degree to which the edge of the film has been stabilized to correct for the variation in perf position relative to the film edge.

      This footage is direct from the scanner, with a quick trip into After Effects to draw the red lines. No stabilization was applied. No color grading was applied. It was scanned to ProRes 422HQ at 4096x3112 and exported to the same codec and resolution from After Effects. Please compare this with the footage on Vimeo, which has been post-scan stabilized. The rotation in the frame occurred at that stage. I didn't do that stabilization, so I can't really say what settings caused that, but it is not in the raw scan.

      Here is the test scan: https://www.dropbox.com/s/jpg8o7rduymilvv/4k_Overscan_ReferenceLines.mov?dl=0

      Now Carl, can you please stop insisting this is in the scanner, when it is demonstrably not? Thank you. -perry
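For anyone who wants a programmatic version of that eyeball check rather than reference lines in After Effects, the idea boils down to finding the column where the bright gate gives way to the film on each frame and watching how much it drifts. This is just a toy sketch on synthetic scanline data, not anything from the scanner software:

```cpp
#include <cstdio>

// Toy stand-in for the reference-line check: on each frame, find the
// column where the bright gate area gives way to the (darker) film,
// i.e. the film's left edge, and report how far it drifts from the
// first frame. The scanline data here is synthetic.

const int    WIDTH     = 64;    // toy scanline width
const double THRESHOLD = 0.5;   // gate is bright (~1.0), film base darker (~0.2)

int findLeftEdge(const double* row) {
    for (int x = 0; x < WIDTH; ++x)
        if (row[x] < THRESHOLD)     // first column darker than the gate
            return x;
    return -1;                      // no edge found
}

int main() {
    double row[WIDTH];
    int reference = -1;

    for (int frame = 0; frame < 5; ++frame) {
        // fake scanline: bright gate, then film starting at a column
        // that wobbles by a pixel or so, like the real scan
        int edge = 20 + (frame % 3) - 1;
        for (int x = 0; x < WIDTH; ++x)
            row[x] = (x < edge) ? 1.0 : 0.2;

        int found = findLeftEdge(row);
        if (reference < 0) reference = found;
        printf("frame %d: left edge at column %d (offset %+d px)\n",
               frame, found, found - reference);
    }
}
```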
  25. Hi Lasse, Thanks for clearing that up. Do you suppose the rocking effect we're seeing in the footage Moises posted could be caused in-camera, if the side-steer is not engaged? Thanks! -perry