Perry Paolantonio

Everything posted by Perry Paolantonio

  1. That has happened to us as well, when some film arrived from a third party that had been hand cleaned using wipes that shed fibers. A clump of fiber got snagged in a broken perf, then fell off the film and got stuck in the gate. While it was in the gate it was much more in focus, but it quickly fell out of the gate and down onto the LED surface below, where it showed up in the frame looking blurry like this.
  2. There is a sharpening/blurring adjustment in the scan software, which we always leave at 0 (off). That said, I had the same question when we did this test, so I tried some examples with the adjustment set to full soften (it doesn't do much), and still saw more detail than the regular 2k scan, even a regular 2k scan with sharpening cranked all the way up. Bottom line: there is no artificial sharpening on the resulting image. There are approximately 4 times more photosites being used on the sensor when the scanner is in 5k mode (~19.6 MP) than in 2k mode (~4.9 MP), so the initial pre-scaled image has substantially more data, which means finer resolution of the grain on the film to begin with, and better downsampling to 2k. Also probably worth noting: the film used in this test was Vision2, not Vision3, so the grain is a bit different from current film stocks.
  3. You're right. The problem here is the meaning of "resolution" -- in film and optics, it means something different than it does in the digital realm, where "resolution" is not about how much detail can be visually resolved, but about image sizes. I can make 4k digital files where fine detail can't be resolved, by applying too much compression, for example, or defocusing the lens. It's still a 4k digital file, though. Here's a very simple example of exactly that: http://www.gammaraydigital.com/blog/case-super2k This is from our blog, and is based on some tests we did when we upgraded one of our scanners from 2k to 5k. The "Regular 2k" scan is made on the same sensor as the 5k scan; the difference is that in 5k mode, the scanner is oversampling and outputting a 2k file, while in 2k mode, it's a 1:1 mapping of pixels. The difference in resolved detail is substantial. Yet they're both 2k scans from the same film. It also shows that scanning 16mm at 4k makes sense, because it's able to resolve (in the optical sense) more of the film, and therefore capture more detail in the ultimate scan. The same idea applies to 8mm film. In fact, I'd argue that 8mm film has better (optical) resolution in the real world than Super 8, even though Super 8 has a bigger image area by a not-insubstantial amount. Why? Because most 8mm cameras were modified 16mm cameras, with better transports, pressure plates and better lenses. Super 8 was about convenience, and a lot of quality went out the window with it: no pressure plate, cheaper lenses on cheaper cameras made with more plastic (partly a function of the era in which Super 8 was introduced), etc. The result is that we see older regular 8mm film that looks better than a lot of more modern Super 8, because the cameras were better and produced sharper images. -perry
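The oversampling effect is easy to sketch with synthetic data. This is a toy 1-D model (my own illustration, not scanner code): `point_sample` stands in for a native 2k capture, `oversample_then_average` for a 5k-mode capture averaged down to 2k. Fine "grain" above what 2000 samples can represent aliases into spurious low-frequency energy in the first case, and is largely averaged away in the second.

```python
import numpy as np

def point_sample(signal, n_out):
    """Stand-in for a native 2k capture: one sample per output pixel."""
    idx = (np.arange(n_out) * (len(signal) / n_out)).astype(int)
    return signal[idx]

def oversample_then_average(signal, factor, n_out):
    """Stand-in for a 5k-mode capture downsampled to 2k: several
    sub-samples per output pixel, averaged together."""
    fine = point_sample(signal, n_out * factor)
    return fine.reshape(n_out, factor).mean(axis=1)

# Synthetic "grain": detail too fine for 2000 output samples to represent.
x = np.linspace(0.0, 1.0, 20000)
signal = np.sin(2 * np.pi * 1300 * x)

native = point_sample(signal, 2000)          # aliases the fine detail
oversampled = oversample_then_average(signal, 4, 2000)

# The averaged capture suppresses the aliased energy that the native
# capture turns into spurious low-frequency "detail".
print(np.abs(native).mean(), np.abs(oversampled).mean())
```

The oversampled version carries noticeably less aliased energy, which is one reason a 5k-to-2k downsample looks cleaner than a straight 2k capture.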
  4. The numbers you put together are basically the numbers that have been bandied about on the internet for going on a couple of decades now. In some cases they made sense, when the cost of a 4k scan was calculated in the dollars-per-frame range, or when the scanners available had fixed sensors (so a smaller gauge had to use only part of the sensor anyway). Someone had to figure out an "optimal" resolution, which really turned out to be more of a minimum than anything. The notion that 8mm = 480 is completely ridiculous. Have you ever seen a 2k or 4k scan of 8mm? There's so much more on the film than you might think.
  5. This is an old and tired notion - that there's a fixed "resolution" for different gauges. It really needs to die, because it's simplistic and doesn't actually contribute anything to the question of what resolution one should scan at. Part of the problem is that charts like the one in the original post are an answer to the wrong question. Film doesn't have pixels, so you can't say X film gauge is Y pixel resolution. For one thing, "resolution" means different things in different contexts. In the digital world, it's a number that describes the pixel count, and nothing more. In the optical world, it's about how much detail can be resolved on the film. That's affected by everything from the quality of the lenses being used, to the type of film stock, to the aperture and film speed settings, to the lighting, to how the film was processed. There are so many variables that trying to say in a blanket way that a given gauge is some specific digital resolution is bordering on nonsensical. Look at it this way: if you scan a film at both 2k and at 4k, then put the 2k image on a 2k screen, and right next to it put the 4k scan on a 4k screen of the same physical size, you will see no appreciable difference in quality between the two. That's why scans to SD can look pretty damned good on a good SD monitor - you're viewing the scan on the appropriate screen for the resolution of the source file. Now if you take that 2k scan and put it on the 4k monitor, it will look softer. Why? Because something in the signal chain has to scale that image up about 4x (in pixel count) to fit the larger 4k screen. That's making something up out of nothing. Scaling algorithms can be pretty good, but it'll never look as good as the same film scanned at its native resolution. This same idea applies if you want to do a digital projection of film on a large screen.
You're going to get a substantially better image if you scan Super 8 at 4k and then project it 30 feet high than if you scan it at the recommended 480px in your chart above. The 480 scan is never going to hold up to that kind of scaling. The 4k scan will.
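A toy 1-D round trip shows why the upscale never recovers what a low-resolution scan threw away (synthetic data; linear interpolation stands in for a real scaler):

```python
import numpy as np

# Decimate a detailed signal to quarter resolution, then do a best-effort
# upscale back, and measure what the round trip lost.
x = np.arange(4000)
detail = np.sin(2 * np.pi * x / 7.0)   # fine detail, ~7-sample period

low = detail[::4]                      # the low-res "480-style" capture
upscaled = np.interp(x, x[::4], low)   # scaled back up for the big screen

err = np.abs(upscaled - detail).mean() # detail the upscale could not rebuild
print(err)
```

The error is large: the fine detail was never captured, so no scaling algorithm can put it back.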
  6. Some do, and it doesn't. It's not a panacea, and it only works for specific problems - namely physical gunk on the film itself (as opposed to dirt baked into prints). IR light is used to determine where the stuff on the surface of the film is, so that only that stuff gets cleaned up. It's a neat idea, and it does speed up some aspects of restoration (most professional restoration software can take dust maps made by scanners with IR cameras, and apply automated fixes only to those areas). But it's not perfect, and it only gets at some of the problems. You also can't compare single-frame cleanup to motion picture cleanup. Eyes are tricky things - it's very hard to spot a defect from cleanup in a single frame, but it can be incredibly obvious once there's motion. There simply is no software that can do this automatically for motion pictures without major compromises. We've been doing digital restoration work for almost 15 years and have tried most of what's out there, from high end professional systems to homegrown setups using open source software. None of it works without causing problems, which is why restoration is primarily done manually - or you wind up with footage that looks like what you posted: reduced resolution, reduced grain, an overall waxy appearance, and artifacts. The owner of your film is correct: automated restoration software all works by looking at surrounding frames to estimate the motion and figure out what things are transient defects. In this case, the software is not looking both forward and backward at the end of the scene; it's just stopping when it hits a scene break. "All" they need to do is find a scene change to know they're at the end of the cut, and then on those last few frames start looking at previous frames. I put "all" in quotes because it's more complex than that. It will kind of work, but it will still result in artifacts, because it's an algorithm, not a person, doing the decision making.
The problem with using surrounding frames in an automated way is that the amount of motion in those frames may mean there's nothing to pull from, either when detecting the defect or when concealing it if it was properly detected. Even using these same techniques in manual cleanup, the ends of shots are the hardest, because of this very problem. In most manual tools you can specify the direction to do motion estimation in (forward or backward), as well as the number of lookahead (or behind) frames. You can also specify what type of motion there is (fast motion, slow motion, etc.) to help the algorithm along. It's a painstaking process because it's so subjective. The algorithm may find something that's a defect, but it may not do a good job in the cleanup. "Police" is clearer because they're smoothing out the image first, then applying artificial sharpening. Toward the end, around frame 130, you can clearly see the fake grain they're applying. It's like a screen over the image that doesn't move correctly.
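The basic idea behind automated temporal cleanup, including the scene-break problem above, can be sketched in a few lines. This is a deliberately naive illustration (a temporal median with a window that refuses to cross a scene cut), not any real product's algorithm - real tools add motion estimation, which is the hard part:

```python
import numpy as np

def scene_aware_window(frames, i, radius, cut_indices):
    """Pick neighbor frames for temporal repair without crossing a scene
    cut. cut_indices holds the first frame index of each new scene."""
    lo, hi = 0, len(frames)
    for c in cut_indices:
        if c <= i:
            lo = c          # start of the scene containing frame i
        else:
            hi = min(hi, c)  # start of the next scene
            break
    return list(range(max(lo, i - radius), min(hi, i + radius + 1)))

def temporal_median_repair(frames, i, radius=2, cuts=()):
    """Replace each pixel with the median over the scene-limited window.
    One-frame dust gets voted out; static detail survives."""
    window = scene_aware_window(frames, i, radius, sorted(cuts))
    return np.median(np.stack([frames[j] for j in window]), axis=0)

# Tiny synthetic example: 5 identical frames, dust blob on frame 2.
frames = [np.full((4, 4), 100.0) for _ in range(5)]
frames[2][1, 1] = 255.0  # a one-frame "dust" defect
repaired = temporal_median_repair(frames, 2, radius=2)
print(repaired[1, 1])    # the neighbors vote the dust out
```

Note what the sketch also shows: near a cut, the window shrinks to one side, so there are fewer frames to vote with - exactly why ends of shots are where automated tools fail.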
  7. Yeah - there's no manual to speak of for the ScanStation, just some HTML files you get to from within the application that go over stuff like how to thread it. Really barebones, to be honest. Lasergraphics spends their time on software updates and improvements more than on manuals. As Rob said, you get training when you buy the machine. Our 35mm Northlight manual is OK - similar to a typical software operator's manual - but even that doesn't touch on a lot of stuff in the application and the scanner, all things you just learn how to do from sources other than a book...
  8. I can only speak for the ScanStation here, so everything below is about that machine: The perforation detection is done by overscanning the film from edge to edge. The perforation itself is detected using machine vision algorithms, and is placed in a fixed location on the X and Y axes in the scanned frame. The scanner gate is designed so that the film can "float" through the gate; the film itself is not positioned mechanically, other than by some rollers on either side of the gate that make sure it stays on track. That is, no spring-loaded edge guides or anything like that. All registration is done optically. For Super 8, which is a special case because Super 8's perforations are not precisely placed relative to the edge of the film, the scanner does things a bit differently: vertical registration is done with the perf, horizontal with the edge of the film. In terms of splices, it really all depends on the splice itself. A clean tape splice will generally run through just fine, but of course you can still see it sometimes, because it's a physical layer of stuff on the film. The v-groove channel that the film passes through on the ScanStation minimizes focus changes in these situations, because only the very edges of the film are actually touching anything in the gate. Cement splices can vary, but because they are by nature an overlapping of the film, the film is thicker at the point of the splice, and that can cause the film to rise up at one end of the frame, which can cause a momentary focus issue until the frame passes. Splice bumps shouldn't happen unless it's a sloppy splice that's not well aligned. Most consumer splicers didn't make very precise splices. Some cut right through the middle of the frame, some along the frame line. Some used press tapes that covered several frames of film; others used tape that goes across the splice and only covers 2 frames.
It's a consumer format, and expecting it to be perfect, especially at splices, is unrealistic, regardless of the scanner used. I don't know how the Retro8 does its registration, but it sounds like it's got some issues if it's having trouble at points where the exposure changes. That shouldn't have an effect on the registration. -perry
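The optical registration described above can be sketched very crudely: find the bright perforation, then shift the frame so the perf lands at a fixed position. This is a toy with synthetic data (a global threshold and centroid); real machine vision is far more robust:

```python
import numpy as np

def find_perf_center(frame, threshold=200):
    """Locate the perforation: threshold the bright hole, then take the
    centroid of the lit pixels (a heavily simplified sketch)."""
    ys, xs = np.nonzero(frame > threshold)
    return ys.mean(), xs.mean()

def register_frame(frame, target=(2.5, 2.5)):
    """Shift the frame so the detected perf lands at a fixed position -
    the optical equivalent of pin registration."""
    cy, cx = find_perf_center(frame)
    dy, dx = int(round(target[0] - cy)), int(round(target[1] - cx))
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

# Synthetic frame: dark film with a bright 2x2 "perforation" off target.
frame = np.zeros((10, 10))
frame[4:6, 5:7] = 255.0  # perf centered at (4.5, 5.5)
registered = register_frame(frame)
print(find_perf_center(registered))
```

Because the reference point is detected per frame rather than fixed by a mechanical pin, this style of registration tolerates shrinkage, which is why it works on old film.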
  9. Most of them are between 8-12 drives, as RAID 5, using a dedicated controller card. Just standard issue off-the-shelf drives, none of that enterprise or RAID-specific drive nonsense, and definitely no SSDs (too expensive per GB). We are moving towards setting up a centralized storage system though, because feature films as 4k DPX are a data management nightmare. A typical feature is about 5TB for the scan, 5TB for the graded file, and if we do restoration, 5TB for the restored files. And that needs to be on fast RAIDs, which we have in all of our systems, but the file copy time to move between systems is a major drag. The centralized storage system will have 60TB of RAID 6 (or we may do XFS, still haven't decided that part yet), on an InfiniBand fabric, with 20-40gbps going to each machine over iSCSI mounts. That avoids the overhead of typical networking protocols like SMB and increases the bandwidth to 2.5-5GB/second to each node (theoretical max). I don't expect the RAID will be able to keep up with those speeds, but it will easily handle a couple of 4k DPX streams at once. With iSCSI, it's a simple matter of mounting a volume on the scanner and scanning to it. Then unmounting that and mounting it in Resolve for grading. That would write the output to another volume dedicated to that project, and when that's done, it could be mounted on the restoration system for final cleanup and mastering. All of the mounting/unmounting is done in software, since they're all part of the InfiniBand fabric - no more cables to trace, no more sneakernet. We just moved into a new office, so this project has been on hold, but I'm going to get back into it in a few weeks, I hope, once we're all back to normal here. -perry
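The ~5TB-per-pass figure is easy to sanity-check with a back-of-envelope calculation. The frame size here is an assumption (roughly 40 MB per 4k 10-bit DPX frame; the exact number depends on resolution and packing):

```python
# Why 4k DPX features are a data management headache, roughly.
FRAME_MB = 40          # assumed size of one 4k 10-bit DPX frame
FPS = 24               # standard sound speed
MINUTES = 90           # a typical feature length

frames = FPS * 60 * MINUTES
total_tb = frames * FRAME_MB / 1_000_000  # MB -> TB (decimal)
print(round(total_tb, 1))  # lands in the ~5 TB ballpark quoted above
```

Multiply by three passes (scan, grade, restoration) and you are deep into double-digit terabytes per title.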
  10. You actually don't need a ton of CPU speed to move 4k uncompressed; an i7 is plenty. What you need is ridiculously fast storage. Most of our RAIDs can move close to 1.5GB/second, and we can play 4k easily. But it depends a bit on the software as well - how it caches, etc. If you're working with compressed 4k, CPU (and possibly GPU) speed is a much bigger factor, because the video needs to be decoded before it can be displayed. With uncompressed files, it's largely about storage I/O. With compressed files, it's less about storage and more about CPU. In either case, lots of RAM usually helps, but not always. Again, it all depends on the software being used and what kind of caching it does to speed up playback.
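The storage-bandwidth arithmetic for uncompressed playback is simple. Assuming roughly 48 MB per 4k 10-bit DPX frame (an approximation) at 24 fps:

```python
# Sustained read rate needed to play uncompressed 4k in real time.
FRAME_MB = 48   # assumed size of one 4k 10-bit DPX frame
FPS = 24

bandwidth_mb_s = FRAME_MB * FPS
print(bandwidth_mb_s)  # → 1152 MB/s
```

At around 1.2 GB/s sustained, a RAID that can move ~1.5 GB/s plays uncompressed 4k with headroom to spare, which matches the experience described above.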
  11. A wet gate isn't going to do much. It might get loose surface dust (so will PTR rollers, which are standard on most scanners anyway), but it won't take off the deeply embedded dirt that's accumulated over the years. A good ultrasonic cleaning could do it, but 8mm ultrasonic cleaners are few and far between. We're modifying our alcohol-based film cleaner to support 8mm film, but it'll be a few months before that's up and running, since we have a million other projects to deal with. For now, the best bet is a gentle hand cleaning using something like Solvon or a similar film cleaning solvent. But that has to be done very carefully, or it could cause more damage. The thing a wet gate is really good for is concealing base scratches, and even then, only when the light source in the scanner is collimated, not diffuse like what you get in most modern film scanners. Neither will do anything for emulsion scratches, just base-side scratches. -perry
  12. Resolve is not a dust busting tool. It's an add-on feature in free software, so I think you're probably expecting too much here. That said, the algorithms used by the dustbust tool are ported over from the now-defunct DaVinci Revival, which was quite good high end restoration software. One could use Photoshop for this, but it's not a good idea. A fix that might look completely seamless in a single image can look terrible in motion. Really bad footage can lead to an effect known as "boiling," where a given frame looks fine on its own, but in motion it looks like the frame is bubbling with fix artifacts that you wouldn't normally see in a single frame. You just can't compare still image restoration tools to motion picture restoration tools, because motion picture footage has to take into account the context of the surrounding frames. I've never really used Resolve for restoration beyond quickly looking at what it can do, but I believe you can change the algorithms it uses for a given fix. When I last looked at it there was a preference panel where you could specify the type of fix to use (for example, spatial or temporal, or a combination of both, stuff like that). There's a good reason why restoration software is as expensive as it is - it's highly specialized, and not something you can do seamlessly in Photoshop without a ton of work. And even high end motion picture restoration software makes mistakes when automating fixes. We've been at this for 11 years, and it wasn't until this past year that we finally started using some automated software (PFClean and soon DigitalVision Phoenix). It's a *really* difficult technical feat to pull off, and it requires serious horsepower to do. It might be possible to do something like this in Photoshop or even command line tools like ImageMagick or GraphicsMagick, but it would be painfully slow and probably not too reliable.
And it certainly won't have features like Undo if you're scripting it (unless you're a programmer and are writing your own software). -perry
  13. As Rob said, there is no inexpensive automatic dirt removal software out there that doesn't leave more artifacts than it cleans. The quickest/easiest way to do dirt removal is also the least desirable in terms of quality: degrain the film, run an aggressive noise reduction pass, then regrain the film. This softens the image, removes a lot of texture, and replaces it with artificial grain (which never really looks quite right). We don't do this. The best way to do it is manually. There are tools for this, some of them even reasonably priced: if you have Resolve you can do manual dust busting to get the big stuff. Your source files have to be DPX for this to work (it doesn't work with Quicktime files), but it does work. And it's free, assuming you have hardware that's powerful enough to run Resolve - lots of GPU power is required. Manual cleanup is slow, painstaking work, and costs a fortune as a result. On a typical feature film, it takes one person about 3-4 weeks doing nothing but dust busting to clean the whole film. We've done dozens of feature film restorations this way and it's a slow process. But it's the most accurate, because you instantly see if you're making it worse, and can fix it. When you automate that, you're bound to miss stuff. Even on the very high end, automatic tools have to be used with great care, because they will usually add a lot of their own artifacts. PFClean's automatic tools are pretty good, but the application is a buggy minefield of crashes when you start doing a lot of auto-dirt stuff. Also, it's $6100, so not exactly cheap (though cheaper than many others out there). DigitalVision Phoenix Touch is similarly priced to PFClean, but requires a PC that costs almost as much, so you're into it for over $10k easily. It's got some really nice automatic cleanup tools, as well as a manual toolset. IR dustmaps and IR-based cleanup are interesting, but don't deal with a lot of situations you'll hit in restoring a film.
The defect has to be a physical thing on the film itself in order for the IR scan to pick it up. If the thing you want to remove is a chemical stain, or a hair in the camera gate, or if the film is a duplicate and contains baked-in dust, it does nothing. I don't think it does much for scratches either since it's only able to pick up stuff on the film that's not supposed to be there. A scratch would, by definition, be the opposite: a lack of stuff (emulsion or base) because it's been scraped off. Unless things have changed, by the way, part of the reason this only exists in a very small number of scanners is that there are (or at least were) exorbitant licensing fees due to Kodak, which holds (or maybe held, not sure) the patent on this technology. I remember getting a quote on a scanner 7 or 8 years ago, and Digital Ice (which just created a dust map to feed into restoration software, where the actual cleanup happened), was an additional $60,000 on top of the cost of the scanner. -perry
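Conceptually, the IR pass just hands the restoration software a per-frame dirt matte. A naive sketch of how such a matte might be used (synthetic data, no motion compensation - and the lack of motion compensation is exactly why real tools are so much more complicated):

```python
import numpy as np

def fill_from_neighbor(frame, prev_frame, dust_mask):
    """Conceal defects flagged by an IR-derived dust matte by pulling
    pixels from the previous frame. Naive: with any real motion between
    frames this smears, which is why real tools motion-compensate."""
    out = frame.copy()
    out[dust_mask] = prev_frame[dust_mask]
    return out

frame = np.full((4, 4), 80.0)
frame[2, 2] = 255.0                # surface dirt the IR pass would flag
prev_frame = np.full((4, 4), 80.0)
dust_mask = frame > 200            # stand-in for the scanner's IR matte

cleaned = fill_from_neighbor(frame, prev_frame, dust_mask)
print(cleaned[2, 2])
```

Note that the matte only knows about physical stuff sitting on the film; a chemical stain, baked-in printed dirt, or a scratch (missing material rather than added material) never shows up in it, as described above.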
  14. If you don't want to set up a RAID, then the bare minimum you should do is make sure, as Landon said, that you're using separate drives for your source media and your final rendered files. And they should each be connected to the computer directly, not daisy chained like you can do with, say, Firewire. This ensures that each drive gets access to the full bandwidth of the connection you're using, and that helps to prevent bottlenecks. -perry
  15. A feature can be done, but it's a lot more expensive now than it used to be. We scanned and restored a Super 8 zombie film last year: it was shot in the 1980s, and they went back and did a 2k scan and restoration. It looks really good, considering it's Super 8. Of course, it's not a practical format for long form work, nor is it cost-effective. Better off shooting 16, or if you can get your hands on a camera, 2-perf 35mm. -perry
  16. MPEG files don't store each frame as a complete frame. This is a big part of how they're able to be so much smaller than a frame-based file format like ProRes or Uncompressed. Basically, the footage is broken up into Groups Of Pictures (GOPs), which consist of a complete frame, then intermediate frames that tell the decoder what has changed since the last complete frame. Sometimes the structure uses fixed GOP sizes, sometimes it's variable. There are also intermediate frames that are pretty much complete frames but don't sit on the boundary between GOPs (they're there to deal with big changes in the image, such as scene changes). In any case, you can only make a non-destructive cut at the boundary between GOPs. If you need to cut at a different frame, then the GOP containing that frame has to be split into two GOPs, with a new boundary placed at that frame. That requires re-encoding the picture data for up to a couple seconds' worth of footage. This is a file format that was never designed to be edited. It was meant for final presentation. What the guy from MP4Tools was telling you is basically the same thing I'm saying.
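The GOP constraint can be sketched as a simple lookup: for a lossless cut, the requested cut point has to snap back to the keyframe that starts the containing GOP. This is an illustration with a hypothetical fixed-GOP stream, not any specific tool's logic:

```python
import bisect

def snap_cut_to_gop(keyframes, cut_frame):
    """Snap a requested cut point back to the start of the GOP that
    contains it (the nearest keyframe at or before the cut). Cutting
    anywhere else forces that GOP to be re-encoded."""
    i = bisect.bisect_right(keyframes, cut_frame) - 1
    return keyframes[max(i, 0)]

# Hypothetical stream with a keyframe every 12 frames (fixed GOP size).
keyframes = list(range(0, 240, 12))
print(snap_cut_to_gop(keyframes, 100))  # frames 96-99 would otherwise need re-encoding
```

A tool that appears to cut at any arbitrary frame is doing exactly this internally, then re-encoding the partial GOP between the snapped keyframe and the requested frame.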
  17. That's a late-1990s era telecine. Pretty good for its day, but way outdated by today's standards. Any machine that's designed to output to video - that is, a telecine - is going to have to either pull up or speed up your 16fps film. There's no way around that, because it's inherently tied to the output side, which requires that the signal conform to broadcast standards. Of course, it could be rebuilt - the people who make the Xena scanner sell kits for this - but it's not very common. The other scanner, the Nikon, must be custom built. Nikon never made a motion picture scanner as far as I'm aware. It's probably an Oxberry or similar optical printer, with a scanner on it in place of a film camera. Those are fairly common, very slow, but can be high quality. It all depends on the specs of the sensor or scanner. There's nothing inherently wrong with that type of scanner - I'm rebuilding an ancient Imagica scanner in my spare time, using the transport and mechanicals but replacing the entire lighting and imaging system with much more modern hardware that's a hell of a lot faster (but still slow) and much more capable. But a lot depends on how it was done and the quality of the optics and sensor. The lens in my Imagica alone sells for like $3500 on the used market (a 95mm Nikkor printing lens), and is worth more than I paid for the whole scanner. Optics and sensors matter, so on any custom scanner, it'd be important to know that it's all high quality. MP4Tools is just cutting an MPEG file (AVC is MPEG-4), which you can do with command line file manipulation tools if you know what you're doing. But it has to be done at the boundary between GOPs, which is less than ideal. If it's able to cut at any arbitrary frame, then it's re-encoding the GOP where the cut is.
That's just the nature of that type of file, and it's one of the reasons those types of files shouldn't be edited directly, but should be converted to something like ProRes or DNxHD or some other intermediate frame-based file format for editing. If you cut a ProRes file in a tool such as Final Cut Pro, and then export out to exactly the same file format, it's doing the edit non-destructively, just copying the bits from one file into another with no recompression. You're generating another file, but it's not particularly big. Other tools, even on Windows, can do this as well. I thought you were talking about cropping the frame - there is no format where you could do a crop "in-place" without making a new file or recompressing, other than with image sequences, and even then your software has to support it. I doubt most tools that give you much of a GUI will do this, but there are definitely command line tools such as GraphicsMagick or ImageMagick that can do it.
  18. Whoever is doing the scan should ask what you want. If they're using old equipment (limited to broadcast standard frame rates such as 25 and 29.97), you would have no choice but to get the repeat frames you're seeing, unless you wanted the film run much faster than it should be (that is, running your 16fps film through the machine at 25 or 29.97fps). Our scanner, like the Xena I believe, can scan the film at its native frame rate to a container format like Quicktime or AVI. With those formats, as long as you're not interpolating new frames or doubling frames to pull it up to a higher frame rate, changing the frame rate of the film is a simple matter of bringing it into software that can do that (such as Adobe After Effects, among many others) and telling it what frame rate to treat the file as. If, however, the pullup was done in the scan, it's harder to back out of that. It can be done with AVISynth scripts, but it's kind of a pain. What film scanner did they use? If it's designed to go to a video format, you should move along to a service that can scan it frame by frame. With any container format, you would read from one file and write to another. With an image sequence, you could theoretically read from one file and then write back to that same file, but why would you want to, since what you're doing is destructive? Better to keep the full version of the film around, just in case, and do your cropping upon export to another copy. That way you have both. Hard drives are cheap! There's no way to know the exact frame rate of the film by looking at it. There's no metadata in the film to tell you, and you can only go by the way it looks, or what the client says. As a general rule though, it's usually a safe assumption that footage shot by the same person would have been shot at the same frame rate. Otherwise they'd have to constantly fiddle with playback speed on their projector for each reel. We typically assume 16fps for old 16mm, 18fps for 8mm and Super 8.
If it was shot by a professional, 24 is usually a good guess (or 25 outside North America, Japan and Brazil). Regardless, it can be changed, whether it's an image sequence or a container format, as long as each frame of film is in its own frame in the scan. The size of the sprocket holes has no relationship to the frame rate. With Super 8, for example, you could shoot 18 or 24 as standard speeds. Most people did 18 for home movies because you got more running time. But it's the same film either way. The smallest diameter is typically for Regular 8 film; Super 8 uses a larger diameter. Even though the film is identical in width, the sprocket size and placement are different. On the ScanStation the hubs are removable, and we swap them out depending on the reel type. Other scanners may use an adapter to fit various reels. I just can't see how it could give quality results without the expertise in colour, software, mechanics, and general R&D that the big boys have. The quality of the camera, the quality of the sensor, the quality of the optics, and the precision of the machining all come into play here. I don't have any experience with this scanner, so I can't say for sure, but it's a tiny little camera in there, which means it's probably a tiny and inexpensive little sensor. The camera that's in our ScanStation, and most of the cameras available for the Xena, cost much more than the entire Retro8 scanner. And that's because they're much higher quality cameras.
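The "just change the frame rate" step really is only arithmetic, as long as each film frame occupies exactly one frame in the scan: retagging the container changes duration, not content. A quick illustration with a hypothetical 3-minute reel:

```python
def duration_seconds(frame_count, fps_flag):
    """The frames in a 1:1 scan are fixed; the fps flag in the container
    just sets how fast they play back."""
    return frame_count / fps_flag

frames = 18 * 60 * 3   # hypothetical 3-minute reel, one scan frame per film frame
print(duration_seconds(frames, 24))  # mis-tagged at 24 fps: plays fast (135 s)
print(duration_seconds(frames, 18))  # retagged to 18 fps: correct 180 s runtime
```

This is why a pullup baked into the scan is so much worse: once duplicate or interpolated frames are mixed in, there's no longer a 1:1 mapping to undo with a simple retag.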
  19. You would not want to vary the exposure frame by frame. Our ScanStation can do this as an option, but we never use it. You have to remember that you're not talking about a single image, but thousands of images shown one after another. In that context, varying the exposure from one frame to the next, to compensate for variations in exposure on the film, can result in odd flickering and other artifacts. The correct way to do this is to set an exposure for the film based on the film's base. A flat scan will get you a scan where black is elevated, white is reduced, and your midtones are low contrast. From this, you can pretty easily color correct it back to the way it should be. It's done this way for flexibility - if you color correct while scanning, you bake that correction in, and in some cases that's irreversible. So the process is two steps: scan, then grade. Depends on the scanner. I can only speak for our ScanStation, which uses the perforations as registration points, aligning the frames to a fixed position in the final scan. It is arguably more accurate than a mechanical pin, since it can handle variable levels of shrinkage. Variations of Digital ICE exist on some scanners. None of them can do 8mm, as far as I know. The Lasergraphics Director, for example, has this as an option, but that's for 16mm and 35mm only. We do our restoration work in an additional pass after the scan. It can be done manually or automatically, but the best results are generally obtained with a combination of the two. Bear in mind that motion picture restoration is expensive. The software to do this kind of work starts at about $6000, on the low end. Cropping depends on the scanner, but most can overscan the frame. Ours can overscan almost to the edges of the film. LZW compressed TIFFs are unusual in film scanning. Frankly, TIFFs are unusual in film scanning. We can scan to them, but they're either 8-bit (don't scan to 8-bit) or 16-bit (massive and clunky files).
A better format is DPX, which was designed for motion picture scanning, is uncompressed, and can be either 10- or 16-bit. Or ProRes, which can be 10 or 12 bits, and is much more convenient to work with in editing software. Very much so. It depends somewhat on your ultimate goals, but a modern film scanner that adjusts its optics so that the entire frame fills the sensor (rather than just using a small area of the sensor for smaller gauges) will get you outstanding results. We regularly scan 8mm home movies at 4k. Bear in mind that 4k television is here, and 8k isn't far off. The more you have to scale an image up to fit the larger screen, the worse it will look. So if you scan to HD, you're getting an image area of about 1440x1080. To make that 4k, you have to scale it up several times over in pixel count, which will dramatically soften the image. If you scan at 4k, you don't do any scaling at all. For lower resolutions, you lose nothing by scaling down. But you *do* want to avoid scaling up, because that forces you to make up image data that wasn't there before. Also, DPI has nothing to do with pixels. That's a number that's applicable to the print world, so it shouldn't be confused with digital resolution. The notion that there's a limited amount of data on the film is based on studies done at a time when the available scanners were limited to 4k. It was assumed that 4k worked for 35mm, 2k for 16mm, and HD-ish resolutions for 8mm. That was because the scanners worked that way at the time, and because the thinking was that you would want the grain to remain the same size on the display medium for 8mm as for 35mm. Think of it this way: if you're a scientist looking at something tiny, do you want a magnifier or a microscope? You're going to resolve more detail with a microscope, and that's the equivalent of a higher resolution scan. The more detail on the film you can resolve, the more faithful a representation of the image you'll get.
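The scan-then-grade idea above can be sketched with a toy lift/gain/gamma correction on normalized pixel values (my simplified illustration; real grading tools are far more sophisticated):

```python
import numpy as np

def grade(flat, lift=0.0, gain=1.0, gamma=1.0):
    """Basic lift/gain/gamma applied after a flat scan. Because the scan
    itself is untouched, the correction stays reversible - baking it in
    at scan time would not be."""
    x = np.clip((flat - lift) * gain, 0.0, 1.0)
    return x ** (1.0 / gamma)

# A flat scan: blacks elevated to 0.1, whites pulled down to 0.9.
flat = np.array([0.1, 0.5, 0.9])
graded = grade(flat, lift=0.1, gain=1.25)
print(graded)  # blacks back to 0, whites restored to full scale
```

Run against the original flat file, different settings can be tried endlessly; run inside the scanner, you'd be stuck with whatever was baked in.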
  20. This is demonstrably false. See the link to Backblaze's data on consumer vs. enterprise drive reliability. Spending money on expensive drives in a RAID is basically a waste *IF* that RAID has redundancy built in, like RAID 5 or 6.

A 2TB WD Red (NAS) Pro drive is $144. A 2TB WD Black drive is $109 (prices from MicroCenter.com). The MTBF on the NAS drive is 1 million hours. I couldn't find published data on the MTBF for the Black drive, but let's say for the sake of argument that it's 20% of the NAS drive's (which I bet is way lower than the actual figure). That means the MTBF on the cheap drive is still about 22 years. You will not be using that drive for anywhere close to that time frame, because the tech will have changed within 3-5 years.

So if you populate an 8-disk RAID with overpriced NAS drives, you're looking at $1152 for the disks. You can make the same RAID for nearly $300 less with cheaper drives. You would have to replace three of them before breaking even compared to the same RAID with NAS drives installed.

The main thing you get with the more expensive drives is a better warranty and tweaked firmware (which doesn't really do enough to justify the doubling in price). I can't remember the last time we got a new drive as a replacement for a warranty swap. Every manufacturer has sent us a refurb when we've tried that, and a lot of those failed as well. I gave up on warranties for drives long ago. The cheaper drives also have energy-saving features, which you'd want to turn off, because they can affect performance. Most RAID controllers can do this.

Our current longest-running RAID is a 16-drive NAS that's been in operation since 2010. There have only been a handful of failed drives in that time (maybe 3-4). All the disks in there are cheap Seagate 2TB drives at 5400RPM, in a RAID 6. Replacing a failed drive is as simple as popping it out and putting in a new one.
Rebuilding happens automatically, so I'm not sure how that's more complicated than a RAID 10 (I don't have much experience with RAID 10, because we've had such good luck with RAID 5 and 6 setups over the years). While the specs and firmware tweaks in those more expensive drives may in fact benefit RAID users, the benefit is marginal, and it can often be matched by simply adding an additional disk to the array. When you're using cheap disks, that's more cost effective than buying 8x really expensive drives.
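The break-even arithmetic above is easy to check with the prices quoted in the post (the 20% MTBF figure is the same deliberately pessimistic assumption made there):

```python
import math

# Prices quoted above (MicroCenter, 2TB drives)
nas_price, cheap_price = 144, 109
drives = 8

nas_raid = nas_price * drives      # cost of an 8-disk RAID of NAS drives
cheap_raid = cheap_price * drives  # same RAID with the cheaper drives
savings = nas_raid - cheap_raid    # up-front savings

# Number of whole cheap-drive replacements before the savings are gone
break_even = math.ceil(savings / cheap_price)

# MTBF sanity check: assume the cheap drive gets only 20% of the
# NAS drive's 1 million hours (a deliberately pessimistic guess).
mtbf_years = (1_000_000 * 0.20) / (24 * 365)

print(nas_raid, cheap_raid, savings)   # 1152 872 280
print(break_even)                      # 3 replacements to break even
print(round(mtbf_years, 1))            # ~22.8 years of continuous spinning
```

Even under that pessimistic MTBF assumption, the drive outlives the 3-5 year useful life of the array several times over, which is the whole argument.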
  21. Actually, I don't think there is a sprocketless version of the Director. There is a pinless optical registration option like on the ScanStation, but the transport itself is sprocketed, so you should be sure the film isn't too badly shrunken. For film from this era, it could go either way, and will depend largely on how the film has been stored over the years. We've seen film from the 1960s that was in worse shape than film from the 1920s...

Also, depending on how the film was exposed, you may not need multi-flash HDR (which is much more expensive than a straight single-flash scan). The ScanStation has roughly 12-13 stops of dynamic range, but the Director will do a better job of pulling really deep detail out of the shadows when it's in HDR mode. If the footage isn't underexposed, you'll get a lot of good detail on a single-flash scanner with the kind of range the ScanStation has.

-perry
  22. It *is* about 3:2 pulldown, but it's got nothing to do with drop frame timecode. There's no such thing as drop frame timecode at 23.98; it's always non-drop. Similarly, you can have 29.97fps with non-drop timecode, and that's OK too.
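For anyone wondering why drop frame exists at 29.97 in the first place, it's a labeling trick for 30-frame counting, which is why there's no 23.98 version. A minimal sketch of the arithmetic, assuming the standard NTSC drop rule (skip timecode labels :00 and :01 at each minute except every tenth minute):

```python
from fractions import Fraction

ntsc = Fraction(30000, 1001)          # true NTSC rate, ~29.97fps
frames_per_real_hour = ntsc * 3600    # ~107892.1 actual frames per wall-clock hour

# Non-drop timecode counts 30 labels/second, so an "hour" of labels is 108000:
nondrop_labels_per_hour = 30 * 60 * 60

# Drop-frame skips 2 labels per minute, except at minutes 0, 10, 20, 30, 40, 50:
dropped = 2 * (60 - 6)                # 108 labels skipped per hour
drop_labels_per_hour = nondrop_labels_per_hour - dropped

print(float(frames_per_real_hour))    # ~107892.11 real frames
print(drop_labels_per_hour)           # 107892 labels -- a near-exact match
```

Non-drop timecode at 29.97 drifts about 3.6 seconds behind the wall clock every hour; dropping those 108 labels pulls it back into line. 24-frame counting at 23.976 has no equivalent scheme, so it's always non-drop.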
  23. We use internal RAIDs in most of our machines. For reference, we mostly work with DPX sequences at 2k and 4k. Some machines have different requirements than others, but as a general rule, here's how we do it:

1) Inexpensive dedicated RAID card. A SATA 3Gbps PCIe card can be had for next to nothing these days, and with 8 drives on it, you can easily play 2k in realtime (assuming 30fps or less), or 4k at slightly less than realtime. Again - DPX, which is about 10x the bandwidth of an equivalent-resolution ProRes file. This can be made faster with a 6Gbps RAID card, but that's not necessary if you're primarily dealing with compressed formats. Just about any drive you buy today is going to be 6Gbps, which will work fine with both 3 and 6Gbps cards.

2) The RAID is typically a RAID 5, or if there's room for additional drives, a RAID 6. RAID 5 is fine in most cases. Never RAID 0.

3) Cheap hard drives. Spending a lot of money on super fast drives is a waste. There's simply no point. We usually buy whatever is on sale at the time, so we have a variety of disks across a dozen machines. Stick with the same size/brand/speed in each RAID, though over time you will find that you can't get the exact models, and putting in a rough equivalent is just fine. Lately we've been buying WD Green drives at 5400RPM. The notion that they have to be fast is false, unless you're only using one drive at a time. Once you put them in a RAID with 6-8 other drives, most of the benefit of faster RPMs goes away and the bottleneck becomes your RAID card, not the disk speed.

Remember, we're talking about DPX sequences here, so for anything that's compressed, such as ProRes, the specs above are overkill. FWIW, as I type this, I'm capturing an HD film to a single 7200RPM drive as a ProRes 422HQ file (1080p/23.98). We've been doing this for the past 16 years on probably 15-20 different machines over that time, from SCSI to IDE to SATA and SAS.
You have to expect that drives *will* fail, which is why we don't use RAID 0. With RAID 5 you get most of the performance benefit of a RAID 0, with some built-in redundancy. If one drive dies, you can pop in a replacement and rebuild, running the RAID in degraded mode in the meantime if you need to. We usually do rebuilds overnight. For some machines, like where we store client projects semi-long term, we use RAID 6, which allows for two disks to fail before you lose data.

We keep a stack of spare drives on the shelf just in case. Over the past year, we've had to replace 3 disks. In total we have about 60 drives in various RAIDs throughout the office, so that's not bad. And the cost of those replacement drives is far less than the cost of expensive 10k-RPM or "enterprise" drives.

We don't use SSDs for RAIDs because it's not cost effective. We do use them for system drives, and they make a world of difference in OS responsiveness and boot times. For system drives, keep a clone of your system disk handy so if/when it fails you can just swap it out and keep going. Don't store anything on the system disk that wouldn't be on the clone (that is, don't put stuff in the personal documents folder inside your user account; keep it in a folder on the RAID). That way the system disk can fail and it won't matter when you pop in the clone. We have 11 machines with SSD system drives in them, some of which have been running for 3+ years. We've had one fail in that time.

If you have a recent PC or Mac, doing the RAID in software isn't as much of a problem as it was back in the day, when those CPU cycles couldn't be spared. I personally like having a dedicated RAID card because it has other features (like sending you an email or sounding an alarm when a drive fails).

And if you don't believe me that those more expensive drives aren't worth it, check this out: https://www.backblaze.com/blog/enterprise-drive-reliability/
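The "DPX is about 10x the bandwidth of ProRes" figure is easy to sanity-check. The raster and bitrate below are assumed typical values (2k full-aperture 10-bit DPX, and the nominal ProRes 422 HQ rate for roughly that size at 24fps), not numbers from the post:

```python
# Assumed: 2k full-aperture scan, 10-bit RGB DPX (3 components packed per 32-bit word)
w, h = 2048, 1556
bytes_per_pixel = 4
fps = 24

dpx_frame = w * h * bytes_per_pixel   # ~12.7 MB per uncompressed frame
dpx_rate = dpx_frame * fps / 1e6      # ~306 MB/s sustained for realtime playback

# Assumed nominal ProRes 422 HQ bitrate at ~2k/24p: ~220 Mbps
prores_rate = 220 / 8                 # ~27.5 MB/s

print(round(dpx_rate))                       # ~306 MB/s
print(round(dpx_rate / prores_rate, 1))      # ~11x -- the "about 10x" above
```

That ~300 MB/s sustained-read requirement is why a modest 8-drive RAID 5 handles 2k DPX in realtime while a single drive is plenty for ProRes capture.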
  24. Ahh, ok. That makes sense! Usually 23.976 is abbreviated as 23.98 (but they're not the same thing, and on long form material, that extra .004 makes a difference for audio sync!). Since we primarily deal with film and with tapes that have already been made, I'm not too familiar with how the camera manufacturers do that. If they're really abbreviating 23.976 as 24, that seems like a really bad idea!
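A quick illustration of why that extra .004 matters on long-form material. This treats the 23.98 abbreviation literally and asks how far audio would drift over a hypothetical 2-hour feature:

```python
from fractions import Fraction

true_rate = Fraction(24000, 1001)   # "23.976" exactly
rounded = Fraction(2398, 100)       # the "23.98" abbreviation taken literally

runtime_s = 2 * 60 * 60                 # a 2-hour feature
frames = true_rate * runtime_s          # frames actually in the program
wrong_runtime = frames / rounded        # duration if conformed to 23.98

drift = float(runtime_s - wrong_runtime)
print(round(drift, 2))  # ~1.19 seconds of accumulated sync error
```

A bit over a second of drift by the end of the film is far beyond the few frames of lip-sync error most viewers can spot, which is why the abbreviation is harmless on a label but dangerous in a conform.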
  25. No it doesn't. It just plays at a different speed. Every frame is the same size.