Everything posted by Satsuki Murashige

  1. With regard to LUTs, I’m not sure what he means. At a given setting, the camera captures a certain number of stops over and under middle grey in raw or Log, regardless of what LUT you happen to be viewing with. It might look clipped or crushed on the monitor, but that doesn’t affect what’s being recorded.
  2. I haven’t listened to Mr. Libatique’s episode yet, but I’ll hazard a guess at what he means. Most people commonly use light meters for relative measurements in f-stops (after inputting ISO and shutter speed values), rather than for absolute measurements in footcandles (incident) or footlamberts (reflected). This is problematic if the ISO value that you input on the meter doesn’t match the camera’s ISO calibration. Many digital cameras display their gain settings in ISO values, but these are not really standardized and can be whatever the manufacturer wants them to be. This is why when you line up different cameras in a row, set them all to 800 ISO, use matching lenses, and set the same T-stop, the exposure can be different for each camera.
     Light meters are designed to help you reproduce an 18% middle grey accurately. However, there are other considerations for digital camera manufacturers that help sell more cameras - more highlight retention, less noise in low-light, ‘higher’ sensitivity rating - all of which can lead to, shall we say, inaccurate ISO values that don’t match the light meter reading. So, if you test beforehand to find the actual sensitivity of your particular camera and take those differences in ISO values into account, then your light meter should work as expected. If not, then you may be surprised with (usually) underexposed results.
     From what I understand, Mr. Libatique doesn’t have one single digital system that he sticks to, unlike Roger Deakins for example. So I suspect that in his case, relying on a light meter is more trouble than it’s worth.
     This also holds true for film stocks, by the way. It’s a good idea to always test a new film stock to find an exposure rating that works for you, rather than relying on the box speed. Though Kodak Vision3 stocks today are better at dealing with underexposure than ever before, all manufacturer box speeds tend to be overly optimistic. So when in doubt with film negative, overexpose.
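To make the meter-vs-camera mismatch concrete, here is a minimal sketch of the stop arithmetic (not from the post - the ISO figures are hypothetical examples):

```python
import math

def meter_offset_stops(displayed_iso, measured_iso):
    """Stops of underexposure you get by trusting the camera's
    displayed ISO on your light meter, given the sensitivity you
    actually measured in a grey-card test (positive = underexposed)."""
    return math.log2(displayed_iso / measured_iso)

# Hypothetical example: the camera menu says 800 ISO, but a grey-card
# test shows middle grey lands where a 500 ISO stock would put it.
offset = meter_offset_stops(800, 500)
print(f"Meter set to 800 underexposes by {offset:.2f} stops")
```

Setting the meter to the tested value (500 in this made-up example) instead of the displayed one removes the offset.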
  3. It costs a lot of money to develop and produce new film stocks, so I doubt Kodak will be doing that any time soon. 5219 500T is great as-is. Personally, I liked the higher contrast of 5218 better, but ‘19 scans are cleaner. Since playing with warm/cool white balance is one of the basic key looks in cinematography, tungsten stocks are very useful. Most cinematographers still shoot at 3200K white balance quite often for dusk/night scenes on digital cameras. Fujifilm used to make a 500D stock, but it wasn’t very popular.
  4. I think that’s fine if you’re not relying on communicating a specific look to post. From a DP’s point of view, the problem is that this puts control over the camera’s base look back into the hands of the colorist, rather than the cinematographer. One other workflow method has been suggested by Michael Cioni of Frame.io: https://blog.frame.io/2020/04/23/protecting-your-image-hurlbut-academy/ https://youtu.be/IxmfkcXlnDc/t=51m38s
     1. Use a standardized Display LUT for the bulk of the transform work from Log to Rec.709, Rec.2020, P3, etc. This can be changed independently based on the display and output required.
     2. Make a Creative Show LUT that gets applied to all clips on top of the Display LUT, which broadly communicates the DP’s creative intent.
     3. Make ASC CDLs with the DIT on a per-scene basis that get baked into the metadata and follow the footage all the way thru post. The CDLs would be for smaller, more specific corrections that communicate the DP’s intent.
     I think this makes a lot of sense for larger projects. It’s probably too complicated for the types of things I’m shooting, but I like the intent to give the cinematographer as much creative control as they want over the image while shooting, without causing headaches for post down the line.
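For reference, the per-scene CDL step in that workflow is a small, well-defined per-channel transform. Here is a minimal sketch of how an ASC CDL is applied (standard Slope/Offset/Power order followed by Saturation with Rec.709 luma weights; the pixel values are hypothetical):

```python
def apply_cdl(rgb, slope, offset, power, saturation=1.0):
    """Apply an ASC CDL to one normalized RGB pixel (values 0.0-1.0).
    Per channel: out = (in * slope + offset) ** power, then a
    saturation adjustment around Rec.709 luma."""
    out = []
    for v, s, o, p in zip(rgb, slope, offset, power):
        v = v * s + o            # slope, then offset
        v = max(v, 0.0) ** p     # clamp negatives before power
        out.append(v)
    # Saturation step uses Rec.709 luma weights
    luma = 0.2126 * out[0] + 0.7152 * out[1] + 0.0722 * out[2]
    return [luma + saturation * (v - luma) for v in out]

# An identity CDL (slope 1, offset 0, power 1, sat 1) leaves middle
# grey untouched:
grey = apply_cdl([0.18, 0.18, 0.18], (1, 1, 1), (0, 0, 0), (1, 1, 1))
```

Because a CDL is just these ten numbers, it travels easily as metadata alongside the footage, which is the point of step 3 above.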
  5. Noise tolerance is certainly a personal preference, so if you’re finding the High Base ISO too noisy, then I can see why it’s a problem for you. That said, have you tried lowering the EI in High Base to 3200-6400 EI and increasing exposure to compensate? Also, have you tried the in-camera noise reduction setting to see if it is sufficient for YouTube projects? I can sympathize about the slow rendering time of Neat Video in post; it is painfully slow for long clips. Can’t argue with the results though.
  6. Though the two cameras are quite similar, I think the FX6 is the more versatile camera. The electronic ND is a pretty huge benefit in a single-operator situation. And as you grow in your career, you may start working with a sound person recording double-system audio, in which case having timecode input/output on the camera may also be a big factor.
  7. This should have been 65x65x65, my apologies. Couldn’t edit the post, so adding the correction here!
  8. It’s consistency but also communication. You’re telling everyone: I want the final image to look like this. You light and expose to the LUT, just as you would to your own dailies printer lights. The director can say whether they think it’s too dark or too green while you’re shooting. The editor sees what we saw on set. I believe it helps them to edit when the footage looks close to how it will be in the end. And of course, you still have the Log image to go back to if any changes need to be made. I find it to be very similar to shooting on film in that sense.
     Re: uploading to camera - it really depends on the specific camera in question.
     Alexas take ARRI Looks, their own proprietary format, which you have to make in their free ARRI Look Creator software, which is not very flexible. The nice thing about these Looks is that they are part of the camera metadata, so they should follow the footage to post. But they can’t be used in monitors or grading software, so you need to also export .cube LUTs for other uses. You put them on an SD card and save them to the camera.
     Most Sony cameras (F5/55, Venice, FX9, FX6, FS7) as well as most SmallHD monitors accept standard 33x33x33 .cube LUTs, usually loaded from an SD card. Sony recently added an option called Advanced Rendering Transform to the Venice to apply Looks before video processing, supposedly resulting in fewer image artifacts: https://www.newsshooter.com/2020/04/30/sony-advanced-rendering-transform/
     Red cameras only accept 17x17x17 .cube LUTs, and you have to take their IPP2 transforms into account, since I don’t believe you can apply a LUT to a straight Log3G10 image in-camera yet. You also have to put them on RedMags, and the process to import them into the camera is a bit convoluted.
     You can also apply LUTs on a Teradek Bolt XT receiver. Not sure about Canon or Blackmagic cameras.
     Of course, you can also get a LUT box and have the DIT apply them from their cart, but at that point it’s a bit convoluted to me. Fine for studio multi-camera shooting, but you’re back to being tethered on location. What’s the point of all this wireless technology if you’re just going to end up tethered to a DIT cart? That’s assuming there will be a DIT on the job - almost never in my case.
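Since several LUT sizes come up above (17-, 33-, and 64/65-point), here is a minimal sketch of the .cube format itself - an identity 3D LUT writer, assuming the common IRIDAS/Adobe convention where the red coordinate varies fastest:

```python
def write_identity_cube(path, size=33):
    """Write a size^3 identity .cube 3D LUT. Each data line is
    'R G B' in 0.0-1.0; red varies fastest, blue slowest."""
    with open(path, "w") as f:
        f.write(f"LUT_3D_SIZE {size}\n")
        n = size - 1
        for b in range(size):
            for g in range(size):
                for r in range(size):
                    f.write(f"{r/n:.6f} {g/n:.6f} {b/n:.6f}\n")

write_identity_cube("identity_33.cube")      # e.g. Sony cameras, monitors
write_identity_cube("identity_17.cube", 17)  # e.g. Red cameras
```

The point count mostly affects file size and interpolation accuracy: a 33-point cube carries 33^3 = 35,937 entries, while a 17-point cube carries only 4,913, which is why camera hardware tends to cap the size.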
  9. Assuming your shutter angle is 180 degrees, then your shutter speed at 24fps is 1/48 of a second: (shutter angle)/360 x 1/(fps) = shutter speed, so 180/360 x 1/24 = 1/48. So at 36fps and a 180-degree shutter: 180/360 x 1/36 = 1/72. That is roughly a 1/2 stop difference (0.58 stops, to be exact), not 1 stop. One full stop would be 1/96 sec, which you would get at 48fps.
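The shutter math above can be sketched in a few lines; the stop difference is just log2 of the ratio of the two exposure times:

```python
import math

def shutter_speed(fps, angle=180.0):
    """Shutter speed in seconds: (angle / 360) * (1 / fps)."""
    return (angle / 360.0) / fps

def stop_difference(fps_a, fps_b, angle=180.0):
    """Exposure difference in stops between two frame rates at the
    same shutter angle (positive = fps_a receives more light)."""
    return math.log2(shutter_speed(fps_a, angle) / shutter_speed(fps_b, angle))

print(1 / shutter_speed(24))      # 1/48 s at 24fps, 180 degrees
print(1 / shutter_speed(36))      # 1/72 s at 36fps
print(stop_difference(24, 36))    # ~0.58 stops
print(stop_difference(24, 48))    # exactly 1 stop (1/48 vs 1/96)
```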
  10. Thanks for digging that up. Long story short, I think I will stick to my current workflow then.
  11. If I ever got a project where the colorist was hired before shooting, then I would probably run this through them and have them grade the tests and make the LUTs. But usually on the projects I’m on, we don’t know if there will be budget left for a colorist at that point. So I do it partially to protect myself and make sure that even if all they do is have the editor apply the LUT at the end, it will look no worse than it did on set.
  12. Yep, I shoot tests in prep based on the look we want. For example, this was the first time I’ve shot at 1600EI on the Alexa Mini combined with underexposing to capture more highlight detail (thanks for the tip Miguel Angel!). So I had to test that. Also shot a LowCon filter test, and an eyeball macro shot test I posted in another thread. Since they all have to be rendered out and presented to the director, it’s a good chance to also grade and make a few LUTs. In this case, we just used one custom ARRI Look that was very similar to the Alexa LUT, but with lower highlights, deeper blacks, and a bit less green in the midtones. Once you make an ARRI Look in their ARRI Look Creator software, you can put them into the camera and also convert them to 64x64x64 .cube LUTs for Resolve and 33x33x33 LUTs for monitors on set. On other projects, I might have two or three to represent different story locations - cool for rainy London, warm for sunny California, yellow-green for ‘exotic’ Madrid. Of course, that’s in combination with changes to lighting, production design, etc. I do my best to make sure the LUTs get sent along with the footage to post, though at that point they can disregard them and start working with the Log footage if they so choose. But I think they usually find it helpful to see exactly what we were looking at on set, at least as a starting point.
  13. My understanding is that each serial node affects the next one in the chain, which is why it’s often recommended to put LUTs last in the chain instead of first. But I could be wrong, I’m not a colorist, nor that well-versed in Resolve really. I’ve just found a method that works for me and keep it simple. I’d suggest asking a real colorist on the LiftGammaGain forum!
  14. Generally always a custom one that I make per project in prep and also use in camera for monitoring. I tend to think of it like a custom printer light - using the provided camera LUT is a bit like getting your film dailies printed at 25-25-25. It’s not going to be ideal if you’re going for a specific look. It doesn’t have to be extremely different from stock, but every little bit of specificity helps.
  15. Yes, I agree with your assessment. My usual workflow in Resolve is to use serial nodes for primary corrections with the LUT node (if there is one), last in the chain. Then parallel nodes off of the first node for secondaries so that they’re working from the full data. I should try the Luma vs Sat curve after the LUT node (or just use a color space transform instead). Thanks Joel!
  16. Interesting, I tried that in Resolve too. Perhaps an order of operations issue.
  17. That does look pretty good. Did you do a secondary mask on just the sky, or just a global adjustment? When I tried masking myself, the luma key was difficult to pull due to the camera movement and LowCon filter.
  18. That was cute, I had a good laugh! Your kids are gamers for jumping into the filmmaking life with you, congrats!
  19. That’s a nice shot! Yes, basically as long as you don’t clip the highlights you can do quite a lot in grading.
  20. Thanks. Yep, 90% of this project was Steadicam with a compressed shooting schedule, so I lit more generally and tried to squeeze as much DR out of the camera as possible with 1600EI and a LowCon filter. I guess maybe Arriraw would have helped, but that wasn’t really an option for us.
  21. Alexa Mini, 3.8K, PR422HQ. *800EI* *Sorry, my mistake - we shot this whole project at 1600EI.
  22. I dunno, the colorist and I tried pretty hard with the clipped Alexa shot I posted previously, and I could never get it to my satisfaction. But maybe others can. Graded: Log:
  23. Fuji Velvia is more saturated than color negative partly because it’s much more contrasty. But Provia is (was?) less saturated and contrasty, and Astia was even less so. All Fujichrome reversal stocks. I don’t think Alexa looks anything like reversal film!
  24. Were you looking at S-Log3 or did you have a Rec.709 LUT applied? Log will always appear noisy, since the shadows are raised to maximize the captured dynamic range. You will normally crush them back down in color grading for viewing, either with a LUT or manually with grading tools.
     From what I understand of the FX6 (haven’t used it myself yet), the High Base is only 12,800 ISO in CineEI mode. With S-Cinetone in Custom mode, the High Base is 5000 ISO. Since S-Cinetone is basically Rec.709, the blacks are already crushed for you, hiding the noise in the shadows.
     If you don’t need the exposure, then you can stay in the High Base, dial the EI back down to 6400 or 3200, and use the internal ND to compensate. This will give you less noise in the High Base, at the cost of some highlight range, and it will be less noisy than using the Low Base of 800 ISO and pushing the EI up to 3200 or 6400. Of course, if you do need the exposure, then 12,800 is very useful. So you get the best of both worlds.
     It’s also possible to turn on the in-camera noise reduction setting if you want to reduce the appearance of noise in the High Base, at the cost of some blurring and smearing artifacts. Really though, noise reduction is best done in post, as once it’s baked into the image, it can’t be undone.
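A quick sketch of the EI-vs-base stop math described above (the 12,800 High Base figure is from the post; the loop values are illustrative):

```python
import math

def sensor_offset_stops(base_iso, ei):
    """Stops of extra exposure the sensor receives when you rate a
    base ISO at a lower EI (positive = brighter than base, which
    lowers apparent noise at the cost of highlight headroom)."""
    return math.log2(base_iso / ei)

# High Base of 12,800 ISO rated at progressively lower EIs:
for ei in (12800, 6400, 3200):
    print(f"EI {ei}: {sensor_offset_stops(12800, ei):+.0f} stops")
```

Each halving of the EI below base is one more stop of light on the sensor, which is why dialing down to 3200 EI trades two stops of highlight range for cleaner shadows.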
  25. I’m not sure we can make the assumption that modern rooms are always lit. I think light has to come from somewhere, and the choice of what and where makes all the difference to the mood of the scene. Is there sun bouncing off of a neighboring building and reflecting off the wood floor in the next room? Are all of the lights off inside, with only cool ambient skylight filtering in thru thin green sheers on the windows? Is the TV on at night in a dark room, while everyone’s faces are lit by their cell phone screens, with the only warm light coming from the kitchen door in the background?