
silvan schnelli

Everything posted by silvan schnelli

  1. To my knowledge, ISO, or maybe the more appropriate term exposure index (EI), is just a curve that shifts which sensor signal becomes middle gray. At a higher EI, a lower sensor signal is pushed to middle gray; I often think of it like an overall brightness adjustment. I'm pretty sure that filming at 3200 at correct exposure vs. 12800 overexposed by two stops and then pulled down in post are practically the exact same thing. I think that's why, when filming in RAW, you can still easily adjust the EI (ISO). I'm pretty sure the camera engineers purposely designed this system so it would be easier to transition from film to digital, with the similar concept of rating stocks differently and over-/underexposing. Your sensor has a set dynamic range, and I usually think of the EI as just a means to help me expose the scene in the manner that I want: a low EI to facilitate overexposure, meaning a higher signal-to-noise ratio (less visible noise), more shadow latitude (detail retained further down into the shadows) and less highlight latitude (highlights clip sooner), and a high EI for the opposite. I highly recommend reading this article on dynamic range for anyone more interested: https://www.arri.com/resource/blob/295460/e10ff8a5b3abf26c33f8754379b57442/2022-09-28-arri-dynamic-range-whitepaper-data.pdf
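For anyone who wants to see the "EI is just a gain/brightness shift" idea in numbers, here is a minimal sketch. It assumes a toy model where EI acts as a pure gain relative to a hypothetical base rating of 800; real cameras add their own curve shaping, so this is only meant to show why the two exposures in the example above land in the same place.

```python
# Toy model of Exposure Index as a gain applied after capture -- an
# illustration only, not any manufacturer's actual pipeline.

BASE_EI = 800  # hypothetical base rating where the gain is 1.0

def rendered_value(scene_exposure_stops, ei):
    """Linear value after the EI 'curve' (modelled here as pure gain).

    scene_exposure_stops: the light reaching the sensor, in stops relative
    to a middle-gray exposure at the base EI.
    """
    sensor_signal = 2.0 ** scene_exposure_stops   # light actually captured
    gain = ei / BASE_EI                           # higher EI pushes a smaller signal to middle gray
    return sensor_signal * gain

# Case A: metered "correctly" for EI 3200, i.e. the meter asks for 2 stops
# less light than the base rating would.
a = rendered_value(-2, 3200)

# Case B: shot at EI 12800 but overexposed by 2 stops relative to a 12800
# metering (the sensor therefore receives the same light as in case A),
# then pulled down 2 stops in post.
b = rendered_value(-2, 12800) * 2.0 ** -2

print(a, b)  # identical in this model; only the real sensor's noise floor differs
```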
  2. I’ve always been intrigued by shots that superimpose the view of what is outside and inside a window. Examples that come to mind are that frame from Klute by Gordon Willis and a series of photographs by David English. I’ve also come across a photograph by @David Mullen ASC titled “The Cloud Minders” which also illustrates this concept. I am assuming the main criterion is a fairly balanced exposure of the inside and the outside, like in the shot from Klute, where I’m guessing there was a light on the actor that could compete with the intensity of the outside daylight. Is this correct? Are there any other tips, especially when it comes to still photography, or scenarios where there is no possibility to heavily control the lighting?
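As a rough back-of-the-envelope sketch (assuming plain uncoated glass near normal incidence, so roughly 4% reflectance per surface and about 8% in total; coatings and viewing angle change these numbers), the reflection sits a few stops below the scene it reflects, which gives a starting point for how the two sides need to be balanced:

```python
import math

# Rough Fresnel reflectance of uncoated glass near normal incidence
# (n ~ 1.5 -> about 4% per surface, ~8% for both surfaces). Assumed values,
# not measurements of any particular window.
REFLECTANCE = 0.08

stops_down = abs(math.log2(REFLECTANCE))   # ~3.6 stops
print(f"A window reflection sits roughly {stops_down:.1f} stops below the "
      f"luminance of the scene it is reflecting.")

# So for the reflected subject to register at about the same level as what is
# seen through the glass, the reflected side has to be lit roughly 3-4 stops
# brighter than the transmitted side (or the transmitted side held down),
# which is consistent with the "balanced exposure" guess above.
```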
  3. @Doyle Smith Thank you for the in-depth response. I definitely drew a lot of inspiration from Prince of Darkness and Darius Khondji. Regarding the spot meter situation, I metered the walls to be up to 2 stops under the key, but seeing as the walls are white, it made the room look very dark. I read that in dark moonlit rooms, under scotopic vision, white roughly appears as middle gray. In regards to the second still, the background is very much intended to be black, as it turns into an interrogation room, but the setting is the same room. I definitely agree that I should have added some form of eye light. Again, I think my overly contrasty emulation and my lack of experience are the main culprits. Thank you for all the lighting suggestions; I’ve heard of using tubes for eye light but never a snoot, so I will definitely try it out. Thank you.
  4. @Tyler Purcell I see, that makes a lot of sense. I think my emulation is still too untested and unpolished, with far too contrasty a curve. I assumed that film was just much more contrasty, but from my analog still photography experience it doesn’t really seem like that. As you mentioned in reference to the feature film stills I added above, realism isn’t as necessary in fostering a mood and atmosphere. Thank you.
  5. Could you please elaborate on this? I don’t quite understand. Intuitively I would think that underexposing would produce a latent image where the dark areas are practically left unexposed, and that this would make it easier to form dark shadow density in the print process (as it’s like the reverse of the negative). On the contrary, I would then think overexposing would make the negative denser in the shadows, which in turn would make it harder to print dense, dark shadows in the final print. Obviously the process I’ve explained can’t be right, as the results of my two photos have shown.
  6. @Gautam Valluri Yes, basically that. For example, if I purposely overexposed to obtain more shadow detail and now want to regain correct exposure, what is the difference between pull processing in the developer bath and printing down? I also hope to be able to visually test these things and see them for myself, but I am not sure when I’ll be able to do that and was curious about the experience and knowledge of the people on this forum. I also greatly appreciate that you went on my profile to tailor the response to help me, thank you.
  7. What is the difference in the outcome of, for example, pull processing the negative in the developer vs. printing down the film, or its counterpart, push processing vs. printing up? Seeing as the latent image has already been created, no additional detail can be recorded on the negative. So wouldn’t both processes, in the end, just change the amount of density (exposure) of the final print? The one thing I could think of is that perhaps the grain pattern or intensity would differ, as I’m pretty sure print film has very, very fine grain to prevent any additional grain from being added onto the final image. However, I’m sure there must be other ways in which these processes differ, or perhaps my current understanding of the processes isn’t quite correct.
  8. I am familiar with how underexposing film, much like with digital, leads to less information in the shadows and more information in the highlights (although film does have worse shadow latitude). However, I don’t understand why the underexposed film is so much milkier, when both the correctly exposed and the underexposed film are put through the same developing process. When I think of wanting darker blacks, I would think to overexpose the negative and then perhaps print it down to get a correctly exposed print, leading to a correct image with darker shadows. And intuitively, if I wanted more detail in the highlights, I would underexpose the negative and then push process or print up to regain a correctly exposed final look. However, when I look at the underexposed photo I added below, wouldn’t doing that just lead to an even more washed-out image? In the two images attached below, I can see that the first image has less detail in the highlights and more in the shadows, with the latter being the reverse. However, I can’t understand why the underexposed image is that much foggier; is it really just because of less shadow information on the negative? What if I wanted more detail in the highlights, but still a contrasty end result? (Apologies for the bad compression on the images.)
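A toy characteristic-curve model may make the two questions above more concrete (this is a made-up H&D curve with illustrative numbers, not a real stock): changing development changes the slope (gamma) of the curve, i.e. the tonal separation baked into the negative, while printing down only adds print exposure and shifts every tone together; and underexposure pushes shadow tones onto the toe, where the curve barely rises, which is one reason an underexposed frame can come out looking milky even through normal processing.

```python
import math

D_MIN = 0.20  # base + fog density (illustrative number)

def negative_density(stops, gamma=0.65):
    """Toy H&D curve: a smooth toe rolling into a straight line of slope
    `gamma` (development contrast). `stops` is scene exposure relative to
    middle gray. Purely illustrative, not a real film stock."""
    return D_MIN + gamma * math.log10(1.0 + 10.0 ** (0.3 * (stops + 5.0)))

def separation(stops_a, stops_b, gamma):
    """Density difference the print has to work with between two scene tones."""
    return negative_density(stops_a, gamma) - negative_density(stops_b, gamma)

# Pull development lowers gamma, so the separation between tones shrinks
# everywhere on the straight-line section:
print(separation(0, -2, gamma=0.65))  # normal development
print(separation(0, -2, gamma=0.50))  # pulled: less contrast baked into the neg

# Printing down, in this model, just shifts every tone by the same amount and
# leaves these separations untouched -- same overall brightness change,
# different contrast behaviour.

# Why underexposure looks milky: the same two shadow tones, first normally
# exposed, then pushed 5 stops down onto the toe where separation collapses.
print(separation(-1, -3, gamma=0.65))
print(separation(-6, -8, gamma=0.65))
```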
  9. @Uli Meyer @Brian Drysdale Yeah, the images look even more underexposed here on the forum, but I do think I definitely need to do a lot more testing to get a feel for the gamma curve and how to expose.
  10. @Uli Meyer Here are two stills from separate scenes. In the first one, I didn't add enough fill light to the face, mainly because I didn't do enough test shooting and underestimated the gamma of the curve. That's also why I asked how people would expose on film: how can you know what the correct contrast ratio is for the look you want to achieve? You can also see how the white walls are fairly dark now, but everything else is practically black or in silhouette. Regarding the second shot, I had a skirted toplight which I exposed with an incident meter for correct exposure, and I also had a backlight which I exposed to be one stop under, but it's barely noticeable now. Furthermore, during shooting I wasn't expecting the shadows below the eyebrows to be that dark, which again made me think: how would a person know if certain shadows will be too dark when shooting on film, or in the past when there wasn't a display to roughly show the exposure of what is being shot?
  11. Have any of you noticed exposing differently for digital than for film? Specifically when it comes to things like lighting ratios on actors, if you for example tend to light one or the other with higher contrast ratios. Or other things, like tending to add more negative fill, or more fill light, when filming with one medium, or anything else that comes to mind. I also wonder how people expose a scene for analog film. For example, if you have an actor in the foreground and want the background to be slightly darker, how would you know how much to light the background relative to the actor; would you use an incident or a spot meter? But if you use a spot meter, wouldn't you also need to know how the material will look at different exposures, and which would you expose first, the actor or the surroundings? I'm asking because I recently wanted to expose for a moody scene, but the walls behind the actor were completely white. I initially exposed with an incident meter to underexpose the "room fill/background light" by 2 stops, but I felt that the walls were still too bright. I then used a spot meter to underexpose the walls 2 stops in relation to the key on the actor's face. However, when I now review the footage it seems a bit too dark, and moreover, all the objects in the room that were not white are very dark to almost black. Also, from watching other movies' moody/dark scenes, I've noticed that often the exposure isn't actually as dark as I remembered, with plenty of detail everywhere, even though while watching the movie it definitely felt that way. Thank you for taking your time to read this.
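For the white-wall part specifically, a quick bit of meter arithmetic may explain why the room ended up darker than intended. This sketch assumes the wall reflects roughly 90% and that the spot meter places whatever it reads at middle gray (about 18% reflectance); those are assumptions, not measured values.

```python
import math

# A reflected (spot) meter suggests an exposure that renders whatever it
# reads as roughly middle gray (the usual ~18% shorthand).
MIDDLE_GRAY_REFLECTANCE = 0.18
WHITE_WALL_REFLECTANCE = 0.90  # assumption for a clean white wall

# Under the SAME light, a white wall reads this many stops above an 18% card:
wall_above_gray = math.log2(WHITE_WALL_REFLECTANCE / MIDDLE_GRAY_REFLECTANCE)
print(f"White wall reads ~{wall_above_gray:.1f} stops above middle gray")  # ~2.3

# "Spot-meter the wall 2 stops under the key" therefore means the *light*
# hitting the wall is about 2 + 2.3 = 4.3 stops below the key level -- which
# is why everything non-white in the background fell to near black.
stops_under_key = 2 + wall_above_gray
print(f"Wall illumination ends up ~{stops_under_key:.1f} stops under the key")
```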
  12. I am planning on using the Sekonic Spectromaster C-800 to measure and correct for any color shifts in my lighting, but I don't have access to any reliable color correction gels. One thing I am confused about is the difference between the CC index correction and the CC filter number, e.g. how a reading can give a CC# of 3.4M and a CCi of 1.3M, as well as how these values are derived and how they relate to Duv. LED lights, e.g. Astera Titan tubes, have color correction settings in the CCT controls, and so does the camera (tint adjustment). So I'm wondering if I could adjust these settings in accordance with the Spectromaster readings, or using other readings such as Duv?
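As far as Duv itself goes, it is defined as the signed distance from the measured chromaticity to the Planckian (blackbody) locus in the CIE 1960 (u, v) plane, positive on the green side and negative on the magenta side, which is the same axis the camera's tint control and the meter's green/magenta readings move along. Below is a minimal sketch of that definition; the (x, y) to (u, v) conversion is the standard one, but the locus point is left as a placeholder input because computing the locus itself needs blackbody/observer data or a colour-science library.

```python
import math

def xy_to_uv_1960(x, y):
    """CIE 1931 (x, y) -> CIE 1960 UCS (u, v), the plane Duv is defined in."""
    denom = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / denom, 6.0 * y / denom

def duv(measured_uv, locus_uv):
    """Signed distance from the measured point to the nearest point on the
    Planckian locus (`locus_uv` is a placeholder input here). Positive =
    above the locus (greenish), negative = below (magenta). The sign test
    below is an approximation; the exact definition uses the perpendicular
    to the locus."""
    du = measured_uv[0] - locus_uv[0]
    dv = measured_uv[1] - locus_uv[1]
    sign = 1.0 if dv >= 0.0 else -1.0
    return sign * math.hypot(du, dv)
```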
  13. @Robert Houllahan Thank you for the simple and quick reply. It is something I only realized later through trial and error. Nevertheless, for some reason the footage still appears a bit off, but changing from log to linear fixed the issue for other film footage I have downloaded.
  14. I downloaded a 5213 35mm film scan and ARRI Alexa XT footage, both as DPX, from https://www.cinematography.net/Valvula/valvula-2014.html. My goal is to compare the two and put them through a post pipeline that I am amending and tinkering with. However, when I loaded the two DPX files into Nuke, I noticed that the film scan is much darker and slightly more contrasty than the Alexa footage. I then applied an OCIOFileTransform to both: a LogC2-to-Rec.709 conversion for the Alexa and DaVinci's Kodak 2383 PFE for the film scan. The colours look great, but the film scan is much more underexposed. On the CML website the two look roughly equally exposed, but for me the film looks much more underexposed. What am I doing wrong or misinterpreting? ARRI XT (16-bit) with classic LogC2 to Rec.709, and Kodak film with DaVinci 2383 PFE
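One sanity check that might help isolate the problem (a sketch, assuming the Alexa DPX really is classic LogC at EI 800; the constants below are ARRI's published LogC v3 parameters, and this is not a substitute for the proper OCIO/IDT path). The film scan is a different case entirely, since scanner DPX files are normally Cineon-style log printing density rather than LogC, so the two files would not be expected to line up before each gets its own correct transform.

```python
def logc3_to_linear(t, cut=0.010591, a=5.555556, b=0.052272,
                    c=0.247190, d=0.385537, e=5.367655, f=0.092809):
    """Classic ARRI LogC (v3) decode for EI 800, using ARRI's published
    constants. `t` is a normalized code value (0..1); the result is
    scene-linear, with 18% gray at 0.18."""
    if t > e * cut + f:
        return (10.0 ** ((t - d) / c) - b) / a
    return (t - f) / e

# Middle gray encodes to roughly code value 0.391 in LogC3/EI800, so probing
# a known gray patch in Nuke should decode to ~0.18 if the file is what it
# claims to be:
print(logc3_to_linear(0.391))  # ~0.18
```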
  15. I am unable to understand why, let’s say, the Sony Venice at EI 3200 with a native setting of 3200 would have less noise than EI 3200 at a native setting of 800. I have read the whitepaper https://www.photonstophotos.net/Aptina/DR-Pix_WhitePaper.pdf, but to my understanding the paper talks more about the benefit of increased dynamic range from running a second capacitor in parallel. How is it possible for cameras with a second native ISO to be more sensitive (higher conversion gain) and have better SNR with fewer incident photons, all whilst not making the lower ISO rating obsolete? I feel like this would also suggest that I would need two separate light meter calibrations for the two ISO settings, if one setting converts x amount of photons into x exposure and the other converts y amount of photons into the same x exposure. EI latitude representation of the Sony Venice 2
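A toy noise model may make the conversion-gain part clearer. The sketch below assumes just two noise sources, photon shot noise and a fixed downstream read noise expressed in electrons; the numbers are invented for illustration, not Venice measurements. The point is that a higher conversion gain makes each electron produce a bigger voltage swing, so the same downstream electronics noise refers back to fewer electrons, improving SNR in the dark, while the trade-off is a smaller effective full well (less highlight headroom), which is why the lower base setting stays useful.

```python
import math

def snr(photons, read_noise_e, qe=0.6):
    """Toy SNR: shot noise (sqrt of collected electrons) plus a downstream
    read noise, both expressed in electrons. Illustrative numbers only."""
    signal_e = photons * qe
    shot_noise = math.sqrt(signal_e)
    total_noise = math.sqrt(shot_noise ** 2 + read_noise_e ** 2)
    return signal_e / total_noise

PHOTONS = 200  # a dim patch, where read noise dominates the result

# Low conversion gain path: large full well (lots of highlight headroom), but
# the downstream noise is worth more electrons when referred to the input.
print(snr(PHOTONS, read_noise_e=12.0))

# High conversion gain path (the "second native ISO"): the same downstream
# noise refers back to fewer electrons, so SNR at low light improves -- at
# the cost of a smaller full well, which is why the low setting isn't obsolete.
print(snr(PHOTONS, read_noise_e=3.0))
```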
  16. @David Mullen ASC Thank you for the tips. I will definitely have to put this into practice.
  17. How could you recreate these two shots from Lessons in Chemistry, photographed by Jason Oldak? In the first one, it seems that the reflection of the outside is only visible on the window where the curtain is behind it. I’m guessing this is because the curtain blocks the light coming from inside the room, which then allows the reflection to read more strongly against the darker background. In the second one, the reflection looks like a superimposition, which may just be a balancing of the inside and outside lighting. However, I am curious about other thoughts, opinions, tips and experiences in recreating a frame like this. Thank you.
  18. I'll answer my own question in case anyone reading this also wants to know the answer. I am still not certain why Steve Yedlin calibrates his light meters (I'll have to re-read his post more carefully). However, from what I have read, ISO values differ slightly from one manufacturer to another. This is perhaps also another reason why the term exposure index is used instead of ISO (besides the fact that a cinema camera's ISO is always fixed). So an ISO of 100 to a Sekonic light meter means something different than an ISO of 100 to an ARRI Alexa, which could both differ from what a Sony Venice perceives an ISO of 100 to be. Hence, by calibrating our light meter, we can ensure that the EV calculations accommodate the way the camera processes the exposure. This type of calibration therefore doesn't alter the luminance or illuminance values obtained when measuring cd/m² (foot-lamberts) or lux (foot-candles). Difference between Exposure Index (EI) and ISO: https://125px.com/docs/techpubs/kodak/cis185-1996_11.pdf
  19. @Nicolas POISSON Thank you for the thorough answer. Yeah, with "indirectly tied to bit depth" I was referring to how, even in log encoding, there still need to be enough bits assigned per stop to ensure an image with minimal posterization. The part about the DR definitions is also very interesting; I wasn't aware of the latter two, so I'll have to do more reading on them. I especially appreciate the last part of your response, so would you say it would be fair to say that by compressing the highlights the gradation in the luminance stops is just not represented faithfully (relating to the wording of my original question)?
  20. I recently read the ACES Primer, where one of the headings was “bringing back the light meter”. My understanding of this was that scenes would often be lit to accommodate the output device (display-referred), which would bottleneck the look of the film by not using the camera's full DR potential, especially (or at least) with respect to future exhibition. It made me think about a question I have had for a while regarding the lighting of a scene and the placing of the highlights. If the camera has 16 stops of DR but the displayed image (exhibition) will be in SDR with a lower bit depth (which, to my understanding, is indirectly tied to DR), does it make sense to place information into the upper and lower bounds of the camera's DR? What will happen to the highlight information in the 14th/15th/16th stops of the image if the exhibition can only display a DR of 10 or even 8 stops; will it clip? Will the gradation in those stops' luminance just not be represented faithfully? Or does it make more sense to light a scene in a way that keeps the information within those limited 8 or 10 stops? But then what is the point of, or what happens to, the information in the higher stops? (I am aware that important information is always best placed on the straight-line (gamma) portion of the characteristic curve.) I hope my question makes sense; I have yet to find an answer that clearly paints the reasoning for me and would greatly appreciate anyone willing to explain it. Thank you for your time.
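On the "will it clip?" part, a toy example of the two things an output transform can do with scene values above display white may help: hard-clip them, or roll them off so each extra stop still maps to a (progressively smaller) distinct display value. ACES-style display rendering does the latter, so information placed in the upper stops isn't discarded at acquisition, it's just compressed by the display transform and remains available for HDR or a different grade. The curve below is a made-up shoulder, not the actual RRT/ODT.

```python
def hard_clip(x, display_max=1.0):
    """Anything above display white is simply thrown away."""
    return min(x, display_max)

def rolloff(x, shoulder_start=0.8, display_max=1.0):
    """Toy shoulder: above `shoulder_start`, values are compressed so they
    approach `display_max` asymptotically instead of clipping. Not the ACES
    RRT/ODT -- just the same idea in one line of math."""
    if x <= shoulder_start:
        return x
    k = display_max - shoulder_start
    return display_max - k * k / (x - shoulder_start + k)

# Display white and +1 / +3 / +5 stops above it, through both options:
for stops_over in (0, 1, 3, 5):
    x = 2.0 ** stops_over
    print(stops_over, round(hard_clip(x), 3), round(rolloff(x), 4))
```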
  21. I was attempting to re-read Steve Yedlin's #NerdyFilmTechStuff post on light meter calibration, but I still have a lot of questions that made it difficult for me to even understand what was being talked about. Perhaps these questions are addressed in the first version he published, but I can't seem to find it. My question is: why is he changing the calibration constants of his light meter, and how does this differ from the compensation settings (which I think he also changes)? I am quite certain I must have misinterpreted many aspects of the text, but this is the first time I've heard of calibrating light meters. I am assuming that changing the calibration constant in the EV formula will result in different calculated/suggested camera settings, but shouldn't change the illuminance/luminance being measured. Yet I still don't know why one would, or maybe more appropriately phrased, should, do this. He does mention something about personal preference, so my best guess is that it might have something to do with that. If it is based on personal preference, my next question would be how one even figures out their preference. I've read that different manufacturers or even individual light meters have different calibration constants, but I'm assuming all have the same goal of obtaining correct exposure through the calculated readings, so why change this calibration number? Thank you for taking your time reading this question and apologies for the repetitive structuring of it. Steve Yedlin's page that I am referring to: https://www.yedlin.net/NerdyFilmTechStuff/ExposureEquationsAndMeterCalibration.html
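For context, this is the equation the calibration constant lives in (the ISO 2720 form for a reflected/spot meter). Changing K only changes the exposure the meter recommends for a given measured luminance; the luminance measurement itself is untouched, which matches the assumption above. The sketch uses ~12.5 as a typical default K and ~14 as the other commonly cited value; treat the exact numbers as assumptions.

```python
import math

def reflected_meter_ev(luminance_cd_m2, iso, K=12.5):
    """EV suggested by a reflected/spot meter for a luminance L (cd/m^2),
    from the standard meter equation 2^EV = L * S / K (ISO 2720 form).
    K is the calibration constant being discussed; ~12.5 is a common
    default, with ~14 used by some brands."""
    return math.log2(luminance_cd_m2 * iso / K)

L, S = 100.0, 800  # an arbitrary luminance and EI
print(reflected_meter_ev(L, S, K=12.5))
print(reflected_meter_ev(L, S, K=14.0))  # about 0.16 stop different suggestion

# The measured luminance is identical in both calls; only the recommended
# exposure shifts. "Calibrating the meter" in this sense tunes that
# recommendation toward a particular camera or personal preference.
```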
  22. @Albion Hockney @David Mullen ASC Thank you for the great replies. I know this has been touched on before, but I am still a bit confused: if I intend to have windows with no detail in them, would it be okay to just clip the windows during acquisition, or should I still try to retain the information and do it in post? I am wondering because I don’t understand how I would be able to clip the windows in post without affecting the exposure of the whole scene. I also understand that, to my knowledge, the value of keeping the information includes having better control of the roll-off, and that it avoids the problem where the RGB channels saturate at different levels, which can lead to hue shifts. So what is the best method?
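On the "clip them in post" part, one possible approach (a sketch of the general idea, not a claim about any particular colorist's method): if the windows were captured unclipped, a highlight-only adjustment in scene-linear or log can push just the values above some threshold toward clip while everything below it is left alone; in a grade this would typically be a luminance-qualified gain with a softened transition.

```python
def blow_out_highlights(x, threshold=4.0, gain=8.0):
    """Toy highlight-only gain on a scene-linear value. Values below
    `threshold` (here ~4.5 stops above 18% gray) are untouched; values above
    it are pushed hard enough to clip in the display transform. Illustrative
    only -- a real grade would qualify by luminance and soften the knee."""
    if x <= threshold:
        return x
    return threshold + (x - threshold) * gain

# A skin tone, a bright practical, and a window (scene-linear, 18% gray = 0.18):
for name, value in [("skin", 0.4), ("practical", 3.0), ("window", 6.0)]:
    print(name, value, "->", blow_out_highlights(value))
```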
  23. I recently watched Hirokazu Kore-eda's newest movie, Monster, in the cinema and was blown away by both the story and its naturalistic beauty. I was especially engrossed by a kind of golden glow that seemed to be particularly visible in the highlights. I tried to attach screen grabs from the trailer, but they definitely don't do it justice compared to the image I experienced in the cinema. I'm sure that if you have seen the movie, it's evident what glow I am talking about. My question really being how they achieved the glow; the images didn't seem particularly soft in terms of acutance or contrast or resolution, but then again it's tough for me to judge just like that. It also got me thinking about overexposed lights, or especially windows (as they are very large), or in this example the highlights of the fish tank. Do people usually expose the frame to clip the windows, or is it best practice to keep information in the windows and then do the overexposure in post (even if the final shot should have overexposed windows)? If the latter is the case, how would you overexpose them in post without affecting the other highlights? Thank you for your time.
  24. @Joerg Polzfusz Thank you for the links, although most of those do seem to be catered mainly to photography. However, I did just see, which for some reason I didn't notice before, that DOFMaster does have a cinematography section in its selection tools. In regards to the Alexa Mini, which has sensor dimensions of 28.25mm x 18.75mm (open gate) and a diagonal of 33.59mm, what is it in terms of the "x/y-inch sensor" notation, and why? From what I have seen, a 2/3-inch sensor in the cinema section gives a CoC of about 0.02mm, which to my knowledge is the estimated CoC of the Alexa Mini in terms of cinema viewing.
  25. @David Mullen ASC I suppose it would be a nice concept if one were able to calculate the hyperfocal distance for a camera, but I guess there are too many unknown factors, especially in the exhibition portion. I do also feel like focus peaking on monitors, which is what I assume you are referring to, must have a similar issue. I definitely don't know how peaking is calibrated (the user can also change the sensitivity) or engineered, but I can imagine that when judging DOF on a monitor, or even a larger display, it will always end up shallower on the big screen. However, as you mentioned before, sometimes the DOF is calculated. So I am wondering which CoC values would be used: would you use the photography chart values, or are there other values?
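For reference, these are the standard formulas the DOF charts and calculators are built on, so the only real choice being debated is which circle of confusion to plug in. The sketch below compares two candidate CoC values for the same lens and stop; 0.025 mm is a figure often quoted for Super 35 charts, and the stricter 0.0125 mm is just an illustrative "big screen" value, not an authoritative one.

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    """Standard hyperfocal distance: H = f^2 / (N * c) + f (all in mm)."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

def dof_limits_mm(focus_mm, focal_mm, f_number, coc_mm):
    """Near/far limits of acceptable focus from the usual DOF approximations."""
    h = hyperfocal_mm(focal_mm, f_number, coc_mm)
    near = h * focus_mm / (h + (focus_mm - focal_mm))
    far = float("inf") if focus_mm - focal_mm >= h else h * focus_mm / (h - (focus_mm - focal_mm))
    return near, far

# Same 35 mm lens at f/2.8 focused at 3 m, with two different CoC choices:
for coc in (0.025, 0.0125):
    near, far = dof_limits_mm(focus_mm=3000, focal_mm=35, f_number=2.8, coc_mm=coc)
    print(f"CoC {coc} mm: near {near / 1000:.2f} m, far {far / 1000:.2f} m")
```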