
Nicolas POISSON

Basic Member
Everything posted by Nicolas POISSON

  1. I would not lump all DSLRs into a single set and draw generic conclusions from two models that are six years old. All I can say is that the 645Z delivering lower-quality video than a BMPCC is not really surprising. Concerning the images you posted, I cannot tell which camera has better skin tones since there is only one sample. But I would not link skin tones to any debate like pixel binning vs. line skipping. I do not understand what you mean. Whatever video device you use, there is always a compromise between the amount of data, image size, frame rate, compression ratio, and image quality. That is true for a smartphone as well as for an Alexa. But this is a different story from the algorithm used to create a 2K stream from a high-resolution sensor.
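To make the binning vs. skipping distinction concrete, here is a toy numpy sketch (an illustration only, not any camera's actual readout): binning averages blocks of photosites, which lowers noise, while line skipping simply discards rows and columns, leaving the noise untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 8x8 "sensor" frame: a smooth horizontal gradient plus read noise.
signal = np.linspace(0.0, 1.0, 8).reshape(1, 8).repeat(8, axis=0)
frame = signal + rng.normal(0.0, 0.05, size=(8, 8))

# Pixel binning: average each 2x2 block -> 4x4 output.
# Averaging four photosites halves the noise standard deviation.
binned = frame.reshape(4, 2, 4, 2).mean(axis=(1, 3))

# Line skipping: keep every other row and column -> 4x4 output.
# The discarded photosites contribute nothing, so noise is unchanged,
# and fine detail between the kept lines can alias.
skipped = frame[::2, ::2]

print(binned.shape, skipped.shape)  # (4, 4) (4, 4)
```

Both paths produce the same output size from the same data; the quality difference comes entirely from what is done with the photosites that do not survive the downsizing.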
  2. Reading reviews here and there, it seems that the 645Z was already not that great for video when it was released in 2014. It has a medium-format sensor and crops only a little, so the area used for video remains very wide. But there is much more to image quality than the algorithm used to derive 2K video from a high-resolution sensor. Video on DSLRs and mirrorless cameras has improved a lot in recent years, so it is not really surprising that an "old" dedicated video camera delivers a better image than an "old" medium-format camera.
  3. Some cameras even use different strategies depending on the frame rate, e.g. using the full sensor width with a high-quality downsizing algorithm at lower frame rates (24–60p), then switching to a lower-quality algorithm or to sensor cropping to reduce CPU workload at higher frame rates (120p and above). It really depends on the model. With the increase in embedded horsepower, using the full width of the sensor seems to be becoming more and more common among recent DSLRs and mirrorless cameras. Well, "best" in terms of what?
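The frame-rate-dependent strategy described above can be sketched as a small firmware-style decision function. The mode names and fps thresholds below are invented for illustration; real cameras differ per model and per resolution.

```python
def readout_mode(fps: int) -> str:
    """Pick a sensor readout strategy for a given frame rate.

    Hypothetical thresholds, purely illustrative: the point is only
    that less time per frame forces cheaper readout strategies.
    """
    if fps <= 60:
        # Enough time per frame to read the full sensor width
        # and downsample with a high-quality filter.
        return "full-width oversampled"
    if fps <= 120:
        # Less time per frame: fall back to binning or line skipping.
        return "binned/line-skipped"
    # Very high frame rates: read only a central crop of the sensor.
    return "sensor crop"

for fps in (24, 60, 120, 240):
    print(fps, readout_mode(fps))
```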
  4. Blackmagic provides a tutorial in its beginner's guide to DaVinci Resolve. It contains both "correcting obviously wrong colours" and "defining your own aesthetic" examples, with sample videos. It is very basic, but it is a good introduction.
  5. Although not a pro, I have done a lot of research on dynamic range, ISO, and all that stuff. Here is a summary of what I understood.

Basically, ISO is gain. But this gain is the product of analog gain at the sensor output – applied before the ADC – and digital gain. If the analog gain is fixed, then ISO is simply digital gain and the camera is ISO-less. This means the ISO setting changes absolutely nothing when shooting RAW still images on a DSLR: it just increases the brightness of the image displayed in the electronic viewfinder or on the rear screen. However, even on an ISO-less camera, the ISO setting does affect JPEG and H.264 video. Some cameras have variable analog gain, and changing the analog gain does affect RAW.

I am not aware of any consumer camera that allows you to configure analog gain and digital gain explicitly. Some cameras allow it indirectly: this is what the “DR” setting does on Fuji X cameras. Compared to “DR100”, DR200 halves the analog gain and doubles the digital gain. The overall gain is the same (hence the same ISO), but there is less of it before the ADC and more in “post-processing”. This is useful when the sensor is not clipped but the ADC is. Why not simply set a lower fixed analog gain? I guess it is a question of image quality. If one wants to increase the gain, it may be better to first raise the analog gain a bit, then raise the digital amplification. It suggests that there is no clear winner in the battle between analog and digital gain: analog gain seems better if applied first, in a small amount, and past that point it is better to switch to digital. https://www.dpreview.com/forums/post/58627880

There is another technique called “dual native ISO”. I am not an expert on sensors. My understanding is that a sensor integrates onboard components to read the signal, and driving these components at higher or lower voltages modifies both the sensor noise and its ability to accept a lot of photons.
The dynamic range does not change: if one reduces the noise (better low-light performance), one reduces the sensor clip threshold by the same amount. This is useful in low-light, low-contrast conditions, where no pixel receives many photons anyway.

Concerning “high-end cameras”, ARRI uses another technique: reading each pixel of the sensor with two different circuits in parallel (simultaneously), each with a different analog gain, then combining the two images into a single higher-dynamic-range image. Canon does the same in some cameras. https://www.arri.com/en/learn-help/technology/alev-sensors https://www.newsshooter.com/2020/06/05/canons-dual-gain-output-sensor-explained/ “Dual native ISO” does not increase dynamic range, while “dual gain” does. A “dual native ISO” camera uses either one ISO or the other, never both at the same time. ARRI’s technique is closer to some kind of “exposure bracketing”: as if you were taking two shots of the same scene at the same time, one with the sensor optimized for low light, the other for highlights. A camera with dual native ISO is not ISO-less, while, if I understand correctly, the Alexa and the C300 are. http://neiloseman.com/alexa-iso-tests/

Thus, for your question of whether setting ISO is important, I’d say: 1- it is definitely important on non-ISO-less cameras, and not all cameras are ISO-less. 2- even on an ISO-less camera, you set your exposure (time, aperture) according to what you see on your monitor. If your monitor is off because of weird ISO settings, you may record poor footage by choosing the wrong aperture or exposure time. Setting the ISO will not change the dynamic range, but it will dictate how much you open the aperture, which in turn dictates how many stops you allow over and under “middle grey”: which highlights you keep, what noise you get.
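The "DR" trick described above can be put in numbers. Here is a toy model (arbitrary units, not Fuji's actual pipeline): the ADC clips at 1.0; "DR100" applies all of its 2x gain before the ADC, while "DR200" applies half as much analog gain and makes up the difference digitally after the clip, so a highlight that DR100 clips survives.

```python
import numpy as np

ADC_CLIP = 1.0  # full-scale input of the ADC, arbitrary units

def capture(scene, analog_gain, digital_gain):
    """Toy pipeline: analog gain -> ADC clip -> digital gain.

    Purely illustrative numbers, not Fuji's real implementation.
    """
    at_adc = np.clip(scene * analog_gain, 0.0, ADC_CLIP)
    return at_adc * digital_gain

scene = np.array([0.1, 0.4, 0.7])  # the 0.7 patch is a bright highlight

# "DR100": all of the gain applied before the ADC.
dr100 = capture(scene, analog_gain=2.0, digital_gain=1.0)

# "DR200": same overall gain (2x), but half of it before the ADC
# and the rest applied digitally, after the clipping stage.
dr200 = capture(scene, analog_gain=1.0, digital_gain=2.0)

print(dr100)  # [0.2, 0.8, 1.0] -> the highlight clipped at the ADC
print(dr200)  # [0.2, 0.8, 1.4] -> the highlight survives
```

Note that the midtones come out identical either way; the only difference is where the clipping happens, which is exactly the "sensor not clipped but ADC clipped" situation the DR setting addresses.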
  6. It is mainly a question of direction. If the light comes from behind the camera toward the subject, the beam will not be that visible. On the contrary, if the source is behind the subject and points more or less toward the camera, the beam becomes obvious. This works for both the camera and the human eye. Think about live concerts: from the house, you do not see the beams of the key lights hung downstage facing the performers, but you clearly see the beams of all those moving heads upstage.
  7. It does not need to be very high. It would work from a 45° angle or even lower, at least on one side of your head. However, as you said, the challenge is to hide the stand, which is why someone invented the "grid" one day. Another problem is to avoid hitting the wall behind, as it would become too apparent. In theatres, one uses profile spots to control the light beam, but that is not something you typically find at home, and it is pretty expensive by the way. Here, you could move the couch away from the wall. That is what you have in your second series of shots. On close-ups it would be very easy to add a little light on your face; on wide shots, not so much. Fresnels usually have wide beams: using a Fresnel several meters away, you might not be able to narrow the beam enough to make just the face pop. A "pinspot" PAR36 might help, and it is cheap. It seems that your problem is similar to the "per shot" vs. "environment" lighting debate: lighting per shot vs. environment. "Real men" seem to have a hard time lighting wide shots as well. It does not help, but you are not alone.
  8. Hi Michael, What follows is my point of view as an average moviegoer, not a cinematographer. The problems I see first, before lighting, are related to composition and props. In the first series, there is too much “stuff”: your white shirt reflects more light than your skin. The cushions are near white too. The bottles are too obvious and in front of you (I would rather have them hide the cushions). The picture on the wall, and even the whole wall, are too obvious as well. I would expect the face to catch the eye, so it should probably be one of the brightest elements. My eyes do not know where to look. You seem surrounded, pushed to the back, not popping out. Working in theatres, I am a big fan of “drop light”, or any substitute, that creates that thin bright edge around the head. This is present in your second series. It gives more depth, I think, and it would help in the first series. The iPad does not make good lighting on its own: it is not powerful enough, and it is too blue. I expect it to be slightly blue (as one expects any screen to be blue), but not that blue. You could try to change the colour temperature by displaying some kind of light-orange uniform image. Switching to “night shift” mode would probably be too warm. Anyway, I doubt it makes a good “white” (in terms of TLCI): it is obtained from an RGB mix, which is totally different from white LEDs. I think you need to cheat a bit and get tungsten main lighting on your face. Then adding a slight bluish colour contrast using the iPad screen could be great. Hope this helps.
  9. Thanks David. Indeed, I was not thinking of the benefit of Log in general, but of the comparison made in that very video above. If I understand correctly, it is basically recording log, then slapping on the LUT provided by Fuji, and voilà. So if one just uses a LUT that aims to do the same thing as a built-in profile, is there still a benefit? And, by the way, I read about plenty of people recording Log with this camera, then saying "I used the LUT provided by Fuji as a base". So if one records log, say 10-bit 4:2:2, then uses Fuji's "Eterna" LUT in DaVinci Resolve (without modifications), then adds further steps of colour correction, is there a benefit over recording directly with that profile in the camera?
  10. Hello, my first post here. I practice video as an enthusiast. I own a Fuji X-T30 that I use for stills, and I am learning how to use it best for video. I am trying to understand the path of image processing within the camera and in the editing software. My concern is about comparing these two methods:
- use a built-in film simulation, like “Eterna” (low contrast, low saturation, aimed at giving a cinematic look out of the box)
- use the “log” profile, then apply the “log to Eterna” LUT provided by Fuji in an editing software.

This video compares those two paths: The author of the video, as well as most people in the comments, seems to agree that the “log+LUT” path is way better, with 2 stops of increased dynamic range and so on. I don’t. I prefer the “built-in Eterna” path. I prefer the colour (less greenish). I find it retains more detail in the skin. But above all, if the “log+LUT” path seems to retain a little more detail in the highlights, it loses ten times that gain in the shadows.

OK, this might sound 100% subjective right now. But with my limited understanding of how all this works, I do not see how a “log+LUT” path could be better than “built-in” if one uses the same target colour profile. My understanding is that either way, one applies some kind of LUT. When using the built-in film simulation, that LUT is applied to a high-quality set of data: full depth (14 bits or so), no lossy compression. When recording with a log profile, one starts from the exact same data, but one first reduces it, then applies the LUT in the editing software. The reduction with a log profile is better optimized in terms of DR than a linear reduction, but it should still be inferior to using the full set of data. This is a different story for stills, as “raw” really is RAW: for stills, there is no reduction in quality before applying image processing in an external software.
The overall image-processing chain is the same; it is simply split at a different point between “built-in” and “external software” corrections. The trade-off is not quality, it is “amount of data” vs. “ability to correct later”. But video with a log profile is not “raw”. It seems that even Blackmagic or ARRI “raw” are not truly raw. My understanding is that “log+LUT” has no benefit if one uses a LUT similar to the film simulation built into the camera. It is only useful if one plans to use custom LUTs that have no built-in equivalent. So… what am I missing?
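The sentence above about log reduction being "better optimized in terms of DR than a linear reduction" can be illustrated numerically. The sketch below uses a made-up log curve (not Fuji's actual F-Log) and compares the local quantization step around a deep-shadow value when squeezing linear data into 10 bits directly versus through the log curve: the log path spends its codes much more densely in the shadows.

```python
import numpy as np

def encode_log(x):
    """Toy log curve mapping linear [0, 1] to [0, 1].

    Illustrative only; real curves like F-Log use different constants.
    """
    return np.log2(1.0 + 1023.0 * x) / np.log2(1024.0)

def decode_log(y):
    """Exact inverse of encode_log."""
    return (np.power(1024.0, y) - 1.0) / 1023.0

def quantize(x, bits):
    """Uniform quantization of [0, 1] to the given bit depth."""
    levels = (1 << bits) - 1
    return np.round(x * levels) / levels

# A deep-shadow linear value, roughly six stops under full scale.
shadow = 0.015

# Path A: quantize the linear value directly to 10 bits.
linear_10 = quantize(shadow, 10)

# Path B: log-encode, quantize to 10 bits, decode back to linear.
log_10 = decode_log(quantize(encode_log(shadow), 10))

# Local step size, i.e. how far apart neighbouring codes land here.
step_linear = 1.0 / 1023.0  # uniform everywhere for the linear path
c = quantize(encode_log(shadow), 10)
step_log = decode_log(c + 1.0 / 1023.0) - decode_log(c)

print(step_linear, step_log)  # the log path's steps are much finer here
```

This only shows that the log curve protects shadows better than a naive linear reduction to the same bit depth; it does not contradict the point that both are reductions compared with applying the profile to the full-depth sensor data in camera.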