Nicolas POISSON

Everything posted by Nicolas POISSON

  1. I am really not an expert on that topic, but I am interested. Maybe you have already read it, but just in case, you will find plenty of information in this document: https://www.arri.com/resource/blob/31918/66f56e6abb6e5b6553929edf9aa7483e/2017-03-alexa-logc-curve-in-vfx-data.pdf I do not know where the "1638 tones" value comes from. My understanding is that LogC maps 18% gray (the 294th step in 16-bit linear at ISO 800) to the relative value 0.391, which should be 400 in 10-bit and 1601 in 12-bit. This does not mean there are 400 or 1601 "real" tones below. After all, the LogC curve does not start at zero: pitch black is mapped to relative 0.0928 (at any ISO), which would be 95/1023 or 380/4095. If I understand correctly, the 294 lower tones are mapped to 95-400 in 10-bit LogC and 380-1601 in 12-bit. So you have more steps in the output than were digitized by the ADC, which does not improve quantization, but at least preserves it (almost 1:1 in 10-bit). The drawback of "more quantization steps than needed in the lower tones" is "fewer quantization steps than possible in the higher tones". But you still have 623/1023 or 2494/4095 steps for the higher part (well, a bit less actually, since clip might not be at 100%). Maybe still more than enough?
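     For what it is worth, here is a minimal sketch that reproduces those numbers. The parameters are my transcription of the LogC V3 encoding at EI 800 from the ARRI document linked above, so please double-check them against the PDF:

        import math

        # ALEXA LogC (V3) encoding parameters at EI 800, copied from the ARRI paper
        # (my transcription -- verify against the linked PDF before relying on them).
        cut, a, b = 0.010591, 5.555556, 0.052272
        c, d = 0.247190, 0.385537
        e, f = 5.367655, 0.092809

        def logc_encode(x):
            """Relative scene linear (0.18 = mid gray) -> normalized LogC signal."""
            return c * math.log10(a * x + b) + d if x > cut else e * x + f

        for x in (0.18, 0.0):
            t = logc_encode(x)
            print(f"x={x:<4} LogC={t:.4f}  10-bit~{round(t * 1023)}  12-bit~{round(t * 4095)}")
        # mid gray -> ~0.391, ~400/1023, ~1601/4095; black -> ~0.0928, ~95/1023, ~380/4095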
  2. There is no direct relationship between stops of DR and bits. But yes, 8-bit does not explain crushed shadows nor blown highlights. However, I guess what Tyler and Mark really meant is not truly about bits, but rather "who knows how that video has been processed?". The internet is full of images with contrast pushed well beyond reason. By the way, if you are viewing this YouTube video on a software-calibrated monitor on a Windows PC, the calibration does not fully apply to videos: the gamut is not compensated. Again, this does not explain crushed shadows and blown highlights, but colours may be off depending on the gamut of your monitor.
  3. A "zip" compression is a true lossless compression. This is a rather simple and fast algorithm compared to lossy compression. Every lossy compression algorithm performs an additional "zip" type compression at the end of the process, as it has no drawbacks. Hence you will not gain anything significant trying to "zip" jpeg images or h264 video: the lossless compression has already been performed together with the lossy one. For the same reason, you will gain almost nothing zipping multiple compressed images/videos. But this is an easy way to pack them.
  4. Oops, I mistook the direction of correction for the big red object, but you get the idea.
  5. But the light going to the camera depends on the colour of the object that reflects it. If a big red object is lit with a 5600K source, it will reflect red light to the camera. You would set the white balance to 5600K so that the object is rendered the correct red. You would probably not want to raise the white balance up to 10,000K so that the big red object appears "neutral gray". Having the same coloured object over a large part of the frame is the typical situation where automatic white balance can be completely off. Measuring the light that hits the scene, as you did initially, seems the right way to me. The mystery remains unsolved. I was thinking of coloured specular light that could reach the Sekonic but not the camera. But this would be a bit strange, since I would expect mainly diffuse reflection in your scene.
  6. Doing the maths, 1.85 format cropped into FullHD should be displayed as 1920x1038, with 21-pixel black bars on top and bottom. I read somewhere that the resolution must be a multiple of 4, although I do not know why (maybe linked to the macro-blocks of the compression algorithms?). Starting from 1038, the closest multiples of 4 are 1036 or 1040, which leads to black bars of 22 or 20 pixels in height. Maybe the variation you see is just different people choosing a different nearest multiple? Doubling the resolution, the problem no longer exists: 2076 is a multiple of 4, so the exact bar height of 42 is fine.
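     That arithmetic in a tiny sketch (snapping down to a multiple of 4 here; snapping up instead gives 1040 active lines and 20-pixel bars, hence the two values people end up with):

        def letterbox(width, height, aspect, align=4):
            active = round(width / aspect)         # ideal active picture height
            active -= active % align               # snap down to a multiple of `align`
            return active, (height - active) // 2  # (active height, bar height)

        print(letterbox(1920, 1080, 1.85))  # -> (1036, 22)
        print(letterbox(3840, 2160, 1.85))  # -> (2076, 42)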
  7. Maybe: the Sekonic is measuring the colour temperature of light coming from all directions, including reflections off side objects that are not neutral gray. The camera's meter is more like a "spot" meter, measuring only what is reflected from the scene towards the camera.
  8. Not sure, but it might help. I only have a PC. By default, Handbrake automatically tries to change the resolution and crop based on its guess of the "true" resolution of the video, removing the surrounding black. It may fail if the content has a lot of black areas. Even when it detects correctly, you might prefer to keep the black border, and you then have to manually revert Handbrake's changes. Have you checked that? Also, by default Handbrake is configured to deliver 30fps whatever the input frame rate is. You have to set the frame rate to "same as source".
  9. I am a bit confused: I would think a "convex" mirror like the one in the first post would broaden the source, making a distant source (the sun...) appear like a nearby point source. To have a near source appear like a very distant one, I would think one would use a concave mirror, like a parabolic one with the actual point source at its focal point. Is there something I did not get, or do we just interpret convex/concave in different ways?
  10. Soderbergh shot Unsane (released in 2018; Paranoïa in France) with iPhone 7s. Upstream Color (released in 2013) was shot on a Panasonic GH2. Around 1995-2000, there were a bunch of films shot on DV cameras (The Blair Witch Project, Festen, Dancer in the Dark…). I do not know how Chris Marker shot La Jetée (1962), but since it is mainly based on still pictures (except for one short scene), he could almost have done it without any camera at all. There have always been movies shot with cheap equipment while much better cameras were available at the time. So yes, you can shoot with a DSLR and dream of international distribution in theatres or on Netflix, as long as you produce a masterpiece… … Or just shoot with what you have. Considering your list, I guess you are rather in the "no budget" category? Then the question is not whether this is acceptable for a feature, but what drawbacks you will face.

     I guess your lenses have fly-by-wire focus and aperture control. Most if not all Fuji X-mount still lenses do, as well as other brands' AF lenses. Even if there are rings, they do not physically control anything: they are just encoders, and a stepping motor is set accordingly. From my limited experience, this is usable, but the relationship between the rotation of the ring and the actual change in focus is not perfect. If you write down marks, pull focus once and come back, focus should be OK. Do this one or two more times and the focus has started to shift. You need to reset focus on a very regular basis. Note that this behaviour still occurs even if you set the ring control to "linear" (which is mandatory anyway, "non-linear" is unusable with a follow focus). Using purely manual lenses with adapters avoids this problem, but you lose autofocus and built-in image correction, and Fuji X lenses are usually great optically. Ring resistance is a bit on the stiff side.

     Using a follow focus will tend to push the lens up or down, as the camera body is thin and the attachment to the rods has enough flexibility to allow some deformation. This is more obvious at longer focal lengths. The workaround is to have something attaching the front of the lens to the rods, so that the mechanical forces are balanced.

     The aperture is stepped and cannot be de-clicked. You simply cannot change the aperture while shooting, because it jumps in 1/3-stop increments in a very apparent manner (again, it is electronically controlled). Some lenses have focus breathing, some do not. In my own collection, the XF 23mm f/2 and the 56mm f/1.2 show remarkably no visible breathing; the 35mm f/2 breathes.

     The Fuji X series has clever built-in tools to tweak the image processing. Using F-Log, the dynamic range is claimed to be a little above 11 "real" stops (source: Cine-D). Using the Eterna film simulation with DR400 and the highlight setting at -2, you lose only about half a stop. If you are not set up to shoot log properly and the Eterna look pleases you, this will make your life easier: you can focus on other important things, or save the money and rent more lighting.
  11. In this particular example, it seems to me that there is no technical constraint at stake at all. It is the director's wish to have that look; it is made on purpose. Whether we like it or not is just personal taste. Moreover, it is really hard to tell whether this look serves the story based on the trailer alone.
  12. I would not call modern LEDs "spiky", as even the cheap ones now have a rather continuous spectrum. They are much better than fluorescent lamps (definitely spiky). However, modern white LEDs share the same drawbacks:
     - lack of deep red. By the way, this also means "lack of infra-red", or "lack of heat", which is the very reason for their high efficiency. You cannot have it all.
     - cyan gap: even expensive LEDs find it hard to produce light around 480nm.
     - blue peak: this one can indeed be described as a spike.
     It is a totally different story if one uses RGB LEDs to recreate white. Weekend-warrior DJs do that, but I very much doubt it is used in movie production. Maybe the waxy aspect comes from make-up as well? Or too heavy compression? Or some post-processing in VLC?
  13. The colour depends on the source, but also on your white balance settings. If your sources all have the same colour temperature, there is no reason to gel them (I mean CTO/CTB, not ND): you would lose a bunch of light, whereas setting your WB correctly would render the same result without gels. You would think about gels if you want to match sources of different colour temperatures (like tungsten and the sun). You may want this... or not. Search this forum, there are plenty of examples of people using different colour temperatures and still creating believable sunlight.
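     If you do end up matching sources, gel charts think in mired (1,000,000 / K) rather than kelvin, because a gel applies a roughly constant mired shift. A rough sketch (the +159 mired figure is the commonly quoted value for a full CTO, so treat it as approximate):

        def mired(kelvin):
            return 1_000_000 / kelvin

        def apply_shift(kelvin, shift_mired):
            return 1_000_000 / (mired(kelvin) + shift_mired)

        # Full CTO ~ +159 mired (approximate, varies by manufacturer)
        print(round(apply_shift(5600, 159)))  # daylight ends up near ~2960 K, tungsten-ish
        print(round(apply_shift(6500, 159)))  # ~3200 K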
  14. In every picture there is fog/haze that makes the beams apparent. The use of diffusion is not expected, as it would create the soft light of a cloudy day, not hard sunlight. If what you mean by "light strength" is "power", it does not matter for the hard/soft aspect, but low-power sources will require a wider aperture or a higher ISO on the camera. However, distance does matter: the further the source, the more parallel the beams. If you put your source far, far away, then you need high-power sources that can be focused (like HMI Fresnels) to avoid losing too much light.
  15. There are plenty of free software consoles that will be fine for controlling a few fixtures. Film lighting does not require the same level of direct access to controls as live performance, since you usually do not fire 100+ cues in a single hour. Personally, I like MagicQ, although the learning curve is a bit steep.
  16. It is a bit like asking for a "good brand" of camera. Many brands have products covering various ranges, for different user needs and budgets. Usually one defines needs and budget first, then looks at what could match.
  17. Didn't know Mr Steed and Mrs Peel worked for Kodak.
  18. Different skin tones are expected to sit at different "gray levels". You would not want talent with a very dark skin tone to be exposed at middle gray. You can expose using skin tone as a reference, correcting with a personal scale built from your experience. Something like: "if the talent has a typical Caucasian or Asian tone, expose at middle gray. If she is Afro-American, expose one stop under middle gray. If he is Ethiopian, expose two stops under middle gray. If she has particularly pale skin and red hair, expose one stop above middle gray." That is very similar to using a gray card. It is less reproducible, but it still works if you do not have a gray card.
  19. Hello, here are two pictures taken from The Queen's Gambit. To me, the upper image is fine, the lower one not so much. The skin of the actress looks as if it came from a colourized black-and-white image. What could cause that? And, more importantly, how can it be avoided?
  20. The 50mm is claimed to be close to the human eye for full frame. For "Super 35", the equivalent is 35mm. But... the human eye is not a simple lens, and another question is which criterion one uses to estimate the eye's focal length. If it is the angle of vision, well... The angle covered by either one eye or the other is over 180°. The angle covered by both eyes simultaneously is around 120° (+/-60° around centre). But one does not really "see" in the surrounding angles: only suspicious movements are detected, and then one turns one's eyes to check for any danger. The angle in which one sees colours is around 60° (+/-30°). The angle where accuracy is high enough to read text is only 20° (+/-10°). And maximum accuracy sits within a pinpoint 3-5° angle. Which one of these angles should be used to estimate an equivalent focal length of the human eye?

     Moreover, one cannot isolate the information given by the eyes from the processing performed by the brain. When looking around you, the brain concatenates images from different focus distances and directions, a bit like the panorama mode of some cameras. And it uses long-term memory for object recognition. That way, you know that the Mac mini on the desk has a power button at the rear, although you cannot even see it. You know you should not sit on that aluminium chair that has been sitting in the sun for hours at the café terrace.

     Another argument for the 50mm is not the angle of vision, but the fact that an image shot with a 50mm lens and projected on a screen of typical size will look "real size" to a member of the audience seated at a typical distance. Or, for still photography, a picture printed at a typical size and seen at a typical distance will also look "real size". There is a lot of undefined "typical" here. The claim about the 50mm should not be taken too strictly. It just says that the image will appear more or less "natural". But a 35mm, 85mm or 135mm will also look rather "natural".
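     To put rough numbers on the angle-of-vision side of the comparison, here is a small sketch of the horizontal field of view on full frame from the usual thin-lens geometry (the eye obviously does not work like this):

        import math

        def hfov_deg(focal_mm, sensor_width_mm=36.0):  # 36mm = full-frame width
            return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

        for focal in (20, 35, 50, 85, 135):
            print(f"{focal:>3} mm -> {hfov_deg(focal):.0f} deg horizontal")
        # 50mm gives about 40 deg: far narrower than binocular vision (~120 deg),
        # closer to the ~60 deg zone where we actually see colours.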
  21. I am no expert, but I do not see how low bit depth would be responsible for noise. My understanding is that noise comes from the randomness of the electronic readout circuitry, but also from the inherently random nature of light. When light is low, its randomness becomes more obvious, even with a perfect noise-free sensor and a 1-zillion-bit AD converter. Hence the best solution is to add more light. I would think that low bit depth could even reduce noise: with levels more distant from one another, there are fewer digital levels to choose from, so analogue levels varying slightly due to noise will land on the same coarse digital level. But this can create banding. In the audio domain, "dithering" is the art of adding noise to reduce low-bit-depth artifacts.
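     The audio analogy is easy to reproduce on a 1-D signal (a sketch, assuming numpy is available): quantize a smooth ramp to 3 bits with and without dither, then blur the result the way an eye or ear would average it.

        import numpy as np

        x = np.linspace(0.0, 1.0, 10_000)       # smooth "analogue" ramp
        levels = 2**3 - 1                        # 3-bit quantizer

        hard = np.round(x * levels) / levels     # plain quantization: 8 flat bands
        rng = np.random.default_rng(0)
        dith = np.round(x * levels + rng.uniform(-0.5, 0.5, x.size)) / levels

        def blur(y, n=200):                      # crude local average, the "viewer"
            return np.convolve(y, np.ones(n) / n, mode="same")

        err = lambda y: np.abs(blur(y) - x)[500:-500].max()
        print("worst banding error after blur, plain:   ", round(err(hard), 4))
        print("worst banding error after blur, dithered:", round(err(dith), 4))
        # dither trades visible bands for fine noise that averages back to the ramp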
  22. Well, some fundamental parameters are missing here. A RAW file cannot be seen by the eye; it cannot be displayed. So it cannot "look good" or "look bad". It has to be processed in some way: de-bayering, white balance, and so on. Indeed, there is no such thing as a "RAW image", even if everybody uses the term. There is only raw data, which can be processed immediately in the camera or stored as a RAW file to be processed later. You cannot compare the quality of a RAW and a Jpeg image. You can only compare RAW data processed by a user with the same RAW data processed automatically by a device. And YES, automatic algorithms can do a better job than an inexperienced user. Algorithms may even do as well as an experienced user.

     What's more, in most imaging devices, the automatic algorithm generating the Jpeg is tweaked by the user anyway. White balance is a tweak. Offsetting green/magenta in the white balance is a tweak. Adjusting ISO is a tweak. Choosing a "picture profile" / "film simulation" / whatever the manufacturer calls its tone-shaping algorithm is a tweak. Depending on the camera, adjusting the ISO might change the Jpeg only, or both the raw data and the Jpeg image. On the other hand, the user might process the RAW file externally in Lightroom/Capture One/another app using the same algorithms as in the camera, because the manufacturer made them available. It could even be that the user performs in external software the exact same processing as in the device, eventually getting exactly the same Jpeg image.

     Using Jpeg compression-decompression 31 times in a row does not prove anything. What matters is the compression ratio. Compress an image only once with a very high compression ratio and the artifacts will already be obvious. It is exactly the same for video and audio, as soon as one uses lossy compression. Again, the compression ratio is usually tweakable in a camera, but even the highest ratio is still conservative compared to what will be used to post an image on the internet.

     I really do not see the point of a "RAW vs. Jpeg" battle. These are just tools you choose according to your needs, work, motivation… I am a Jpeg shooter. I do not want to spend my Sunday afternoons processing family portraits. If I were a pro selling pictures, I would probably shoot Jpeg+RAW: I could then use the RAW data for the highest-quality work, heavy correction and so on, and use the Jpeg version to show the client what the picture looks like on the rear screen of the camera.
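     To illustrate the compression-ratio point (a sketch using Pillow; "input.png" is just a placeholder for any source image): many generations at a conservative quality setting degrade far less than a single pass at an aggressive one.

        import io
        from PIL import Image  # Pillow

        img = Image.open("input.png").convert("RGB")  # placeholder source file

        def recompress(im, quality, generations):
            """Run `generations` Jpeg encode/decode cycles at the given quality."""
            for _ in range(generations):
                buf = io.BytesIO()
                im.save(buf, format="JPEG", quality=quality)
                buf.seek(0)
                im = Image.open(buf).convert("RGB")
            return im

        recompress(img, quality=90, generations=31).save("gen31_q90.jpg")  # barely degrades
        recompress(img, quality=15, generations=1).save("gen01_q15.jpg")   # obvious artifacts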
  23. Very interesting! My feeling is:
     - I do not find the soft-box too powerful. The problem is mainly one of direction: if it were lower and angled slightly upward, it might do a better job of selling the book as the reflective source.
     - Yes, you could cut a hole in the lampshade, or hide a light behind it. But in such a scene I would not put the lamp in front of the character; I would rather put it to the side, and then this kind of trick would no longer work.
     - I am not at all bothered by the lampshade being too thick. A slight hot spot would be good too (slight variations are great), but I think that is really a question of taste.
     - The HMIs are too powerful. I really like the idea of creating a flat geometric pattern in the background, but at that power it kills the intimacy. Are these HMIs creating the backlight on the head? You might cheat here too: use lower-power HMIs to light the window from outside, and another cold white source for the back of the head.
  24. I do not understand what you mean by "dominate". The pen is just something an actor can play with, no more.
  25. The shadow on the upper third is due to the bulb having a silver cap on its top. Now I use aluminium gaffer tape instead: LED bulbs do not heat up much, and this allows the cap to be shaped as desired.