Nicolas POISSON

Basic Member
  • Posts: 88
  • Joined
  • Last visited
Everything posted by Nicolas POISSON

  1. If you are to use it in a professional context, you should replace the plug with a rubber one, as it might be required by local regulations. It could even end up cheaper than a plastic plug if you look in a "pro" shop.
  2. Also, those spray bottles may need some practice to get an even coverage. Lighting can make uneven coverage more visible. And the ones I know are not cheap.
  3. Some LEDs are synced to the mains frequency. It might be possible to get rid of the flickering with an adequate frame rate and shutter speed. Some LEDs are controlled by PWM dimmers, which can run at fairly low frequencies of almost any value. I have encountered 216 Hz, 490 Hz, 980 Hz... The PWM frequency may also vary slightly from one sample to another (one at 981 Hz, the other at 984 Hz). That might not cause flickering, but bands appear on the image. There is almost nothing to do in such cases, except replacing the dimmer with a high-frequency one. The higher the frame rate you want to shoot, the higher the PWM frequency should be. For 24, 25... up to 60 fps, a PWM frequency of 2 kHz is an absolute minimum, and 5-8 kHz is safer. If you want to shoot high frame rates, a PWM frequency of 25 kHz or higher is advisable.
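To illustrate the rule of thumb above, here is a small sketch (my own simplification, not a standard formula): count how many PWM cycles fit into one exposure. The more complete cycles per exposure, the less a slight frequency mismatch will show up as bands.

```python
# Rough illustration: how many PWM cycles fit into one exposure.
# Assumption: more complete cycles per exposure means banding from a
# slight PWM frequency mismatch is averaged out better.

def cycles_per_exposure(pwm_hz, shutter_s):
    """Number of PWM cycles captured during one exposure."""
    return pwm_hz * shutter_s

# 24 fps with a 180-degree shutter -> exposure of 1/48 s
for pwm in (216, 980, 2000, 8000, 25000):
    n = cycles_per_exposure(pwm, 1 / 48)
    print(f"{pwm:>5} Hz PWM at 1/48 s: {n:7.1f} cycles per exposure")
```

At 216 Hz you only get about 4.5 cycles per exposure, which is why such dimmers band so badly, while 25 kHz gives over 500.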
  4. It is always hard to define a trend while it is happening. Low contrast, low saturation, shallow depth of field? But that may apply more to YouTube / corporate / weekend-warrior indie filmmaking. Maybe the increasing number of small displays used for viewing (smartphones) and decreasing budgets both lead to more close-ups?
  5. I do not have enough experience to give advice. I often search for examples I appreciate, to build a library of reference pictures. I think the following pictures taken from "Loki" (the TV series) and "Paterson" illustrate this. It is interesting to load them into an image processing program like Photoshop or Gimp and to play with the curve tool to simulate a bad monitor (crushed blacks, incorrect gamma...). One can see how they remain "readable" and robust to bad viewing conditions.
  6. In live sound, we like to say that drummers are frustrated musicians.
  7. You can set up the audio level before any real recording by looking at the VU meters on the camera's rear screen. Then you activate the limiters (if any) and just use the camera facing you, forgetting about sound and trusting your previous set-up. It may require a bit of trial and error, as you might speak louder during real takes (3 to 6 dB more is common). Big plus: no need to sync audio afterwards. Even a $100 shotgun mic above your head on a stand will give much better sound than a lavalier. I guess this is how most solo YouTubers with decent sound do it. Nobody can think about their speech while constantly checking audio levels at the same time.
  8. The phone should switch to the lav mic as soon as it is plugged in. The fact that your devices do not recognize the microphone is enough to explain why it does not work. Smartphone and tablet jacks are usually 4-pin TRRS (Tip Ring Ring Sleeve): tip (left channel) and first ring (right channel) for stereo audio out, second ring for mono audio in, and the sleeve is the common ground. Maybe you are not using the proper cable. If your lav uses a 3-pin TRS minijack, you probably need a 3-pin to 4-pin adapter. Smartphones use "intelligent" detection: if nothing is plugged in, there is no need for audio out or audio in. If a TRS jack is plugged in, the 2nd ring and the sleeve are shorted, so no need for audio in (the phone guesses it is a simple pair of headphones). If the 2nd ring and the sleeve are not shorted (impedance different from zero), there must be an external mic plugged in, so audio in is activated and replaces the internal microphones. Other than that, smartphones and iPads are not meant to be pro-grade audio recorders, especially with the internal microphones, which face the challenge of getting decent intelligibility without being properly placed, while rejecting ambient noise. Using lavs or external mini-shotgun microphones should help a lot though, and may meet your needs.
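The detection logic described above can be sketched as a toy model (my simplification; the 50-ohm threshold is an invented, illustrative value): the phone checks the impedance between the 2nd ring and the sleeve.

```python
# Toy model of the jack-detection logic: near-zero impedance between the
# 2nd ring and the sleeve means a TRS plug is shorting those contacts
# (headphones only); a finite impedance means an external mic is wired
# to the 2nd ring. The 50-ohm threshold is an assumption for illustration.

def detect_accessory(ring2_to_sleeve_ohms):
    if ring2_to_sleeve_ohms < 50:
        return "headphones (TRS): keep internal mic"
    return "headset/lav (TRRS): switch to external mic"

print(detect_accessory(0))      # TRS plug shorts the contacts
print(detect_accessory(1600))   # a typical electret mic presents some impedance
```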
  9. Comparison is always a bit complex since there are two parameters: the overall luminous flux (in lumens) and the illuminance at the centre of the beam (in lux). A very narrow beam can reach an impressive value in lux, but will be unusable because it is too narrow. Even when two beams have the same angle, that does not mean the falloffs are the same. A very approximate rule of thumb would be to consider that an LED source is twice as efficient as an HMI with equivalent angle and optics (like a Fresnel lens). A 600 W LED would more or less correspond to a 1200 W HMI, and a 1200 W LED would match a 2500 W HMI. That may be a little optimistic, though. I doubt a 1200 W LED could compete with a 5k HMI. Do not get too impressed by the marketing trend that sells LED technology as "THE ONE" with high efficiency. HMI also has high efficiency compared to incandescent, and has been used for decades now. By the way, the Aputure does not "output" 1200 W, it "draws" 1200 W. And all HMIs are "non-LED".
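The "LED is roughly twice as efficient as HMI" rule of thumb above can be written as a quick converter. Purely indicative, not a measurement; the ratio of 2 is the approximation from the post, and as noted it gets optimistic at higher wattages.

```python
# Rule-of-thumb converter: LED wattage -> approximate HMI wattage it
# matches, assuming equivalent beam angle and optics. The factor of 2
# is the rough approximation discussed above, nothing more.

def hmi_equivalent_watts(led_watts, efficiency_ratio=2.0):
    return led_watts * efficiency_ratio

for led in (300, 600, 1200):
    print(f"{led} W LED ~ {hmi_equivalent_watts(led):.0f} W HMI")
```

Note that the 1200 W LED case comes out at 2400 W, close to the 2500 W HMI class mentioned above, but nowhere near a 5k.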
  10. I am afraid there is no strict definition. In the theatre and concert world, "upstage lighting" usually means that the source is mainly high above the talent, and a little behind. Some lighting technicians consider that it must definitely be behind, and use a separate term for fully (or close to) vertical: drop lights. Others may consider that fully vertical, or even slightly in front of the talent, is still "upstage lighting" as long as it is high enough. What's more, maybe the lighting plot was designed with a great height in mind, and the reality of the performance in the local theatre next week is a stage with a grid 3 m above it that forces the "upstage lighting" to be more horizontal than vertical. But the lighting designer still talks of "upstage lighting". The common aspect in any case is that it creates a strong edge light. I have never heard any technician calling side lighting "upstage lighting", although it also creates edge light. "Back lighting" means the source is behind, but not necessarily high above. It might be on the ground. How do you know what the guy means by "upstage lighting"? Ask him.
  11. There are relationships between hardware limitations and contrast. In the old days of affordable DV cameras (Sony VX1000...), the usable dynamic range was about 5 stops. There was no choice but to get a high-contrast image. It was a concern for many videographers, who looked jealously at the guys from the film world. Nowadays, any DSLR below $1000 allows for 10 stops or more of dynamic range. There is MORE usable dynamic range than ever before, and it keeps increasing. It is interesting to compare several generations: https://www.photonstophotos.net/Charts/PDR.htm#Panasonic Lumix DC-GH5,Panasonic Lumix DMC-GH2,Panasonic Lumix DMC-GH3,Panasonic Lumix DMC-GH4 What you consider to be "like film" seems to be low contrast. Although not very scientific and highly debatable, it is a very common point of view (together with shallow depth of field). It is what so many video guys dreamt of before the Canon 5DmkII made it possible. Thus:
     - Is there a technical reason for increasing contrast? NO! It is the exact contrary.
     - Is contrast actually increasing? Well, that is your feeling, but "clearly" and "evident" are no proof. And does it matter anyway?
     - More important: can you get low contrast / high dynamic range (not HDR) with a recent and affordable camera? SURE! Plenty of people have done it for years and keep doing it every day. Usually, the more recent devices will give you slightly more DR than the older ones.
  12. I do not think the fashion is that clear. As I said in my first post, one can also find many low-contrast, low-saturation videos on YouTube, which I think many would consider THE cinematic look. I would not have more data to support my statement than you, however. This is why I think it is more a question of what you do not like - and do notice - than a real trend. The only clear fashions I can think of might be super shallow depth of field or open mouths on the thumbnails. If the goal of this topic is more to talk about aesthetic choices than camera-related constraints, then it is hard to say anything without knowing the intent of each video.
  13. I do not know Canon cameras, but my Fuji must have something like 10 built-in looks, including high contrast + high saturation (Velvia), low contrast + low saturation (Eterna), and several other looks in between. These are all bases that I can tweak further in the shadows, highlights, and saturation. The dynamic range can be as low as 5 stops or close to 11 stops. Some looks allow for 64 times more DR than others on the same single camera! There is the same kind of system on the Sony A7 series (Picture Profiles), and I think on almost any hybrid/DSLR camera released since the 5DmkII. It is even more true on cameras that can record RAW or when using log: you are no longer limited to the built-in camera looks. Again, I do not think it is possible to define any generic trend for a camera or a brand.
  14. As always, it is hard to link the final look of any YouTube video - which will often be heavily corrected in post - to any generic topic like "film vs. digital" or "Sony vs. Canon". I am not even sure there is a real (statistically proven) trend of high-contrast video. One could also point to all these low-contrast, low-saturation slow-motion videos that pop up continuously. Maybe your feeling is just you not liking high-contrast videos with no shadow detail, and you notice/remember these more than any others. By the way, high contrast and high saturation have always been a way to make an image more impressive. Postcard printers have known this for ages.
  15. Analogue gain is not a win-win. I will try to explain in my own words (not very scientific). Take a camera with analogue gain that can capture 12 stops of DR at a base ISO of 100. That camera will only be able to capture 11 stops at ISO 200, 10 stops at ISO 400, and so on. I am assuming the ADC (not DAC, sorry) clipping threshold fits the full well capacity (FWC) of the sensor at the base ISO of 100 (say the manufacturer has set the analogue gain at ISO 100 for that purpose). If you double the gain, the ADC clipping threshold now corresponds to half the FWC of the sensor. If you double the gain one more time, the ADC clip is 1/4 of the FWC. The DR of the sensor never changes, but the overall DR of the chain [sensor + gain + ADC] halves each time you double the ISO. You lose DR when raising the ISO on such a camera, even if you shoot RAW. There are some tricks with dual ISO and the like, but the "ISO x 2 = DR / 2" rule is the trend you can see in almost all of Bill Claff's measurements, since most DSLRs use analogue gain: https://www.photonstophotos.net/Charts/PDR.htm#Canon EOS 5D Mark IV,FujiFilm X-T4,Panasonic Lumix DC-GH5,Sony ILCE-7M2 If you set your analogue-gain camera at ISO 800 and then push/pull exposure with LUTs, yes, this is similar to digital gain (the LUT does it). But you start with a DR reduced by 3 stops in the highlights compared to ISO 100, at 9 stops. If you were using a camera with digital gain, the overall DR would remain unchanged at 12 stops. However, one could define two DRs:
     - the ratio between the highest luminance the device could capture and the noise. Let's call it "peak DR";
     - the ratio between the actual average luminance of your scene and the same noise. Let's call it "avg DR".
     On analogue-gain cameras, peak DR halves as ISO doubles. On digital-gain cameras, peak DR remains unchanged whatever the ISO. However, the "greatest DR you could capture" is not the "actual DR of your scene". Imagine your highlights are 3 stops above the average luminance of your scene. Having 5 stops of DR above middle grey is useless: the two highest stops are unused. Yes, the camera could capture more DR, but your scene does not have more DR to be captured. How avg DR evolves depends on the main source of noise. If the noise mainly comes from the camera electronics, then analogue gain performs better than digital gain: raising the ISO improves the avg DR on analogue cameras, but does nothing on digital-gain cameras. This is because the signal-to-noise ratio of an analogue amplifier improves at higher gain. That is one good reason to use analogue gain (the other one being the ADC quantization step). If the noise mainly comes from the inherent randomness of light (photon noise), typically in very low light, then the avg DR is the same at any ISO, whether analogue or digital. The only way to improve avg DR is then to fill the shadows with more light in the real world. This is all very simplified. Cameras use de-noising techniques, which can artificially improve DR and may perform differently depending on analogue or digital gain. I do not know.
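The "ISO x 2 = DR / 2" trend for peak DR can be sketched numerically (the 12-stop base figure is the illustrative number from the post, not a measurement of any real camera):

```python
# Sketch of peak DR vs ISO for an analogue-gain camera, versus a purely
# digital-gain camera. Base figures (ISO 100, 12 stops) are illustrative.
import math

BASE_ISO, BASE_DR_STOPS = 100, 12

def peak_dr_analogue(iso):
    """Each doubling of ISO costs one stop of peak DR."""
    return BASE_DR_STOPS - math.log2(iso / BASE_ISO)

def peak_dr_digital(iso):
    """Digital gain leaves peak DR unchanged."""
    return BASE_DR_STOPS

for iso in (100, 200, 400, 800):
    print(f"ISO {iso:>4}: analogue {peak_dr_analogue(iso):4.1f} stops, "
          f"digital {peak_dr_digital(iso)} stops")
```

This reproduces the example above: at ISO 800 the analogue-gain chain is down to 9 stops, while the digital-gain chain stays at 12.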
  16. My understanding is that the A7 series has "analogue ISO", using an amplifier between the sensor and the ADC, like most DSLRs. So the philosophy of changing the ISO to modify the allocation of stops above/below middle grey - keeping the same overall DR - does not apply. With analogue ISO, you basically lose one stop of overall DR each time you raise the ISO by one stop. You cannot raise the ISO to protect the highlights either. The culprit is not the sensor, but the ADC. Some DSLRs allow you to trade one or two stops of analogue amplification for the equivalent digital amplification. Over that very limited range, they do work like a "regular" cinema camera. I do not know about the A7S2. There are advantages to analogue ISO (otherwise nobody would use it): better low-light performance and no risk of banding due to the quantization step being too coarse.
  17. I am really not an expert on that topic, but I am interested. Maybe you have already read it, but just in case, you will find plenty of information in this document: https://www.arri.com/resource/blob/31918/66f56e6abb6e5b6553929edf9aa7483e/2017-03-alexa-logc-curve-in-vfx-data.pdf I do not know where the "1638 tones" value comes from. My understanding is that LogC maps 18% grey (the 294th step in 16-bit linear at ISO 800) to the relative value 0.391, which should be 400 in 10 bit and 1601 in 12 bit. This does not mean there are 400 or 1601 "real" tones below. After all, the LogC curve does not start at zero: pitch black is mapped to the relative value 0.0928 (at any ISO), which would be 95/1023 or 380/4095. If I understand correctly, the 294 lower tones are mapped to 95-400 in LogC 10 bit and to 380-1601 in 12 bit. So you have more steps in the output than were digitized by the ADC, which does not improve quantization, but at least preserves it (it seems almost 1:1 in 10 bit). The drawback of "more quantization steps than needed in the lower tones" is "fewer quantization steps than possible in the higher tones". But you still have 623/1023 or 2494/4095 steps for the higher part (well, a bit less actually, since clip might not be at 100%). Maybe still more than enough?
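The arithmetic above is easy to check: scale the LogC relative values to 10-bit and 12-bit code values (full-range scaling by 2^bits - 1, which is my assumption here; see the ARRI document for the exact encoding).

```python
# Map LogC relative values (0..1) to n-bit code values, assuming
# full-range scaling. The two relative values come from the post:
# 0.391 = 18% grey at EI 800, 0.0928 = pitch black.

def code_value(relative, bits):
    return round(relative * (2**bits - 1))

mid_gray = 0.391
black = 0.0928

for rel in (black, mid_gray):
    print(f"relative {rel}: 10-bit {code_value(rel, 10)}, "
          f"12-bit {code_value(rel, 12)}")
```

This reproduces the values quoted above: 95 and 400 in 10 bit, 380 and 1601 in 12 bit.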
  18. There is no direct relationship between stops of DR and bits. But yes, 8 bit explains neither crushed shadows nor blown highlights. However, I guess what Tyler and Mark really meant is not truly about bits, but rather "who knows how that video has been processed?". The Internet is full of images with contrast pushed beyond reason. By the way, if you are viewing this YouTube video on a software-calibrated monitor on a Windows PC, the calibration does not fully apply to videos: the gamut is not compensated. Again, this does not explain crushed shadows and blown highlights, but colours may be off depending on the gamut of your monitor.
  19. A "zip" compression is a truly lossless compression. It is a rather simple and fast algorithm compared to lossy compression. Virtually every lossy compression algorithm performs an additional "zip"-type (entropy coding) stage at the end of the process, as it has no drawbacks. Hence you will not gain anything significant by trying to "zip" JPEG images or h264 video: the lossless compression has already been performed together with the lossy one. For the same reason, you will gain almost nothing by zipping multiple compressed images/videos. But it is an easy way to pack them.
  20. Oops, I mistook the direction of correction for the big red object, but you get the idea.
  21. But the light going to the camera depends on the colour of the object that reflects it. If a big red object is lit by a 5600 K source, it will reflect red light to the camera. You would set the white balance to 5600 K so that the object shows the correct red. You would probably not want to raise the white balance up to 10,000 K so that the big red object appears "neutral grey". Having the same coloured object over a large part of the frame is the typical situation where automatic white balance can be completely off. Measuring the light that hits the scene, as you did initially, seems the right way to me. The mystery remains unsolved. I was thinking of coloured specular light that could reach the Sekonic but not the camera. But this would be a bit strange since I would expect mainly diffuse reflection in your scene.
  22. Doing the maths, the 1.85 format cropped into Full HD should be displayed as 1920x1038, with 21-pixel black bars at the top and bottom. I read somewhere that the resolution must be a multiple of 4, although I do not know why (maybe it is linked to the macro-blocks of the compression algorithms?). Starting from 1038, the closest multiples of 4 are 1036 and 1040, which leads to black bars of 20 or 22 pixels in height. Maybe the variation you see is just different people choosing a different closest multiple? At double the resolution, the problem no longer exists: 2076 is a multiple of 4, so the exact bar height of 42 is acceptable.
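The letterbox arithmetic above, spelled out (rounding the active picture height to the nearest integer; the multiple-of-4 adjustment would come on top of this):

```python
# Letterbox maths for a wide aspect ratio inside a 16:9 frame:
# active picture height = frame width / aspect ratio,
# bar height = (frame height - active height) / 2.

def letterbox(width, frame_height, aspect=1.85):
    active = round(width / aspect)
    bar = (frame_height - active) / 2
    return active, bar

print(letterbox(1920, 1080))  # -> (1038, 21.0)
print(letterbox(3840, 2160))  # -> (2076, 42.0)
```

At UHD the rounded active height (2076) is already a multiple of 4, which is why the ambiguity disappears there.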
  23. Maybe: the Sekonic is measuring the colour temperature of the light coming from all directions, including reflections off side objects that are not neutral grey. The camera's meter is more like a "spot" meter, measuring only what is reflected from the scene to the camera.
  24. Not sure, but this might help. I only have a PC. By default, Handbrake automatically tries to change the resolution and crop, based on its guess of the "true" resolution of the video after removing the surrounding black. It may fail if the content has a lot of black areas. Even when it detects correctly, you might prefer to keep the black surround. You have to manually revert Handbrake's changes. Have you checked that? Also, by default Handbrake is configured to deliver 30 fps whatever the input frame rate is. You have to set it to "same as source".
  25. I am a bit confused: I would think a "convex" mirror like the one in the first post would broaden the source, making a distant source (the sun...) appear like a nearby point source. To make a near source appear like a very distant one, I would think one would use a concave mirror, like a parabolic one with the actual point source at its focal length. Is there something I did not get, or do we just interpret convex/concave differently?