Gabriel Devereux

Basic Member
  • Posts

    54
  • Joined

  • Last visited

Profile Information

  • Occupation
    Cinematographer
  • Location
    Australia / United Kingdom

  1. Hi, I'm currently in need of an IR-converted mirrorless camera (preferably Sony) to rent this coming weekend in Australia. I've seen them all around the US, some even CFA-stripped (which would be preferable). It would ideally be sensitive anywhere from 700/750nm to 810/850nm for deep IR recording. The goal is to film 'invisible light' with an IR-emitting source that is far more visible to our cameras than to the eye. Thanks, Gabriel
  2. As said above, every LED is missing spectral content. That's OK as long as you can control which camera systems are used under those limited frequency bands of light. Now I'm going to assume (purely because the alternative would be almost absurd) that you can't tell the studio to shoot on only one particular camera system. Given that, unless you go down the route of, say, the Kino Freestyle panels (which attempt to match the spectral response of numerous camera systems), you may want to give up the versatility of RGB-based LEDs (RGBWW, RGBW etc.) for plain white LEDs (either CW or WW) for the sake of somewhat better colour accuracy. As said above, Aputure and Nanlite are good when looking at their simple single-white-LED fixtures (WW in this instance, if you're going to use large tungsten fixtures for sunlight), so they may be your best option. White-light LEDs, while lacking in versatility, have the most uniform spectral power distribution, emitting a single wider frequency band that will harmonise with more camera systems and allow a larger gamut of colours to be resolved, meaning fewer inorganic skin tones and serious colour shifts. At the end of the day you may want to look again at tungsten space lights. Permanently installing LED fixtures may be a little unwise considering the extreme amount of misinformation and the technology's infancy. I remember an interview with John 'Biggles' Higgins on his work on 1917, where he talked about purchasing LED fixtures and how doing so at the moment is unwise as a long-term investment.
  3. To jump in, and disagree: on film the image is no more 'created' on the set than it is on digital. I've yet to see chemists on set altering the chemical compounds of the emulsion to boost the magenta layer while you're trying to get a shot. If anything, the look of film is largely predicated on the stock you choose, disregarding the post process the emulsion and prints will go through, which is similar to a digital pipeline but arguably with less control. I don't see why you find a film being graded a certain way sad. Is choosing a certain stock for a certain look sad? Is it because the filmmaker has more latitude and can make more refined decisions? Because they can potentially fix an error? The latter two, I believe, are the main reasons the DI gets knocked, but personally I don't mind it. It's interesting how often people throw around the line 'the DP's job is to *insert remark here*', yet isn't our primary job, above all else, to deliver an image? The images being delivered nowadays I would partly consider more exhilarating and exciting than in previous decades, disregarding the actual creation of the image, which as said above is becoming easier, with more latitude and better tools. With that, the bar of quality is getting higher while the required skill is getting lower. It's almost as if cinematography is becoming easy to learn, hard to master. Which is different, but I wouldn't knock it.
  4. As said above, your options are endless; there is no single way to replicate sunlight. Broken down, you'd typically have one powerful harsh source (acting as the sun) and a surrounding softer source acting as the sky (at least that's how I'd look at it). So, with your goal being an outdoor scene at noon (keeping in mind I don't know whether this is an exterior set, an interior set, or actually outdoors... if the latter, you may just want to shoot at noon), I'd imagine you'd want one harsh, powerful overhead source, as hard and as high as possible with a wide beam spread, maybe a 20k skypan (keeping in mind that rigging and operating such a fixture requires a skilled crew), or something of that nature. Then a surrounding softer source: a hypothetical array of space lights, potentially with very mild diffusion below. This, like everything said above, is up to personal taste.
  5. Contrast is a difference in illuminance between two points. To take it out of light and/or radiology: consider the contrast between 'terrible' and 'wonderful'. There is no path between those two words; they simply sit at opposite ends of meaning, like light and dark. By definition, contrast is the state of being strikingly different from something else in juxtaposition or close association. Fall-off, in theory, is about wave propagation in free space. The 'modifiers' and different set-ups David is using as examples do not alter the fact that light is an electromagnetic wave; you cannot modify light into something else. If you diffuse light (scatter it) or refract it, the sources you're most likely using for your example, say a blonde or a Tweenie, are already doing those 'modifications' on a much smaller scale, potentially minute to the eye. There are a dozen great practical examples in this thread. However, at an elemental level light is just electromagnetic radiation, which, to put it crudely, dilutes over distance, and that has no real relationship with a relatively artistic term.
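To make the 'dilutes over distance' point concrete, here is a minimal sketch of the point-source case (the idealisation the inverse-square law assumes; the function names and numbers are my own, purely illustrative):

```python
import math

# Point-source illuminance in free space: E = I / d^2,
# with intensity in candela and distance in metres.
def point_source_illuminance(intensity_cd, distance_m):
    return intensity_cd / distance_m ** 2

# Loss in stops (factors of two) between two distances from the same source.
def stops_lost(intensity_cd, near_m, far_m):
    return math.log2(point_source_illuminance(intensity_cd, near_m)
                     / point_source_illuminance(intensity_cd, far_m))

# Doubling the distance from a point source always costs 2 stops,
# regardless of the fixture's output.
```

Note this only holds for a source small relative to the distance; large soft sources fall off more slowly up close, which is what the rest of this thread is wrestling with.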
  6. Yes and no. Say you want a 4K image, so you shoot at 8K (in S35, meaning photo sites smaller than 3-4µm) to hypothetically crop in and deliver a 4K image. While your total pixel count may be 3840x2160, it won't have the same fidelity on an optical level, and it most likely never will. With resolution and cameras it's always hard to say '8K will never have the fidelity of 4K', as camera technology (especially digital) is constantly advancing. A higher-resolution camera will most likely always suffer from low-capacity photo sites (if kept to S35), though technological advances may eventually negate or ease the issue. However, one technology that isn't in its infancy, and whose properties we can calculate, is optics: we can work from a hypothetical 'perfect lens', with attributes such as elements as transparent as air (somewhat unrealistic), limited only by diffraction. Even that lens has limits to its resolving power that modern cameras (such as the 2.2µm photo sites of the UMP12k) already exceed. Below is a bit taken from another post on the forum (Joshua Cadmium), calculated for a hypothetical perfect lens at MTF(50); it is important to note that it's only at around MTF(74) that an Airy disk is similar in size to the camera's photo site. "For the Blackmagic 12K sensor at green 550nm, that puts MTF(50) at f3.2 and for the Alexa sensors that puts it at f12.1. At red 700nm light, the Blackmagic 12K would hit MTF(50) at f2.5 Alexa would at f9.5 At violet 400nm light, the Blackmagic 12K would hit MTF(50) at f4.4 and the Alexa would at f16.7" To quickly dive into MTF: to put it crudely, you're looking at the way light is channelled through an optical system, specifically how it renders an Airy disk (the point of light the camera measures, formed by light passing through the diaphragm (iris) of the optical system). You quantify it in line pairs per mm the lens can resolve. Here is a quick insert I wrote on another post. To show the point I'm making we need to calculate the line pairs per millimetre for a sensor: take 1mm (1000µm) / pixel pitch (2.2µm) / 2 (a line pair spans two pixels). The UMP12k works out to 227.27 lp/mm; for comparison, the Alexa is 60.61 lp/mm. Now, briefly, consider a modern lens we all know and love, the Master Prime, one of the 'sharpest' lenses one can use on an S35 system: "If I remember correctly a master prime just about gets MTF(74) at 70lp/mm". Sadly I don't remember exactly which focal length, but the point stands. Increasing the pixel count on an S35 system in order to 'crop in' is flawed; one will most likely never achieve the same fidelity unless you use a larger-format camera (increasing total pixel count without compromising photo site size).
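The arithmetic above can be sketched in a few lines. A hedged sketch: the 0.404 factor is the standard point at which an ideal circular aperture's diffraction MTF drops to 50%, and the pixel pitches (2.2µm for the UMP12k, 8.25µm for the Alexa) are the commonly quoted figures, not manufacturer-confirmed values:

```python
# Sensor Nyquist frequency in line pairs per mm:
# 1 mm = 1000 µm, and one line pair spans two pixels.
def nyquist_lp_per_mm(pixel_pitch_um):
    return 1000.0 / pixel_pitch_um / 2.0

# The diffraction MTF of an ideal circular aperture falls to 50% at about
# 0.404 of its cutoff frequency 1/(lambda * N), so the f-number at which a
# given spatial frequency nu reaches MTF(50) is N = 0.404 / (lambda * nu).
def f_number_at_mtf50(lp_per_mm, wavelength_nm):
    wavelength_mm = wavelength_nm * 1e-6
    return 0.404 / (wavelength_mm * lp_per_mm)

ump12k = nyquist_lp_per_mm(2.2)    # ~227.3 lp/mm
alexa = nyquist_lp_per_mm(8.25)    # ~60.6 lp/mm
# At green 550nm this gives roughly f/3.2 for the 12K and f/12.1 for the
# Alexa, matching the figures quoted in the passage above.
```

Stopping down past these f-numbers means even a perfect lens can no longer deliver MTF(50) detail at the sensor's Nyquist frequency, which is the core of the crop-in argument.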
  7. Yes, and no... mostly no. Here is a crude (and somewhat inaccurate) example: https://na.panasonic.com/us/audio-video-solutions/broadcast-cinema-pro-video/dual-native-iso-camera-technology-cinematic-low-light-video-production Different sensors have different analogue outputs, different 'native' analogue amplifications: some 200, 400, 800 and so on, and in this case, for extreme circumstances, the ability to switch to a higher analogue output. It's similar to ARRI's DGA (dual gain architecture), where each photo site has two 'native' analogue outputs, one high and one low; here, though, rather than the two being combined, they serve as two separate outputs. Hence the name 'Dual Native ISO': each photo site has a dual 'native' analogue output. You are right that there is most likely analogue noise mitigation (I believe there is in most contemporary Sony cameras, and most cameras nowadays), and I will add that Panasonic's graph, which implies noise only begins at digital amplification, is wrong. However, it does theoretically let you capture a certain amount of latitude above and below middle grey (similar to shooting at a camera's regular native, most commonly 800 ISO) at a much higher sensitivity, without any digital amplification. I think.
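A toy numerical model of why a second, higher analogue gain helps in low light. Every number here is invented for illustration, and real sensor noise models are far more involved; the point is only that gain applied before the fixed downstream noise floor improves SNR, where digital amplification after it cannot:

```python
import math

# SNR of a photo site read out at a given analogue conversion gain, with a
# fixed downstream (post-gain) noise floor. Shot noise scales with
# sqrt(photons) and is amplified by the gain; downstream noise is not.
def snr(photons, gain, downstream_noise):
    signal = photons * gain
    shot_noise = math.sqrt(photons) * gain
    return signal / math.sqrt(shot_noise ** 2 + downstream_noise ** 2)

# In low light the high-gain path lifts the signal above the fixed
# downstream floor before digitisation, so it yields a cleaner image:
low_gain_path = snr(100, gain=1, downstream_noise=10)
high_gain_path = snr(100, gain=4, downstream_noise=10)
```

In bright light the shot-noise term dominates and the two paths converge, which is why the second 'native' setting is only an advantage at the sensitivities it was designed for.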
  8. The image fidelity is quite poor. This does bring up an interesting question about shooting wide to crop in. I remember older RED advertisements: 'Shoot in a wide! Crop in, multiple shots in one!' and so on. But looking at RED's image fidelity, and at the resolving power of contemporary optics, or even perfect optics, I doubt we'd ever be able to seriously crop into an image without impacting fidelity, unless we all started to shoot on incredibly large formats... oh wait... Already, the UMP12k's 2.2 micrometer photo sites are far too small for any contemporary lens, or even an 'impossible' perfect lens, to resolve (at MTF(70+)) at the f-stop most likely required when framing for multiple shots. I do wonder whether, to make shooting multiple shots with one camera viable, broadcast companies will start moving towards larger formats as well.
  9. I've had relative luck with, and have faith in, the Kino Freestyle line. However, you're asking a lot from any fixture. Most, if not all, LEDs can't resolve every LEE/Rosco/whatever filter colour for every camera. Even if you input the information accurately, the discontinuous spectrum of all RGB LED fixtures means the emitted frequency bands, while perceived as accurate by our photopic vision or by one camera system, may not be on another (due to different camera spectral response curves), even if you measured with a colorimeter or the like to match the two. You're matching both to one constant when in reality the 'detector', to put it crudely, is also a variable.
  10. I'm not sure a true formula or calculation will ever arise and be user-friendly or simple. I keep getting stumped (admittedly I'm no mathematician or physicist, in fact far from it). Even once you understand how to calculate light fall-off from a large source, assuming the diffuser's or bounce's intensity is uniform (which it often isn't, given the nature of our lighting instruments), all that lets you do is calculate the fall-off of light from that source (bounce or diffuser). Attempting to calculate the fall-off of the bounce source when the only luminance information you have is the source lighting the bounce adds variables upon variables, and sadly not ones you can easily dismiss for an approximation, of which there are many as well.
  11. Misinformation is a bit harsh. Disregarding minor discrepancies, of which there are MANY, the following is a tested approximation. I found that with a 6x6' frame of Ultrabounce 6ft from the subject (distance equal to source size), and the frame evenly lit by a source (again another variable, though I found it gave only minor discrepancies) placed 3-4' in front of the frame (between the subject and the frame), the light loss is around 2 1/3 to 2 2/3 stops relative to the fixture at the combined distance from source to frame and frame to subject. As you can see, even here the light loss isn't exactly the 'guesstimate' of 2.5 stops above; HOWEVER, I find it a good guesstimate. I will openly disclaim that there are a sheer number of variables. In my original post above I disclaimed them and also suggested a course of action that could give a more accurate answer, but having tried going down that path myself, it's a long one. The above helps me as a guesstimate: an estimate based on a mixture of guesswork and calculation.
  12. Emphasis on the word 'guesstimate'. Of course you're correct, but I find it helps better than nothing.
  13. As said above, there are too many variables to make an accurate calculation; however, I did once go down the path of trying to find a way to calculate light fall-off from a soft source. I asked several cinematographers and none knew an exact answer (some recommended, with Ultrabounce, just cutting 2 1/2 stops from your original output; this isn't accurate when you're measuring close to the bounce, but at a distance it's a reasonable guesstimate). I went ahead and asked on a physics forum and got this answer, which was about shooting light through diffusion: "If your diffuser is good enough that the light from any portion of it close to uniform from any direction, and the light from all portions is equal in intensity, then the total illumination on your point is proportional to the solid angle from the point that intercepts the light. At large distances (small angles) this will become equal to inverse square. At shorter distances, the falloff is slower. Very close to the source, the intensity becomes nearly uniform with distance. The actual solid angle formula is not one that I tend to use much, but there are some formulas and references on Example Formula's" I got this response at a busy time (the original question related to an upcoming job), so I never truly followed it up, but I plan to. I do think a somewhat accurate formula for light fall-off from a large source would be worth having, especially considering: "the inverse square rule is often still a useful approximation; when the size of the light source is less than one-fifth of the distance to the subject, the calculation error is less than 1%"
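The physics-forum answer above can be sketched numerically. This assumes an idealised uniform Lambertian disk standing in for an evenly lit frame (a big simplification of a real bounce), for which the on-axis illuminance is proportional to r²/(r² + d²): nearly constant very close to the source, converging to inverse square once the distance is several times the source size:

```python
import math

# On-axis illuminance from a uniform Lambertian disk of radius r at
# distance d, normalised to 1 at the source surface: E = r^2 / (r^2 + d^2).
def disk_illuminance(radius, distance):
    return radius ** 2 / (radius ** 2 + distance ** 2)

# Stops lost moving from distance d1 to d2 on axis.
def stops_lost(radius, d1, d2):
    return math.log2(disk_illuminance(radius, d1) / disk_illuminance(radius, d2))

# Far from the source, doubling distance costs ~2 stops (inverse square);
# very close to it, the fall-off is far gentler.
far = stops_lost(1.0, 100, 200)    # ~2.0 stops
near = stops_lost(1.0, 0.1, 0.2)   # ~0.04 stops
```

This model also reproduces the quoted one-fifth rule: at d = 10r (source diameter one fifth of the distance) the disk value 1/101 differs from the pure inverse-square estimate 1/100 by about 1%.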
  14. Out of curiosity regarding the RGB clipping: how would one fix it? From my understanding, white clipping stems from all RGBG photo sites being fully saturated (full, unable to register more incoming light) and therefore being interpreted as white. Surely the only way to fix it would be to increase the physical size of the photo sites (something camera manufacturers are doing the opposite of).