
Gabriel Devereux

Basic Member
  • Posts

    58
  • Joined

  • Last visited

Profile Information

  • Occupation
    Cinematographer
  • Location
    Australia / United Kingdom

  1. Correct me if I'm wrong - the Bayer CFA has twice the number of green photosites because our photopic vision is most sensitive around the green band (~550 nm). With that, let's look at sensitivity and data collection from a Bayer CFA in an incredibly simplistic sense, and at colour science and colour rendition in a similarly simplistic fashion. My eye perceives incoming light as a certain amount of energy/photons in each RGB frequency band. Say this theoretical ray of light has a value of 20 in the red band, 20 in the green band and 20 in the blue band (emphasising that this is an extreme hypothetical). For this example, attributes such as the density and efficiency of each filter in the CFA pattern will be ignored: each filter passes its band evenly and as efficiently as possible, and the same goes for the QE of the photosites below. The data collected from the imager would then be R20, G20, B20, G20, and after demosaicing (I'm tempted to attempt a calculation using a relatively basic bicubic interpolation algorithm, but I think it'd be unnecessary with such simplistic numbers) the general gist is that the green channel collects twice the signal of the others - the hypothetical pixel would land around R20 G40 B20. The hope is that my photopic response, with its increased sensitivity in the same frequency band, would do something similar: if I took the same ray of light and somehow got an RGB readout from my eye like a camera, the values would be comparable. Cameras are weird. The goal, from my understanding, is not to record a 'true' RGB value from incoming light but to match, or come close to, what our photopic vision would make of the same information.
With this, I do wonder what kind of nifty amplification has been applied to the green channel on the UMP12k and other RGBW CFA cameras in an attempt to match this. I do believe that if they captured all RGB values at 'full resolution', sampling all three at the same sensitivity, the image produced would look a little weird.
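A toy sketch of the hypothetical above (the numbers and the tiny dictionary "pipeline" are my own illustration, not any camera's actual processing). One nuance worth making explicit: a typical demosaic averages the two green samples rather than summing them, so the second green site mainly buys signal-to-noise rather than a literally doubled code value:

```python
# Toy 2x2 Bayer (RGGB) tile under the post's hypothetical flat light:
# every photosite collects 20 units within its own band. (Illustrative
# numbers only - no real camera pipeline is modelled here.)
tile = {"R": [20], "G": [20, 20], "B": [20]}

# Total signal gathered per channel: green collects twice as much simply
# because it has two photosites in the tile.
collected = {ch: sum(values) for ch, values in tile.items()}

# A typical demosaic averages the green samples rather than summing them,
# so the reconstructed pixel stays ~20 per channel; the second green site
# buys signal-to-noise, not a literally doubled code value.
pixel = {ch: sum(values) / len(values) for ch, values in tile.items()}

print(collected)  # {'R': 20, 'G': 40, 'B': 20}
print(pixel)      # {'R': 20.0, 'G': 20.0, 'B': 20.0}
```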
  2. To expand and question, now that we've had time to somewhat dissect the image and its pros and cons: I find the idea of a sensor out-resolving a lens flawed. As you say above, the photosite pitch - the total pixel count crammed onto the sensor - exceeds the resolving power of nearly all optical systems, especially in realistic situations, even with a hypothetical perfect lens. The main issues argued above are low-capacity photosites that are easily saturated, total camera latitude, and so on. What you say about noise makes logical sense - I can understand, to a certain extent, that the single white/clear photosite in each 'matrix' acts as a higher-gain channel from which to interpolate shadow information and improve dynamic range, but the compromises are endless (yes, all imaging and optical systems are a series of compromises). However, with the rather sudden and rigid highlight roll-off that is especially apparent with RGB clipping, the lack of workflow infrastructure around the total resolution, and the fact that downsampling is a must to even compete on image fidelity - so they haven't really made a 12K camera - the selling point seems redundant. To return to my initial statement about a sensor out-resolving a lens: maybe I'm being a fool, but from my rather limited understanding of imaging-sensor engineering, the ARRI, with its 8.25 µm pixel pitch, is in near-perfect harmony with the majority of its optical systems, a perfect example being the Master Prime. I believe it achieves MTF(74) at 70 lp/mm, resolving an Airy disk of around the same size as the photosite. It's relatively balanced, so why compromise further on photosite size for pixel count?
- It almost seems as if they're attempting to reproduce the qualities of multiple channels (with different gains) from each photosite, but in a two-dimensional sense, which compromises, as you say above, the total area to sample from. Realistically this results in a lower-fidelity image (with respect to SNR and other aberrations), even downsampled to 8K at a 1:1 pixel ratio with its competitors.
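As a rough sanity check on the 'harmony' point, the first-minimum diameter of the Airy disk for a diffraction-limited lens is d = 2.44·λ·N. A small sketch - the 8.25 µm and 2.2 µm pitches are taken from the discussion; green light is assumed and lens aberrations are ignored:

```python
def airy_diameter_um(f_number, wavelength_um=0.55):
    """Diameter of the Airy disk's first minimum: d = 2.44 * lambda * N."""
    return 2.44 * wavelength_um * f_number

def stop_where_airy_equals_pitch(pitch_um, wavelength_um=0.55):
    """f-number at which the Airy disk diameter matches the photosite pitch."""
    return pitch_um / (2.44 * wavelength_um)

# Pitches taken from the discussion: ~8.25 um (Alexa) vs 2.2 um (UMP12k),
# evaluated at green 550 nm.
for name, pitch in [("Alexa 8.25 um", 8.25), ("UMP12k 2.2 um", 2.2)]:
    n = stop_where_airy_equals_pitch(pitch)
    print(f"{name}: Airy disk spans one photosite at ~f/{n:.1f}")
```

On these assumptions the Airy disk already fills a 2.2 µm photosite at roughly f/1.6, versus around f/6.1 for an 8.25 µm photosite - which is the balance being described.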
  3. Just got off a shoot with 13 or so FX6s, an FX9 and a few A7s. I needed all cameras to output a logarithmic signal for me to monitor, while the rest of VV needed to see a regular standard Rec 709 signal (about 5 or so monitors, 2 on switchers). I personally went with the switchers running through inline LUT boxes (Teradek COLORs), while the rest of the monitors were looped through BM 12G HDMI-to-SDI converters, a few on a Teranex, and occasionally an UltraStudio rack. In your scenario you can load custom LUTs onto the 12G converters and, I believe, the 6G and maybe the 4G converters as well (though I'm unsure); they have a small form factor and are relatively cheap. Teradek COLORs are good when partnered with a live-grading solution, but in your case I don't really believe that's necessary. I did end up using an older BM 3G I/O box - it could take multiple channels and worked with Pomfort, so in my situation it was perfect - but for something small, the newer 6G-12G converters are worth looking into. Apologies for the terrible wording; written on a phone.
  4. As long as you're willing to pack accordingly. A few times around Australia I flew with a shot bag for my monitor stand. A friend of mine would travel with one as well while we waited for freight or for the truck arriving with the rest of our gear. I always kept it in one of the Pelicans, though; I'm not sure travelling with them loose would be the greatest idea. It'd be an interesting check-in experience.
  5. Hi, I'm currently in need of an IR-converted mirrorless camera (preferably Sony) to rent this coming weekend in Australia. I've seen them all around the US, some even CFA-stripped (which would be preferable). It would preferably be sensitive anywhere from 700/750 nm up to 810-850 nm for deep IR recording. The goal is to film 'invisible light' with an IR-emitting source that is far more visible to our cameras than to the eye. Thanks, Gabriel
  6. As said above, everything LED is missing spectral content - and that's OK as long as you can control the camera systems being used under said limited frequency bands of light. Now, I'm going to assume (purely because doing otherwise would be almost absurd) that you can't tell the studio to shoot on only one particular camera system. With this, unless you go down the route of, say, Kino Freestyle panels (which attempt to match the spectral response of numerous camera systems), you may want to give up the versatility of RGB-based LEDs (RGBWW, RGBW, etc.) for plain white LEDs (either CW or WW) for the sake of somewhat accurate colour. As said above, Aputure and Nanlite are good for simple single-white-LED fixtures (WW in this instance, if you're going to use large tungsten fixtures for sunlight), so they may be your best option. White-light LEDs, while lacking versatility, have the most uniform spectral power distribution, emitting a single wider frequency band that harmonises with more camera systems and allows a larger gamut of colours to be resolved - meaning fewer inorganic skin tones and serious colour shifts. At the end of the day you may want to look again at tungsten space lights. Permanently installing LED fixtures may be a little unwise considering the extreme amount of misinformation and the technology's infancy. I remember an interview with John Higgins 'Biggles' about his work on 1917, in which he talked about purchasing LED fixtures and how doing so at the moment is unwise as a long-term investment.
  7. To jump in, and disagree: on film, the image is not 'created' on set any more than digital. I've yet to see chemists on set altering the chemical compounds of the emulsion to boost the magenta layer while you're trying to get a shot. If anything, the look of film is somewhat predicated by the stock you choose, disregarding the post process the emulsion and prints will go through - similar to a digital pipeline, though arguably with less control. I don't see why you find it sad that a film is graded a certain way. Is choosing a certain stock for a certain look sad? Is it because the filmmaker has more latitude and can make more refined decisions? Because it means they can potentially fix an error? The latter two, I believe, are the main reasons the DI gets knocked, but personally I don't mind it. The number of times people throw around 'The DP's job is to *insert remark here*' I find interesting - yet isn't our primary job, above all else, to deliver an image? The images being delivered 'nowadays' I would partly consider more exhilarating and exciting than in previous decades - this is, of course, disregarding the actual creation of the image, which, as said above, is becoming easier with more latitude and tools. With that, the bar of quality is getting higher while the required skill is getting lower. It's almost as if cinematography is becoming easy to learn, hard to master. Which is different, but I wouldn't knock it.
  8. As said above, your options are endless; there is no single way to replicate sunlight. To break it down: typically you'd have one powerful harsh source (acting as the sun) and a surrounding softer source acting as the sky (at least that's how I'd typically look at it). So, with your goal being an outdoor scene at noon (keeping in mind I don't know whether this is an exterior set, an interior set, or actually outdoors... if so, you may just want to shoot at noon), I'd imagine you'd want one harsh, powerful overhead source, as harsh and as high as possible with a wide beam spread - maybe a 20K Skypan (keeping in mind that rigging and operating such a fixture requires a skilled crew)... or something of that nature. Then a surrounding softer source: a hypothetical array of space lights, potentially with very mild diffusion below. This, like everything said above, is up to personal taste.
  9. Contrast is a difference in illuminance between two points. To take it out of light and/or radiology: the contrast between 'terrible' and 'wonderful' - there is no path between those two words; they simply sit at opposite ends of meaning, like light and dark. By definition: the state of being strikingly different from something else in juxtaposition or close association. Fall-off, in theory, is about plane-wave propagation in free space. The 'modifiers' and different set-ups David is using as examples do not alter the fact that light is an electromagnetic wave; you cannot modify light into something else. If you diffuse light (scatter it) or refract it, and so on - and I may add that the source you're most likely using for your example, say a Blonde or a Tweenie, is doing said 'modifications' on a much smaller scale, potentially minute to the eye. There are a dozen great practical examples in this thread - but light at an elemental level (which, strictly, it isn't) is just electromagnetic radiation. That, to put it crudely, does dilute over space, and that dilution has no real relationship with a relatively artistic term.
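For the idealised point-source case, the 'dilution over space' is just the inverse-square law, E = I/d². A minimal sketch with arbitrary illustrative values:

```python
def illuminance(intensity_cd, distance_m):
    """Idealised point source: E = I / d^2 (inverse-square law)."""
    return intensity_cd / distance_m ** 2

# Doubling the distance quarters the illuminance, regardless of the source's
# intensity. 'Contrast' is then just the ratio between readings at two points.
e_near = illuminance(1000, 1.0)
e_far = illuminance(1000, 2.0)
print(e_near, e_far, e_near / e_far)  # 1000.0 250.0 4.0
```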
  10. Yes and no. If you want to obtain a 4K image, so you shoot at 8K (in S35 - note, photosites smaller than 3-4 µm) to hypothetically crop into and deliver a 4K image: while your total pixel count may be 3840x2160, the result won't have the same fidelity on an optical level, and it most likely never will. With resolution and cameras it's always hard to say '8K will never have the fidelity of 4K', as camera technology (especially digital) is constantly advancing. A higher-resolution camera will most likely always suffer from low-capacity photosites (if kept to S35), though technological advances may eventually negate or ease the issue. However, one technology that isn't in its infancy, and whose properties we can calculate, is optics - starting from a hypothetical 'perfect lens', with attributes such as elements as transparent as air (somewhat unrealistic), limited only by diffraction. Even that lens has limits on resolving power that modern cameras (such as the 2.2 µm photosites of the UMP12k) already exceed. Below is a bit taken from another post on the forum (Joshua Cadmium), calculated for a hypothetical perfect lens at MTF(50); it's important to note that it's only at MTF(74) that an Airy disk is similar in size to the camera's photosite. "For the Blackmagic 12K sensor at green 550nm, that puts MTF(50) at f3.2 and for the Alexa sensors that puts it at f12.1. At red 700nm light, the Blackmagic 12K would hit MTF(50) at f2.5 Alexa would at f9.5 At violet 400nm light, the Blackmagic 12K would hit MTF(50) at f4.4 and the Alexa would at f16.7" To quickly dive into MTF: to put it crudely, you're looking at the way light is channelled through an optical system -
how it renders an Airy disk (the point of light the camera measures, formed by light passing through the diaphragm (iris) of the optical system). You measure it in line pairs per millimetre against what the lens can resolve. Here is a quick insert I wrote on another post: to make the point, we need to calculate the line pairs per millimetre for a sensor. That is done by taking 1 mm (1000 µm) / pixel pitch (2.2 µm) / 2 (per line pair). The UMP12k works out at 227.27 lp/mm, and for comparison the Alexa is 60.61 lp/mm. Now, to briefly look at a modern lens we all know and love, the Master Prime - one of the 'sharpest' lenses one can use on an S35 system: "If I remember correctly a master prime just about gets MTF (74) at 70lp/mm". Sadly I don't remember exactly which focal length that was, but the point stands. Increasing pixel count on an S35 system to 'crop in' is flawed; one will most likely never achieve the same fidelity unless you use a larger-format camera (increasing total pixel count without compromising photosite size).
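The arithmetic above can be reproduced directly. The 0.404 factor below is the standard approximation for where a diffraction-limited lens's MTF falls to 50% of its cutoff frequency 1/(λN); plugging in the pitches from the post recovers the quoted f3.2 and f12.1 figures:

```python
def lp_per_mm(pitch_um):
    """Nyquist limit in line pairs per millimetre: 1000 / pitch / 2."""
    return 1000 / pitch_um / 2

def f_stop_at_mtf50(pitch_um, wavelength_nm=550):
    """f-number at which a perfect (diffraction-limited) lens falls to
    MTF(50) at the sensor's Nyquist frequency. Uses the standard
    approximation MTF50 ~ 0.404 * cutoff, with cutoff = 1 / (lambda * N)."""
    nyquist_lp_mm = lp_per_mm(pitch_um)
    wavelength_mm = wavelength_nm * 1e-6
    return 0.404 / (wavelength_mm * nyquist_lp_mm)

print(lp_per_mm(2.2))        # ~227.27 lp/mm (UMP12k)
print(lp_per_mm(8.25))       # ~60.61 lp/mm  (Alexa)
print(f_stop_at_mtf50(2.2))  # ~f/3.2 at green 550 nm
print(f_stop_at_mtf50(8.25)) # ~f/12.1
```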
  11. Yes, and no... mostly no. Here is a crude (and somewhat inaccurate) example: https://na.panasonic.com/us/audio-video-solutions/broadcast-cinema-pro-video/dual-native-iso-camera-technology-cinematic-low-light-video-production Different sensors have different analogue outputs - different 'native' analogue amplifications: some 200, 400, 800 and so on - and in this case, for extreme circumstances, the ability to switch to a higher analogue output. It's similar to ARRI's DGA (dual gain architecture), where each photosite has two 'native' analogue outputs, one high and one low; in this instance, though, rather than being combined, they feed two separate outputs. Hence the name 'Dual Native ISO': each photosite has a dual 'native' analogue output. You are right, there is most likely analogue noise mitigation (I believe there is in most contemporary Sony cameras, and most cameras nowadays), and I will add that Panasonic's graph implying that noise only starts at digital amplification is wrong. However, it does theoretically allow you to capture a certain amount of latitude above and below middle grey (similar to shooting at a regular camera's native ISO, most commonly 800) at a much higher sensitivity, without any digital amplification. I think.
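A crude toy model of why the second analogue gain helps (all numbers and the placement of the noise are illustrative assumptions, not any real sensor's figures): shot noise rides through the amplifier with the signal, while ADC/readout noise is added afterwards, so analogue gain lifts a dim signal over that downstream floor in a way digital gain cannot:

```python
import math

def snr(signal_e, analog_gain, adc_noise_dn=10.0, digital_gain=1.0):
    """Toy SNR model (illustrative numbers, not any real sensor's figures).
    Shot noise passes through the analogue amplifier with the signal; a fixed
    ADC/readout noise is added after it. Digital gain then scales signal and
    noise alike, so it can never improve SNR."""
    shot_noise_e = math.sqrt(signal_e)
    signal_dn = signal_e * analog_gain * digital_gain
    noise_dn = math.sqrt((shot_noise_e * analog_gain) ** 2
                         + adc_noise_dn ** 2) * digital_gain
    return signal_dn / noise_dn

dim = 100  # electrons from a dim subject
print(snr(dim, analog_gain=1, digital_gain=4))  # brightened digitally: ~7.1
print(snr(dim, analog_gain=4, digital_gain=1))  # brightened in analogue: ~9.7
```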
  12. The image fidelity is quite poor. This does bring up an interesting question about shooting wide to crop in, etc. I remember older RED advertisements: 'Shoot in a wide! Crop in, multiple shots in one!' - blah, blah. However, looking at RED's image fidelity and at the resolving power of contemporary optics, or even perfect optics, I doubt we'd ever be able to seriously crop into an image without impacting fidelity, unless we all started to shoot on incredibly large formats... oh wait... As it stands with the UMP12k, the 2.2 µm photosites are far too small for any contemporary lens, or even an 'impossible' perfect lens, to resolve (MTF(70+)) at the f-stop most likely required when shooting for multiple shots. I do wonder whether, to make shooting multiple shots on one camera possible, broadcast companies will start moving towards larger formats as well.
  13. I've had relative luck with, and have faith in, the Kino Freestyle line. However, you're asking a lot from any fixture. Most, if not all, LEDs can't resolve every LEE/Rosco/whatever filter colour for every camera. Even if you input the information accurately, due to the discontinuous spectrum of all RGB LED fixtures, the frequency bands emitted may be perceived as accurate by our photopic vision, or by one camera system, yet not by another (because of differing camera spectral response curves) - even if you measured with a colorimeter or the like to match the two. You're matching both to one constant when, in reality, the 'detector' - to put it crudely - is also a variable.
  14. I'm not sure a true formula or calculation will ever arise and be user-friendly/simple. I keep getting stumped (admittedly no mathematician or physicist - in fact, far from it). Even once you understand how to calculate light fall-off from a large source - assuming the diffuser or bounce in this instance has uniform intensity (which it often doesn't, given the nature of our lighting instruments) - all that allows is calculating the fall-off of light from said source (bounce, diffuser). Attempting to calculate the fall-off of the bounce source with only luminance information from the source lighting it adds variables upon variables, and sadly not ones one can easily dismiss for an approximation - of which there are many as well.
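For the one tractable case - a uniform Lambertian disk (an idealised bounce or diffusion frame), measured on axis - there is a closed form, E = πL·R²/(R²+d²), and it shows exactly the behaviour described: near a large source the fall-off is far gentler than inverse-square, and only at distances much larger than the source does the familiar 4x-per-doubling return. A sketch with arbitrary luminance and sizes:

```python
import math

def disk_illuminance(distance_m, radius_m, luminance=1.0):
    """On-axis illuminance from a uniform Lambertian disk:
    E = pi * L * R^2 / (R^2 + d^2). For d >> R this tends toward the
    inverse-square law; close to a large source the fall-off is gentler."""
    return math.pi * luminance * radius_m ** 2 / (radius_m ** 2 + distance_m ** 2)

# Ratio of illuminance at 1 m vs 2 m: a point source would give exactly 4x.
big = disk_illuminance(1.0, radius_m=2.0) / disk_illuminance(2.0, radius_m=2.0)
small = disk_illuminance(1.0, radius_m=0.05) / disk_illuminance(2.0, radius_m=0.05)
print(big)    # ~1.6  (4 m wide bounce: barely falls off at these distances)
print(small)  # ~3.99 (tiny source: effectively inverse-square)
```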