John Brawley
Everything posted by John Brawley

  1. Hi Herc. The ideal interocular distance is about 64mm, or about 2 1/2", the average distance between human eyes. It's actually very difficult to achieve this practically with cameras, though, especially with a camera like the RED. They are just so big that it's impossible to get the sensors that close together when they are side by side. Even the lenses themselves can be larger than this. So a beam splitter is used to allow for interocular distances that can be more precisely controlled, which wouldn't even be possible if the cameras were set up side by side. (The beam splitter is sort of a teleprompter in reverse.) Even with very small cameras like the Iconix HDs, it's still almost impossible to realistically get the right IO when they are set up side by side. jb
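To illustrate why the interocular number matters, here's a toy sketch of my own (not from any real rig's documentation): under a simple pinhole model, a stereo pair with interaxial t and focal length f converged at distance C records a horizontal disparity of roughly f·t·(1/C − 1/Z) for a subject at distance Z. The function name and all numbers are hypothetical.

```python
def sensor_disparity_mm(interaxial_mm, focal_mm, converge_mm, subject_mm):
    """Approximate horizontal disparity on the sensor for a converged
    stereo pair (simple pinhole model). Zero at the convergence distance,
    negative for closer subjects (in front of screen), positive behind."""
    return focal_mm * interaxial_mm * (1.0 / converge_mm - 1.0 / subject_mm)

# A 64mm "human eye" interaxial with a 35mm lens converged at 3m:
print(sensor_disparity_mm(64, 35, 3000, 1500))  # subject at 1.5m -> negative
print(sensor_disparity_mm(64, 35, 3000, 3000))  # at convergence -> zero
```

Halving the interaxial halves every disparity, which is why stereographers reach for the IO control first.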
  2. OK, so I finally dug out my CP Mini Worral service manual. As discussed, there is no question at all that the head continuously pans forever. They talk about using continuous-loop wires from the aircraft industry. There are two cables used in the tilt mechanism. They did make a "super" variant, the CP Mini Worral Super, that was different in only one way: it was designed for rental houses that wanted Sachtler quick release plates as the camera interface instead of the Arri sliding dovetail. That's it. JB
  3. +1 on ICE. Find an Arri scanner and a company that has PAID for the ICE option....I've found a few that haven't. jb
  4. On the few 3D shoots I've done, the stereographer tended not to want to converge the cameras at all unless it was an extreme situation. What is more often adjusted is the interocular distance, that is, the distance between each camera. My understanding is that convergence can mostly be left as a post decision with very little compromise (just some framing issues at the edges of the frame). This way it's not baked in and can be determined in a more controlled environment later. The REDs are pretty big cameras for 3D. Forget handheld or Steadicam. I like the Si2Ks more, and the Scarlets should be great. jb
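The "convergence in post" idea is just horizontal image translation: slide one eye's image sideways and every on-screen disparity shifts by the same amount, moving the zero-parallax plane (at the cost of cropping the frame edges, hence the framing issues). A minimal sketch with hypothetical names:

```python
def reconverge(disparities_px, shift_px):
    """Horizontal image translation (HIT): shifting one eye's image by
    shift_px adds a constant to every disparity, so whatever plane had
    disparity -shift_px now sits exactly on the screen plane."""
    return [d + shift_px for d in disparities_px]

# Foreground at -10px, midground at 0, background at +5px;
# a +10px shift re-converges onto the old foreground:
print(reconverge([-10, 0, 5], 10))  # [0, 10, 15]
```

Note the interaxial can't be changed this way: the *spread* between disparities is fixed on set, which is why IO is the on-set decision and convergence the post one.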
  5. In my experience they are still poor in terms of CRI, and especially red deficient. White LEDs are also nowhere near as bright. jb
  6. Your problem is that you have a 5D, are using it to shoot video, and are expecting it to be perfect. Sorry. **Just to add: flashes always cause this kind of problem on cameras that have rolling shutters. There's nothing wrong with your camera. jb
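For anyone curious why a flash produces a part-bright frame on a rolling shutter: rows start exposing at staggered times, so a brief flash only lands inside some rows' exposure windows. A toy model (all names and numbers are mine, treating the flash as instantaneous):

```python
def rows_seeing_flash(n_rows, line_time_us, exposure_us, flash_time_us):
    """Rolling shutter: row r exposes from r*line_time_us for exposure_us.
    An idealised instantaneous flash at flash_time_us lights only the rows
    whose exposure window contains that instant -- a horizontal bright band."""
    return [r for r in range(n_rows)
            if r * line_time_us <= flash_time_us <= r * line_time_us + exposure_us]

# 10-row sensor, 1ms line time, 2ms exposure, flash fires at 5ms:
print(rows_seeing_flash(10, 1000, 2000, 5000))  # [3, 4, 5]
```

A longer exposure widens the band; only a flash longer than the whole readout (or a global shutter) lights every row evenly.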
  7. LEDs have a long way to go. At present they are a useful tool in certain applications. Although they are the most energy-efficient light source around in terms of energy ==> light, the *quality* of their light leaves a lot to be desired. Colour, for starters, is a big issue. It's only in the last couple of years they have been able to hit 3200K, and even then, whenever I've measured, they aren't quite there. The harder issue is CRI, or evenness of spectrum. Like fluros, they tend to spike in certain frequencies and dip in others. Although there are LED units that mix colours (like the already mentioned Arris), the problem is that LEDs are HIGHLY monochromatic. So you can mix red, green and blue LEDs, BUT each colour is so tuned to a certain wavelength that the output falls away very quickly outside of that wavelength. There are still colours that will fall in the gaps between these wavelengths. Yes, they are energy efficient, and therefore low heat. Being solid state, they can turn on and off very quickly and can even be synced to the shutter pulse of an electronic or film camera for intra- and inter-frame pulsed lighting effects: firelight, TVs etc., as well as different motion blur characteristics in certain wavelengths. Solid state also makes them more reliable, less fragile and lighter in weight, with a smaller footprint. The other major issue is that there really aren't any individual LED sources bright enough to be used as a single unit, except maybe Dedo's offering. Otherwise, multiple LEDs are used, for colour mixing, brightness, or both. This creates a problem with multiple shadows and colour fringing on shadows. They have a long way to go, but they are only just finding their way into the market, and over the past 30 years the brightness of LEDs has doubled every 18 months or so. jb (once started a company to build LED based film lights)
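The RGB-mixing gap can be sketched numerically: model each LED die as a narrow Gaussian emission line and sum them, and the spectrum collapses between the peaks, which is exactly where CRI suffers. The peak wavelengths and width below are illustrative guesses, not measured data for any real fixture.

```python
import math

def rgb_led_spectrum(wavelength_nm,
                     peaks_nm=(460.0, 530.0, 625.0), width_nm=12.0):
    """Toy spectral power for an RGB LED mix: three narrow Gaussian
    emission lines. Real dies are similarly monochromatic, so colours
    between the peaks (e.g. amber around 580nm) get almost no energy."""
    return sum(math.exp(-((wavelength_nm - p) / width_nm) ** 2)
               for p in peaks_nm)

print(rgb_led_spectrum(530.0))  # on the green peak: close to 1.0
print(rgb_led_spectrum(580.0))  # in the gap: nearly zero
```

An object whose reflectance lives mostly around 580nm simply has nothing to reflect, no matter how the three channels are balanced.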
  8. I think Keneu illuminates a point I've felt for some time. It is homogenised. Take a camera like the RED for example. It can look fantastic. But it inherently ALWAYS produces the same result: a really clean-looking video image. Which is great if that's what you want. But what if that's not what you want? Add it in later, I hear you say... fix it in the grade. Well, sure. It can be graded. You can push it around and get a totally different look out of it. BUT it will still look the same as anyone ELSE who's shot with a RED, found they wanted it to look different, and then graded it. Let's assume we all have access to roughly the same tools in post. The camera has the same inherent look that is defined by its actual workflow. What's missing is BAKING a look in... doing a look IN CAMERA. That's the brilliance of film: you can mess around with it... totally disobey the rules and get a great result, one that's not technically correct but looks great. Cross processing, flashing, bleach bypassing, cooking the film in an oven for 6 hours at 75deg, using 5-year-old stock, even using Fuji instead of Kodak: all inherently change the look IN CAMERA before it even hits post. And applying a bleach bypass look in the grade IS NOT the same as doing it in camera, simply because doing it in camera AFFECTS the way the film responds to light. So you're starting from a different position when you eventually get to post. With a camera like the RED, you are ALWAYS starting from the same position. You can't mess it up in camera. All that will happen is the camera won't work!!!! Digital imaging doesn't allow for these kinds of accidents, which with an organic medium still give a result, but one that is somewhat unexpected and inconsistent and totally at the discretion of what you do in front of the camera. So now I look for ways to get the image a little different in camera if I can. Not always easy to do!!!! jb
  9. I worked recently on a cooking show that was shot using Panasonic 3000s. They wanted to use the 5D for some trickier shots, doing aerial flyovers over the food or for car interiors. Well, for the first episode the camera was always out and we shot a great deal of footage with it. We also had a RED for certain scenes. Once they cut it together and tried to grade it, the EOS only came out when there was no other way to get the shot. And this is what the EOS is great for doing. There was no way they could get the EOS to grade anywhere near the Panasonic 3000s. It just didn't have the same dynamic range, couldn't handle the contrast, and it just fell apart in the grade, not to mention the temporal mismatch from converting 30 fps to 25. The shooting style of the show favoured long-lens work, so the difference in DOF from the 2/3" A and B cameras wasn't that apparent, and it might have matched if it weren't for the other issues. I was doing 2nd camera and usually the C camera too. The DP was an Emmy-nominated and very experienced TV shooter. Granted, we were doing a reality / doco style show; maybe with drama you'd have more control and a better chance. The director and producers went very quiet after previously insisting we use the EOS as a genuine "C" camera. After the first ep it became the *trick* camera. I seriously don't understand the logic of trying to make this camera do more than it's capable of. Just yesterday at a rental house I saw a pimped-up EOS with the Zacuto rig, follow focus, viewfinder attachment, mattebox... It's like they've taken the worst features of film and video cameras and put them all together in one uncomfortable and awkward-to-use camera. Suddenly it's not that cheap anymore, it's not easy to use or comfortable to operate, and it's still recording a compressed and crippled image... jb
  10. It's so unfortunate that they have butchered the AFTRS course, mainly over money. I was one of the last few to go through as an MA student. As mentioned, the selection criteria were very competitive and the course was essentially highly subsidised by the Australian government. Now it's user pays. They loved to tell us when I was there that we were more expensive to train than FA-18 Hornet pilots, the next most expensive students that the Australian government trains. It averaged out to $186,000 per student per year, I think. And that's what made it great: the fact that there were few students in an intimate, high-tempo, immersive level of course work. With only 4 cinematography students and an 8-6 scheduled work day EVERY day, there was no slacking. For those that may not know, AFTRS was modelled somewhat on the AIS, the Australian Institute of Sport, a place where elite athletes go to train together in their respective disciplines while benefiting from shared resources like nutritionists and fitness and physiology experts. AFTRS was the same, the idea being to take away the costs and focus on practical courses operating at an elite level. Now, to reduce costs, you PAY to go, whereas before you were paid TO GO, essentially a scholarship that meant you could focus 100% on the course without trying to work at the same time. They greatly increased the intake of students across the board and seemingly dumbed down the course. Now it seems to me that AFTRS isn't doing much more than a regular TAFE course and is just like every other course in the country. At least before there was a point of difference. At least there was something to aim for. Even getting into AFTRS could be considered an achievement. Here was a course that had amazing access to resources, PAID you to attend, operated at a skill level far above every other course offered in Australia, and was the envy of the world. It was one of the only courses that placed a high emphasis on PRACTICAL training. 80-90% of the course had you in a studio or on location shooting something and honing your practical skills. But economic rationalism means that there's no justification for that kind of cost with so few students. Never mind that 4 directors trained to a higher level probably have more chance of achieving and getting work than 22 directors trained to a lesser level, the equivalent of the 15 other tertiary institutions in Australia which also have 20+ students graduating each year. John Brawley MA Film And Television (Cinematography) AFTRS
  11. They would work well on the Si2K as well. It's a 16x9 chip AND it's already PL lens mount. Pity no one in Australia has them. jb
  12. A film I shot late last year using the Si2K and a home-brew anamorphic lens is screening as part of Slamdance. Love to say hi to anyone who's here. Drop me a line. jb
  13. Hello. An indie feature I shot using RED will have its world premiere at Sundance. Love to say hi to any c.com's that are here. JB (currently @ Sundance)
  14. Hi All. A feature I shot in 2007 will have its North American release as part of the After Dark Festival from the 29th of Jan. I used over 40 different cameras on Lake Mungo, from 35mm down to my mobile phone. Details here http://www.horrorfestonline.com/ and trailer here http://www.horrorfestonline.com/?p=528 JB (currently @ Sundance)
  15. Super 8 is fundamentally more likely to scratch and play up. It's got a plastic pressure plate that changes every time you change the plastic magazine the film comes in... ** There are very few truly professional Super 8 cameras; the format was designed primarily for home movies. That said, I've used Super 8 for real and it works great. But not everyone can easily access film or a lab for processing, or even a reliably working camera, not to mention post... Anyone using those wonderful services that provide free movies can look at a film called Lake Mungo. There's a bit of Super 8 in there. It's also opening in the US as part of After Dark in late January. I don't even think Kodak imports Super 8 into Oz anymore; it's all grey-market imported... jb ** I was involved a little in the development of the Aaton A-Minima. Originally, Kodak and Aaton had decided to use a single-use magazine that would make it faster to load and unload. Just send the whole mag in to the lab after exposing! After testing, they dumped it because it had issues with noise and scratching, and because of the not very environmentally friendly mentality of single-use anything, not to mention what to do with short ends and recans... *** EDIT. I've also used an SR2 with a modified Super 8 sized gate that worked well too. Panavision have them...
  16. Living in the city that has the largest IMAX screen in the world, I saw them have a very public punch-up with IMAX themselves about this issue. They muddied the water themselves... You see, most IMAX cinemas are like franchises; they aren't owned by IMAX themselves. So the people that own the Sydney IMAX cinema were pretty pissed when the local multiplexes started labelling their screens as IMAX as well. As already mentioned, this was a decision by IMAX to use their *brand* to back a digital projection / 3D system with the IMAX name. The genuine IMAX cinema management were understandably pissed that multiplexes with a screen 1/4 of their screen size could also be called IMAX. They tried very hard to lobby IMAX to differentiate the digitally retrofitted IMAX screens, but to no avail. So now you have mass consumer misunderstanding of 70mm IMAX and digital IMAX... jb
  17. Well, Roger Rabbit was probably pre-CG anything, so it would have all been hand drawn, lighting included. Roger Rabbit is not CGI; it is 2D composited animation over live action. Are we not talking about an extension of extensive bluescreen work? What about a film like the recent Star Wars or LOTR, which use extensive bluescreen and CG-based imagery? I wonder how many viewers realise how much of LOTR was photorealistic CG? Most of the battles in LOTR were fought with CG armies. How is that different? Avatar uses perhaps a higher percentage of CG-based imagery, but because you're in an alien environment, the threshold for believability is different... and half of the cast have had their motions *digitised* in order to animate CG characters. jb
  18. Agreed... Well, you're assuming it's easy to get the result you want in a CG environment because you can so precisely change the lighting and surface textures. I can tell you it's far, far more complex than that, because things behave MORE unpredictably than in the real world. To actually get the result you want, one that looks great, is far more fiddly than a simple adjustment slider in an interface. This is not Photoshop. So when you can actually achieve a visually beautiful and story-appropriate result, it's a significant achievement. It's not as different as you are suggesting... I happen to know that Happy Feet commissioned a vessel at great cost to travel to Antarctica to film most of the wildlife and environment for reference material. My good friend and cinematographer Tom Gleeson travelled as DP and shot thousands of feet of 35mm of penguins, icebergs etc. over 4 months. There was a crew of 14, including animators. He also shot live-action reference material of tap dancers for the animators to work from... The fact that you've made this comparison leads me to think you assume Happy Feet was made by a bunch of people sitting in an office. Clearly that's not the case. And while his reference work is not strictly photography that makes it into the production, it reveals a commonly held assumption that animated films, especially CG-based ones, are all done in the sterile vacuum of an office rather than out in the *real world*. And Happy Feet was made as a collaboration of hundreds of animators working in different locations all over the world, not just Santa Monica. The production office was in fact based in Sydney. What about the challenge of co-ordinating all those individual animators to get something looking like a visually consistent film? I'm not trying to pick a fight. I too once thought imagery generated from a virtual environment was cheating, until I tried to do it myself. I'd invite anyone who's sceptical to try lighting and framing a virtual scene. jb
  19. Does it not depend on your frame rate? Aren't you better off counting frames? There are 40 frames per foot. jb
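For reference, 40 frames per foot is the 16mm figure (35mm 4-perf runs 16 frames per foot). A quick hypothetical converter from a footage-counter reading to running time:

```python
def footage_to_seconds(feet, frames=0, fps=24.0, frames_per_foot=40):
    """Convert a footage-counter reading (feet + frames) to seconds.
    frames_per_foot: 40 for 16mm, 16 for 35mm 4-perf."""
    return (feet * frames_per_foot + frames) / fps

print(footage_to_seconds(3))              # 3ft of 16mm = 120 frames -> 5.0s at 24fps
print(footage_to_seconds(10, 0, 24, 16))  # 10ft of 35mm 4-perf = 160 frames
```

The frame rate only enters when converting to time, which is the point of the post: the footage counter itself just counts frames.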
  20. Karl, that's why I found the reference to cel-based animation curious in my previous post. There's one important difference with 3D-based animation environments... lighting. If cinematography is writing with light, then there is a critical difference between these animation processes. Lighting is so sophisticated in modern 3D. You can choose to have lights behave in very similar ways, obey the rules we're used to in the real world... or not. Either way, getting a good result requires someone who knows how to "light". That is a genuine point of difference from cel animation. jb
  21. In what way do you think? Do you mean 2D cel (photographed) animation or CG-based 2D animation (like Flash-based work such as South Park)? Whilst it is certainly animating, a CG DP wouldn't have anything to do with the actual animation. They would only be *animating* where the camera was positioned and how it moves. They might also animate lights when they want them to move in shot. Like a film set, there are so many specific roles in CG animation that it's entirely possible, and in fact necessary, to specialise in just one element. A friend of mine right now is a rigger. That means he works on the *skeleton* of the models so that anyone who wants to animate them can do so and they conform to the *rules* defined for that character, like making it so an elbow can't bend in an impossible way. Of course it IS possible to do that in CG, just not for the character. CG bloopers are usually these kinds of animation accidents. He uses the same work environment that I have spent time in, Maya. Yet I would have no idea how to even START doing what he does. But I can light a scene faster than he can. In fact, even when I was learning the incredibly complex interface, I was still faster at lighting than most of the animators that had been doing it for years, and I used far fewer lights. It took me 12 months of constant work, focussed primarily on lighting, to even become proficient, just at that one job. Their approach was brute force... put a bunch of lights everywhere and move them around until it looked good. I found I would first visualise what I wanted it to look like, set a couple of lights, move them at most 3 times and be happy. With no prior animation experience, I was simply approaching things in the same way I do in the real world. Yes, it's a computer and anyone can get a result, but specialisation is what's essential to take things up a few notches. While there are things that you can only do in an animated environment, like making lights invisible and even having negative lights, they are much harder to work with and get looking right. The surfaces in a 3D world all behave differently, so a lighting setup that works for one element won't always look good for another. The idea that just because you work in a real environment with weather somehow makes it a more pure form is pretty narrow. I found it's much harder to light and get the result you want in a CG environment than a real one. Like any filmmaking endeavour, there are huge challenges to overcome. Sometimes throwing money at them is a way to solve the issues (aka the Hollywood solution), but there are plenty of indie animation films out there as well, using much smaller teams of people, that are just as resource- and time-restricted as a small indie live action. We're in a thread discussing Avatar for its cinematography, yet we're assuming that all CG-based cinematography has similar resources to magically solve problems from the comfort of a desk with a double-shot tall latte and a box of donuts. Just like we all know that fixing it in post almost never works, this idea that CG photography is less real because it's produced using a computer is naive. Just like in the real world, it's not about the gear, it's about how you use it. jb
  22. There are many. Do you mean commercials? Drama? Film? TV? jb *EDIT. You can start here. It's only for Victoria, the second largest state in Australia.
  23. I would envisage that there should be a separate category for CG-based cinematography, but I certainly wouldn't deride it as a genuine form, as many here seem to. It takes just as much skill to light and shoot in a virtual environment as a real one. I spent over 12 months working on an animated project that involved me using almost all of my DP skills. It's just a different working environment, but I found the skills required to tell the story visually are just the same. I approached the film in exactly the same way I would live action. I discussed the look with the director. We looked at visual references. We created a storyboard. He animated rough versions of the characters in a virtual environment. I then framed the shots, INCLUDING recording live camera moves that I would do from a specially engineered input device (a much cruder version of what was done on Avatar). I had to light them, consider atmospherics, staging etc. I think the perception is perhaps that because it's computer based, it's as easy as pushing a button and takes no skill. I seem to recall the same arguments in the early days of digital video... Well, having attempted it for some time, I can say it takes a huge amount of skill to do well, and I think anyone who doesn't think it's a genuine version of cinematography probably hasn't been exposed to the actual processes required to do it. We are visual storytellers, are we not? Should we not be able to visually direct Etch A Sketch animation as much as live action? Why be so hung up on the literalness of photography being live action? jb
  24. Not if it's dimmed way down. Remember, the trick is NOT to light but to reflect a light into the eye. I've often used a Dedo above the lens, BARE but dimmed way down. I look down along the lens and set the level with the dimmer until I can *just* start to see the shadow as I wave my hand in front of the Dedo. When it's close to the lens axis you won't even see this small shadow from the Dedo. I prefer an active eyelight, but a passive one works well sometimes too, like a larger piece of card. You don't even need to light it. jb
  25. Thanks Justin. The Perfect Host and Celestial Avenue jb