
Michael Collier

About Michael Collier

  • Birthday 11/28/1983

Profile Information

  • Location
    Los Angeles, CA

  1. I'm a bit late to the party since you already shot it, but... If the light wants to be hard, then I would just use something I could spot in to get that sort of falloff. Sure, you could use blackwrap to make a snoot, or tape to control the falloff, or barn doors to box in the light, but at a certain point maybe you're working too hard to do something that is actually fairly easy. I'm watching Magnificent Seven now, and there's a quote that might apply here: "you can cut the ears off a donkey, but that don't make it a horse." You could use a toplight that is hard and lets you spot it in. The spot will give you the falloff you want, you can control it with how far in you spot, and it will probably give you the look you're going for. Both the client reference and the image you posted show a kind of rough and dirty look, so why be so precious about it? Spot that light in and get dirty with it. Fortune favors the bold.
  2. I think of sun as more in the tungsten range. I usually shy away from HMI for raw sun unless I need a punch that LEDs still can't deliver, especially if it needs to be hard. I have always found HMI with full CTO, or even half CTO, to feel unnatural and green, especially if the globe or the gel is old. I can get behind an HMI through No Color Straw or a light CTO, but that just takes the curse off; it doesn't feel like warm late-afternoon sun. A lot of times I start sun in the 3000-3500K range, but I might push to 3800K if it is a location shoot and the ambience is cooler. It mostly depends on what we decide as a camera balance, and the time of day in the script. For winter months on location I'll shift cooler, since sky ambience gets much cooler than in summer. On stage I'll often push sun to 2800K, not because that matches reality, but because the lights I'm using bottom out at that range. Actual sun can get much warmer as sunset approaches. Occasionally in that scenario I will offer switching to a non-Kelvin RGB color or gel to get an effect. Rarely do DPs take me up on it, and most times it's more like I'm daring them to get crazy. If you've seen the movie Grandma's Boy, sometimes I'm saying to the DP, "I don't give a f***, I'll go to the looney bin with you." I've never regretted going there if they are willing and it works with the story.
Mostly, regardless of the actual temperature we settle on, my thought process is based on how far away the raw sun is from the sky fill, and that holds whether I'm on stage or on location. Mid-day sun is fairly close to sky ambience, and as you get closer to sunup or sundown, the sun's temperature stretches toward the warmer end while the ambience moves toward cooler. In the end it's a choice how far apart those two are, and where your camera's balance lands. It is also a choice how close to reality you go. Before LA I came from Alaska, where a winter solstice (2 days from now!) can be 1800K sun against 15,000K sky ambience. That is great if you're going for something stylized or impressionistic, but if you're on a set that is monochromatic, or going for a desaturated look, maybe you want to be more conservative. If you're just trying to balance the look you're going for with the lights you have on the truck (or the back of your Jeep), then the tail might wag the dog. In that case, I suggest going with the looney-bin option and going unfiltered tungsten. You won't get placed in movie jail, I promise.
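Gel and kelvin arithmetic like the above is easiest to sanity-check in mireds, since gels are specified as a mired shift rather than a fixed kelvin drop. A minimal sketch; the +159 (full CTO) and +81 (half CTO) shift values are assumed nominal figures, and real gels vary by manufacturer:

```python
def mired(kelvin):
    """Kelvin to mired (micro reciprocal degrees)."""
    return 1_000_000 / kelvin

def apply_gel(kelvin, mired_shift):
    """Resulting color temperature after a gel of the given mired shift."""
    return 1_000_000 / (mired(kelvin) + mired_shift)

# A 5600K HMI through a nominal Full CTO (+159 mired) lands near 3000K,
# while a nominal Half CTO (+81 mired) lands near the 3800K range above.
print(round(apply_gel(5600, 159)))  # ~2962
print(round(apply_gel(5600, 81)))   # ~3853
```

The reciprocal scale is why the same gel "does more" to a warm source than a cool one, which matches the intuition that matching sun to tungsten is about the gap between the two, not an absolute number.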
  3. The excitability is what I love most. It's like if Bill Nye, after "Bill Nye the Science Guy," did a show like "Bill Nye the Engineering Guy." I loved the former, but the latter would take that enthusiasm and get closer to his discipline as a scientist, and could have been way more interesting. This feels more personal and free-form, with a joy that's infectious. I just wish he did more applied theory and less mailbag.
  4. Also, if you're not already, subscribe to EEVblog on YouTube. I love that Australian bastard and hope to meet him one day. He makes complex things digestible and interesting. There are lots of others like him you should subscribe to, but I find him to be both very knowledgeable and a good teacher.
  5. OK, I see where you're going. That is a difference from my system. I am not familiar with the CTC acronym, but I am sure I know the concept. My system was entirely software driven. When you started the motor there was a grace period to get up to speed (or close to it), and a special control routine to "soft start" the motor. Then an internal timer would start and update an estimate of what time the next optical interrupt would come, and a timer-based interrupt would keep that time. Working in assembly meant I could effectively count the clock cycles required for each interrupt, and reason about what would happen if both came in at the same time. That allowed me to verify the maximum possible error for each scenario. Of course, with much faster processors with multiple hardware timers, cycle counting wouldn't be necessary.
So when a new timing pulse came in, the code would do some math to see (1) whether the motor was running slow or fast, measured from the starting point and not the last interrupt, and (2) whether the motor was trending closer to or farther from the target position. A variable I called "sensitivity" would scale the trend, which would then be integrated into the motor position error. If it was trending farther away and the error was high, it would step up the power by more than it would if it was trending away but very close to the target position. If it was trending toward the target it wouldn't increase the power at all. This meant that each frame would be on time, and errors would not integrate over time, so a 24.001 fps error doesn't become an extra frame after 1000 seconds. That also made tracking footage usage and shutter parking easy, although if I remember right, I think there's an optical sensor for shutter parking.
If you are on Arduino stuff, and it sounds like you may be from the 328 you mentioned, I would recommend looking into Atmel's products outside of that ecosystem, since you're familiar with their chips already. If you're using Arduino code, there is a way to import sketches as a library, and the debugging tools are much more powerful. Atmel Studio is based on Microsoft's Visual Studio, so you basically get to walk into an IDE that cost billions of dollars to develop and is free to download. You can use development boards with built-in programmers/debuggers that feel very similar to Arduino, but give you more power and better debugging tools. Other manufacturers like STM have similar tools. I think you can also program Arduinos with Atmel Studio, though I've never tried personally.
You may be able to combine two controllers into one. I have found, in cases where multiple controllers were necessary, that a lot of work is needed to handle errors between the two: if there is an error, which one is the "base truth," and how do you recover? Something to consider; either way I'm sure you'll skin that cat. I totally get that, especially when you're bootstrapping. There are lots of ways to make even very sophisticated development cheap these days. Look into JLCPCB for very cheap prototypes. Sounds like you're already skilled at SMT, which is very useful. My biggest expense was film, development, and telecine, which unfortunately is unavoidable. You gotta prove the thing is running at the right speed and the end goal of on-speed film is achieved.
One tool I found very useful was to make a tach light. The optical encoder has a dot painted on it, so if your tach runs on speed, you can see visually what sort of errors you get. When I started, the motor could be off by as much as 360 degrees, which would be stepped down by gearing to result in (I think) a 0.4-frame error, which was unacceptable. By the end, as you can see in my video, it's maybe 20 degrees max cumulative error (I think 0.2 frames), which is more than acceptable. Sometimes those visual indicators are more powerful than the same data in a spreadsheet. It gives you a feel for how the motor responds. I could lean on the main gear and see how it would accelerate the motor to keep proper time. Even on my film tests I would break out a variable into an LED to see how the system was performing in real time.
But you have to keep going. Even if this project doesn't form the way you hope it does, it will lead to something down the road, and you'll have new skills. I felt I developed as a programmer, and my logic skills improved, which is also useful as a filmmaker. If it does work, you'll have all of that and a pretty nifty camera to boot.
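For anyone following along, here is a rough sketch of the control idea described above: error measured against the absolute schedule from motor start, a trend term scaled by a "sensitivity" gain, and power stepped only when the motor is trending away from the target. The function name, gain value, and power units are all hypothetical; the original was PIC assembly, not Python:

```python
def controller_step(t_actual_us, pulse_index, period_us, prev_error, power,
                    sensitivity=0.25):
    """One optical-interrupt update for a crystal-sync motor (illustrative).

    Error is measured against the absolute schedule from motor start, not
    against the previous pulse, so a tiny rate error (e.g. 24.001 fps) can
    never integrate into an extra frame over a long take.
    """
    expected_us = pulse_index * period_us   # where this pulse *should* land
    error = t_actual_us - expected_us       # positive => motor running slow
    trend = abs(error) - abs(prev_error)    # positive => drifting away
    if trend > 0:
        # Step power harder when the accumulated error is large; do nothing
        # when the motor is already trending back toward the target.
        power += (1 if error > 0 else -1) * (1 + sensitivity * abs(error))
    return error, power

# A 24 fps schedule has a ~41667 us frame period. A pulse arriving 33 us
# late while drifting away gets a power bump; one arriving closer does not.
e1, p1 = controller_step(41700, 1, 41667, 0, 100.0)
e2, p2 = controller_step(83347, 2, 41667, e1, p1)
```

The key design point is the `pulse_index * period_us` term: a per-pulse comparison would let small errors accumulate silently, while the absolute schedule forces every frame to be judged against where it belongs in the whole take.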
  6. Wow, this is a throwback. Thanks to Robert Hart for bringing it to my attention. Reading through all you wrote, Aapo, I gotta say it's spot on, and it brings me back so hard. I went through the options you listed above while deciding on the approach I would take. On my CP-16 the issue was the hybrid chips, but again you are right that they could be replaced either with jelly-bean discrete logic gates, an FPGA, or an incredibly simple program that replicates the function in software... but you don't get any cool new features. I talked to a guy at Alan Gordon who knew the original electronics engineer who founded Cinema Products, and according to him he encouraged the guy to build something new. That is where I found out it was an adaptation of another camera's movement with new electronics (an Eclair, I think?). He also tipped me off that the hardware might be able to do more than the 30 fps the original camera could do. I can't find the original development notes, but I recall the limit was not the mechanical hardware; the control electronics didn't have the fineness to do off-speed. I think it used a more powerful motor than would be needed today, in order to be responsive enough with its very simple control loop, and I was exploiting that by having more finely tuned software to drive it faster than advertised. (These are all 10-year-old memories; take it with a grain of salt.) I remember those schematics distinctly, and just how detailed everything was. I wonder if I could redraw them from memory; I pored over them. I think it was a user manual, but it might have been a repair manual. Either way it was incredibly explicit and made it easy to unwind how each signal worked, and its relevance in the overall system. Thinking of that makes me wish Apple would release manuals as detailed as that.
I would argue our systems in this regard are actually very similar: the clock driving the microcontroller in my system was a crystal. In the video I posted you can see it was a generic crystal, since that is what I had on hand for prototyping, but the final design included a temperature-compensated oscillator. My crystal only acted as the CPU clock, and from there all timing was derived through software. Perhaps that is the difference you're describing: maybe you have a separate clock acting as a reference on an input-comparator peripheral, leaving your CPU clock to be an internal RC circuit or something. I am sure on many levels the solution is somewhat similar though. I wish I could offer you some guidance, but these days I'm not sure my code would be any help. I took a look at the latest version I could find on my computer, and I was definitely doing some creative things to make up for the slow 8-bit processor. If I remember right it was an 18F-series chip from Microchip, but I could be wrong. I wrote everything in assembly, and I didn't have the power to do floating-point math to implement a proper PID algorithm; I could add and subtract 32-bit unsigned numbers and that is it. Still, my approach was kind of a meld between what the original CP-16 hardware did and what a PID loop would do... albeit implemented in a way that worked within the limits of the CPUs of the time. These days, for the same price or less, what you can do is nothing less than astounding. Manufacturers now include very advanced tooling that would have cost $10K or more back in 2010, all for free. For less than $1 you can get an ARM Cortex-level processor with a 32-bit data bus, lots of flash, lots of RAM, internet connectivity, a floating-point co-processor, etc. Stack Overflow had just been started when I worked on the CP-16 project, and it was a shadow of what it is today. Things have changed quite a bit.
Please update us often on this; I am watching this space with great interest. Also feel free to reach out if there is anything I can do to help. I am sure you've got everything you need, it seems you're well down this path, but if there is something I can do I would be happy to help. Good luck!
As an epitaph of sorts for my project: it fell off because the weak economy made it difficult to raise money, especially considering the sudden rise of affordable, quality HD cameras. I don't recall the exact impetus for switching gears. I do know the few pre-orders I had built up quickly dropped off, and not because I hadn't released, at least I don't think. I recall a few people telling me they decided to shoot less film around 2010. To this day I still get an email every few months asking about it, inquiring if I have any left to sell. I still have my CP-16 in storage somewhere with the old prototype still attached to it. In fact, in the electronics rack near my desk I still have my original CP-16 board and a few odds and ends. I think after that project I came up with a voltmeter that would automatically detect phase, and a really cool idea for controlling lights on a set. After that I moved from Alaska to LA and switched jobs from reality camera op to gaffer. I also founded a startup with some fellow Alaskans focused on drone safety; you can actually buy product from that startup today, and you should if you have a drone. Now that I'm off that startup I am back in LA gaffing, mostly commercials and music videos. Film-tech-wise I might have something new in the works coming soon... Hit me up if you're a gaffer or a board op in LA! Best of luck, I really want to see your success, and again feel free to reach out for any help.
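One concrete illustration of deriving all timing from the CPU crystal, as described above: picking an integer timer compare count for a frame-rate tick, and seeing how much residual error the rounding leaves. The 16 MHz / prescaler-8 numbers are assumed for the example, not the actual CP-16 project values:

```python
def timer_compare(crystal_hz, prescaler, target_hz):
    """Integer compare count for a timer interrupt at target_hz, plus the
    parts-per-million error the integer rounding introduces."""
    ideal_ticks = crystal_hz / prescaler / target_hz
    count = round(ideal_ticks)                   # hardware counts are integers
    actual_hz = crystal_hz / prescaler / count   # the rate you really get
    ppm_error = (actual_hz - target_hz) / target_hz * 1e6
    return count, ppm_error

# A 16 MHz crystal, /8 prescaler, 24 Hz frame tick:
count, ppm = timer_compare(16_000_000, 8, 24)
print(count, round(ppm, 1))  # 83333 4.0
```

A few ppm of rounding error is the same order as a generic crystal's own drift, which is one reason a final design would move to a temperature-compensated oscillator and correct the residual in software.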
  7. It sounds like you're considering purchasing, and you're thinking the most likely use case is a book light. If it's a book light, a 1.2K fresnel or PAR is the way to go. If you're buying and you want something that fills one use case and has potential for other things, I'd go 800W Joker with a Source Four, lens, and Bug-A-Beam adapter. But then again, until I fell for the SkyPanel, a Joleko was my favorite light, so I'm biased. TL;DR: 1.2K for a book light, 800W Joleko if you can only afford one light in that price range. *Edit: that's if you're buying used. Newer lights like the M18 are versatile in different ways.
  8. I thought I heard a rumor that in England, juicers are no longer allowed to ride in a condor. Not sure if I am remembering correctly, but I am sure I heard something like that about some European country. I am about to do two 80' condors tomorrow night on a SAG modified low budget, so it's not a crazy expensive setup; the cost for three nights of two condors is well under $8K. The real problem is cabling. If you're sending up (as I am) two 20Ks, a nine-light, and a half dozen PAR cans, cabling becomes the real consideration, as does the labor to rig it in place. We've got almost a half mile of 2/0 and banded to make that happen, and that is low budget. I have been on sets with miles of 4/0 (sometimes 9-wire 4/0) to get power around set. Light is a pain in the ass, but it's a mistress I love. It takes quite a bit to light even a very small location, and as things get bigger, everything gets exponentially heavier: wattage, cable length, lamp height, and generator power. It gets out of hand very quickly. I'm amazed by the shots of NYC David posted; it seems there is no opportunity to swing a turret, and limited opportunity to dance the condor up and down the block. The jib must be critical in those scenarios to get lateral motion on the lamp, and even then a jib can only give you a few feet of fore/aft movement. Most of my experience with condors is in Alaska and New Mexico, where there is endless room to dance around, the only limitation being cable and where the camera is looking.
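To put rough numbers on why feeder size becomes the real consideration on long runs, here is a round-trip voltage-drop estimate. The per-1000-ft DC resistances are nominal handbook-style values assumed for illustration; check a real ampacity table before rigging anything:

```python
# Nominal DC resistance of copper feeder cable, ohms per 1000 ft
# (assumed handbook-style values, for illustration only).
R_PER_KFT = {"2/0": 0.0779, "4/0": 0.0490}

def voltage_drop(size, run_ft, amps):
    """Round-trip drop: current flows out on the hot and back on the return,
    so the effective conductor length is twice the run."""
    return 2 * run_ft * amps * R_PER_KFT[size] / 1000

# 100 A over a 400 ft run of 2/0:
print(round(voltage_drop("2/0", 400, 100), 2))  # 6.23
```

A drop of several volts on a 120V leg is already enough to shift a tungsten lamp's output and color, which is why long runs at high amperage push you into 4/0 or paralleled feeder even when the ampacity alone would allow less.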
  9. LCDs have no interaction with magnetism. People used magnets to flip EVFs because, back before iPhones made MEMS motion sensors cheap, they used sensors that simply worked off gravity. The magnet activated the switch, making the LCD think it was upside down when it was upright. Over time, especially if the magnet doesn't move much, it won't impart much magnetism on anything, and what little it does won't be enough to do anything adverse to an LCD or related components. Very few electronics these days are affected by permanent magnetism. Things change when that magnetism changes polarity often, or a permanent magnet moves against a low-voltage, heavily amplified trace without a feedback circuit (which these days is almost never). Electronic design hardened the very few circuits that could be affected by magnetism or electromagnetic radiation back when cell phones came into vogue. Magnets, even when moving, impart only a small fluctuation in most signal traces or wires, so it doesn't take much to design around these constraints. Almost all digital circuits are impervious to typical magnet-induced current or environmental EMI, because good luck getting a single short trace to absorb 3.3V worth of magnetic induction from any magnetic or EMI source. (Place a coil of high-current, high-voltage wire next to a WiFi router. Odds are the range will be shortened, but it will still work reliably... unless you place it next to a 20K dimmer.)
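The "good luck inducing 3.3V in a short trace" point can be put in rough numbers with Faraday's law. A back-of-the-envelope sketch; the loop area and field swing are invented but generous values:

```python
def induced_emf(loop_area_m2, delta_b_tesla, delta_t_s):
    """|EMF| = A * dB/dt for a single-turn loop (Faraday's law)."""
    return loop_area_m2 * delta_b_tesla / delta_t_s

# A 10 mm x 1 mm trace loop (1e-5 m^2) with a strong magnet waved past it,
# swinging the local field by 0.1 T over 10 ms:
emf = induced_emf(1e-5, 0.1, 0.01)
print(emf)  # 0.0001
```

A tenth of a millivolt is tens of thousands of times below 3.3V logic thresholds, which is why a permanent magnet near a digital board is a non-event, while a 20K dimmer radiating broadband switching noise is a very different animal.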
  10. But Richard, you're our insufferable bore. (And if Phil gets into a cage death match with you over the IB title, that would actually be pretty awesome. Neither insufferable, nor boring.) I got my first LA job (after my other first LA jobs of doing camera prep for a reality show, and interviewing Ed Asner). I am gaffing a low-budget film in a couple of weeks, and looking forward to new challenges. The differences between LA and Alaska are quite interesting. Over the last decade I have worked with mostly LA and NYC crews in Alaska and New Mexico, so the culture and workflow are familiar. There are two main differences I see. #1: everything is available whenever you want it, and it can be delivered cheaply and on a moment's notice. That is quite nice. #2: although Alaska wasn't quite bereft of regulation, there are necessarily more rules in a major metropolis. So I have to figure out how to do what I have always done within a more stringent regulatory environment. Not a bad thing or an undue burden, since I never cut corners on safety. But now it seems the opportunity to *cough* and do something in the grey area is a bit more limited. It will be fun to explore the lay of the land, and to quote Dr. Strangelove, I am filled with a spirit of bold curiosity for the adventure ahead! (Also, I hate parking in LA. I stay sheltered in Burbank as often as is practical.)
  11. It certainly could have been that. The photofloods were ungelled, but dimmed down to almost nothing. The match between fire and artificial seemed to balance by eye and on the monitor, and was constant between MED and CU. As the night went on, there was a marked change in the fill light when we went into the CUs, even though both were lit by only fire. The red shift wasn't so dramatic that I saw it during the night, but it was fairly apparent in the grade, and much easier to see in the pictures I can't post. Our light (on mids and CUs at least) remained constant, how we fluffed the fire remained constant, and it was true dark for all of it. Exposure remained constant; the only thing I can point to that gave such a dramatic difference is the coals. They don't give off any appreciable light at our light level and relatively low ISO, so IR is all I can pinpoint. After all the IR pollution tests I have done, I wouldn't expect IR to shift skin red either. But if that isn't the case, I am at a loss to explain the change. My sense during the grade was that something odd was going on. When there was variation in kelvin within the visible light (the ratio of fire to photoflood), that was simple enough to grade away using the gain wheel. But the later CUs took a radical amount of correction on the wheels. The direction I was moving the wheel, while watching the skin-tone graticule, was markedly more toward yellow than I had ever gone before; usually the wheel naturally goes more toward orange. Something felt very, very weird. It almost felt like the light was non-Kelvin, which makes even less sense, because both the fire and the photofloods, even if they don't match, are definitely on the Kelvin scale. If I applied the medium-shot grade to the CUs and added a very small hue shift, everything fell into line. It was very odd, since I haven't ever seen that happen, and have never had a normal use for hue. (Side note: there was a long period of time between MED and CU due to an unfortunate moose visitor, allowing lots of extra coals to build up.)
  12. So I am starting this thread to share an effect I came across on a film I shot recently. We were doing a campfire scene, and as we got more and more into the closeups, I started to light with actual fire more and more. The obvious bit I found: lighting with fire can be a hell of a lot of fun! Either having someone squeeze lighter fluid onto the fire, or placing a Coke can with the top cut off, half full of fluid, really gives you the feeling of fire. Of course, it is fire. However, there was one thing I probably could have remembered from science class but didn't discover until the grade: most of a fire's heat is given off in the form of IR radiation. This causes the reds to saturate and shifts the overall hue of the light. In wides, I was lighting with six 212s & 211s arranged behind the fire in a triangle made of 20" C-arms. In closeups (especially with our bespectacled character) we started going more and more to fire. In the color grade, that led to a VERY saturated red taking over skin tones. I corrected it with a 3° hue shift, which to me made sense: after all, the extra IR would be shifting the hue out toward the red end, so a hue shift would bring it back. I tried doing it on the wheels, but the hue shift felt better to me. I thought I would pass along that discovery in the hopes that it someday helps someone out. Obviously, if you are doing a shot with propane gas and concrete logs, it won't be as big of an issue, since it seems most of the heat of a fire (and thus IR) comes from the coals. But if you are doing a real wood fire, maybe consider adding a hot mirror to the camera package. I would never have thought to include that in the package, but from now on, if I see fire as a major light source, I will be using a hot mirror. I can't post pictures until the film comes out, unfortunately, but maybe once it does I can share some before-and-after grabs.
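To illustrate why a hue shift behaves differently from the gain wheels: rotating hue moves an over-red tone around the color wheel (red toward yellow) instead of rescaling whole channels. A toy sketch using Python's stdlib colorsys; the sample RGB value is invented, and real grading tools operate in other color spaces:

```python
import colorsys

def rotate_hue(rgb, degrees):
    """Rotate an RGB triple's hue by the given angle, via HSV space."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + degrees / 360.0) % 1.0, s, v)

# A hypothetical IR-pushed, over-red skin tone nudged 3 degrees toward
# yellow: only the green channel lifts, while saturation and value stay
# put. A gain-wheel move would instead rescale a channel everywhere.
r, g, b = rotate_hue((0.9, 0.5, 0.4), 3)
print(round(r, 3), round(g, 3), round(b, 3))  # 0.9 0.525 0.4
```

That asymmetry is consistent with the experience described above: kelvin mismatches grade away on the gain wheels, but an out-of-gamut red push falls back into line with a small hue rotation.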
  13. C300s were really popular for a while. There are a few companies I could mention that shoot exclusively Z7s to this day. GoPros and 5Ds are ubiquitous. The GH4 is getting thrown into the mix a lot lately. The last show I worked on was on FS7s with Cabrio lenses; I expect those will be the new workhorses. A lot of shows are going to Log, which is nice. It used to be all F800s (XDCAM) and Panasonic 3100s. But the variety is insane; I have used a half dozen different camera systems in the same day many times. If I recall correctly, the only thing broadcasters mandate is a minimum bit rate for the camera (either 25 or 50 Mbps). There is also a look that is baked into the pitch to the network, so if you pitch a "cinematic reality" show on a big-sensor camera, you can't really go to a 1/3" camera except for specialty shots. I've never shot 4K for reality; even when we use 4K cameras, they are set to record 1080. No production company wants to deal with the extra data and processing if they won't be delivering in 4K. Frame rate is either 24 or 30, and that is decided by the DP/EP/network. It seems these days there is quite a bit of color work done. Gone are the days of making sure your white balance and paint settings produced an instantly broadcastable image. Now it is more common to do a preset WB so the cameras match, shoot in Log, and let post color handle the final image. It isn't such a meat grinder that nobody cares about the look, but everyone who has something to do with the image and story knows not to tip the hat and make it look "produced." There is a lot of work done to keep the vérité feel. Each show finds its look along that continuum, but it all has to feel within the lineage of documentary, not narrative film.
  14. I moved from Alaska to LA just to be able to see this movie in 70mm. I don't think there is a single celluloid projector left in that state, let alone a 70mm one. Maybe that is an exaggeration, but never let the truth get in the way of a good story. So, ArcLight? Is that the call in LA for 70mm, or is there a better theater? (Also, did RR change from North Face to Canada Goose down? I don't think I can criticize him on any cinematographic choice, but good on him for the CGD. That certainly isn't a working man's Carhartt. I've seen many a man start out with some other brand only to find himself $1300 lighter with CGD as his armor.)
  15. Never fight the light of the location you find yourself in. The main difference between the two shots you used as examples was where the light was relative to the subject, and only by a matter of a few feet. There is a reason to use very toppy light, just as there is a reason to use nice soft 3/4 light. You can push a director into staging for natural light. If the framing keeps that from happening, you can spin out the tubes in that unit and place a 4x4 Kino where you need it. But in every scene you need to make a call about where the key is coming from. Every other light should either wrap that key (even if it doesn't make logical sense) or augment and stylize it. You are best served by deciding what the one light is that lights the scene (in the mind of the viewer), and then adding lights that build on that motif without disabusing the viewer of that directionality. I have learned that there is a lot you can get away with that doesn't make logical sense, but makes thematic sense, or sense within the overall structure of the image. You should always know where "the" light source is coming from, and that is usually informed by your DP; but so long as you aren't fighting against the gravity of the scene, there is a lot you can do to make the scene make sense while also fulfilling the requirements of lighting. Keep in mind the three things lighting must do: provide exposure for an image of appropriate value, compensate for the contrast difference between a camera and your eyes, and give the brain a logical representation of a 3D world in a 2D space. Those are the things lighting must do; what lighting can do is much more nuanced than that. (OK, I stole that from Cinematographer Style. The point remains valid.)