Everything posted by Colin Elves

  1. The Jab is pretty powerful, but as I recall it has a bit of a green spike in it - the light isn't great and it's not really focusable, as it has more than one 'bulb' (maybe it was the Jab 1 I used; that was a good 3 years ago now). I think it is also over your budget, no?

     I was going to suggest a Hive plasma par - it's 5x as powerful as a Jab and the quality of light is stunning, better than an HMI - but, again, over budget ($3,500 for a kit; I paid £1,750 for mine from Barbizon Europe). Still cheaper than the Jab by the looks of it. Not sure how much the 1.2K blonde is though (Peter, I'm certainly interested: colin@colinelves.com).

     I also have one of the cheap Chinese 1.2K HMIs (and I mean cheap: £500, nuts!). It does the job. The ballast is noisy as **(obscenity removed)**, it's a touch green (mostly because of the cheap bulb they included, I think), and everything is a little sticky and wonky, so I've no idea how long the damn thing will last... But you get what you pay for.
  2. Good morning. I just wanted to share my new film for the band Snow Ghosts (http://vimeo.com/148731238), shot mostly on my trusty Sony FS700 and Odyssey 7Q in high-speed 2K RAW to Apple ProRes. Graded and composited in Resolve by the supremely talented Tancredi Monaco (http://www.tancredimonaco.com). It's a practical-effects-based film using long-exposure photographic animation and stop motion. The song recently featured in the X-Men: Apocalypse trailer. Director: Craig Murray (https://vimeo.com/209mm), DoP: myself (www.colinelves.com)...
  3. Hi Carl, I'm sure I'm bugging the hell out of you with my constant, no doubt dumb replies. Unfortunately I'm a Philosophy graduate and get a kick out of arguing with people - suffice to say I enjoy and learn a lot from the process, regardless of the outcome of said argument. So here I go again...

     Dude, I wasn't trying to make a point about the relative merits of the human vs machine image sampling system - I was simply making the basic point that your brain makes intelligent estimates to fill in information it lacks, and I'm sure that works fine for you, so as a methodology it's legitimate to use. After that the only criterion to be worried about is accuracy. In my defence, I was making no such mistake - I acknowledged that in my previous post, see here:

     To reiterate - taking two sets of same-size projected images, one from a 4K Bayer-mask image file (with de-Bayer processing) and one from a 2K 3-chip image, and all else being equal, the 4K Bayer image will represent a closer approximation to the original image - when compared to, say, a hypothetical 'pin-hole type' optical projection of the same scene. Again, I bet a 4K 3-chip process would be more accurate still, but that is not in dispute. By accurate I mean the degree to which, for each discrete section of the original image, the luminance and chrominance information recorded by each system matches the original (whatever that means) information for said section. Discuss.

     On a completely different note, I've clearly misunderstood something along the way, otherwise I wouldn't have been confused by this: I was under the impression that 4:4:4 colour sampling simply meant that for each recorded segment/pixel the full and separate luminance and chrominance information for its red, green and blue aspects was recorded/sampled (to whatever degree of accuracy - 10-bit, 12-bit, 14-bit, 14-million-bit, whatever - probably letting on my degree of ignorance there). So, um, how can most scanners exceed a sampling rate that already samples everything? Do they sample more than just the RGB chrominance aspects of a pixel, or, er, what? Or are you simply talking about cameras recording 4:4:4 YCbCr (i.e. colour-difference values sampled for every pixel) vs 4:4:4 RGB (i.e. no colour-difference encoding at all)? Because I was under the impression, could be very wrong, that the D20 and possibly other cameras can provide RAW files containing the 4:4:4 RGB colour info (or at least the Bayered info - which I suppose is not technically true 4:4:4 RGB). (There's a toy example of what the 4:4:4 notation means at the end of these posts.) I'm not trying to argue, I'm just wondering if there is a gap in my knowledge somewhere that Michael could hopefully fill.

     Colin Elves
  4. "Now put that back in front of the camera and adjust the framing and focus so that the images of the grey dots line up with their corresponding filter pixels on the camers's sensor. If you could get it precisely right, the camera would the produce the same output from that monochrome image as it would from the original colour scene that produced it, and so it would produce a colour image." Hmmm? True, it would indeed (re)produce the original colour image (though probably much darker due to additional light filtration by the bayer filters) in that rare, difficult and unlikely case, but my original point was not to consider what the chip actually does, but to ague that resolution as a usefull term (in this discussion) only insofar as the 'processed' image of a certain size (4K Bayer) is more or less better are 'accurately' reproducing reality than another image formed from discreet pixels of another image of smaller size formed from composites of three colour specific alligned monochrome images (2k 3chip system). My argument was that some form of intelligent processing can re-produce reality better in that case, not as accurately as 3 chip 4K, but better than 3chip 2K I suspect. Personally I don't have a problem with intelligent image processing at all - your brain does it all the time 'filling in' the image information missing from your rentina at the point where the retinal ganglia exit the eye as the optic nerve - i.e. your blind spot - in general this processing is pretty good and this gap in vision is not noticble - if you want to experience what happens when the processing is tricked have a look at this visual illusion: http://dragon.uml.edu/psych/bspot.html In fact, going back to your 'monochrome image producing colour' argument - that's how all current imaging systems work is it not? 3 chip cameras produce a colour image from intelligently combining three monochrome images formed by detecting physically aligned, but discreet, photons of different wavelengths seperated by a prism - a bayer filter forms it from three sets of monochrome images formed of 'not as closely aligned' photons of different wavelenghts of filtered out by a bayer filter. The foveon X3 does a similar job to the bayer, as I understand - effectively filtering the three colours by selectively reading photons of different wavelengths at different depths in a silicon layer, but still by using discreet 'monochome' sensors. Hell even your eye does it recording different colour wavelengths through discreet cone cells at different points - in fact it makes things even more complicated as human vision makes use of two other types of non-colour aligned cells to add to all this signal processing. All of these systems form colour images from the intelligent processing of discreet monochrome images. Anyway, that's what I think, how's about you Carl? I'm still after thoughts on the Foveon though - a new potential revolution in image processing? or just daft? Col.
  5. Hi there, I'm new to this forum, so I just 'wasted' (i.e. should have been doing something else) two hours reading the rest of this thread, just for fun. Anywhoo - I've a couple of questions, ready to be shot down.

     1) Re: resolution - based on my understanding you could equate a 4K Bayer with a 4K chip. Assume you ignore all colour data; then you could say you are recording 4K's worth of luminance data sites - i.e. you could have a 'genuine' 4K black-and-white image (ignoring luminance lost to the colour filtration). Taking that as a base, you could then add colour data to the luminance data by using the data you do have for each site (red, green or blue) and interpolating the other two colours for that site with some mathematical jiggery-pokery. I'm not saying that's how it is done, but in theory it could work like that (a toy sketch of this sort of interpolation appears after these posts). Following this, while the interpolated image wouldn't be identical to an image that recorded the actual luminance/colour info at each site, it would be closer to that than the raw Bayer image would be. Sure, there would be errors and occasional artefacts depending on what you are shooting - e.g. highly detailed, bright multicoloured cloth may be a problem (problems it would help a DP to know about) - but surely the important thing is how close it is to the original? So I don't see anything wrong with saying that the RED one is higher res than a 2K camera - maybe not true 4K, but close enough for the price tag.

     2) Given that last point - hey, if they pull it off, sweet. It won't be like shooting film (duh), but it is great for low-budget filmmakers to have a camera that shoots really nice stuff and doesn't cost an arm and a leg. To me it's like the jump from DV to HDV - I can't afford to shoot on film, but HDV looks better to me than DV, whilst still being affordable.

     3) If any of the RED guys still read this, here's a suggestion: have you thought about swapping to something like the Foveon X3 sensor (info here: http://en.wikipedia.org/wiki/Foveon_X3_sensor)? A 'proper' camera with this type of sensor sounds pretty sweet to me, since it is a single-chip design that does not need demosaicing (hence no arguments about actual resolution), is very light-efficient and could potentially allow you to cram more sensor sites onto a 35mm chip area without the aforementioned drop in S/N ratio (I think). Of course, without the need to pay for film, as data storage costs drop you might just decide to go for a 65mm sensor and use some of those old lenses (or even make some new ones, oooh...)

     Colin.
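A toy sketch of the interpolation idea discussed in posts 4 and 5 above. This is purely illustrative Python - a naive bilinear de-Bayer of a synthetic RGGB mosaic - not what RED, ARRI or any real camera pipeline actually runs; the function names and the random 8x8 'scene' are made up for the example.

    # Naive bilinear demosaic of an RGGB Bayer mosaic: keep the colour actually
    # measured at each photosite and estimate the two missing colours from
    # neighbouring photosites of the right colour.
    import numpy as np
    from scipy.signal import convolve2d

    def bayer_masks(h, w):
        """Boolean masks saying which colour each photosite records (RGGB pattern)."""
        r = np.zeros((h, w), bool)
        g = np.zeros((h, w), bool)
        b = np.zeros((h, w), bool)
        r[0::2, 0::2] = True
        g[0::2, 1::2] = True
        g[1::2, 0::2] = True
        b[1::2, 1::2] = True
        return r, g, b

    def demosaic_bilinear(mosaic):
        h, w = mosaic.shape
        kernel = np.ones((3, 3))                          # 3x3 neighbourhood average
        out = np.zeros((h, w, 3))
        for ch, mask in enumerate(bayer_masks(h, w)):
            samples = mosaic * mask                       # only this channel's photosites
            num = convolve2d(samples, kernel, mode="same")
            den = convolve2d(mask.astype(float), kernel, mode="same")
            out[..., ch] = num / den                      # average of nearby same-colour samples
            out[..., ch][mask] = mosaic[mask]             # keep the value actually measured there
        return out

    # Mosaic a synthetic scene, then reconstruct it and measure the interpolation error.
    scene = np.random.rand(8, 8, 3)
    r, g, b = bayer_masks(8, 8)
    mosaic = scene[..., 0] * r + scene[..., 1] * g + scene[..., 2] * b
    print(np.abs(demosaic_bilinear(mosaic) - scene).mean())

The point is just that the reconstruction is an estimate: much closer to the original scene than the raw mosaic, but not identical to what a sensor measuring all three colours at every site would have recorded.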
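And a similarly rough illustration of the 4:4:4 notation asked about in post 3. In Y'CbCr terms, 4:4:4 means every pixel keeps its own chroma sample, while 4:2:0 shares one chroma sample across a 2x2 block of pixels. Again this is a made-up example (approximate BT.601 full-range coefficients, random test frame), not any particular camera's or scanner's processing.

    # 4:4:4 vs 4:2:0 in numbers: convert an RGB frame to YCbCr per pixel (4:4:4),
    # then average each 2x2 block of chroma to see how many samples 4:2:0 keeps.
    import numpy as np

    def rgb_to_ycbcr(rgb):
        """Approximate ITU-R BT.601 full-range conversion, one Y/Cb/Cr triple per pixel."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y  =  0.299 * r + 0.587 * g + 0.114 * b
        cb = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
        cr =  0.500 * r - 0.419 * g - 0.081 * b + 128.0
        return y, cb, cr

    def subsample_420(chroma):
        """One chroma sample per 2x2 block of pixels, as in 4:2:0."""
        h, w = chroma.shape
        return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    rgb = np.random.randint(0, 256, (8, 8, 3)).astype(float)   # toy 8x8 frame
    y, cb, cr = rgb_to_ycbcr(rgb)
    print("4:4:4 samples (Y, Cb, Cr):", y.size, cb.size, cr.size)        # 64 64 64
    print("4:2:0 samples (Y, Cb, Cr):", y.size, subsample_420(cb).size,
          subsample_420(cr).size)                                        # 64 16 16

So RGB 4:4:4 (or YCbCr 4:4:4) isn't throwing away any per-pixel colour information; whether the values behind those samples were measured directly or interpolated from a Bayer mosaic is a separate question, which is roughly what the post is asking.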