
RED production schedule


Carl Brighton

Recommended Posts

  • Premium Member
That is perfectly correct and this is very important to take note of: this means that lighting a digital movie (especially one shot with a Bayer sensor) is quite a different task than lighting a traditional film set...

 

I don't agree with that at all -- and I've shot eight digital feature films. Whether you shoot film or digital, you have to adjust your dynamic range for the contrast characteristics of the medium and how you are processing it. For example, you make adjustments for shooting on bleach-bypass negative or reversal compared to normally-processed negative. With HD, you have to watch your highlights.

 

But if I want to light a scene with a single soft light coming from one side of the room, that's how I'd light it whether I was shooting in IMAX or on a DV camera. I'd monitor the highlights and the shadows for the contrast characteristics of the medium, but the lighting is not fundamentally different.

 

From a practical angle, exactly why would lighting two people sitting on a couch be "quite a different task" if you were using a camera with a Bayer sensor rather than a film camera or a 3-CCD camera?

 

Lighting is lighting -- there is no such thing as "DV lighting" or "digital lighting", there's just lighting. ("Oh, you're lighting for a Bayer sensor -- you can't key them with a Chinese Lantern!")



You're so right, David. I have spent more years than I care to remember convincing video engineers to look away from their vectorscopes and look at the really nice images on the monitor. In their minds I did the wrong thing and lit the scene the way that suited the subject, not the way it should be lit for their cameras, whatever they may be -- and this goes back to tube cameras.


I don't agree with that at all -- and I've shot eight digital feature films. Whether you shoot film or digital, you have to adjust your dynamic range for the contrast characteristics of the medium and how you are processing it. For example, you make adjustments for shooting on bleach-bypass negative or reversal compared to normally-processed negative. With HD, you have to watch your highlights.

 


 

Hey David - From what you've seen of the Red footage, do you feel you would have to watch your highlights in the same way as with other HD systems? (The way linear images clip is a big turn-off for me with HD generally.) Did you see much clipping in the PJ short? I know you likened it to a high-contrast stock, but I'm less familiar with the relative merits of each stock...

 

Also - thanks for your input here - I've been silently leeching information from your brains for a while...

 

R.


But if I want to light a scene with a single soft light coming from one side of the room, that's how I'd light it whether I was shooting in IMAX or on a DV camera. I'd monitor the highlights and the shadows for the contrast characteristics of the medium, but the lighting is not fundamentally different.

 

Thanks for your input David!

 

I am actually 100% with you here. I didn't mean to talk about the contrast characteristics of the medium, or the light's intensity whatsoever, but its spectral characteristics. Let's say a director wants you to light a night scene: would you rather gel all the lights on the set or white balance and apply a filter in post? I personally would say the answer I'd give to this question is going to be different if I was shooting on a digital (4:2:0, 4:2:2 or Bayer CMOS) medium or on film...

 

But, to be honest, I AM an electronic engineer trying to learn to become a director of photography, so I am definitely going to be biased: I fell in love with my vectorscope and all that kind of thing :)


  • Premium Member
Let's say a director wants you to light a night scene: would you rather gel all the lights on the set or white balance and apply a filter in post? I personally would say the answer I'd give to this question is going to be different if I was shooting on a digital (4:2:0, 4:2:2 or Bayer CMOS) medium or on film...

 

Even there, it just depends -- on film, I have put 1/4 CTO gel on all the lights, but other times, I've just lit a grey scale with 1/4 CTB, shot the scene in white light, and let the colorist time the dailies for the warmth. Same goes for shooting HD -- I've used gels, I've used white balance tricks, and I've used camera filters.

 

Small shifts/corrections in color one direction or the other are not hard to do in post. Extreme color shifts are generally best done in-camera, whether with lighting gels or filters -- or at least do it half-way on the set and do the rest in post. The reason you tend to use gels is because you want to retain some differences in color temps of various sources in the frame, rather than give the overall frame a color temp shift.

 

Extreme shifts/corrections saved for post tend to cause artifacts, and this is true whether you are shooting film or digital, but it's particularly true if you are shooting 3:1:1 HDCAM due to limited color space and high compression.

 

But again, these are not fundamental differences in lighting, this is just knowing what tricks work better or worse -- the direction and contrast and texture of the lighting is not fundamentally different, just adjusted for the particular quirks of the format and post technique planned. I wouldn't say using a 1/4 CTO gel on a light versus doing it in post is a fundamental difference in lighting technique.


What sort of engineering do you do?

 

How is that relevant to the subject?

 

Extreme color shifts are generally best done in-camera, whether with lighting gels or filters

 

Well, that's where my experience might differ from yours, because when shooting compressed digital video (and Bayer sensors do compress the color space somewhat, by the inherent fact that they don't sample R, G and B data for every single pixel), I really believe it is best to capture the most detail possible on set. This is what gives the best results in my opinion (and I have verified this numerous times). How can you capture the most detail on set with color sub-sampling (Bayer) sensors or compression schemes like the ones used in HD? Simply by feeding the camera what it is optimized for. In other words, I want to keep as wide a spectrum as possible (as close to "white" as possible). "Capture the Most, Remove in Post" (I call it the "CMRP" rule :))

 

But of course, if you want to retain "some differences in color temps of the sources in the frame", then you must do it (at least "half-way", like you said) on set: I am never going to dispute that.

 

Extreme shifts/corrections saved for post tend to cause artifacts

 

Well, then, I'm afraid there was something wrong with the post-production process you were used to working with. I'm not saying you will never get artefacts if you keep the very same color space (3:1:1) or (even more important) color depth, but if you take advantage of my "golden rule for compressed digital media" (CMRP, aka "capture the most detail ON set! Always expose correctly and leave extreme color corrections for post"), if you increase the color depth (to 16 bits) and -- most importantly -- you only color correct uncompressed footage (typically TGA sequences for me), I really can't see how you could possibly get more artefacts this way than you would if you were doing your extreme shift the other way around ("on set")...

 

Quite the opposite in fact: you will keep the detail, get the desired color shift and generally get better results this way. This is not only what a theoretician could figure out: my experience really tends to prove this is the best approach. But I think it would be best for us to try to compare both approaches with a little hands-on test comparison. The only problem is that I don't have much time right now, but nevertheless, this is an idea I'm keeping for the future :)

 

these are not fundamental differences in lighting

 

Well... I guess you're right: these aren't fundamental differences in lighting and it all comes down to which tricks work best or not... And basically, that's what I meant to say: there are "differences"!

 

Last but not least: thanks a lot for your insights and for sharing your experience with us David. I really appreciate this discussion!

Edited by Emmanuel Decarpentrie

  • Premium Member

Let's say that you want the image to be bright orange, you shot the image with a bright orange filter on the camera, and in post you had to do NO color-correction.

 

Now imagine you had shot it completely neutral and had to push it extremely in the direction of orange in post.

 

So how could a shot that needed no color-correction give you more artifacts in color-correction than the shot that needed an extreme color-correction in post?

 

I've shot thirty features, eight in HD, and spent time color-correcting all of them digitally in post for home video at some point, and this is my observation: the closer the photographed image looks to how you want it to be in the final project, and the less post manipulation is needed, the cleaner and less-artifacty the look -- especially on the big screen if doing a D.I. Now some types of extreme corrections are less artifacty than others, but generally the less you have to wildly push the red, green, and blue channels, and their secondaries, the less chance you have for artifacts like noise.

 

The main reason not to shoot it in-camera exactly the way you want is to give yourself some wiggle-room for changing your mind, modifying the effect, or hedging your bet against misexposure. This is why, for example, if you want a blue moonlight look, it's better to get halfway there in terms of blueness and add the last degree of blue in post. The same goes for underexposing for a moonlight look: you don't want your luminance information to be far from the final effect, because then you aren't using your recording medium efficiently, wasting it on exposure information at the ends that you don't need. But you don't want to underexpose too much either, in case you make a mistake.
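To put rough numbers on the idea of wasting the recording medium, here is a minimal sketch in Python/NumPy. The two stops of underexposure and the 10-bit linear quantization are purely illustrative assumptions, not a description of any particular camera:

```python
import numpy as np

# A smooth gradient standing in for scene luminance, normalized to 0..1.
scene = np.linspace(0.0, 1.0, 1024)

def record_10bit(signal):
    """Quantize a 0..1 linear signal to 10-bit code values."""
    return np.round(np.clip(signal, 0.0, 1.0) * 1023.0)

# Exposed to use the full range: the gradient spans ~1024 code values.
well_exposed = record_10bit(scene)

# Two stops under: the same scene only reaches a quarter of the range,
# so it is described by roughly 256 distinct code values instead.
under_exposed = record_10bit(scene * 0.25)

print("distinct code values, well exposed:", len(np.unique(well_exposed)))
print("distinct code values, 2 stops under:", len(np.unique(under_exposed)))

# Pushing the underexposed recording back up restores brightness, but the
# quantization step (and any noise recorded along with it) is scaled up too.
pushed = under_exposed * 4.0
print("code-value step, well exposed:", np.diff(np.unique(well_exposed)).max())
print("code-value step, after pushing 2 stops:", np.diff(np.unique(pushed)).max())
```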

 

Now I admit there are special-case scenarios where you might want to make sure you are recording information in all the color layers -- the red-lit darkroom scene being the famous example. This is more an issue of sharpness than noise or grain though.


how could a shot that needed no color-correction give you more artifacts in color-correction than the shot that needed an extreme color-correction in post?

 

Although what you say is full of common sense, the problem is that it all comes down to the reduction in color space applied by the compression algorithms during capture.

 

Let's take an extreme example: let's say you want a completely blue scene (as close as you can possibly get to a fully saturated blue). If you shoot it with blue gels/filters, you will end up with a very soft image due to the 4:2:0, 4:1:1 or 3:1:1 sub-sampling, or the Bayer matrix applied on the sensor. Basically, in all those cases, you end up with approximately a quarter of the possible resolution. Sure, you won't need any color correction and there won't be any associated artefact... But your picture is gonna be incredibly soft and this will be visible. The whole picture will be somewhat like a huge "artefact" itself :)

 

Now, let's say you do it differently: you capture your image without gelling your lights and you white balance your sensor, making sure you capture as many luma samples as possible, as much white light as possible in fact. The white light of course contains all the possible colors, including the blue we are looking for (we'll just have to remove tons of green and red to start to approach the color we are looking for). Then, in post, before you start messing with the color-correction part, you make a TGA sequence (uncompressed) and you switch to 16 bits per channel color depth. And only then, you start applying your blue filters. What you're going to end up with is a picture with FULL resolution and the correct tint. AND NO ARTEFACT whatsoever. If you get artefacts, then you're obviously doing something wrong... The only problem you can get is that you may have to add some gain because the picture becomes too dim. But that's where experience comes in handy: with some experience, you will eventually never have to face the situation where you must push the gain in post... But explaining how would take almost 20 pages of a book... ;)

 

This (extreme) example is fairly easy to test: I don't ask you to believe me! Just try it if you don't believe me...
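For what it's worth, here is a minimal sketch (Python/NumPy) of the kind of side-by-side test being proposed. The Rec.709 luma weights are real; the 2x2 chroma averaging, the one-pixel stripe chart and the channel-scaling "blue grade" are toy assumptions standing in for a real camera's processing and a real grading tool:

```python
import numpy as np

# Rec.709 luma weights (the only "real" constants here; the rest is a toy model).
KR, KG, KB = 0.2126, 0.7152, 0.0722

def rgb_to_ycbcr(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = KR * r + KG * g + KB * b
    cb = (b - y) / (2.0 * (1.0 - KB))
    cr = (r - y) / (2.0 * (1.0 - KR))
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 2.0 * (1.0 - KR) * cr
    b = y + 2.0 * (1.0 - KB) * cb
    g = (y - KR * r - KB * b) / KG
    return np.stack([r, g, b], axis=-1)

def subsample_chroma(c):
    """Crude 4:2:0 stand-in: average chroma over 2x2 blocks, then repeat."""
    h, w = c.shape
    blocks = c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.repeat(np.repeat(blocks, 2, axis=0), 2, axis=1)

def camera(rgb):
    """'Record' an image: full-resolution luma, quarter-resolution chroma."""
    y, cb, cr = rgb_to_ycbcr(rgb)
    return ycbcr_to_rgb(y, subsample_chroma(cb), subsample_chroma(cr))

def detail(img):
    """Mean absolute horizontal gradient of the blue channel."""
    return np.abs(np.diff(img[..., 2], axis=1)).mean()

# Test chart: one-pixel vertical stripes (fine detail), 64x64, float 0..1.
stripes = np.tile(np.array([0.1, 0.9]), 32)
chart = np.broadcast_to(stripes, (64, 64)).astype(np.float64)

# Path A: gel the scene blue, then record it.
blue_scene = np.zeros((64, 64, 3))
blue_scene[..., 2] = chart
path_a = camera(blue_scene)

# Path B: record the scene neutral (detail lives in luma), then grade it
# blue afterwards in floating point (standing in for a 16-bit post grade).
neutral_scene = np.stack([chart, chart, chart], axis=-1)
path_b = camera(neutral_scene) * np.array([0.0, 0.0, 1.0])

print("blue-channel detail, gelled in camera:", round(detail(path_a), 3))
print("blue-channel detail, graded in post:  ", round(detail(path_b), 3))
```

Even in the gelled case the full-resolution luma keeps a trace of the stripe detail (blue only contributes about 7% of Rec.709 luma), which is why the first number is small rather than zero; the neutrally captured and post-graded version keeps the full detail.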

 

but generally the less you have to wildly push the red, green, and blue channels, and their secondaries, the less chance you have for artifacts like noise

 

That is generally correct, except you don't point out the actual reason why artefacts appear in post: the only possible reason for artefacts to appear is to ADD (=PUSH) something that wasn't there to start with. For example, if you believe your picture is too red and you want to add a little green, there you go: artefacts will start to become visible, because you can't actually CREATE something out of nothing. Same thing if you want to push the gain because your picture was underexposed: noise will eventually become visible.

 

What I am proposing is in fact the opposite: I always want to make sure I'll never have to ADD anything in post, whether it be some light intensity (gain) or some red, green or blue! I only want to subtract information in post! That's the best approach in my opinion when you work with digital media (and that's why I still believe you don't light for film the way you light for digital): it consistently gives you the best results...


Grading in post will almost always result in some sort of artifacting. A red light on a blue shirt is not the same as a red hue shift. Attempting to filter in post is difficult as it affects luminosity just as much as color, something the RGB space does not handle perfectly. Not to mention that, since you aren't operating in the full dynamic range of reality, not clipping a channel will prove to be almost impossible.
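To put a number on the "affects luminosity just as much as color" point: scaling the R, G and B channels to shift the color also shifts the Rec.709 luma, which you then have to compensate for. The luma weights below are the standard Rec.709 ones; the grade itself is an arbitrary made-up example:

```python
import numpy as np

W = np.array([0.2126, 0.7152, 0.0722])   # Rec.709 luma weights

def luma(rgb):
    return float(rgb @ W)

pixel = np.array([0.5, 0.5, 0.5])        # a neutral mid-grey pixel
grade = np.array([0.7, 0.8, 1.3])        # an arbitrary "cool it down" grade

graded = pixel * grade
print("luma before grade:", luma(pixel))             # 0.5
print("luma after grade: ", round(luma(graded), 3))  # ~0.41, visibly darker

# Rescale so the graded pixel keeps its original luma.
compensated = graded * (luma(pixel) / luma(graded))
print("luma after compensation:", round(luma(compensated), 3))
```

That extra compensation step is one reason a pure RGB-channel "filter in post" rarely behaves like the physical filter it is imitating.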

 

However, I can see how there might be reason for concern with a Bayer pattern. If the image is predominantly blue, for instance, you will only have 1/4 of the sensor wells operating at peak efficiency. It would seem like you would experience an extreme loss in resolution. Advanced debayer algorithms will take into account adjacent wells for luminosity. Wouldn't a monochromatic scene corrupt the debayer process?

Edited by Gavin Greenwalt

Attempting to filter in post is difficult as it affects luminosity just as much as color, something the RGB space does not handle perfectly.

 

Absolutely correct! That's why I also tend to agree there is a limit to my approach, a limit only experience can overcome. My golden rule remains "Capture the Most when shooting, Remove in Post", but it is not always easy to apply in reality, exactly for the reason you mention: removing one or two fundamental colors in post is going to affect the overall luminosity, and the last thing you want is to start messing in post with the gain because you had to remove too much luminosity. GAIN=ARTEFACTS!! That's where I do agree with David when he says it is best to get as close as possible to the end result when shooting. The only difference is that I would say it is best to get as close as NEEDED to the end result when shooting and leave the rest for post... The difference might be subtle, but it is there :)

 

If the image is predominantly blue, for instance, you will only have 1/4 of the sensor wells operating at peak efficiency. It would seem like you would experience an extreme loss in resolution. Advanced debayer algorithms will take into account adjacent wells for luminosity. Wouldn't a monochromatic scene corrupt the debayer process?

 

EXACTLY! A monochromatic scene (in fact any scene that doesn't contain a reasonable amount of GREEN) would be quite a challenge for debayer algorithms, but also for typical digital capture formats (except 4:4:4, of course) like DVCAM, HDCAM, etc.
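Here is a minimal sketch of why a scene with (almost) no green is the worst case: a toy RGGB mosaic and a deliberately naive reconstruction of the blue plane. Real demosaic algorithms are far smarter and lean heavily on the green samples for luminance detail, so treat this purely as an illustration of where the blue samples actually sit, not of how any shipping camera behaves:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGB image through an RGGB Bayer pattern: one color per photosite."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w))
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R sites
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G sites
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G sites
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B sites: only 1 photosite in 4
    return raw

def naive_blue_plane(raw):
    """Rebuild the blue plane from the B sites alone by copying neighbours."""
    h, w = raw.shape
    blue = np.zeros((h, w))
    blue[1::2, 1::2] = raw[1::2, 1::2]
    blue[1::2, 0:w - 1:2] = blue[1::2, 1::2]   # fill even columns
    blue[0:h - 1:2, :] = blue[1::2, :]         # fill even rows
    return blue

def detail(plane):
    """Mean absolute horizontal gradient, as a crude sharpness measure."""
    return np.abs(np.diff(plane, axis=1)).mean()

# A pure-blue scene with one-pixel vertical stripes: worst case for blue detail.
stripes = np.tile(np.array([0.1, 0.9]), 32)
scene = np.zeros((64, 64, 3))
scene[..., 2] = np.broadcast_to(stripes, (64, 64))

raw = bayer_mosaic(scene)
blue = naive_blue_plane(raw)

print("blue detail in the scene:        ", round(detail(scene[..., 2]), 3))
print("blue detail after naive demosaic:", round(detail(blue), 3))
```

With stripes this fine, the blue sites all land on the same phase of the pattern and the detail simply disappears; a real demosaicer would recover some of it from the green channel, which is exactly why a scene with little or no green is the awkward case.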

Edited by Emmanuel Decarpentrie

Shooting blue on red will give you resolution like in the attached images below.

The left picture is 4:4:4, where every pixel has an RGB component, for all 4096 horizontal pixels.

The right one is the 4:2:0 equivalent of a Bayer sensor without green: 2048 pixels per horizontal line.

So the 4K format becomes a 2K format. This will be the case with the Dalsa sensor, which is not an oversampled sensor: it has 4096 pixels per horizontal line.

RED has 4900 pixels per horizontal line, so the loss of resolution will be less visible.

A drop in resolution from 4K to 2K is not something I would label "the image becomes very soft".

First, it will happen only in the absence of green and at near-100% contrast; second, 2K in such a picture is still very good image quality, even for viewers sitting no farther away than the width of the screen.

The human eye will not resolve a single pixel on the screen at such a distance if we have 3600 pixels per horizontal line. You can try it in front of your HDTV.

When you are watching 1920 horizontal pixels from a distance equal to the width of the screen, you will see the raster of the pixels. Yes, because there are fewer than 3600 of them.

Go a bit farther back and you will not see the raster anymore. You need 20/20 vision for this experiment. So all the guys out there with home theaters arranged so that the screen is on the wall along the room, not across the room, will have to switch to 4K projectors.

A 60-degree FOV is very nice, especially when watching all these wide-angle shots done with 12mm or shorter lenses, but then you need 4K pixels per horizontal line to have it silky smooth.
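A back-of-the-envelope version of that viewing-distance arithmetic, sketched in Python. The only inputs are the usual figure of roughly one arcminute of acuity for 20/20 vision and the chosen viewing distance; nothing here is specific to any camera or display:

```python
import math

def pixels_resolvable(screen_widths_away, acuity_arcmin=1.0):
    """Horizontal pixels a viewer with ~1 arcminute acuity can distinguish
    across the screen, at a distance given in multiples of the screen width."""
    fov_deg = 2.0 * math.degrees(math.atan(0.5 / screen_widths_away))
    return fov_deg * 60.0 / acuity_arcmin

for d in (1.0, 1.5, 2.0):
    print(f"{d} screen width(s) away: ~{pixels_resolvable(d):.0f} pixels resolvable")
```

Measured from the eye, one screen width back subtends a little over 53 degrees rather than a full 60, so the exact threshold comes out nearer 3200 than 3600 pixels, but the conclusion is the same: 1920 pixels is below what a 20/20 eye can resolve at that distance, and you have to move back to roughly 1.7 screen widths before the raster stops being visible.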

 

Note:

At this moment RED uses 4900 pixels per horizontal line to show the safe area in the viewfinder and sends only 4086 pixels for the projected area, so there is no oversampling in such a case.

However, I hope that you can override this setup when you capture the data via the high-speed RAW port on the camera. That's not good for normal filmmaking, since we need the safe area, but for underwater shots we do not have mics hanging there, and 20% oversampling will probably keep this 4K always 4K, providing you can get 12mm FL wide-angle lenses that can even resolve 4K.

 

Andrew

 

Below are actual 1:1 crops of a 4K image. Watch them from a distance equal to the width of the screen to get the cinema experience of a 4K display, and if you have 2K or 1920 pixels in the horizontal line of your computer screen, you have to watch them from ~1.7 widths of the screen.

post-20864-1177153268.png

post-20864-1177153290.png

Edited by Andrew Ray

I forgot one thing: these images are shown without any edge-smoothing algorithm applied. In real post you would smooth the edges so you would not see the staircase effect; the shapes would just be a bit softer. But then, when do you ever have 100% contrast and 100% saturation in an image? OK, it is a worst-case scenario.


I just thought we might have some expertise in common.

 

Well... I doubt it because, although I graduated in electronic engineering, my main area of expertise is in (inter)networking engineering: I actually worked 6 years for Cisco Systems before I switched over to cinematography. :)

 

Anyway, I think the Red is more interesting than "my expertise" might be, so can we please get back to the topic ;)


I am new to the RED community, but I met and spoke with Jim Jannard at the RED booth at NAB and decided to join the discussion. I have an extensive background in product development and what I saw at the RED booth at NAB was incredible. I'm sure you all have focused on the technical aspects and the production workflows ad infinitum. I know that too many talented, capable and well-funded people are working on the workflow issue and I have every confidence that it will be solved.

 

But in addition to being a filmmaker, I'm an industrial designer, and the questions I had for Jim and his team focused entirely on supply chain management, delivery and support.

Let's face it - Jim and his team have created an incredible product that has the ultimate "Killer App": (a 35mm Movie Camera with a virtually limitless supply of processed and telecine'd film at no extra charge). But if RED Team can't deliver quality production units, or they can't manage their supply chain or their vendors, then there's no RED. It's that simple.

 

Because RED Team's expertise is primarily product development, they have gone beyond the other guys like Phantom HD (IMHO: great high-speed photography, but it doesn't look too production-friendly) and Sony (who, let's face it, probably had to mitigate their ability to compete against Panavision to turn out that CineAlta camera in time for George Lucas to use it on Episode II).

 

From my perspective, the main reason why Sony and Panasonic and all the other "Big Names" would have never built a Camera System along the lines of the RED is two-fold. First of all, they're publicly traded and they have shareholders to please. The result is that no CEO would commit the resources (financial and talent) to chase what, in a Sony-scale business model at least, is a pittance. Movie production? Sony would spend more on development of the prototype than they could ever get back in the eighteen-month window that the shareholders would give them to turn a profit. Reason two is the size of the chip. 35mm depth of field is possible only with a 35mm imager. Sony could never bring to market a chip that big. Why? The economies of scale that they function under would require orders for millions of units. Besides, when has anyone heard of Japanese product design thinking resulting in making things bigger? So, if you're waiting for Sony or Panasonic or any of those other guys to save your bacon with a RED alternative, you have a long wait. Those companies are Consumer Electronics companies that have small Professional Equipment divisions (very small in comparison to the main market they serve).

 

The RED is a complete System, and every aspect of successful product development is visibly present in the final design, from how the product interfaces with the user to how the images are downloaded and edited in a production environment. This seems obvious until you compare the RED system to its closest rival, the Phantom HD. While the Phantom is very impressive, it was apparent when I looked at it in action that it would be very unfriendly to a user and is devoid of the elegant production workflow that RED demonstrated. This assertion is somewhat vetted by Peter Jackson's test film and tempered by my admiration of the Phantom HD guys. I'm a fan of anyone trying to make a real Digital Cinema Camera. Phantom HD appears to still be in some advanced prototype stage, but they claim to be shipping units, albeit with less than ideal memory/storage solutions (the Phantom guys were, however, addressing this need with a potential vendor on the show floor (!) in front of me).

 

One of the first questions I asked RED Team was how many factories they are dealing with. RED Team said they have contracted with a network of unaligned and independent factories across Asia and in other parts of the world, each one getting contracts to produce RED-spec'd discrete components in high volumes. The factories probably do not know of each other's existence, RED Team preferring (rightly, I think) to handle component integration and final assembly domestically. Therefore, ideally, no factory deals with the entire product; only RED engineering and assembly see the whole picture.

 

There will no doubt be a robust effort to target RED's technology via espionage and it looks like Jim and the rest of RED Team have learned well the pitfalls of overseas manufacturing. The knock-off guys can sting you hard when you've got a hot product about ready to come out. This was one of my fears. I want RED to recoup their development costs as soon as possible so that they can continue to sell RED camera SYSTEMS and innovate further improvements.

 

Domestic competition from counterfeit RED cameras would have three market forces stopping them: the Patent filing and the legal pressure that can be exerted is the primary one. The second force is an entrenched user base that requires a workflow dependent on firmware (usually very secure) in order to function. It would be incredibly difficult for corporate spies to get hold of that, but not impossible. The third is Sales Volume. Just how many RED Cameras can there be in the world? Your Grandma doesn't need one. Your local church softball team won't need one. It may be astronomically high, but there is a limit to the number of cameras they can sell.

 

Sure, you could buy one, reverse-engineer it and possibly figure out how it works (but only up to 80-90% functionality), but you still have to mass-produce it. Believe me, someone with a factory in Mainland China WILL TRY. The big barrier to a RED knock-off? Simply that each component is worthless on its own and only has value within the context of the entire workflow. Anyone wanting to make a RED copy will have to go through their own development process to do it, at least in the short term. And that is a very expensive proposition.

 

IMHO, RED has gone to great lengths to frustrate the pirates and product dumpers out there. So, Product Security seems assured. The other selling point for me was the fact that RED seems to have kept both Prototyping and Tooling either in-house or in tightly controlled foreign shops (my money is on Orange County, RED's HQ).

 

I was also informed, along these lines, that at the start of the show (NAB), RED had a complete set of Tooling for the RED camera and that at least half of it is being recut. The tooling revisions were ongoing as of the show and probably continue right now. It can take as long as 45 days to cut tooling. The upside is that this news signals that engineering is ongoing and will likely never end for this product. The clock starts ticking on the Patent the second the engineers get their pink slips. Although this may delay delivery for a few weeks, it tells me that the system will exceed the functional envelope that they had when they committed to tooling.

 

In short, they have likely discovered a benefit that was so innovative that they ponied up more cash to alter the molds so that the first shipped units would have it. That fact was staggering to me and should indicate to anyone that knows about these things that RED Team is focused on gaining and keeping substantial market share right out of the gate. That would be attractive in the context of, say, an IPO? It also reveals the depth and breadth of resources RED has in their development war chest. They understand that if it's not right from day one, they can flush all those millions they've spent so far down the toilet.

 

Could the tooling revision mean that they forgot a basic aspect necessary to function? I doubt it. No one in their right mind would ever commit to tooling unless the product functioned to the engineered specifications and was ready to hit the factory assembly lines and fully function as advertised. Boris and Natasha prove that. This is, to some degree at least, the Oakley brain trust. They know how to make things people want to buy.

 

So, the fact that they have a complete mold set indicates that development had at least reached the prescribed functional envelope, and the fact that they changed half of those tools (a RED Team estimate, not mine) indicates that they have been able to increase the functional envelope beyond the initial design. It appears that first users are likely getting a RED 2.0.

 

The Apple factor is icing on the cake. Apple could stand to gain enormous market advantage by backing the RED workflow. Apple's involvement in the development of RED also signals that RED is indeed all that it says it is and more. Why? A company the size of Apple, which obviously has a very deep development and engineering division, could never commit resources to an unproven third-party product. RED Team had to have demonstrated early on a significant capability in being able to deliver on their promise of a production camera system.

 

Just the internal corporate paperwork necessary to allocate time and talent from Apple's team of engineers and other staff would require significant manpower expenditures. Let's look at where Apple's attention was focused during this timeframe: finalizing the iPhone, creating Final Cut Studio 2, not to mention they probably had "All Hands On Deck" for the transition to the Intel chipset. The bottom line as I see it is that Apple probably didn't have many people standing around with their hands in their pockets and nothing to do. Apple clearly INVESTED time and talent (and cash) in making the RED workflow possible, and their commitment obviously continues today. With few exceptions, Apple doesn't have a history of backing the losing team.

 

Finally, as a SolidWorks user, I have complete confidence in the inter-operability of all the components and their function since SolidWorks has a very robust toolset for testing and ergonomics.

 

So, my report from the show floor is complete (albeit very, very late) and my prediction is that RED is a Revolutionary product (as opposed to evolutionary) that will change the way movies are shot, edited and exhibited, but it won't change how movies look. Good movies shot on RED will look like any other good movie shot on film.

 

I'm reminded that we all tend to refer to the film gauge when describing the look of film. We say that it "looks better on 35mm." The reality is that "35mm" has been around nearly a century and I'm sure you would all agree that a 35mm film shot in 2007 in no way matches the "look" of a 35mm film shot in 1927. But if you compared RED footage to today's 35mm, I doubt the difference would be as noticeable. In fact, I'm pretty sure the average ticket buyer would not be able to tell the difference. The biggest beneficiary of the RED camera and the real end-user is not the filmmaker -- it's the Audience. Let the Revolution begin!


That was an excellent post generally, although I have to admit to sucking a bit of air through my teeth at these bits: :rolleyes:

 

 

The reality is that "35mm" has been around nearly a century and I'm sure you would all agree that a 35mm film shot in 2007 in no way matches the "look" of a 35mm film shot in 1927.

Can we save the "35mm has been around for over a century" bit for the fanboys on Reduser.net and the like? "35mm" simply refers to the size and shape of the piece of plastic that the film emulsion sits on. In the 1890s it was nitrocellulose, a flimsy, extremely unstable and downright dangerous substance, but that was all they had then. Later they replaced it with safer materials, and current-day release stock is strong enough to support the weight of an adult! But the actual shape of the film never changed, because there was no real advantage and a lot of disadvantages in doing so. What is actually ON the plastic base has changed beyond recognition, however.

 

The biggest beneficiary of the RED camera and the real end-user is not the filmmaker -- it's the Audience. Let the Revolution begin!

I really don't see how the RED is going to make any real difference to the movies that most audiences see; the most that one could realistically hope for is that they won't know whether it's shot on HD or film. As has been noted many, many times before, film and camera costs make up only a very small part of the average commercial feature's budget.


I own a JVC JY-HD10 single-chip consumer high-definition camcorder. It uses a hybrid primary/complementary color filtration technology that enables the chip to output 4:4:4 color by shifting one pixel at a time and comparing a block of four pixels that surrounds the pixel that is diagonally shifted. Of course this consumer camera does not record 4:4:4 color but rather 4:2:0 color, and I am wondering why the Red camera does not also use this technology. The color I get from my camcorder is simply amazing and is much better than typical single-chip camcorders that employ Bayer sensors, and the color is a lot better than the Sony HDR-FX1, which uses 3 chips. However, I do not think that people take the JVC technology very seriously because it was implemented in a one-third-inch chip. But had this hybrid color filtration technology been employed in a larger chip such as is used in the Red camera, I think the results would have been very impressive.


This thread deserves a read:

 

http://www.reduser.net/forum/showthread.php?t=2271

 

R,

 

Richard, when I was told the RED could compete at the same table as film, I was really shocked. However, I have seen the footage and it looks like video, not film. I don't understand what benefit people think they will get from the RED. Surely those wishing to make a film are compromising by having a video look? Especially when there are so many 35mm cameras out there for sale, and cheap?

 

The Russian Konvas comes to mind; perhaps it could even be said to be a real "red"?

 

And when you look at all the intended uses for the RED, there are cameras already out there doing the job better -- often cheaper, apart from HD, although even there I can't see much difference in price, and there are so many peripheral pieces of equipment for their workflows.

 

If the RED were going to shake the market now, it really would have to come in at a lower price point.

 

Shouldn't those buying the RED camera really be aware of second-hand values?

Edited by Mark Williams

  • Premium Member
Shouldn't those buying the RED camera really be aware of second-hand values?

 

Hi Mark,

 

They seem to have forgotten that Jim plans to sell many thousands of cameras a year. I guess they would have been buying internet stocks in early 2000; FWIW, they would have done better buying Oakley:

http://finance.yahoo.com/q/bc?s=OO&t=m...&q=l&c=

 

 

Stephen


I can't comment fully on the "look" of the Red output as I have only seen the small piece of the Red short that Peter Jackson shot, via the web.

 

What struck me was the number of people openly musing about using their early camera purchase to make money. Surely there are those in the mix who are planning this, but won't say so publicly on Reduser.net.

 

And as I have stated all along, any piece of gear won't get you work on its own. And many of these folks probably won't have the money, when push comes to shove, to acquire everything needed for a fully operational Red system. The people in that thread insisting that they would never sell Red as a point of pride are really interesting. I wonder if they'll feel that way if they have to choose between owning Red and having a place to live.

 

If I ran into money problems the first thing I'd blow out the door is my 35mm gear. Obviously food and mortgage payments come first. :D

 

It will really be interesting to see what happens on eBay once the cameras start to be delivered.

 

{Please note this is not in any way a slam on Red or its quality, nothing to do with that.}

 

I'm interested in how "market forces" will impact the distribution of Red, with a few early adopters out there to make a buck. Not that they are doing anything wrong, mind you; capitalism is capitalism, after all.

 

R,


The people in that thread insisting that they would never sell Red as a point of pride are really interesting. I wonder if they'll feel that way if they have to choose between owning Red and having a place to live.

 

If I ran into money problems the first thing I'd blow out the door is my 35mm gear. Obviously food and mortgage payments come first. :D

 

As with all luxury goods: they are pointless if you cannot feed them to your kids :lol:

It did worry me a little to see the ones talking about the bank and having to scrape money together to buy it. I'm so glad the bank doesn't own any of my stuff, as they are the first to cut you off at the kneecaps if you were ever to run into hard times. The problem is that if you borrowed 40K to buy a camera, the bank wouldn't want your precious little camera back if you failed to cough up the money. Sentimental value unfortunately isn't something they take into account. They'll just take the roof from over your head even if they're only looking for 40K. As you said Richard, "capitalism is capitalism". It's bloody ruthless.

Edited by Alexander Joyce

Richard, when I was told the RED could compete at the same table as film, I was really shocked. However, I have seen the footage and it looks like video, not film. I don't understand what benefit people think they will get from the RED. Surely those wishing to make a film are compromising by having a video look? Especially when there are so many 35mm cameras out there for sale, and cheap?

 

The Russian Konvas comes to mind; perhaps it could even be said to be a real "red"?

 

And when you look at all the intended uses for the RED, there are cameras already out there doing the job better -- often cheaper, apart from HD, although even there I can't see much difference in price, and there are so many peripheral pieces of equipment for their workflows.

 

If the RED were going to shake the market now, it really would have to come in at a lower price point.

 

Shouldn't those buying the RED camera really be aware of second-hand values?

 

Many of my friends own very high quality 35mm camera packages (complete systems) that I have been encouraged to use at my discretion, at no charge, for my personal projects. Even with free access to a camera package however, shooting 35mm is a very expensive proposition. I sense the palpable resistance to the new technology that Red promises in terms of image quality and parity to 35mm. But for many people, the possible cost-effectiveness trumps most of those concerns on many low or modestly budgeted projects. While I myself am looking forward to the Red camera as an alternative to film, I don't shoot other people's movies, and I won't ever contemplate using Red for any reason other than cost.

 

But I have seen the actual 4K presentation of the Peter Jackson footage and as a life-long movie watcher, I saw no comparison to anything approaching "video". I saw imagery that compared to 35mm film. If these two formats (assuming Red delivers on at least 75% of the hype) were priced the same, I would probably opt for 35mm. While the jury is still out on the implications of the real-world workflow that one would be tied to with a Red camera, the simple fact that you eliminate many of the steps involved in getting the footage to the editing suite and even to the output stage is a major advantage of the system. I realize the moderators of this forum would like to stifle the seemingly unending "film vs. digital" debate until Red delivers a usable camera for a head-to-head comparison. But from my perspective (until I'm asked to leave or not post again) I feel the economic repercussions that the Red camera introduces to the industry are fair game for discussion here in the Red subforum.

 

For someone like me who would love to shoot 35mm, what I would love to hear from the many professional camera operators and DPs who read these posts is: how can you make 35mm compete with a system like Red?


RED footage is NOT film. The post of RED vs. film was really intended to be a comparison of the two, not a claim that one is better than the other (although the headline admittedly implies controversy). They are different.

 

What I heard from just about everyone at the NAB screening was that our footage looked very "filmic". We heard that it does not look exactly like film, but it does not look like video either. It has a "feel" to it that is lacking from most other digital video systems. We have gone to great lengths to achieve this "feel". Our footage does not have grain, for better or worse. It also does not have the dynamic range of film (although we are working on that). Our workflow is designed to be much easier than working with film. REDCINE got rave reviews from the most hardened critics. And the fact that Apple is supporting REDCODE RAW in FCP6 makes for two compelling workflow possibilities.

 

We entered this market to provide a film alternative... not a replacement for film. Most cinematographers that are already dabbling in digital seemed happy that they have a better alternative to existing cameras. And we have won over several to consider RED as a real alternative to film. That is all we ever hoped for.

 

There is one movie in production that will use both. There are several big-budget films planned to be shot with a RED camera later this year. But there are still hundreds that will be shot on film.

 

We are still in development. We will continue to be for many years. Each new sensor rev. will increase dynamic range and every update will make our camera more useful.

 

Jim


This topic is now closed to further replies.
