
I've been Banned from Reduser


Stephen Williams


I don't think Jim would have had a problem with a forum discussing the Sony F35, a 2K camera with 4:4:4 color sampling at 60 frames per second that can be a compelling alternative to the 3K shooting mode on the Red. But to say that the Canon DSLR is a better alternative than what Red offers is like saying that interlace scan video is better than progressive scan video. The best way to deal with interlace video is to refuse to recognize it, but as soon as you accept the format as a legitimate high definition alternative, it rears its ugly head with the claim that only 1080i is high definition video. Just as Jim Jannard could never get away with offering interlace shooting modes on his cameras, he could never get away with offering DSLR cameras with line-skipping aliasing artifacts. So why should he open up his forums to companies who care less about picture quality but would artificially raise their resolution numbers just so they can sell more cameras?

Edited by Thomas James


Also in the sticky (or a separate one) could be some testimony from people who have actually shot a significant amount of film, about how many times they've actually encountered all the terrible drawbacks the Digi-philes are so fond of repeating to each other (you know: scratches, processing errors, hairs in the gate, fogging, those sorts of disasters, which for most practical purposes simply never happen unless you're a complete moron).

 

Great point Keith! I have (as DP) shot FILM for over 20 years and have only encountered one serious problem (which wasn't serious in the end), and it had to do with the Lab not doing a proper Rinse (or something was in the tank) and it looked like it was raining in the image... a second rinse and it was gone. Occasionally a random hair will appear, but frankly it is easily and cheaply solved (if I absolutely have to use that particular take) on a Flame. The benefits of a Film Quality Image far outweigh the 'fear factor'.

 

There is nothing that compares to flying by your meters! :wub:


  • Premium Member

I actually went and read the entire 10 pages of posts that got Stephen banned and I have to say: my appreciation for and interest in RED products has gone down considerably.

 

It seems more like a cult of personality than an objective forum; people slamming all non-believers left and right. In ten pages I saw no REAL criticism or questions answered, only bile being slung at the one person (Stephen) who had what appeared to be legitimate questions.

 

For me, a product like that is not worth using because it does not have to evolve at all; its fanbase will support it as long as possible because adherents will ignore the flaws, or point out the flaws in other systems rather than logically thinking through what they are doing. What they seem to have is a square peg, a bunch of round holes, and an overzealous hammer-hand.

 

It reminds me of Mac-heads (I own a Mac and an iPod for your information, before the insults start). They are so blinded by their product and their leader they cannot even COMPREHEND how someone got there first or how something else out there is equal. Many of them literally believe that the iPhone is the first touch screen phone, or that the hardware of a Mac is somehow inherently different from the hardware of a PC. They buy the products out of blindness and Apple capitalizes on that; just like RED is doing with its users.

 

Many of us have fallen into this trap before, but usually for brief periods of time -- my love for the XL1s made me blind to the DVX100 and I held onto that camera for dear life until I couldn't justify my love for it any longer. But the difference is, there's not someone out there being the FACE of the XL1s, there's not someone out there that people TRUST, and that's the problem with RED (and Apple, for that matter). There's a face to it; you look at that person and you say: "They wouldn't lie to me! They don't want my money, they want to change the world!"

 

Stephen's comments were no harsher than half the comments on posts here; the difference is that this is an open forum where the differences in ideas are (usually) celebrated. The Red forums are insular and biased; no good can ever come of that.

 

But what I don't understand is how users can trust RED to begin with. The product may produce good results, but the management seems intent on obfuscating the facts. Release dates are missed; broad, game-changing promises become minor improvements; and it's not being presented as a tool but as a solution -- and those two things are not the same.


  • Premium Member
So why should he open up his forums to companies who care less about picture quality but would artificially raise their resolution numbers just so they can sell more cameras?

 

Hi Thomas,

 

Because people who are interested in RED are also interested in learning about them? It was on REDuser where I learned about the limitations you mention, i.e., line skipping.

 

BTW people like Shane Hurlbut do some fine work with these inferior products.

 

-Fran


BTW people like Shane Hurlbut do some fine work with these inferior products.

 

Right, and Mr. Hurlbut rightly conveys in last month's AC that two of the major reasons why he has used the 5D Mk II so extensively are its $2,500 price and its 2-pound body with a 6-perf-sized chip, and the remarkably good HD images it produces for anything at that price point and size.

 

So he can have multiple cameras going at the same time and he can mount them on just about anything, all while using Panavision lenses and keeping the picture looking 35mm-like with shallow focus. He also claims that the camera has thrown many curveballs at him, and that "falling off (and getting back on) the board is part of the game." I am sure that he would choose the R1 if the job at hand called for it.

 

So, we can talk tech specs until we are blue in the face, but my take is that it's all about using the "right tool for the right job," to coin a phrase.

Edited by Saul Rodgar

So, we can talk tech specs until we are blue in the face, but my take is that it's all about using the "right tool for the right job," to coin a phrase.

 

An obvious exception to this would be aspiring film makers who could not even begin to afford the R1. In that situation, something like a $1,500 7D, line skipping and all, would be the way to go, right tool or not. Hell, at that price, only used prosumer cameras such as the DVX100 or consumer HD cameras compete with it.


  • Premium Member
Hi John,

 

That would be great. Every time I use a Red camera there is a different firmware version, and if it's just for a day's shoot I can't justify two days of testing beforehand.

 

Stephen

 

OK, here it is. It hasn't been revised since last June, as I haven't had any inquiries about Red since then:

 

 

 

Revised 6-25-09

Informal Notes on the Red camera:

 

 

I’ve been keeping up with the Red camera on the internet, at the ASC, and at various trade shows and vendor presentations. It’s a fast moving target. These notes eventually became excessively long and disorganized. The purpose of this revision is to pare them down (hopefully without omitting anything that’ll bite you in the tush), and organize the issues in a more reasonable and useful order, rather than in order of discovery.

 

Bottom line, our answer to shows that want to use the Red is “Yes, but you have to do your homework first.” Here’s the part of that homework assignment that I can provide, a summary of reported issues from actual users. They’re mostly production rather than post things:

 

 

1. Dynamic Range: Red’s range is limited, even compared with other digital cameras. Genesis, D-21, and F-35 give you a couple more stops in the highlights before they clip. You may have to set more nets and fill light with Red than with other cameras. Because the raw recording philosophy moves all of color correction to post, it may take longer to time. You need to test this thoroughly before doing your first Red job.

 

 

2. Counterintuitive “ASA” settings: You get better shoulder handling/highlight detail at the expense of more noise in the shadows by going to a *higher* ASA setting on the Red, and vice versa. The ASA setting on the camera isn’t at all like loading a film stock of a particular ASA. The sensitivity and color balance of the camera never change. The setting controls the level where the false color indicators come on. You could think of it as if it were the ASA setting on a light meter instead. Strictly speaking, ASA is based on film density curves, and isn’t even defined for the Red. Many have found 320 to be overly optimistic, and rate it lower, say 200.
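For what it's worth, the re-rating is just ordinary exposure arithmetic. A minimal sketch (plain math, nothing Red-specific; the 320/200 figures are the ones mentioned above):

```python
import math

# Treat the Red's "ASA" dial like the ISO setting on a light meter:
# re-rating changes what your meter tells you to expose, not the sensor.
native_rating = 320   # Red's nominal rating
my_rating = 200       # the more conservative rating many users prefer

# Exposure difference, in stops, between the two ratings.
stops = math.log2(native_rating / my_rating)
print(f"Rating at {my_rating} instead of {native_rating} gives ~{stops:.2f} stops more exposure,")
print("pushing the image away from the noisy shadows at the cost of highlight headroom.")
```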

 

 

3. Neutral Density/Infrared: The camera is sensitive to infrared in addition to visible light. Conventional ND filters are neutral across the visible spectrum, but pass a great deal of IR. Use enough ND (roughly 0.9 or more), and the ratio of IR to visible light gets high enough to cause visible and unpredictable results, primarily color shifts in specific objects. Formatt and Tiffen now make IR and ND “hot mirror” filters specifically to solve this problem.
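A rough illustration of why heavy ND makes the IR problem worse (simple transmission arithmetic; the IR leakage figure is an assumption for illustration, not a measured spec of any particular filter):

```python
# An ND 0.9 cuts visible light by 10^-0.9, about 3 stops, but passes most IR.
nd = 0.9
visible_transmission = 10 ** -nd      # ~0.126, roughly 1/8 of the visible light
ir_transmission = 0.9                 # assumed IR leakage, purely illustrative

ratio_increase = ir_transmission / visible_transmission
print(f"Visible light through ND {nd}: {visible_transmission:.3f}")
print(f"IR-to-visible ratio rises by roughly {ratio_increase:.0f}x behind the filter")
# Stack more ND and the imbalance gets worse, hence the combined IR/ND hot mirrors.
```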

 

 

4. Storage and Backup: With CF cards, you get 4 minutes on an 8 Gig card, or 8 minutes on a 16 Gig card. There’s a hard drive, but it has reliability problems in handheld use. The motion of the camera can be more than the head arm can stand. The drives are not RAID protected, and have had non-motion-related failures. They should be backed up frequently; don’t put too many eggs in that basket. With a special vibration mount, the hard drive has been used successfully on a helicopter.

 

With either cards or drives, you need some kind of backup station. The best solution at the moment seems to be a Mac computer with card readers feeding a big RAID array plus some SATA RAM shuttle drives and/or LTO tapes. You need an assistant to watch every take as it backs up, and alert you immediately in case something disastrous happened. So, it’s a full time job.
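To give a feel for the numbers, here's a back-of-the-envelope sketch using only the card figures above (the amount of material per day and the copy speed are assumptions for illustration):

```python
# Implied record data rate from the quoted figures: 4 minutes on an 8 Gig card.
card_gb, record_minutes = 8, 4
data_rate_mb_s = card_gb * 1024 / (record_minutes * 60)      # ~34 MB/s

# Hypothetical day: two hours of recorded material, copied to the RAID at a
# sustained 100 MB/s (both numbers are assumptions, not measurements).
recorded_hours = 2
copy_speed_mb_s = 100
footage_gb = recorded_hours * 60 / record_minutes * card_gb  # ~240 GB/day
copy_minutes = footage_gb * 1024 / copy_speed_mb_s / 60

print(f"Implied data rate: {data_rate_mb_s:.0f} MB/s")
print(f"About {footage_gb:.0f} GB per day, {copy_minutes:.0f} minutes of copying alone --")
print("before anyone has watched a single take for errors.")
```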

 

 

5. Overheating: This used to be a big problem, and may still be if you shoot a lot of long takes in a hot environment. It seems to vary from body to body. A white barney is the wrong idea, because the problem is getting rid of heat that the camera generates internally, not heat it absorbs from outside. It needs shade and air flow, not insulation. Cold packs have been used in extreme cases.

 

6. Rolling Shutter: This is generic to all CMOS cameras. The term is also a little misleading, because there’s no pulldown, and therefore no need to actually physically shut anything.

 

Suppose we have a film camera running 24 fps with a 180 degree shutter, and a CMOS camera shooting 24p with a 1/48 second exposure time. Any point in the aperture of the film camera sees image light for 180 degrees, and darkness for 180 degrees. Any one pixel on the CMOS chip is sensitive to light and accumulating charge for the equivalent of the same 180 degrees, and then it's read out and inactive for the rest of the cycle. Those are the similarities.

 

The big difference is that the edge of the shutter blade on the film camera spends maybe 45 to 60 degrees passing over the aperture, while the CMOS camera is designed for a continuous uniform flow of data from the chip, so the readout "edge" takes the whole 360 degrees to sweep over the whole image and start over. Therefore, worst case, the top and bottom of the film frame are 60 degrees apart in time, while the top and bottom of the CMOS image are nearly 360 degrees apart, six times as long. This can introduce some subtle distortions. Things that move horizontally rapidly get a kind of leaning effect, for instance car wheels become a little bit oval. Hand held stuff gets a strange rubbery look, sometimes called “jell-o-vision”. High vibration situations such as vehicle mounts on rough roads or high speeds can get even more rubbery looking.

 

Those distortions are mostly subtle. The big problem happens if we have a very brief bright flash of light, one that goes on and off very quickly rather than fading up and down. This is trouble if you have stuff like nightclub or party strobes, night scenes with police or fire vehicles, and some kinds of muzzle flashes.

 

Do a bunch of flashes at random, and there's some chance that you'll catch the film shutter edge during its rapid pass over the aperture. But mostly they'll either hit when the shutter is open, and go on the film, or hit when the shutter is closed, and go into the viewfinder. (That's something to watch out for when you operate -- If you see the flash in the finder, you *didn't* get it on the film, and vice versa.)

 

On the CMOS camera the "start" and "stop/readout" edges are always in the frame somewhere. So, no matter when the flash happens, you're going to catch the "shutter" edges. To fix that, you'd have to sync both the beginning and end of the flash to the camera, with the flash starting when the readout edge is at the top of the frame, and ending when the turn-on edge is at the bottom of the frame. In this example, the flash would have to have a duration of 90 degrees or 1/96 second.

 

Another difference is that the CMOS "edges" are absolutely horizontal and pixel-boundary sharp. The film shutter being between the lens and the film casts a soft-edged shadow, and is only horizontal at the middle of the frame, sweeping across the rest of the frame at an angle. (Or it's vertical at the middle in cameras with the shutter under rather than alongside the gate).

 

Another approach would be to read the CMOS out faster, and pause between frames, which would be more film-like. But to get the same time difference as a film camera, you'd have to read the chip six times faster, which would be equivalent to being able to shoot 144 frames per second in continuous rolling mode. That's not cheap or easy, which is why it doesn't happen in real world cameras.
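The arithmetic behind those figures, for anyone who wants to check it (just the 24 fps / 180-degree geometry, nothing camera-specific):

```python
fps = 24
frame_period_ms = 1000 / fps                 # 360 degrees = ~41.7 ms

# Film: the shutter edge crosses the gate in roughly 60 degrees.
film_skew_ms = frame_period_ms * 60 / 360    # ~6.9 ms top-to-bottom

# Rolling-shutter CMOS: the readout edge takes the full 360 degrees.
cmos_skew_ms = frame_period_ms               # ~41.7 ms top-to-bottom

print(f"Film top-to-bottom skew:  {film_skew_ms:.1f} ms")
print(f"CMOS top-to-bottom skew: {cmos_skew_ms:.1f} ms "
      f"({cmos_skew_ms / film_skew_ms:.0f}x longer)")

# Matching the film camera's skew means reading out six times faster,
# the equivalent of a 144 fps continuous rolling readout.
print(f"Equivalent readout rate needed: {fps * 360 / 60:.0f} fps")
```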

 

 

7. The “Black Sun” issue: The sensor shuts down individual pixels that are severely overloaded. It was discovered in day exteriors in which the sun appears as a small black circle. This has been improved but not eliminated. It’s a fairly easy post fix, but alas not a freebie.

 

 

8. “Max” Mode: Avoid “Max” mode unless the camera says to use it. It’s for extremely complex and difficult to compress images. It makes everything downstream go much slower.

 

 

 

 

 

9. There have been a few total failures of the camera reported, so it would be wise to have at least one spare body on the truck. The bigger the show, the more spares you should carry.

 

 

10. “Codec Error”: Very seldom, the camera shuts down with this error message. You lose the take, and have to re-boot, but the camera isn’t totally dead.

 

 

11. Don’t Mix Builds: Test end to end with the firmware build you’ll use, and don’t change builds during a shoot. New builds sometimes “break” post production software (17 did this).

 

 

12. Firmware Backup: It’s recommended to always have a copy of the firmware build you’re using on a card so it can be re-installed in the field. (1-09)

 

 

13. Booting: The camera takes a long time to boot up, about 1.5 - 2 minutes. That wouldn’t be so bad if it didn’t have to be re-booted every time you change batteries. One solution is a hot-swap adapter; they’re commercially available for both block and V-lock batteries. One DP who shot with it says that the solution is big batteries. You can boot in the morning and swap batteries at lunch, making it much less of a problem, provided that you live on the dolly all day. Current builds now give you an image about 10 – 15 seconds into bootup, so you can see to start setting up. That makes this issue less important than it was. There’s a report of the camera failing to boot with both the viewfinder and LCD screen plugged in. Unplugging the LCD fixed that.

 

 

14. Battery Indicator: Batteries from vendors other than Red will work, but the Red can’t give you an indication of how much life they have left. You have to go by a meter on the battery.

 

 

15. The connectors on the camera are non-standard fragile mini-BNC and mini-XLR. It needs a breakout box or a bunch of pigtails. Breakout boxes are readily available as aftermarket accessories. Hot Swap could also be built into a breakout box.

 

 

16. You can mix different frame rates and 2/3/4K on the same card now. The time base setting, though, does have to be the same for the whole card (23.976, 24, 25, or 29.97). You have to reformat the card to change its time base.

 

17. The run-stop button is close to the user-assignable buttons, and it’s easy to accidentally press the wrong one.

 

18. Early firmware didn’t do 16:9, only 2:1. On such cameras, it’s necessary to frame for cropping in post. More recently (10-08) with Build 17, they’ve added a mode called “4K HD” which does true 16:9, using most but not all of the chip. It’s really quad-HD, 3840 x 2160 photosites. Reports are that this works very well, and because the scaling involved to HD is exactly 2:1 rather than a long decimal fraction, the render times are about 40% faster, and the images are sharper. This mode only works with 16 gig cards or larger.

 

For us in TV, this is the best mode for most purposes. The only exception would be for higher frame rates, only available with smaller active areas. 3K mode is acceptable, but the 2K mode doesn’t have adequate resolution. (2-09)
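The scaling advantage is easy to see in the numbers (the 4096-wide figure for the full 4K 2:1 frame is an assumption used only to show the contrast):

```python
hd_width = 1920

quad_hd_width = 3840   # "4K HD" mode, 3840 x 2160 photosites
full_4k_width = 4096   # assumed full-width 2:1 frame, for comparison only

print("Quad-HD to HD scale factor:", quad_hd_width / hd_width)   # exactly 2.0
print("Full 4K to HD scale factor:", full_4k_width / hd_width)   # 2.1333..., a long fraction
# The clean integer ratio is why the quad-HD downscale renders faster and
# produces a slightly sharper HD picture.
```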

 

19. You can’t pre-slate an MOS like you can with film. The slate would be stored as its own separate clip.

 

20. The savings are illusory: The camera body may be only $17,500, but it’s like a Barbie doll. The rest of the stuff you need pushes the total price up into the same region as some conventional 2/3” cameras.

 

21. Offline editing: The broadcast quality output from the camera is in the form of compressed raw Bayer images in their proprietary .R3D file format. Red started out working with Apple to build some compatibility into Final Cut Pro. With FCP, you can edit immediately using Red-generated proxies, but at less than full HD resolution. For better resolution, or to use Avid, the raw Red files have to be rendered, which takes a lot of computer time. Full 4K to HD at full quality prior to Build 17 took 25 hours to render one hour of material. One workaround is to work at lower resolution offline, and render only the selects for online. But even working with Build 17, it still takes 15 times run time to render. This isn’t just a matter of building an extra day into the schedule, it’s also a facility scheduling bottleneck. (1-09): Some facilities can now render multiple takes simultaneously, thus reducing the overall time to get dailies out.
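In schedule terms, the ratios above work out like this (straight multiplication; the one hour of material is just an example figure):

```python
material_hours = 1.0          # hypothetical: one hour of selected material

render_ratio_pre_17 = 25      # 25 hours of rendering per hour of material
render_ratio_build_17 = 15    # roughly 15x run time with Build 17

print(f"Pre-Build 17: {material_hours * render_ratio_pre_17:.0f} hours of rendering")
print(f"Build 17:     {material_hours * render_ratio_build_17:.0f} hours of rendering")
# Even at 15x, one render station is tied up for most of a day per hour of
# material, which is why facilities render multiple takes in parallel.
```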

 

Red has recently released a software development kit, which will allow other vendors to work with the same inside information that Apple has had.

 

22. “Sub-Prime” Lenses: The introduction of thousands of new PL mount cameras has generated an unexpected demand for lenses. One major rental house restricts their best glass to customers who rent their cameras, so as not to have bodies left on the shelf for lack of lenses. They’re keeping older lenses in their rental inventory just to serve the Red market.

 

23. Dropped Frame Counter: The camera sometimes doesn’t get all the frames recorded. When this happens, a little red square appears in the finder giving the number of frames missing. It seems not to happen often, but when it does, you need to shoot another take. (12-08)

 

24. Green Screen/Blue Screen: Green screen works well, except with tungsten balanced light. Ideally, work with HMI’s at 5600K. If you have to go tungsten, hang at least an 80D filter. Forget about blue screen. The Bayer sensor has only half as much blue resolution as green, and the blue channel is the noisiest on any CMOS or CCD camera.

 

25. There are reports from a major video facility of blocky artifacts in the blacks, traced to problems in the Red post software. Going to .DPX files doesn’t have that problem. (2-09)

 

26. For accurate time code, a Lockit or equivalent outboard box is recommended.

 

27. Color decisions should be set in the metadata for automatic transfer. .RSX files created on set will override the camera’s color metadata.

 

28. Color gamut of the camera is limited, especially for saturated blues and violets. The green primary is also quite yellowish. So, if you need to distinguish between fairly saturated colors, test first. If your colors are subtle, there’s nothing to worry about here. (2-09)

 

29. Color space on the HD-SDI monitor out is more limited than in RedCine or the RAW files. The color space you monitor in also makes a difference. Recommended gamma is Rec709 and color space is RedSpace. It's best to monitor on set in Rec709. If you monitor in RedSpace, you probably will underexpose a little too much and get a noisier image in post.

 

30. Set the "RAW" view mode to one of the user buttons so you can quickly see if something is clipping in the actual raw, or just in the colorspace you are monitoring in.

 

31. The meaning of “K”: When Red refers to “4K”, they’re counting the photosites across their Bayer masked chip. That’s not the way the rest of the industry uses the term. “4K” as used elsewhere refers to pixels. Each pixel is a complete three color RGB data set for a single location in the grid. Red counts one color per location, not three. So, it’s apples and oranges.
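One way to make the photosite-versus-pixel distinction concrete, using the quad-HD figures from item 18 (a counting exercise, not a claim about delivered resolution):

```python
# Quad-HD mode records 3840 x 2160 photosites, each carrying ONE color sample
# under the Bayer mask.
photosites = 3840 * 2160              # ~8.3 million single-color samples

# A "pixel" as the rest of the industry uses the word is a full RGB triple.
samples_per_rgb_pixel = 3
equivalent_rgb_pixels = photosites / samples_per_rgb_pixel

print(f"Photosites recorded:             {photosites:,}")
print(f"Equivalent full-RGB pixel count: {equivalent_rgb_pixels:,.0f}")
# Demosaicking interpolates the two missing colors at every location, so the
# delivered resolution falls somewhere between these two ways of counting.
```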

 

32. The RedAlert software blows away the .RSX files from the camera and substitutes new default files. This can throw you into the wrong color space. (4-09)

 

 

 

From our point of view, the important things are a few mindset issues:

 

First, Red is not a video camera. Neither is it a film camera. It’s a raw data camera. So, the requirement to squeeze the large dynamic range of the sensor into the limited dynamic range of a digital video tape format no longer exists on the set. That means that the DIT is no longer making irreversible color and dynamic range decisions. What we need is a sort of second second AC. Transferring data from CF cards to a RAID array and SATA shuttle drives (or perhaps LTO tapes) isn’t a high end DIT function. It’s really the traditional job of the 2nd AC, only using cards and drives or tapes instead of magazines and cans. It requires extreme care and organization, but not a whole bunch of tech knowledge. But the requirement to look at every take to be sure it’s OK makes it vastly too time consuming for the existing second.

 

The video that comes out of the Red should be treated just like a video tap on a film camera - a very very nice video tap. It is possible to render single frames on location at full resolution to check focus and the look of the image. The DP can create lookup tables (LUT’s) on set to show post something closer to the desired final color timing. Nothing is baked into the raw .R3D output, but it is baked into the viewing proxies.
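A minimal sketch of the "nothing baked in" idea: a viewing LUT touches only the monitor/proxy path while the raw data passes through untouched (a generic one-channel gamma LUT, purely for illustration; real on-set LUTs are built in proper color tools):

```python
import numpy as np

# Pretend raw sensor values, normalized 0..1 (linear light).
raw = np.linspace(0.0, 1.0, 6)

# A simple 1D viewing LUT: a gamma 2.2 curve sampled at 1024 points.
lut = np.linspace(0.0, 1.0, 1024) ** (1 / 2.2)

# The monitoring/proxy path looks the LUT up; the raw array is never modified.
proxy = lut[(raw * 1023).astype(int)]

print("raw  :", np.round(raw, 3))    # untouched -- this is what goes into the .R3D
print("proxy:", np.round(proxy, 3))  # this is what the viewing proxy / monitor shows
```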

 

From the point of view of the DP, Red shoots more like reversal film than any previous technology. It’s like film, only without the headroom of negative.

 

 

 

-- J.S.


  • Premium Member

That list needs a bit of updating now that there is the new MX sensor -- sensitivity and dynamic range have increased, noise reduced, the "black sun" sensor protection artifact is no longer an issue (so I've heard), and the read-out time of the CMOS sensor has been shortened so that rolling shutter artifacts have been reduced.

 

Don't know if the IR sensitivity is still the same.

 

Also, there is the option of solid state drives -- RED RAMS -- if you don't want to use the HDD RED DRIVES.

 

Of course, workflow issues still have to be worked-out but there are more post houses that are Red-experienced now.

 

Without having done my own testing, I would say that the new MX-sensor in the Red now puts the camera on par with the F35 and Genesis in terms of dynamic range, and probably is even higher in sensitivity and/or lower in noise. In other words, I don't see any image compromising involved in making a choice between these so the only issue for TV production (your area of expertise) is the data-centric capture, post workflow, and archiving issues and costs.

 

Now that it is pilot season, it's been interesting in job interviews for me to hear which networks or producers are interested in working with the Red and which are dead-set against going that route due to the lack of an HDCAM-SR workflow starting with image capture.


OK, here it is. It hasn't been revised since last June, as I haven't had any inquiries about Red since then:

Revised 6-25-09

Informal Notes on the Red camera:

-- J.S.

 

Geez John! :o I'll keep my Film Camera with its On/Off and Film Speed switches! I have no fear factor with Film, but what you just described... is horrifying!


  • Premium Member

Thanks, David M. Yes, the Red list needs an update. I'll collect your observations along with the rest that show up here for a while, and then go through it again. It's been all quiet on the Red front here for about six months. Next week is HPA, and then pilot season, so it may be a while.

 

It's not meant to be horrifying, David R. It may have that net effect because I've collected every credible report of anything that has a reasonable chance of being an issue. It's hard to know for sure what things have really been fixed, versus partway fixed, or somebody said it was fixed but it wasn't. So, if I had some divine lantern of truth to go by, probably most of it would go away. Absent that, I'm proceeding in "better safe than sorry" mode. If I applied equally diligent proctology to a film camera, say the Eclair NPR, I could probably come up with a similarly lengthy list. It's just that with film cameras and raw stocks, we've had enough time that everybody already knows the gotchas.

 

 

 

 

 

-- J.S.


Thanks, David M. Yes, the Red list needs an update. I'll collect your observations along with the rest that show up here for a while, and then go through it again. It's been all quiet on the Red front here for about six months. Next week is HPA, and then pilot season, so it may be a while.

 

It's not meant to be horrifying, David R. It may have that net effect because I've collected every credible report of anything that has a reasonable chance of being an issue. It's hard to know for sure what things have really been fixed, versus partway fixed, or somebody said it was fixed but it wasn't. So, if I had some divine lantern of truth to go by, probably most of it would go away. Absent that, I'm proceeding in "better safe than sorry" mode. If I applied equally diligent proctology to a film camera, say the Eclair NPR, I could probably come up with a similarly lengthy list. It's just that with film cameras and raw stocks, we've had enough time that everybody already knows the gotchas.

 

 

 

 

 

-- J.S.

 

A person who knows how to use a meter can shoot Film, transfer to Apple ProRes HQ, bring it into Final Cut, edit, export, and air it on National TV... and they do not have to understand a fraction of all that you so diligently explained. If I had to be consumed with all the technical details of a RED I'd quit the business.

 

Give me some good Glass and a couple Mags that don't scratch and I am good to go!

 

Film :wub:


If I had to be consumed with all the technical details of a RED I'd quit the business.

 

Give me some good Glass and a couple Mags that don't scratch and I am good to go!

 

Film :wub:

 

I guess most, if not all, of us here, working cameramen and motion picture professionals, would much rather continue using film than HD / RAW for most narrative / beauty projects. I know I do.

 

But these days, I hardly get anyone bringing me projects to be shot on film. I also know I am not the only one with that "problem." So one's got to keep up with the "other" moving picture acquisition technologies, as big a pain in the behind as they may be, for that is what will be used more often than not these days, let alone in the future. This fact pains me as much as the next guy or gal. But so it will be, so we might as well start getting used to those technologies now.

 

David R is in a unique and enviable situation, as a director / cinematographer / producer with clients who can afford film production. Alas, not all of us are in that privileged situation.


  • Premium Member
A person who knows how to use a meter can shoot Film, transfer to Apple ProRes HQ, bring it into Final Cut, edit, export, and air it on National TV... and they do not have to understand a fraction of all that you so diligently explained. If I had to be consumed with all the technical details of a RED I'd quit the business.

 

Give me some good Glass and a couple Mags that don't scratch and I am good to go!

 

Film :wub:

And a transfer facility and a colorist that you trust... That's not a trivial matter, as I've come to discover recently. In that respect, the best thing about the Red workflow is that you can see your footage at full-res and color correct right away on a laptop and send the intended look to post.

 

Great list, John. Looking forward to the MX update.


  • Premium Member
the "black sun" sensor protection artifact is no longer an issue (so I've heard),

 

Hi David,

 

The first clip from the MX, the "Kate Beckinsale sexiest woman alive" shoot, had the issue; one of the 'Stephen bashing' members from headquarters claimed it was a tweaking issue.

 

http://www.kineda.com/kate-beckinsale-is-e...st-woman-alive/

 

Look about 1:25

 

Stephen


Hi David,

 

The first clip from the MX, the "Kate Beckinsale sexiest woman alive" shoot, had the issue; one of the 'Stephen bashing' members from headquarters claimed it was a tweaking issue.

 

http://www.kineda.com/kate-beckinsale-is-e...st-woman-alive/

 

Look about 1:25

 

Stephen

 

Black sun at 1:25 and moiré patterns throughout; I wonder if the latter were introduced when the compressed web version was made, or were visible in the original . . .

Edited by Saul Rodgar

  • Premium Member

We're off to a good start.

I'm still hard at work on my contribution.

I'm starting off with a neutral-toned Wikipedia-like description of what the RED One actually is, except that Fanboys and other interested parties will not get the opportunity to "tidy it up" as so often happens on Wikipedia. What I presume are the design philosophies will be discussed, with all their advantages and, most importantly, their disadvantages.

I hope it will basically be what I and a lot of other people wish someone else had written a few years ago :lol:


  • Premium Member
The first clip from MX " Kate Beckinsale sexiest woman alive shoot " had the issue, one of the 'Stephen bashing' members from headquarters claimed it was a tweeking issue.

 

Stephen

"Fix it in Post" is not dead, it's just pining for the Fjiords.

 

I remember seeing a post that I think was on here, where somebody was complaining about some pretty bad chromatic aberration on one of the new RED zooms. They posted a picture which showed it quite clearly, which Jan von Krogh insisted quite forcefully was just a compression artifact!


David R is in a unique and enviable situation, as a director / cinematographer / producer with clients who can afford film production. Alas, not all of us are in that privileged situation.

 

... and I count my Blessings everyday! :rolleyes: (looking up... not rolling eyes)


Anyone on here a fan of the '90s cartoon "Rocko's Modern Life?"

 

 

I remember the episode where Rocko is getting deported (I assume because he is an Australian without a valid U.S. green card) and they throw him a "deportation party" complete with a sign saying "Congratulations, You're Being Deported!"

 

 

Something similar should apply here ;-)


  • Premium Member
If I had to be consumed with all the technical details of a RED I'd quit the business.

 

Try a thought experiment: Suppose we had Red cameras and digital workflow 100 years ago, and photochemical film was a new invention. My guess is you'd have the same reaction. The problem is being thrown way back down a learning curve, not specifically which learning curve it is. Or it's like playing the violin for 40 years and then being told, sorry, we need you on saxophone now. ;-)

 

 

 

 

-- J.S.


I agree.. and of course I am speaking in generalities. I'm just thankful to shoot Film and yes there was (and still is) a learning curve to it. But I gotta be honest, if I had to toss my meters, it would really take (most) of the fun out of it... and I'm not so sure I'd stick around.


I agree.. and of course I am speaking in generalities. I'm just thankful to shoot Film and yes there was (and still is) a learning curve to it. But I gotta be honest, if I had to toss my meters, it would really take (most) of the fun out of it... and I'm not so sure I'd stick around.

 

 

Who says you don't have to use meters when shooting electronically? The fundamentals of lighting, ASA, exposure, shutter, focal length, etc. are still the same. The image is just being "saved" in a different way. If anything, shooting electronically is HARDER in that film gives so much more latitude while most electronic acquisition formats require more attention to contrast issues. Almost anyone can get a usable exposure on film stock in that it can be adjusted, for the most part, later in the printing. But F up an electronic image and "saving it" later on is generally more difficult or impossible.


Who says you don't have to use meters when shooting electronically?

 

No one... but why bother when you can just look at an HD Monitor and see the final image? (I speak from experience. I have used varicams etc.... boring :rolleyes: )

 

...prelighting without a camera/ monitor (maybe)...


This topic is now closed to further replies.
