D21 workflow


Joseph Zizzo


  • Premium Member

What's the best way to go for short jobs, like a commercial or music vid, in terms of a workflow when shooting with the D21? Shooting on tape seems a bit stone-age to me, since I've been working with the Red quite a bit over the last year and a half. But are there advantages to tape? There must be, since a number of TV series are done that way, so I've discovered...

 

Thanks in advance.


I know of one TV series for which the RED was being considered; they went for an F35 instead. From what I could make out, the rendering was going to be a problem on an already tight post-production schedule.

 

One big advantage of tape is that you just hand over the camera rushes in a labelled box.


Yeah, rendering is pretty task-intensive. In my experience, going tapeless adds at least one more crew member (a data wrangler), and it can become more expensive and complicated to deal with in post, particularly with formats that need to be rendered or unwrapped before they can be edited. Ultimately, it is up to all parties involved in the production and post-production to find the format that best suits their needs on a case-by-case basis.


  • Premium Member

ARRIRAW has only recently been enabled, and though people like Geoff Boyle report that the quality is higher than the internal HD conversion, it's still a bit of a rarity in post houses. First of all, you are talking about 2.8K RAW uncompressed, which is a lot of data, so you need to use a data recorder such as a Codex or S-Two. Then you need to work out a RAW workflow for dailies conversions and post color-correction.
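
To put rough numbers on "a lot of data" (back-of-envelope arithmetic; the frame size and bit depth below are assumptions for illustration, not figures quoted from Arri):

```python
# Rough data rate for uncompressed Bayer raw.
# Assumed figures: 2880 x 2160 photosites, 12 bits per photosite, 24 fps.
width, height = 2880, 2160
bit_depth = 12                # one sample per photosite, pre-debayer
fps = 24

bytes_per_frame = width * height * bit_depth / 8
mb_per_second = bytes_per_frame * fps / 1e6
print(f"{bytes_per_frame / 1e6:.1f} MB/frame, {mb_per_second:.0f} MB/s")
# -> about 9.3 MB per frame and roughly 224 MB/s,
#    on the order of 0.8 TB for every hour of footage
```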

 

For most TV work, people have been opting to record HD to an SRW1 deck since almost all post houses can handle that footage.


  • Premium Member

I should point out that, at least as of a few months ago, Arri's own raw converter software was, to put it mildly, somewhat naive in the way it worked at a low level, and in the opinion of people other than myself it could have worked a lot faster with some smarter coding.

 

P


  • Premium Member

Yeah, it's not just Arri. The problem is, will anything be around long enough to justify the time and expense of really careful benchmarked coding? Does anybody even know how to write assembly code for the latest and fastest processors? Software is, and always was, much harder than hardware. ;-)

-- J.S.


  • Premium Member

I don't think it has to be hand-optimised asm; it just has to avoid touching every DPX file three times for the header, knowing it's likely to be running on a video RAID that's been carefully set up for streaming forward reads.
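
As a minimal sketch of the "touch the file once" idea (a hypothetical helper, not Arri's code; the magic number and image-data offset are standard DPX header fields):

```python
import struct

def read_dpx_header(path):
    """Grab what we need from a DPX header in one sequential read,
    instead of seeking back into the file repeatedly."""
    with open(path, "rb") as f:
        header = f.read(8)            # one read: magic number + image offset
    magic = header[:4]
    if magic == b"SDPX":              # big-endian file
        endian = ">"
    elif magic == b"XPDS":            # little-endian file
        endian = "<"
    else:
        raise ValueError(f"{path} is not a DPX file")
    (image_offset,) = struct.unpack(endian + "I", header[4:8])
    return endian, image_offset
```

One sequential read per file keeps a RAID tuned for streaming forward reads doing exactly that, instead of thrashing.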

 

Grumph!

 

P

 

PS - better yet, it should be implemented for GPGPU, then you can run it on one of those $400 plug-in massively parallel supercomputers they call graphics cards these days. Which is of course not in any way what Red Rocket is. It's not a gamer's graphics card with a "Red" logo on it, or anything. Perish the thought.


  • Premium Member

I agree, Phil. Imagine how much real-time flow could be handled if the work were distributed fully over the resources of a dual quad-core mobo with four 16X SLI slots running the latest smokin' cards. It would probably take two 1,000 W supplies to drive it (not to mention melting the guts out of the average 15 A wall socket). But the coding for that... sheesh!


  • Premium Member

OK, so I understand that the workflow has not really been worked out yet for ARRIRAW, and that handing over tapes at the end of a job is easier...

 

But I remember this being the case when the Red was new: no one wanted to use it for a commercial because the raw, file-based workflow was not understood, and everyone saw it as a problem. Now the Red is, in my world at least, the go-to digital format when film has been abandoned... and everyone understands the workflow, data is backed up on set as we shoot, and it's no longer a problem...

 

So, if ARRIRAW is a superior format compared to shooting S-Log to HDCAM (and certainly compared to the Red) in terms of resolution, dynamic range and color rendering - I mean, Arri went to the trouble of developing it for a reason, I'm sure - do any of you think it is just a matter of time before people adapt to the workflow issues, as happened with the Red, or are there larger problems than that?

 

In other words, is it worth pursuing a raw workflow for short jobs, or am I just going to end up looking like some obsessed perfectionist on a mission!?

 

Thanks.


  • Premium Member

If your production can afford to work with ARRIRAW, then go ahead. It's really just a matter of cost and ease of post, where you can go for post, etc. I don't think it makes much difference on the set, going to an SRW1 versus a Codex or S-Two, other than the back-up issues and cost.

 

I just think you'll find that a lot of music videos and whatnot have limited post budgets and perhaps already have certain post houses that they work with. I suppose it's possible that all the Red-centric or savvy facilities that music videos go to may be OK with ARRIRAW footage as well -- you're just going to have to investigate that.


  • Premium Member

The issue is not so much software as it is just finding somewhere that's used to doing file-per-frame workflows. There's quite a big difference between posting some heavily compressed Red stuff on a Mac and doing a proper DPX workflow, as you'd assume might happen for something like a feature DI. Find somewhere that's used to handling the latter, and the overhead presented by handling raw D21 footage is data wrangling and machine time.

 

Potentially quite a lot of machine time.

 

P


  • Premium Member

Thanks, guys...

 

Phil, if you don't mind elaborating a bit, what is the difference between posting Red footage - which I, perhaps mistakenly, thought to be uncompressed data - and posting ARRIRAW data? I understand you're saying the latter is slower... but Red footage used to take a day to transcode as well... now it can be done on set as we shoot. Is Red data just that much more compressed?

 

Thanks.


OK, so ARRIRAW just represents a lot more data to deal with, then... which sheds some light on why so many series record to tape.

 

Speaking of which, I'd like to point out a new document on ARRIdigital that I hope you find interesting:

http://www.arridigital.com/downloads -> D-21 Workflow Guidelines

 

Cheers

Oliver


  • Premium Member

It is a very common misconception that Red data is uncompressed. It's compressed at between 9:1 and 12:1, and if I mention that 3:1 MJPEG used to be the absolute maximum amount of compression considered tolerable for broadcast television, you'll see why that raises some eyebrows. The compression technique used by Red is probably better than JPEG, but not three or four times better. If Red had figured out how to store an uncompressed HD image on a flash card, they would have done something really special. They haven't. This is why people like me question Red calling their data "raw". It's a term which in common use has been applied to DSLR stills that are stored on a flash card both uncompressed and unprocessed. Red's data is unprocessed, but it is certainly not uncompressed. Arri's raw mode, on the other hand, is both uncompressed and unprocessed, and is fully entitled to the term.
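
To make those ratios concrete (illustrative arithmetic only; the sensor dimensions assumed here are not Red's published figures):

```python
# What 3:1, 9:1 and 12:1 do to an uncompressed Bayer data rate.
# Assumed sensor figures, for illustration only.
width, height, bits, fps = 4096, 2304, 12, 24
uncompressed = width * height * bits / 8 * fps / 1e6    # MB/s

for ratio in (3, 9, 12):
    print(f"{ratio}:1 -> {uncompressed / ratio:.0f} MB/s")
# uncompressed: ~340 MB/s; 3:1 -> ~113; 9:1 -> ~38; 12:1 -> ~28
```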

 

With regard to the whole Bayer thing, as briefly as I can:

 

Image sensors are intrinsically black-and-white devices; they see only brightness, not colour. Making them see colour requires putting a colour filter on the front. Both the D21 and the Red (and the F35/Genesis, which use the same sensor, effectively all DSLRs, and a lot of modern consumer HD devices such as the Canon HV-20 and cellphone cameras like the iPhone's) are single-chip cameras, as opposed to the three-chip blocks we get in things like the F23 and most television-oriented video cameras.

 

Three-chip cameras use one sensor each for red, green and blue, splitting the light up so that a proportion of it hits each sensor in alignment:

 

https://eww.pavc.panasonic.co.jp/pro-av/sal...20/img_3ccd.jpg

 

Single-chip cameras use patterns of colour filters printed onto the front of the imaging chip. The most common of these patterns was developed by an imaging scientist called Bayer, and the technique bears his name (notably, the Genesis/F35 sensor is not Bayer-patterned).

 

http://www.kodak.com/US/images/en/corp/100...er_patterns.gif

 

To recover a full colour image from a three-chip device, you simply read the three chips and treat the information from each as red, green or blue according to the filter you put in front of the sensor. Clearly you can't do that with a Bayer-patterned chip; if you just lined up all the pixel values from the sensor next to one another, you'd get a checkerboard-like pattern with alternate pixels representing the different colour channels, which on its own would have very little meaning:

 

http://www.guillermoluijk.com/article/virtualraw/bayer.gif

 

Using a single image sensor in a camera is beneficial in some ways - mainly those in which it makes the camera work more like a film camera. Lenses for three-chip cameras need special design considerations to land the image accurately on all three sensors at once. This is rarely done with 100% precision, and that's why out-of-focus artifacts on three-chip cameras often shade magenta to green at top and bottom:

 

http://diglloyd.com/diglloyd/blog-images/2...agentaGreen.jpg

 

However, recovering a colour image from Bayer-patterned data is not trivial, and it is a process with inherent compromises. The biggest problem is where the image contains sharp edges with pronounced colour differences. The edge may fall between the widely spaced photosites of a given channel on a Bayer-patterned sensor, leading to uncertainty over where it really is (aliasing). Such a discontinuity in one RGB channel is usually associated with discontinuities in others - say you're looking at a yellow object: it is active in both the green and red channels. Because the green and red photosites are not in the same place (as they are on a three-chip device), you may get a different idea of where that pronounced colour edge is in the two channels. This can lead to strange chromatic aberration:

 

http://colorcorrection.info/wp-content/upl...-psvscamera.jpg

 

Here we see the partial solution to the problem: the in-camera de-mosaic uses different mathematics than Photoshop does, and achieves a result with less chromatic aberration. However, it probably also has less sharpness, and that's an engineering compromise that's unavoidable with Bayer-pattern sensors. This is also why people who shoot test charts on a Red - charts comprised of black markings on white - are not really answering any questions.

 

The mathematics involved in getting the best possible compromise out of this situation is very complicated and takes up a lot of computer time; it involves very careful interpolation of the RGB values that weren't sampled by the sensor. People have various terms for this - Dalsa were in love with the word "algorithms" - but it is unavoidably interpolation; it is making up data. From this it should be fairly obvious that if we want, say, an image 1920 pixels wide with truly valid and unambiguous colour information, we need a Bayer-pattern sensor at least twice that wide, so we can scale down the results and minimise the problems. This is the reason people question Red's "4K" resolution claim; it is widely recognised that the 2K windowed mode on a Red camera is not really good enough for broadcast HD production, and this is largely why.
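
To show what that interpolation actually is, here is a minimal sketch of the most naive approach, a bilinear demosaic of an RGGB mosaic (hypothetical illustration code; real camera and converter de-mosaics are far more sophisticated and edge-aware):

```python
import numpy as np
from scipy.ndimage import convolve

def debayer_bilinear(mosaic):
    """Naive bilinear demosaic of an RGGB Bayer mosaic (2D float array).
    Two thirds of every output pixel is interpolated rather than
    measured: this is the 'making up data' step described above."""
    h, w = mosaic.shape
    y, x = np.mgrid[0:h, 0:w]
    r_mask = (y % 2 == 0) & (x % 2 == 0)    # red photosites
    b_mask = (y % 2 == 1) & (x % 2 == 1)    # blue photosites
    g_mask = ~(r_mask | b_mask)             # green photosites

    kernel = np.ones((3, 3))
    def fill(mask):
        # Average the measured samples in each 3x3 neighbourhood,
        # then keep the actual measurement wherever one exists.
        total = convolve(mosaic * mask, kernel, mode="mirror")
        count = convolve(mask.astype(float), kernel, mode="mirror")
        return np.where(mask, mosaic, total / count)

    return np.dstack([fill(r_mask), fill(g_mask), fill(b_mask)])
```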

 

It is probably not particularly more difficult to debayer Red's footage than Arri's, although decompressing the data is an additional processing load in and of itself, and of course Red are using more pixels than Arri (though the accuracy of the information in those pixels is significantly compromised by the amount of compression being applied).

 

The fact that a D21 in raw gives you a larger amount of accurate data is not really debatable.

 

P


  • Premium Member

Compression seems to be more or less inevitable today, and more and more acceptable.

 

I remember back in the days of D1 & D2 tape, people were saying that the DCT compression of beta-SP was unacceptable for post work, yet within a few years, D2 was gone and D1 was disappearing, but beta-SP was becoming accepted as a standard-def master tape format.

 

Next, compressed HD was considered a no-no, and yet today we have more and more HD cameras using various compression schemes, mostly variants of MPEG-2 or MPEG-4.

 

Of course, processing and storage have improved to the point where higher data rates are workable, but I think you're not going to see the end of compression even for origination formats, so the main question is how good the compression is versus how limiting it is. I would say that Red's primary success has been due to the quality of REDCODE compression, whereas Dalsa never really solved the practical problem of dealing with uncompressed 4K RAW, and even ARRIRAW (nearly 3K RAW) is a bit of a hurdle to record in the amounts that a typical production shoots. If Red had taken the "high road" of uncompressed RAW, it would still be a science experiment on the same level as Dalsa in terms of volume of use.

 

The typical independent filmmaker is not going to be able to deal with uncompressed 4K RAW for his little indie movie, and neither is the typical TV show. That sort of leaves uncompressed RAW for bigger shows and for efx shoots.


  • Premium Member

I think you mean Digital Betacam. Beta-SP was its analog ancestor.

 

D2 was a dumb idea from the get-go. Basically, they took the same kind of composite analog signal we had on 1" C tape and digitized it just like you'd digitize an audio signal, only at the higher frequencies of video. The only nod to video in it was using four times the chroma subcarrier frequency for sampling. Some of the Sony guys still apologise at the mention of it.
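
For the record, here is what "four times the chroma subcarrier" works out to (the subcarrier frequencies are the standard ones; the snippet just makes the numbers concrete):

```python
# 4x-subcarrier ("4fsc") sampling of composite digital video:
# the whole composite waveform, chroma and all, sampled as one signal.
NTSC_FSC = 3.579545e6       # Hz, NTSC colour subcarrier
PAL_FSC = 4.43361875e6      # Hz, PAL colour subcarrier

for name, fsc in (("NTSC", NTSC_FSC), ("PAL", PAL_FSC)):
    print(f"{name}: 4 x {fsc / 1e6:.6f} MHz = {4 * fsc / 1e6:.2f} MHz")
# NTSC -> 14.32 MHz, PAL -> 17.73 MHz
```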

 

D1 was strictly standard definition, and used huge, expensive cassettes. They tried to develop an HD version called D6, using a lot more heads and a much higher drum speed. I saw it at one of those private, secret hotel-room demos. Spinning up, the damn thing made a noise like a table saw, and everybody took a step back from the machine... ;-)

 

The inevitability of compression results from the limitations of the pipes we pump our bits thru. Pixels high times pixels wide times bit depth times frame rate is the data rate of our uncompressed moving picture. By using some compression, we can increase any or all of those numbers and still get thru the same pipe. The best picture will result from some moderate amount of compression: too much, and you have obvious compression artifacts; not enough, and you coulda had more pixels or bits or frames.
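
That rule of thumb in a couple of lines of code (the example numbers are illustrative, not any particular format's spec):

```python
def data_rate_mbps(width, height, bit_depth, fps, compression=1.0):
    """Pixels wide x pixels high x bit depth x frame rate, divided by
    the compression ratio: the pipe-sizing rule of thumb above."""
    return width * height * bit_depth * fps / compression / 1e6   # megabits/s

# Two ways to fill roughly the same pipe:
print(data_rate_mbps(1920, 1080, 10, 24))        # uncompressed HD: ~498 Mb/s
print(data_rate_mbps(2880, 2160, 12, 24, 3.0))   # 3:1 on a bigger raster: ~597 Mb/s
```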

-- J.S.

