Thoughts on Alexa Mini With S16-HD Crop + 16mm Lenses for S16mm look?


Alejandro Arteaga

Recommended Posts

Hello! I will be shooting a small commercial on January 14th in the Canary Islands, almost everything with available light. The production company just turned down the option of shooting it on film because the deadline and the shoot are so close together that we wouldn't be able to process the film in time.

The director and I prefer the look of 16mm for this particular project, so I was wondering whether any of you have thoughts on, or have ever used, this combo: Alexa Mini in S16 crop mode with 16mm lenses, plus a little help in the color correction, to achieve the 16mm look. We are also going to use the Inspire 2 drone for some shots, and I think the Super 16mm crop on the Alexa could help match the two cameras' differences in depth of field and angle of view.

Does this make sense to you? Do you have any other ideas? Thank you very much.


I tried the Super 16 Zeiss Super Speed MK II primes (12, 16, 25, and 50mm) with the RED Epic Helium 8K, and it looked good in 4K. The 9.5mm did not work with the mount I had, unfortunately. The Canon 8-64mm is also nice but vignettes in 4K.

You could change the RED color space to Arri Alexa: https://truecolor.us/downloads/red-to-alexa-lut-package/

After that, you could use a Color Space Transform to Cineon and apply a print film emulation, either the one in DaVinci Resolve or the one from Light Illusion.

Adding blur to the footage underneath the grain layer also helps. You could add halation as well.
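As a rough sketch of the halation part (pure NumPy; `add_halation` is a hypothetical helper, and the threshold, radius, and tint values are made-up starting points, not measured from any stock):

```python
import numpy as np

def _box_blur(channel, radius):
    """Cheap separable box blur standing in for a proper Gaussian."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    channel = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, channel)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, channel)

def add_halation(img, threshold=0.8, radius=4, tint=(1.0, 0.35, 0.15), strength=0.5):
    """Isolate highlights, diffuse them, tint warm, and screen back over the image."""
    luma = img @ np.array([0.2126, 0.7152, 0.0722])  # Rec.709 luminance
    mask = np.clip((luma - threshold) / (1.0 - threshold), 0.0, 1.0)
    halo = _box_blur(mask, radius)[..., None] * np.array(tint) * strength
    # Screen blend: the halo only ever brightens the image
    return 1.0 - (1.0 - img) * (1.0 - np.clip(halo, 0.0, 1.0))
```

The same `_box_blur`, applied to the base image before the grain is composited on top, covers the "blur under the grain layer" idea.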

Or use existing plugins:

https://videovillage.co/filmbox/

https://www.dehancer.com/store/davinci_resolve


I have used S16 mode and 16mm glass on the Mini before, but as Gilray said, it's no more inherently "filmic" than the same lens on the camera at 2.8K; it's just using the image area better. You could try pushing the ISO, but S16 mode isn't really an alternative to S16, unfortunately.


Since you said shooting actual 16 isn't an option, shooting the Alexa cropped to S16 with S16 glass is probably the closest you'll get optically. On the last show I was post supervisor on, we did a full Arri-digital-to-S16 treatment and it came out surprisingly well, though that was with a colorist who really knew her stuff (alas, it's not out yet, so I can't share images). I've done this before with REDs as well, but I think the color results on the Arri sensor were better.

I'd also say I've been a big fan of the Dehancer plugin for Resolve; the variable grain size per exposure is great, and its halation tool generally sells the look quite well.


  • Sustaining Member

Grading can do a lot for a look if you are shooting raw and can "push the grade" without everything falling apart.

For my own projects, I wanted to shoot 16mm until my camera situation took a dump. My next favorite is the BMPCC 4K (since I cannot afford an Alexa Mini to play around with), and I have been trying to get the S16 look from footage I have seen online (until I get my camera to test for myself).

The hardest part about getting the 16mm look right is the softness. It has a certain softness that cannot be emulated (or at least I haven't been able to emulate it) with any sort of blur; normal, Gaussian, lens, and radial blurs all fail to get the look. I'm not sure what it is (maybe the unique depth of field?).

For my project, I am shooting BW since I want a retro horror/thriller feel.

Below is a test with the raw sample (from the BMD website) and then my B&W grade to emulate film (no plugins; just DaVinci Resolve and toil). I imagine an Alexa Mini can capture raw footage that is much better than a P4K's, so you have that advantage.

bmc4k_reg_1.1.1.png

bmc4k_bw_1.1.1.png


  • Sustaining Member
1 hour ago, Robin Phillips said:

I'd also say I've been a big fan of the Dehancer plugin for Resolve; the variable grain size per exposure is great, and its halation tool generally sells the look quite well.

To counterbalance your point: I played around extensively with Dehancer during my trial, and I decided not to buy it. I do not believe the tools are worth the money; there is nothing there that cannot be done with plain Resolve and a few built-in plugins (glow, film grain, etc.).


On 12/30/2021 at 3:16 PM, Matthew W. Phillips said:

To counterbalance your point: I played around extensively with Dehancer during my trial, and I decided not to buy it. I do not believe the tools are worth the money; there is nothing there that cannot be done with plain Resolve and a few built-in plugins (glow, film grain, etc.).

That's fair. Personally, I do prefer its grain generation and have found its halation tool fairly intuitive, hence suggesting it. I think a spot-on colorist can probably do the work with either set of tools and yield a solid result, but you do need a colorist who really knows film to get the best results.


  • Sustaining Member
Posted (edited)
2 hours ago, Robin Phillips said:

That's fair. Personally, I do prefer its grain generation and have found its halation tool fairly intuitive, hence suggesting it. I think a spot-on colorist can probably do the work with either set of tools and yield a solid result, but you do need a colorist who really knows film to get the best results.

The grain is... well, I believe it was Steve Yedlin who pointed out in one of his "rants" that grain can be algorithmic or it can be filmed, and it is pretty much just as valid either way, since the grain structure is random and not tied to any particular image the way some people tend to think. I think Dehancer is using the argument that grain is somehow "baked in" or specific to the image. As far as I can tell, there is no justification for this in the actual science.

As for halation, I don't get why it is so desirable. Most people associate the "film look" or "movie look" with Kodak, since most motion pictures have been shot on their stock, yet Kodak has had an anti-halation layer for decades. Unless you want to emulate a niche stock, I don't see the appeal of it.

As for bloom, that is pretty much just a wrapper for Resolve's "Glow" tool. And their film stock selection is just a guesstimate based on the tints of different stocks. You can get your own tint by shooting a roll of still film of the stock you want to emulate on a grey card and then pixel-peeping the scan to check the RGB values. Then you can use Resolve's Offset tool in a node to get that tint by adjusting those values. That is how I do B&W emulation, since B&W film (Kodak 7222) is not pure R = G = B but has a slight red shift and a slightly higher green shift.
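That grey-card arithmetic is simple enough to sketch. A hypothetical NumPy version (the card RGB in the usage note below is an invented example, not a real 7222 measurement):

```python
import numpy as np

REC709 = np.array([0.2126, 0.7152, 0.0722])

def tint_offsets(card_rgb, target_luma=0.5):
    """Turn a measured grey-card RGB (0-1) from a film scan into per-channel
    offsets that reproduce the stock's cast on a neutral mid-grey."""
    card = np.asarray(card_rgb, dtype=float)
    luma = card @ REC709
    # Keep overall brightness; only the channel balance becomes the offset
    return (card / luma) * target_luma - target_luma

def apply_offset(img, offsets):
    """Resolve-style Offset: a constant added to every pixel, per channel."""
    return np.clip(img + offsets, 0.0, 1.0)
```

Feeding a slightly green-biased card reading such as `(0.48, 0.52, 0.46)` through this shifts a neutral image the same way the Offset wheel would: green lands highest, then red, then blue.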

Edited by Matthew W. Phillips

  • Sustaining Member

With film, image detail IS the grain; it's not like noise in digital. The image itself is made of grains. The grain is also tied to the image in that shadows have a different grain structure than highlights, because the smaller, slower grains did not get exposed in the dark areas and so get washed away in development. This is the approach of the LiveGrain process: it uses scans of over- and underexposed film stocks to map grain onto the digital image based on the luminance values in the frame.
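A toy version of that luminance-mapped idea, with synthetic noise standing in for the real over/underexposed scans LiveGrain uses (every parameter here is an invented placeholder):

```python
import numpy as np

def luminance_mapped_grain(img, strength=0.08, seed=0):
    """Blend a coarse 'shadow' grain field with a fine 'highlight' field
    according to per-pixel luminance, so darker areas get bigger grain."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    luma = img @ np.array([0.2126, 0.7152, 0.0722])
    # Coarse field: low-resolution noise upsampled 2x, so its 'grains' are larger
    coarse = rng.standard_normal((h // 2 + 1, w // 2 + 1))
    coarse = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)[:h, :w]
    fine = rng.standard_normal((h, w))
    # Shadows lean on the coarse field, highlights on the fine one
    grain = (1.0 - luma) * coarse * 2.0 + luma * fine * 0.7
    return np.clip(img + grain[..., None] * strength, 0.0, 1.0)
```

On a half-dark, half-bright frame, the dark half ends up carrying visibly more (and chunkier) grain energy, which is the behavior being described.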
 

Despite having an anti-halation backing, there is still some halation on film, more so with black & white film since it doesn't use rem-jet. In color photography, you often see a reddish or brassy edge around a bright, warm highlight; with B&W film, you get a halo ring around points of light. Of course, it is debatable whether that artifact is desirable or not.

  • Like 3

  • Sustaining Member
7 minutes ago, David Mullen ASC said:

With film, image detail IS the grain; it's not like noise in digital. The image itself is made of grains. The grain is also tied to the image in that shadows have a different grain structure than highlights, because the smaller, slower grains did not get exposed in the dark areas and so get washed away in development.

I am not disputing that you are correct on this. However, even with grain overlays on digital images, it still seems that grain over dark areas is less noticeable than it is in highlights. So, from a practical standpoint, how much difference does this make when viewing?


  • Sustaining Member

I was just disputing the notion that there is no science behind the idea that the grain in the image is specific to the image itself on film; of course it is. As for practicality, viewer perceptibility, etc., that is a whole different issue. I don't have an issue with adding grain to a digital image for creative reasons.


  • Sustaining Member
8 hours ago, David Mullen ASC said:

I was just disputing the notion that there is no science behind the idea that the grain in the image is specific to the image itself on film; of course it is. As for practicality, viewer perceptibility, etc., that is a whole different issue. I don't have an issue with adding grain to a digital image for creative reasons.

Fair enough. My question now is: how predictable is the grain structure based on the structure of the image? Is it something that could reliably be done with a computer algorithm? Or is the process still inherently random, in that darker images have larger grain than lighter ones but the grain sizes are not reasonably consistent from one image to another? I ask because I am trying to determine whether something like Dehancer can be relied on to produce a grain structure superior to adding an overlay from Cinegrain, for example.


  • Sustaining Member

In an ideal world, one would very slightly distort the image underlying the grain layer to approximate the way regions of colour are made up of individual crystals. This can be roughly approximated with a compound blur, which blurs the underlying image in the areas where the grain layer is brightest. Also, luma-key the grain so it appears mainly in the darker areas.

For instance, based entirely on basic tools available to everyone:

Original image (all shown at 200%):

image.png.8cbfd22052c557811288c4173cc19e37.png

Build grain layer. Start with noise:

image.png.6dd267198534c22251912334794ad958.png

Blur slightly:

image.png.d1f78abc46fab0b17456a62b5d8e60ec.png

Sharpen to create grain boundaries, and desaturate. Optionally, at this stage, scale it; smaller generally helps, and the blue channel's grain should be larger than the red's:

image.png.d698e50f467694f03f80e6e068828534.png

Luma-keyed blur layer:

image.png.9032c0329afb9f2344f4ecaee381086b.png

Inverse luma-keyed grain layer:

image.png.2bea4d13ed858ff97431acf1b286df15.png

Composite with appropriate transfer modes and trim intensity to taste.

image.png.fa0011635255938946a6e9e8b2db95ef.png

Before:

image.png.8cbfd22052c557811288c4173cc19e37.png
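The steps above translate almost line for line into NumPy. This is a hypothetical sketch of the same pipeline, with box blurs standing in for proper Gaussians and every constant just a starting point (the monochrome grain field applied to all three channels covers the desaturation step):

```python
import numpy as np

def _box_blur(x, radius):
    """Separable box blur used throughout as a cheap Gaussian substitute."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    x = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, x)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, x)

def build_grain(shape, rng, radius=1, sharpen=1.5):
    """Noise -> slight blur -> unsharp mask to carve grain boundaries."""
    soft = _box_blur(rng.standard_normal(shape), radius)
    return soft + sharpen * (soft - _box_blur(soft, radius))

def grain_composite(img, strength=0.06, seed=0):
    rng = np.random.default_rng(seed)
    luma = img @ np.array([0.2126, 0.7152, 0.0722])
    # Inverse luma key: grain shows mainly in the darker areas
    keyed = build_grain(img.shape[:2], rng) * (1.0 - luma)
    # Compound blur: soften the underlying image where the grain is strongest
    blurred = np.stack([_box_blur(img[..., c], 1) for c in range(3)], axis=-1)
    weight = np.clip(np.abs(keyed), 0.0, 1.0)[..., None]
    base = img * (1.0 - weight) + blurred * weight
    return np.clip(base + keyed[..., None] * strength, 0.0, 1.0)
```

Per-channel grain fields, with the blue field scaled larger than the red as described above, would be the next refinement.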

 

 

  • Like 2

  • Sustaining Member
15 minutes ago, Phil Rhodes said:

In an ideal world, one would very slightly distort the image underlying the grain layer to approximate the way regions of colour are made up of individual crystals. This can be roughly approximated with a compound blur, which blurs the underlying image in the areas where the grain layer is brightest. Also, luma-key the grain so it appears mainly in the darker areas.


Thanks for the in-depth tutorial. I appreciate it.


  • Sustaining Member

I think S16 mode on the Alexa is the best digital option you have. I'd do that, and then get a licence for Filmbox (which is an EXCEPTIONAL film emulation plugin for DaVinci Resolve); it can even export monitoring LUTs that you can use in camera while you're shooting.

That combination will give you a shockingly close digital emulation of actual S16mm. I've never seen anything come as close.

  • Like 1

  • Sustaining Member
1 hour ago, Mark Kenfield said:

I think S16 mode on the Alexa is the best digital option you have. I'd do that, and then get a licence for Filmbox (which is an EXCEPTIONAL film emulation plugin for DaVinci Resolve); it can even export monitoring LUTs that you can use in camera while you're shooting.

That combination will give you a shockingly close digital emulation of actual S16mm. I've never seen anything come as close.

I looked into this... interesting. But macOS-only plus $1,000 for the full version gets a "no dawg" from me.


21 hours ago, Matthew W. Phillips said:

Fair enough. My question now is: how predictable is the grain structure based on the structure of the image? Is it something that could reliably be done with a computer algorithm? Or is the process still inherently random, in that darker images have larger grain than lighter ones but the grain sizes are not reasonably consistent from one image to another? I ask because I am trying to determine whether something like Dehancer can be relied on to produce a grain structure superior to adding an overlay from Cinegrain, for example.

It's measurably different using an algorithm that emulates grain versus using Cinegrain, but this is a subjective medium.

So, to an extent, it's futile getting mired in technical debates. If it feels right to you, that is what you are trying to convey to the audience. I am fairly confident there are big shows using both approaches to grain emulation. (I'd go algorithmic.)

Edited by M Joel W

  • Sustaining Member
2 hours ago, M Joel W said:

It's measurably different using an algorithm that emulates grain versus using Cinegrain, but this is a subjective medium.

So, to an extent, it's futile getting mired in technical debates. If it feels right to you, that is what you are trying to convey to the audience. I am fairly confident there are big shows using both approaches to grain emulation. (I'd go algorithmic.)

Having a computing background, I recognize that any algorithm is only as good as the rules that govern it. I don't doubt that an algorithm has the capability to be superior (since it could be tailor-made to the image at hand); where I have doubts is in how much research and accuracy went into the algorithm being used. When you write a program, the algorithm is the set of rules that governs it, and those rules must be precise and explicit for the program to accomplish its task. I would be curious to know what "rules" they are using with regard to the grain structure for an image. Is this done on a per-pixel basis, or is the whole image analyzed to get an approximation of how to set up the grain structure?


1 hour ago, Matthew W. Phillips said:

Having a computing background, I recognize that any algorithm is only as good as the rules that govern it. I don't doubt that an algorithm has the capability to be superior (since it could be tailor-made to the image at hand); where I have doubts is in how much research and accuracy went into the algorithm being used. When you write a program, the algorithm is the set of rules that governs it, and those rules must be precise and explicit for the program to accomplish its task. I would be curious to know what "rules" they are using with regard to the grain structure for an image. Is this done on a per-pixel basis, or is the whole image analyzed to get an approximation of how to set up the grain structure?

That's a good question; I don't have that background at all. I assume the grain (both size and amount) is governed by individual channel brightness, with RGB correlating to each film layer.

I think this isn't available for purchase anymore, but I would LOVE to get my hands on it, and the author could probably answer these questions better than we can here:

https://www.matthiasstoopman.com/color-science

From what I recall, both Nuke and After Effects have rather primitive grain emulations that work well for composited elements, and I think Cinegrain or similar has been used on major features. Or maybe they're more complex than I realize; either way, they look good. Stock grain elements I do find very monochromatic. Years back, I experimented with applying stock grain per channel versus as a standard overlay. Mixed results...

...that any sane person (average audience member) would probably not scrutinize as much as I did!

What David says about film being composed of grain is a good point too. It's less of an issue with color film scanned at 2K, but I shot some 3200 ISO black-and-white film and it has the feel of a dithered monochrome image. I imagine high-speed S16 scanned at 4K might as well, and that would be a difficult emulation to pull off.

 

Edited by M Joel W

  • Sustaining Member

Procedural grain emulation is hard from the ground up. Any reasonable implementation is going to involve generating some random noise as a seed, and generating high-quality random noise is surprisingly hard work for computers (the noise filter in After Effects is not fast). Visualising the results as an image is a common way of analysing the quality of a random number generator, since problems tend to show up as patterns in the image, which is exactly what we can't have for this sort of application.

Good random numbers:

randbitmap-rdo-section.png

Bad random numbers: 

randbitmap-wamp-section.png
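That difference can be made concrete. RANDU (the multiplier 65539, a famously broken LCG) satisfies an exact linear relation across successive triples of outputs, which is exactly the kind of hidden structure that surfaces as patterns in a noise bitmap; a modern generator like NumPy's default PCG64 doesn't. A small illustrative check:

```python
import numpy as np

def randu(n, seed=1):
    """RANDU: the classic broken LCG (x -> 65539*x mod 2^31)."""
    out = np.empty(n)
    x = seed
    for i in range(n):
        x = (65539 * x) % 2**31
        out[i] = x / 2**31
    return out

def triple_residue(u):
    """RANDU satisfies x[k+2] = 6*x[k+1] - 9*x[k] (mod 1) exactly;
    return each triple's distance from that relation."""
    r = (u[2:] - 6.0 * u[1:-1] + 9.0 * u[:-2]) % 1.0
    return np.minimum(r, 1.0 - r)

def noise_bitmap(values, size):
    """Threshold a stream of uniforms into the kind of black/white
    noise image used to eyeball generator quality."""
    return (values[: size * size].reshape(size, size) > 0.5).astype(np.uint8)
```

Every RANDU triple sits exactly on one of 15 planes in 3D, while a good generator's triples scatter; that concentration is the sort of structure that shows up in the "bad random numbers" image.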

Personally, I don't think doing a physically accurate simulation is particularly important. After all, shooting a grey card and scanning it is a physically accurate simulation, and depending on how it's composited with the original, it can look absolutely horrible.

I have noticed that the sort of grain people seem to like actually isn't that realistic anyway; it's often a lot less colourful than real grain. Fashionable grain simulations end up looking like the sort of grain created by ENR or bleach-bypass print processing, which introduces a kind of black cellular noise based on the chunks of silver retained in the image. It's my observation that real grain on colour film looks a lot more like video noise than people like to admit.

I base this on the fact that I once spent the best part of a year sitting in a suite at a manufacturer of film scanners and grading equipment, using as test material some of the original scans from a film that was Oscar-nominated for its cinematography but had been shot Super 35 on 500-speed film. Having spent many hundreds of hours pixel-peeping that material on quality displays, I think I can claim some quite close familiarity with what film grain looks like.

P

 

  • Like 1

  • Sustaining Member
8 minutes ago, Phil Rhodes said:

I have noticed that the sort of grain people seem to like actually isn't that realistic anyway; it's often a lot less colourful than real grain.

This is an accurate point, at least from what I have noticed from watching a million YouTubers discuss the topic. Pretty much everyone advocated keeping grain only in the luma and suppressing the chroma. Given that, I cannot imagine procedural grain being any more "realistic" than grey-card scans.

Personally, I like Cinegrain because a couple of the samples have other things I stylistically like, such as subtle flicker, vignetting, or a few legitimate pieces of dust for effect, which might be harder to simulate in a very gentle way. But I do not disparage procedural grain; I just don't want to spend $400 - $1,000 on it.

