Posts posted by Tomasz Brodecki

  1. 11 minutes ago, Wendy Sanders McDonlad said:

    yes, but why aren't the other images tossed away and replaced with black, instead of the ugly clusters.

    In your other examples, there was shadow detail that deteriorated because of the compressed output.
    In this example, there was no shadow detail whatsoever.

    As for the "why" — any and every step of the pipeline could have contributed to this result — from a potentially underexposed original negative, through timing, printing, scanning and eventually preparing this digital copy that you're sharing (with its own bitdepth and bitrate limitations).
      

    8 minutes ago, Wendy Sanders McDonlad said:

    Maybe this particular copy is especially good?


    Usually when a copy throws away half the potential dynamic range of the scene, we don't call it "good", but I understand that everything is subjective.

     

  2. Sure, with careful placement and/or a bit of diffusion, the two 300s can be enough.
    I'd mostly advise cross-lighting the background (the right light covers the left part of the background and vice versa), so that you avoid hot spots and get more even illumination.

  3. 18 hours ago, Wendy Sanders McDonlad said:

    The problem is I shot RAW on my BMPCC 4K, and RAW still images on my Canon 5D MK II... and this problem still exists.

    It doesn't. You're probably describing output or preview files based on those RAW inputs.
    As long as you shoot uncompressed or lossless RAW, compressed blocks of pixels are not something that exists in your image; their existence depends solely on your export settings — and in that regard you should follow Phil's advice above.

  4. 4 minutes ago, Alexandros Pissourios said:

    Everything online says to double the shutter speed to the frame rate...

    is that not the case then?

    That's exactly the case if you're going for an effective 180° shutter angle, as I've said above:

    6 hours ago, Tomasz Brodecki said:

    just what 50fps slowed down to 25 should look like when shot at 1/100 (so you're left with a 180° shutter angle).

     

    2 hours ago, Tomasz Brodecki said:

    to the recording speed — e.g. shooting 1/200th at 100fps or 1/100th at 50fps and slowing it down (so that no frame is skipped) results in the same motion ratio as shooting 1/50th for 25fps.


     

  5. 8 minutes ago, Alexandros Pissourios said:

    So I need to match the shutter speed to the playback speed 1:1 (ie shooting 50fps to playback at 25fps means 1/50th shutter speed?)

    No, to the recording speed — e.g. shooting 1/200th at 100fps or 1/100th at 50fps and slowing it down (so that no frame is skipped) results in the same motion ratio as shooting 1/50th for 25fps.
    And it looks like that's exactly what you achieved with the slowed down versions of your videos. I still fail to see the problem.

    The footage is definitely shaky, as in, unstabilized, but the shutter angle aspect looks alright to me (once slowed down and not frame-skipping).

  6. 13 minutes ago, Alexandros Pissourios said:

    the shoe clip begins with normal speed and then when the frame goes smaller it’s the 50% slower and then back to normal 

    I’ve included the clips in normal speed to see the difference

    I don't see the issue with the slowed-down version; can you describe what exactly "looks shit" to you in this clip?

     

    18 minutes ago, Alexandros Pissourios said:

    I don’t understand the settings I need to have to get a nice 50fps to 25fps playback (shutter speed-wise)

    Shutter angle math remains the same between 1:1 (real-time) recording:playback and slow motion, as long as you remember to match your footage to the playback frame rate (so that every frame is played and none is skipped).

  7. The first clip you embedded looks alright: just what 50fps slowed down to 25 should look like when shot at 1/100 (so you're left with a 180° shutter angle).

    With the two shorter clips, it looks like you forgot to slow them down, so at 25fps you're only watching every other frame of your 50fps recording (skipping the rest), and that results in the "juddery" 90° shutter angle look.
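
    If it helps, the underlying arithmetic is simple enough to sanity-check in a few lines (a rough sketch in plain Python; the function name and example values are mine, not anything camera-specific):

        def shutter_angle(frame_rate, exposure_time):
            # Effective shutter angle in degrees for a given frame rate (fps)
            # and per-frame exposure time (seconds).
            return 360.0 * frame_rate * exposure_time

        print(shutter_angle(50, 1 / 100))   # 180.0 -> 50fps at 1/100, slowed to 25fps, keeps the 180° look
        print(shutter_angle(100, 1 / 200))  # 180.0 -> same story for 100fps at 1/200
        print(shutter_angle(25, 1 / 100))   # 90.0  -> playing only every other 50fps frame at 25fps gives the juddery 90° look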

  8. 5 minutes ago, Stephen Sanchez said:

    @Tomasz Brodecki, what if he just uses a 4x brighter light?

    Yes, there's always that third option, but it's not exactly the best one for the planet.
    Just to be clear — we're going to need more light for any scene intended for a single-color result compared to an equivalent full-color one anyway (if we're going for a believable look of a single-color light source, because of the values that we will be subtracting), but by using white we're left with more choices.
     

    48 minutes ago, Viggo Söderberg said:

    I don't like leaving decisions for later. I want red in the scene. 

    There's a time and place for everything, and unless you have the colorist on set and/or performing final grading on the entire footage as you shoot, you will be making the ultimate color decisions in post production anyway, so the time on set is probably better spent focusing on the task at hand.
     

    7 minutes ago, Viggo Söderberg said:

    what would a red LUT be doing exactly? How would I film it and how would I alter it in post to retain the color information of the blue and green channels while only promoting the red channel?

    The choice is up to you (or your DIT or your colorist, whoever prepares your LUT) as to how much of the blue and green values you want turned into reds and how much filtered out, or any specific point in between.
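
    If it helps to picture what such a LUT is doing, here's a toy sketch (plain numpy, with arbitrary mix weights of my own choosing, not an actual LUT recipe) of folding some of the green and blue information into the red output while muting the rest:

        import numpy as np

        def red_look(rgb, g_mix=0.4, b_mix=0.2, residual=0.1):
            # rgb: float array in [0, 1] with shape (..., 3).
            # Fold a chosen fraction of the green and blue values into the red
            # output channel, and keep only a small residual of the original
            # green/blue so the frame still reads as "red".
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            red_out = np.clip(r + g_mix * g + b_mix * b, 0.0, 1.0)
            return np.stack([red_out, residual * g, residual * b], axis=-1)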

     

  9. Because of the Bayer filter array in front of your sensor, if you shoot using only blue or only red light, the effective sensitivity of the sensor is reduced by 75%, so in order to compensate for that difference, you'd have to bump your gain, along with the accompanying noise, by two stops (for green, that's 50% or one stop).

    In other words, if you're going for a monochromatic result and you're shooting digitally, use as much of the spectrum as you can, shoot with a red LUT if you can, and save the final color decisions for grading.
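
    The arithmetic behind those two-stop / one-stop numbers, as a quick sketch (plain Python, my own function name):

        import math

        def gain_penalty_stops(usable_fraction):
            # Extra stops of gain needed when only this fraction of the
            # Bayer photosites receives any light at all.
            return -math.log2(usable_fraction)

        print(gain_penalty_stops(0.25))  # 2.0 -> red-only or blue-only light (25% of photosites)
        print(gain_penalty_stops(0.50))  # 1.0 -> green-only light (50% of photosites)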

  10. I kind of expect the gaffer to be equipped with a light meter at all times, mostly so that he can take measurements without walking back and forth to the camera, but for me, from the image acquisition point of view, the waveform monitor is the best light meter.

    And as for location scouting, you can learn everything there is to know about the luminance of the scene from the scout photos' exposure settings (after all, a DSLR or a mirrorless camera is a perfectly capable metering tool in itself, and I don't mean just its built-in metering that returns the f-stop value for a given shutter time and ISO, but also the millions of per-photosite measurements that it stores in every single photo).
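
    To make the "camera as a light meter" idea a bit more concrete, here's a hedged sketch (plain Python, using the standard reflected-light exposure equation with the usual K ≈ 12.5 calibration constant; the example settings are made up) of turning scout photo settings back into scene luminance:

        def scene_luminance(f_number, exposure_time, iso, k=12.5):
            # Average scene luminance in cd/m^2 implied by a correctly exposed
            # photo, from the reflected-light meter equation N^2 / t = L * S / K.
            return k * f_number ** 2 / (exposure_time * iso)

        print(scene_luminance(2.8, 1 / 50, 800))  # ~6.1 cd/m^2, a fairly dim interior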

  11. 1 hour ago, Mei Lewis said:

    I really want to try some of this, but don't know where to start. Is there a step by step guide somewhere?
     

    For Oculus, there's
    https://creator.oculus.com/mrc/
    and here's the guide for LIV, which is sort of its SteamVR counterpart, although developed independently and available as an add-on tool:
    https://help.liv.tv/hc/en-us


    I'm sorry about the double posts; unfortunately, this forum doesn't allow edits (or even addendums) after more than 5 minutes.

  12. 24 minutes ago, Mei Lewis said:

    Tomasz, would you be able to go through what kit you use in a bit more detail?

    A lot of people are using the vive tracker right, but the thing on your camera isn't that?

    The Vive Tracker that you linked serves a similar purpose in HTC/Steam's inside-out tracking solution (Vive headsets, controllers and trackers are equipped with light sensors that monitor where in the world they are by receiving flashes of light from the oscillating illuminators, called Lighthouses, which you place in the corners of your space). A Vive Tracker is exactly what I would be using if we were in the HTC hardware ecosystem.

    Inside-out v Outside-in: How VR tracking works, and how it's going to change

    In our case, since we're using first-generation Oculus hardware, the tracking works outside-in (cameras/light sensors are placed stationary in the corners of your space, while the controllers and headsets blink their unique identifiers using IR LEDs), so in a very similar way, just in the opposite direction.

    New Oculus Touch Controller Leak Points To Finger Sensing, New Haptics

     

    28 minutes ago, Mei Lewis said:

    Also Tomasz, am I understanding this correctly? You are getting the girl to play Beat Sabre, and filming her on greenscreen as you do so, then when you superimpose her on the computer image her motions are automatically in synch with the motion of the game because she was actually playing it?

    That is correct, with the caveat that the computer-generated image has to be rendered from exactly the viewport (position + rotation + field of view) that matches the position of the real-world camera (establishing which is the purpose of the camera-tracking solutions described above). If I mess up the calibration between the two, it results in less convincing parallax between the real-life talent and VR-based objects, like at 1:53 in the video below:
     

     

  13. 50 minutes ago, Phil Rhodes said:

    What's the approach for camera tracking, then?

    P

    We use the same tracking system that monitors the position (and rotation) of the headset and controllers that you see Michalina use in the video.
    In addition to those three objects, we add a fourth one (a controller from another set) that is fixed to the real-world recording camera; it is then recognized by the tracking sensors, providing its own position and rotation data in three dimensions, with no need for trackers or reference objects in the scene.
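
    In case it helps to see the idea in the abstract, here's a rough sketch (plain numpy, my own variable names, not the actual Oculus/LIV code) of what the calibration buys you: the virtual camera's pose is simply the tracked controller's pose composed with a fixed controller-to-camera offset.

        import numpy as np

        def camera_pose(tracker_pos, tracker_rot, offset_pos, offset_rot):
            # tracker_pos: controller position (3-vector); tracker_rot: 3x3 rotation matrix.
            # offset_pos / offset_rot: the fixed rigid offset from the controller to the
            # camera's optical center, measured once during calibration.
            cam_rot = tracker_rot @ offset_rot
            cam_pos = tracker_pos + tracker_rot @ offset_pos
            return cam_pos, cam_rot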

     


  14. 4 minutes ago, AJ Young said:

    Very interesting! Did you shoot this in front of a green screen?

    You can say that.
    Specifically, our green bedroom wall, using a yoga mat as a backdrop extension.
    Plus, I had those two slim 1 m LED lights made to make the real-world video match the in-game lighting a bit better.


    I forgot to mention that even though the camera is tracked in 6 degrees of freedom by the computer (so moving it will result in a corresponding perspective shift in the plate), we haven't recorded a single video with a moving camera yet, because of practical limitations, but we're gearing up to finally do something about it.

  15. Fun to see someone else approaching the mixture of physical and virtual reality from a different direction than we have!

    What you see below are my first experiments with moving pictures (I had already worked with both Unity and 360° photography, so all the needed equipment and software was at hand):


    Almost every video on this channel is an attempt at combining Unity-based real-time rendered surroundings with real-world capture, performed in a very DIY way, originally only to prove that it could be done (we did not expect the interest the project would gather).

  16. 9 hours ago, David Peterson said:

    At the price point of the a7Smk3/S1H you start to ask yourself, why not just go all in and get a cinema camera rather than a mirrorless? 

    Those are apples and oranges; there are separate uses for small cameras (with their portability/low mass/usefulness in tight spaces) and large cameras (with complete control laid out on the body, multiple robust connectors, efficient cooling, displays, battery life, etc.).
     

    9 hours ago, David Peterson said:

    > I only recommend Nikon hardware to long-time Nikon owners, since as motion picture equipment, this product line is a dead end.

    Why??? Nikon is committed to Nikon Z Mount

    Because they don't make a single product from the latter category above, there is nothing to advance to while maintaining compatibility, unlike with the Sony (E) and Canon (EF) systems. And as for Z-mount lenses, you can't use them on cinema cameras (unlike F-mount lenses), because of the 16 mm flange focal distance.
     

    8 hours ago, David Peterson said:

    However the jump to S35 was a big big improvement that even a casual viewer / producer / director could see. 

    But in the move from S35 to LF to beyond, those are much smaller leaps, relatively speaking compared to the leaps from small chipped ENG cameras to S35 cinema cameras. 

    I totally agree with this. Not to mention that even movies shot with LF nowadays end up using t-stops that negate the potential advantage (or disadvantage) in depth of focus.

  17. 15 hours ago, charles meyer said:

    I really need to learn DaVinci Resolve as my library job

    Hi there,

    I also moved to DaVinci Resolve last year, mostly because of the built-in chroma-keying capabilities, and now it's the only NLE I use, both for clients' projects and our mixed reality channel.
    As for learning its basics, I can't recommend this thing enough: https://documents.blackmagicdesign.com/UserManuals/DaVinci-Resolve-16-Beginners-Guide.pdf

    My fiancée has just gone through the first two lessons and she definitely picked up the Cut page much quicker than I did the learn-as-I-go way. 

  18. Hi there,

    I also come from a photography background, so a flat base was important for me in my first fluid head. Since all my existing heads and tripods were flat-base Manfrotto products, I prioritized compatibility, with no regrets so far.
    Sure, the two solutions rely on different habits when it comes to leveling, but I suppose you already have some similar to mine, so it's not like using a flat-base tripod is currently a problem for you. And even though I'm currently shopping for a bowl tripod, I'm going to continue using my flat-base heads (since you can easily adapt flat-to-bowl, which isn't that easy the other way round).

    Neither of the models you mention is a bad head, and neither will stand in the way of achieving smooth motion (as I'm sure you've already found out during the week you've spent with your 608).

    Since you mention that Ikan riser, should I assume that you already own a flat-base slider?  

    In time you'll probably start seeing the cases where you would benefit from a bowl setup (probably around the same time that the first annoyances of shooting with small cameras will show up), but for dual-purpose use, I'd still stay with flat bases.
    It might be an interesting topic to revisit after some time, when gearing up for a heavier camera or once video work dominates your shooting; for now it seems like you're already set up rather well for what you're going to do with your cameras.

  19. 24 minutes ago, aapo lettinen said:

    actually it DOES happen in internal recordings as well sometimes and it will show in the end result. Switching the n-log off does not help the flickering on external recording, but in some situations the flickering can happen a bit more often (or so it seems) if you are shooting in cold-ish temperatures, like under 10-15 Celsius or so

    That's very interesting, thanks for the heads-up;
    I've been using it for most of 2020 without this showing up in either external or internal recordings, and I was shooting outside at night, at 2°C, just this Sunday (you actually made me re-check my files, but they look alright, at least in that regard).
    I'll keep an eye out for that and, let me reiterate, I only recommend Nikon hardware to long-time Nikon owners, since as motion picture equipment, this product line is a dead end.
