Posts posted by Chris Millar

  1. Hi Scott,

     

    Welcome to Team Bolex!

     

    Hrrrm, judging by the surface corrosion on some of the external parts, what it might be missing is a good internal service. That's not trivial and may cost a bit relative to the cost of the machine, but there are still lots of techs about, including a few here in the forum.

     

    A TV zoom lens won't have the 'RX' correction to account for the prism in your Bolex. You'll get an image, of course, but it won't be as sharp, especially at the wide end.

     

    My RX4 (non-magazine) has a set of primes only. Not saying that's the only way, it just feels mechanically unsound having all that mass levered off not just a C-mount but the turret as well. There are solutions, such as the locking plug thingy and a kind of rod-based support, but the bayonet-mount models are where I opt for a zoom.

  2. Can't hurt to look, so unscrew the plate on the base (4 large flat-head screws). Pry the plate off (it'll be gummed in quite well over time).

     

    Test the fuse in there; if it's blown, replace it with the spare that might still be in there.

     

    Also, if I recall, that behaviour happened when I had a dodgy connection in the power connector. Try (carefully) wiggling it about at both ends to see if the connection is almost, but not quite, there.

  3. SBM is a much more straightforward conversion...

     

    By the way, if either of the 10mm or 75mm Switars has a little lever on it, it's the 'preset' model. If so, maybe you want to keep that deal :)

     

    Check out eBay prices on them!

  4. Well, if the gate shows no widening, it isn't Super 16.

     

    On the plus side, your lenses will cover :)

     

    But yeah, I'd be researching the cost of a conversion, then asking for that refund, or a simple return, as the conversion cost is likely more than the purchase price.

     

    It's a pity, and it makes you wonder about the seller - but best to just assume they were genuinely mistaken.

  5. Kern Macro-Switar 75mm f1.9 - certainly.

    Kern Switar 10mm f1.6 - reports are mixed. Pretty sure the preset version will cover, but the optics aren't great in the corners, especially wide open - as for the standard ...

    Can't see why the Taylor Hobson won't.

    Zooms - have you looked through the finder?

  6. Chris M's suggestion that Arri traded 4× time for 4× space (~cost) overlooks that the larger sensor would be slower. Sensors take time to unload the image. I don't know if this time is proportional to the number of pixels, but if so there's no time lost in making four exposures, each with 1/4 as many pixels, excepting the time for the half-pixel motions by the piezo drivers. (How fast are they?)

     

    Well, piezo drivers are 'pretty damn fast' in general. Considering the distances involved, if it's open loop then it'll be a very small and known duration, and a well-tuned closed loop (with feedforward) can probably approach that too.

     

    But yeah, I have no idea of the relative orders of magnitude between the sensor ADC readout and the piezo micro-adjustments we're talking about - maybe they're comparable.
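    Just to put rough numbers on that (everything below is an assumed figure, not an Arri spec): if readout time scales with pixel count at a fixed ADC rate, four quarter-size readouts take about as long as one full-size one, and the piezo steps only add their settle time.

    ```python
    # Back-of-envelope only: ADC rate, pixel counts and piezo settle time
    # are all assumed figures, not Arri specs.
    adc_rate = 500e6                  # pixels per second the readout chain digitises
    full_frame = 6144 * 4096          # one native "6K" readout (illustrative)
    quarter = full_frame // 4         # one pixel-shifted "3K" exposure

    t_full = full_frame / adc_rate    # single big-sensor readout
    t_shift = 4 * quarter / adc_rate  # same total pixels spread over 4 scans
    t_shift += 3 * 1e-3               # plus an assumed ~1 ms per piezo micro-step

    print(f"full: {t_full * 1e3:.1f} ms, shifted: {t_shift * 1e3:.1f} ms")
    # -> full: 50.3 ms, shifted: 53.3 ms
    ```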

     

    I agree though, it's off topic - just shooting the breeze :)

  7. Well, I'm not an Alexa user, so unless ARRIRAW means something more specific that includes resolution in its definition - and the same applies to '2K' in the Alexa context w.r.t. colour space/bit depth - you're comparing apples to pears.

     

    One is a specification of pixel count - 'resolution', I believe they call it. The other is a term that implies a specification of the colour space, bit depth and relative distribution of data across that bit depth (log vs. lin) - something that is independent of resolution. Together, though, they will affect your pipeline in terms of processing power and data transfer rates from sensor to screen.
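    To illustrate how the two combine (the figures below are made up for the arithmetic, not actual ARRIRAW numbers):

    ```python
    # Illustrative only: assumed figures, not published ARRIRAW specs.
    width, height = 2880, 1620      # pixel count ('resolution')
    bit_depth = 12                  # bits per photosite, log-encoded raw
    fps = 24

    bytes_per_second = width * height * bit_depth * fps / 8
    print(f"{bytes_per_second / 1e6:.0f} MB/s")   # -> 168 MB/s
    ```

    Double the bit depth and the data rate doubles with no change in resolution - two independent knobs on the same pipe.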

  8. For the sake of discussion, let's assume two cases:

     

    First, a pixel pitch equal to the sensor size divided by the resolution - i.e. no dead boundary around the pixels.

     

    Now imagine black and white stripes focused on the sensor exactly in line with the sensor pixels. Take a sample and you'll see black and white stripes. Offset the sensor by half a pixel pitch and you'd get a grey image...

     

    Obviously one sample is 'correct' and the other 'wrong' - key point #1 >> it's the randomness of the initial placement of the image on the sensor that determines which you get.

     

    Solution - assume it's the 'wrong' sample every time, do the half offset, then combine the images. You'd get black-grey-white-grey-black-grey-white... and so on - key point #2 >> you'd approach this result every time.

     

    Pretty much, it's allowing the intensity to be distributed more accurately over space.

     

    Now the second case - pixels with surrounding nothingness (like real sensors). It might be a bit harder to intuitively perceive how this affects the results, but consider that the half offset exposes the acquisition pixel to a signal that partly incorporates image intensity information it never had access to previously. That's quite a plus.

     

    Try it in 1D on paper (pretty much audio actually) - quite interesting.
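    Here's a minimal 1D sketch of that paper exercise in Python - the stripe width and pixel count are arbitrary, just chosen so the stripes line up exactly with the pixels:

    ```python
    import numpy as np

    fine = 8                                 # fine-grid samples per sensor pixel
    pixels = 16                              # sensor pixels in our 1D 'row'
    # Stripes exactly one pixel wide: 8 black samples, then 8 white, repeated.
    signal = np.tile([0.0] * fine + [1.0] * fine, pixels // 2 + 1)

    def sample(offset):
        """Average the signal over each pixel aperture, starting at offset."""
        return np.array([signal[offset + i * fine : offset + (i + 1) * fine].mean()
                         for i in range(pixels)])

    aligned = sample(0)          # stripes line up with pixels: 0, 1, 0, 1, ...
    shifted = sample(fine // 2)  # half-pixel offset: every pixel reads 0.5 (grey)

    # Combine the two exposures by interleaving them:
    combined = np.empty(2 * pixels)
    combined[0::2], combined[1::2] = aligned, shifted
    print(combined[:8])  # [0.  0.5 1.  0.5 0.  0.5 1.  0.5] - black-grey-white-grey...
    ```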

  9. Here's a quick drawing I did of a grid of photosites. I took the first grid, let's say it represents the 3K sensor, and then offset it by half a pixel horizontally, then half a pixel vertically, then half a pixel diagonally -- so four scans of the same piece of film.

    Yes,

     

    and the drawing makes it clear there was a flaw in my logic - you'd need four scans, not two.

     

    Which means that the time spent is 4 times as much - which in turn implies that the cost of pixel density (in dollars or other associated 'bad' factors) isn't linear. There must be something that makes having larger sensor elements worth trading the extra time for...
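    Putting placeholder numbers on it: doubling the linear resolution quadruples the pixel count, and the half-pixel shifts needed to cover the finer grid come in exactly four combinations.

    ```python
    # Placeholder dimensions, just to show the ratios.
    w, h = 3072, 2048                          # a '3K' sensor
    print((2 * w) * (2 * h) / (w * h))         # -> 4.0: '6K' has 4x the pixels

    # The four half-pixel offsets from the drawing: none, horizontal,
    # vertical, and diagonal.
    offsets = [(dx, dy) for dy in (0.0, 0.5) for dx in (0.0, 0.5)]
    print(offsets)  # [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)] - four scans
    ```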

  10. 2) Take multiple images with a micro-shift in the sensor position, and use those to interpolate a 6k image.

     

    Now, that's not the same as blowing up a 3k x 2k image to 6k, which would involve making up a lot of image data. A lot of accurate information can be derived by subpixel changes in an image, so you'd probably get a significantly better interpolation to 6k this way than with a single shot at 3k x 2k.

     

    But I'm not understanding how this is truly a 6k image if it's doing that. It's still interpolated.

     

     

    Why do you suggest that this is an interpolation? Data isn't being inferred, it's being directly sampled.

     

    At its essence, it's taking advantage of the fact that the image is static in time by trading money spent for time spent. Assuming elemental sensor parts - i.e. pixels - correspond linearly to dollars, and excluding market forces like supply and demand, a 6K sensor would cost 4x as much as a 3K one, but the time spent doing two scans is closer to 2x worse.

     

    4 > 2

     

    Maybe I got my logic and math backwards somewhere, but I still think it's a simple case of trading 'space' complexity for time complexity - quite common in industry.
