High-Quality Film Transfers With This Raspberry Pi Frame Grabber

Untold miles of film were shot by amateur filmmakers in the days before YouTube, iPhones, and even the lowly VHS camcorder. A lot of that footage remains to be discovered in attics and on the top shelves of closets, and when you find that trove of precious family memories, you’ll be glad to have this Raspberry Pi-powered frame-by-frame film digitizer at your disposal.

With a spare Super 8mm projector and a Raspberry Pi sitting around, [Joe Herman] figured he had the makings of a good way to preserve his grandfather’s old films. The secret of high-quality film transfers is a frame-by-frame capture, so [Joe] set about a thorough gutting of the projector. The original motor was scrapped in favor of one with better speed control, a magnet and reed switch were added to the driveshaft to synchronize exposures with each frame, and the optics were reversed with the Pi’s camera mounted internally and the LED light source on the outside. To deal with the high dynamic range of the source material, [Joe] wrote Python scripts to capture each frame at multiple exposures and combine the images with OpenCV. Everything is stitched together later with FFmpeg, and the results are pretty stunning if the video below is any indication.
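
[Joe]’s scripts aren’t reproduced here, but the capture loop described above (reed switch fires, the Pi grabs a burst of bracketed exposures, OpenCV fuses them into one frame) might look something like the sketch below. The GPIO pin, shutter speeds, resolution, and white-balance gains are all placeholder assumptions, and Mertens exposure fusion stands in for whatever combining method [Joe] actually uses:

```python
# Hypothetical capture loop, not [Joe]'s actual script: wait for the reed
# switch, shoot a small exposure bracket with the Pi camera, and fuse the
# bracket into a single frame with OpenCV.
import time
import cv2
import numpy as np
from gpiozero import Button
from picamera import PiCamera
from picamera.array import PiRGBArray

reed = Button(17)                       # placeholder GPIO pin for the reed switch
camera = PiCamera(resolution=(1640, 1232))
camera.iso = 100
time.sleep(2)                           # let gain and white balance settle
camera.exposure_mode = 'off'            # lock everything so brackets are repeatable
camera.awb_mode = 'off'
camera.awb_gains = (1.5, 1.2)           # placeholder gains for the LED source

SHUTTERS_US = (500, 2000, 8000)         # example bracket, in microseconds
fuse = cv2.createMergeMertens()         # exposure fusion, no response curve needed

frame_no = 0
while True:
    reed.wait_for_press()               # magnet passes the reed switch: frame is in the gate
    bracket = []
    for shutter in SHUTTERS_US:
        camera.shutter_speed = shutter
        time.sleep(0.05)                # give the new shutter speed a moment to apply
        with PiRGBArray(camera) as raw:
            camera.capture(raw, format='bgr')
            bracket.append(raw.array.copy())
    fused = fuse.process(bracket)       # float32 result, roughly 0..1
    out = np.clip(fused * 255, 0, 255).astype(np.uint8)
    cv2.imwrite('frame_%06d.png' % frame_no, out)
    frame_no += 1
```

Once a reel’s worth of frames is on disk, the stitching step mentioned above is a single FFmpeg invocation along the lines of `ffmpeg -framerate 18 -i frame_%06d.png -c:v libx264 -crf 18 out.mp4`, with 18 fps being the usual speed for silent Super 8.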

We saw a similar frame-by-frame grabber build a few years ago, but [Joe]’s setup is nicely integrated into the old projector, and really seems to be doing the job — half a million frames of family history and counting.

[via Geek.com]

19 thoughts on “High-Quality Film Transfers With This Raspberry Pi Frame Grabber”

  1. Tip: If you use an LED, it should be a high-CRI option, and the appropriate color temperature for film projection. If you’re on a budget, Cree’s TR series Edison bulbs have a fairly high CRI, but it’s also fairly simple to get the right emitter from Digi-Key or Mouser. Standard LEDs have a very peaky spectrum; CFLs are even worse.

    If you can’t get a high-CRI LED, then just use a tungsten bulb – lower wattage, of course (don’t use a dimmer; the color temperature and spectrum will shift dramatically). A tungsten bulb’s spectrum is far, far smoother. Add a small PC fan to keep the film cool if you’re not moving the film very fast.

    Also: the first-generation Pi camera was pretty rubbish. The newer Sony Exmor R-based 8 MP sensor is far better, but the optics are, again, pretty rubbish.

    1. Film typically contains 3 pigment layers anyway, and you’re digitizing it with a 3-color sensor, so it’s a lot less critical to have a good LED spectrum, and CCT barely matters. As long as you have enough light in each layer’s absorption band, there exists some 3×3 correction matrix that will give you basically the same results (a rough sketch of that idea follows this comment). (And since you’re already stacking multiple exposures, you don’t need to worry about too much light in one band before you have enough in another; worst-case, a low-quality light source means you need to add one more exposure per frame at the top/bottom.)

      Low-CRI LEDs have a big gap between the blue spike and the yellow phosphor, but the yellow phosphor’s curve is still broad enough to cover both your red and green requirements tolerably well, and any of the decent-CRI options (e.g. 80+ CRI) is about as good as the best.
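
      Not anyone’s production code, just a toy illustration of the 3×3 matrix argument above: as long as each dye layer receives enough light, the crosstalk from an imperfect source can be folded into a single linear transform. The matrix values below are invented; in practice they would be solved from a color chart shot under the actual lamp.

      ```python
      import numpy as np

      # Hypothetical correction matrix (rows sum to 1 so white stays white);
      # real values come from fitting against a reference chart.
      ccm = np.array([[ 1.6, -0.4, -0.2],
                      [-0.3,  1.5, -0.2],
                      [-0.1, -0.5,  1.6]])

      def apply_ccm(img_u8, matrix):
          """Apply a 3x3 color-correction matrix to an HxWx3 RGB uint8 image."""
          rgb = img_u8.astype(np.float32) / 255.0
          corrected = rgb @ matrix.T          # per-pixel linear transform
          return np.clip(corrected * 255, 0, 255).astype(np.uint8)
      ```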

      1. The scripts referenced in the article were put together by Fred van de Putte. Here is a web page describing the telecine machine he built. Rather than doing multiple exposures à la HDR, he uses a higher-quality camera.

        http://www.super-8.be/s8_Eindex.htm

        Explore the links there; his telecine machine is first-rate, and his example restored films are astoundingly good for 8mm. He has a couple of YouTube tutorial videos, too.

    1. That’s a striking video, and makes a great example of the Soap Opera Effect.
      I am consciously aware that the right side is objectively better in every possible way, but something about the left just feels more real. Then I realized that the right side looks like it was shot with a VHS camcorder.

  2. On the software side, Blender is pretty good for handling both basic image-manipulation jobs, like cropping and color correction, as well as some fancier stuff. I’ve used it for tackling some very bad chromatic aberration on 8mm film. I guess it could do some motion tracking and stabilization too, if needed, but that would require quite a bit more manual work.

    https://www.dropbox.com/s/gs91qguliyn03e3/chromAberr_nodes.PNG?dl=0
    This is a very basic node setup for removing chromatic aberration; luckily it could be done simply by separating the image into its RGB components and moving them back into their proper places. Not a perfect fix, but close enough.
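
    For anyone without Blender handy, a rough Python/OpenCV sketch of the same idea is below; the shift amounts and file names are placeholders you would tune by eye against your own footage, and this only handles a uniform lateral shift, not aberration that grows toward the corners.

    ```python
    import cv2
    import numpy as np

    def fix_lateral_ca(img_bgr, shift_r=(1, 0), shift_b=(-1, 0)):
        """Shift the red and blue planes by (dx, dy) pixels relative to green."""
        b, g, r = cv2.split(img_bgr)
        h, w = g.shape
        m_r = np.float32([[1, 0, shift_r[0]], [0, 1, shift_r[1]]])
        m_b = np.float32([[1, 0, shift_b[0]], [0, 1, shift_b[1]]])
        r = cv2.warpAffine(r, m_r, (w, h))
        b = cv2.warpAffine(b, m_b, (w, h))
        return cv2.merge((b, g, r))

    # Hypothetical file names, just to show usage on one scanned frame.
    frame = cv2.imread('frame_000123.png')
    cv2.imwrite('frame_000123_fixed.png', fix_lateral_ca(frame))
    ```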

      1. Blender’s really gotten more user-friendly than it was 10+ years ago when I started with it. For video filtering, setting up some nodes is a lot simpler than After Effects or even Vegas Pro, let alone trying to finagle Avisynth into doing anything.

  3. A cheap upgrade might be to remove the camera lens entirely and project directly onto the image sensor. You can do this with the aid of a photo enlarger lens. I use it on my system and it avoids a lot of problems.

    1. I actually tried replacing the lens with a few different versions, and – long story short – the Pi camera doesn’t seem to play very nicely with other lenses when the stock lens is removed, an issue which has been discussed on the Raspberry Pi forums. (Can’t find the link right now.) It’s something I’d sure like to do, though; I expect a new interchangeable-lens-friendly version of the camera will be released the day after I scan my last reel.
