More recently, I’ve begun simultaneous work on a Raspberry Pi-powered device and an iOS app, both aimed at the same problem: stabilizing footage from my Blackmagic camera, while also harnessing the awesomeness of time-of-flight (ToF) technology to add fast autofocus to a camera that barely has autofocus on its own.
There is at least one existing commercial product that tackles the stabilization issue, but I set out to see if I could create my own solution, whether by building my own device or by utilizing the hardware that already exists in my iPhone. The Raspberry Pi device, housed inside a custom case designed in C4D and then 3D printed, uses a gyroscope/accelerometer board along with a ToF sensor to handle both stabilization and focus. Using Blackmagic’s developer SDK, the device connects to the camera via Bluetooth, features a physical button to start and stop recording, and sends ToF data to the camera so it can adjust focus appropriately. The underlying code is written in Python; the stabilization data is saved into an XML file on a microSD card attached to the Raspberry Pi, and later brought into After Effects via a script I wrote that imports the data and translates it into keyframes for a 3D camera.
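To give a sense of the logging side of the pipeline, here’s a minimal Python sketch of sampling motion data and writing it out as XML. The sensor read is stubbed out with a placeholder (a real build would read an IMU like an MPU-6050 over I2C), and the element names, sample rate, and output filename are all illustrative, not the actual format my After Effects script expects:

```python
import time
import xml.etree.ElementTree as ET

def read_gyro_accel():
    """Placeholder for the real sensor read (e.g. an MPU-6050 over I2C).
    Returns (gyro_xyz, accel_xyz); hard-coded here for illustration."""
    return (0.01, -0.02, 0.00), (0.0, 0.0, 9.81)

def log_samples(n_samples, interval_s=1 / 60):
    """Collect n_samples motion readings and build an XML tree of them,
    each sample timestamped relative to the start of recording."""
    root = ET.Element("stabilization", camera="Blackmagic")
    start = time.monotonic()
    for _ in range(n_samples):
        gyro, accel = read_gyro_accel()
        sample = ET.SubElement(root, "sample",
                               t=f"{time.monotonic() - start:.4f}")
        ET.SubElement(sample, "gyro",
                      x=str(gyro[0]), y=str(gyro[1]), z=str(gyro[2]))
        ET.SubElement(sample, "accel",
                      x=str(accel[0]), y=str(accel[1]), z=str(accel[2]))
        time.sleep(interval_s)
    return ET.ElementTree(root)

tree = log_samples(5)
# On the Pi this would point at the microSD card's mount point.
tree.write("stab_data.xml")
```

On the After Effects side, a per-frame import script just has to walk these `sample` elements and set keyframes on a 3D camera at the matching times.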
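The focus half works by translating a ToF distance reading into the normalized fixed-point focus value that Blackmagic’s camera control protocol uses. The sketch below shows the idea; the category/parameter/type numbers reflect my reading of Blackmagic’s published protocol documentation (Lens category 0, Focus parameter 0, fixed16 type) but should be checked against the current SDK, the linear distance-to-focus calibration is entirely hypothetical and lens-dependent, and the real Bluetooth message adds wrapper bytes this sketch omits:

```python
import struct

FIXED16_ONE = 2048  # 5.11 fixed-point scale used by the protocol

def distance_to_focus(distance_mm, near_mm=200, far_mm=5000):
    """Map a ToF distance to a normalized 0.0 (near) .. 1.0 (far) focus
    value. This linear mapping is a placeholder calibration; a real lens
    needs a measured distance-to-focus curve."""
    clamped = max(near_mm, min(far_mm, distance_mm))
    return (clamped - near_mm) / (far_mm - near_mm)

def encode_focus_command(focus):
    """Pack a 'set focus' command body: category 0 (Lens), parameter 0
    (Focus), data type 128 (fixed16), operation 0 (assign), with the
    payload padded to a 4-byte boundary. Layout per my reading of the
    Blackmagic camera control protocol docs; verify before use."""
    raw = int(round(focus * FIXED16_ONE))
    header = struct.pack("<BBBB", 0, 0, 128, 0)
    payload = struct.pack("<h", raw)
    return header + payload + b"\x00\x00"

cmd = encode_focus_command(distance_to_focus(1500))
```

The device loop then just reads the ToF sensor, encodes a command like this, and writes it to the camera’s control characteristic over Bluetooth.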
The iOS version is basically the same thing, only written in Swift and using Apple’s Core Motion framework to tap the iPhone’s built-in gyro and accelerometer, as well as the LiDAR sensor on the newer Pro models. I’m also looking into possibly bypassing the raw Core Motion data and instead testing out RealityKit (Apple’s augmented reality framework) to create the 3D scene, which would be able to record movement as well as depth (focus) information in one fell swoop.
The upside to this app-based approach is, of course, that it completely bypasses the need to build a physical device and instead uses sensors most people already have in their phones.