
S.A.M.
When the annual science fair at my daughter’s school came around last year, we decided it was time to build SAM. SAM, or Serving Assistant Machine, is a waiter on wheels powered by an Arduino microcontroller and an Nvidia Jetson Nano, along with a handful of sensors. A serving tray sat on top of the device with drinks placed upon it; SAM would then roam the room, detecting faces and offering drinks.
With the Arduino situated at the base, along with the batteries, ultrasonic sensors, wheels, and motors, SAM would navigate itself around the room, using its ultrasonic sensors to detect when something was in front of it. Upon detecting an obstacle, the Arduino would stop SAM in its tracks and send a signal to the Jetson Nano, which was located higher up, just below the serving tray. The Nano would then turn on its camera and, using OpenCV, look for a face; if it found one, SAM would audibly ask if the person would like a drink. Beyond simply detecting faces, the Nano also built a database of the faces it had seen, and if it encountered a face it had already met, SAM would adapt its question to say so. After a pause, SAM would then turn and start its routine over again.
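To give a sense of the flow on the Nano’s side, here is a rough Python sketch of that loop: wait for the Arduino’s obstacle signal over a serial link, grab a frame, look for a face with OpenCV, and speak a greeting. The port name, message strings, and the pyttsx3 speech engine are stand-ins for illustration, not SAM’s actual code.

import cv2
import serial
import pyttsx3

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # assumed USB serial link to the Arduino
camera = cv2.VideoCapture(0)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
voice = pyttsx3.init()

while True:
    # Wait for the Arduino to report that something is in front of SAM.
    line = arduino.readline().decode(errors="ignore").strip()
    if line != "OBSTACLE":          # assumed message sent by the Arduino sketch
        continue
    ok, frame = camera.read()
    if not ok:
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces) > 0:
        voice.say("Hello! Would you like a drink?")
        voice.runAndWait()
    arduino.write(b"RESUME\n")      # assumed cue telling the Arduino to turn and carry on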

This project included designing and 3D printing two different enclosures. The lower one housed the Arduino, motors, wheels, multiple batteries, motor controller, and ultrasonic sensors, and included an extension to hold the wooden pole that the top half of the machine sits on. The second enclosure, holding the Jetson Nano, webcam, and speaker, sat just below the serving tray.
Coding included C++ for the Arduino section, which essentially entailed building a rudimentary self-driving vehicle. Python was used to program the Jetson Nano, with machine learning incorporated to build the face detection model.
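The “seen this face before” behavior could be sketched along these lines with OpenCV’s LBPH recognizer (from opencv-contrib-python); the match threshold and in-memory storage here are assumptions for illustration rather than the values the project actually used.

import cv2
import numpy as np

recognizer = cv2.face.LBPHFaceRecognizer_create()
known_faces = []   # grayscale face crops, assumed resized to a common size
known_labels = []  # one integer id per person
next_label = 0

def remember_or_greet(gray_face):
    """Return a greeting, enrolling the face if it hasn't been seen before."""
    global next_label
    if known_faces:
        label, distance = recognizer.predict(gray_face)
        if distance < 60:  # assumed threshold; lower means a closer match
            return "Welcome back! Can I offer you another drink?"
    # New face: add it to the in-memory database and retrain the model.
    known_faces.append(gray_face)
    known_labels.append(next_label)
    next_label += 1
    recognizer.train(known_faces, np.array(known_labels))
    return "Hello! Would you like a drink?"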

While it has been some time since I last published an app, in the early days of the App Store I designed, coded, and published a handful of children’s and music-based apps.
Trixmix and Trixmix 2, my first published apps, were iPad apps that wirelessly connected to a Mac, turning the tablet into a mixing board for the music creation program Logic Pro. (Years later, Apple released an official iPad app that did the same thing.) I released this app a few months after the very first iPad launched, and it was quite the moneymaker for the first year of its existence. The first version was created in Xcode with Objective-C; version 2 was remade using the Corona SDK game engine, written in Lua.
Lilt, Pascale’s Puzzler, and ABC Explorer were educational apps that I published a few years later, created originally to help teach my daughter. Pascale’s Puzzler was created in Corona, while Lilt and ABC Explorer were created with Unity3D, which is still one of my go-to development platforms to this day, along with Swift and Xcode. All three of these apps were released for iOS, Android, and the Kindle line of devices.


More recently, I’ve begun simultaneous work on a Raspberry Pi-powered device and an iOS app, both aimed at the same problem: stabilizing footage from my Blackmagic camera, and harnessing time-of-flight (ToF) technology to add fast autofocus to a camera that barely has autofocus of its own.
There is at least one existing commercial product that tackles the stabilization issue, but I set out to see if I could create my own solution, whether by building my own device or by utilizing the hardware that already exists in my iPhone. The Raspberry Pi device, housed inside a custom case designed in C4D and then 3D printed, uses a gyroscope/accelerometer board along with a ToF sensor to handle both stabilization and focus. Using Blackmagic’s developer SDK, the device connects to the camera via Bluetooth, features a physical button to start and stop recording, and sends ToF data to the camera so it can adjust focus appropriately. With the underlying code written in Python, the stabilization data is saved to an XML file on a microSD card attached to the Raspberry Pi and later brought into After Effects via a script I wrote that imports the data and translates it into keyframes for a 3D camera.
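As a rough sketch of that logging loop (the specific sensor boards, libraries, XML layout, and file path below are illustrative stand-ins, not necessarily what is in the actual device), the Python side might look something like this, assuming an MPU-6050 IMU and a VL53L0X ToF sensor on the Pi’s I2C bus:

import time
import xml.etree.ElementTree as ET

import board
import busio
import adafruit_mpu6050
import adafruit_vl53l0x

i2c = busio.I2C(board.SCL, board.SDA)
imu = adafruit_mpu6050.MPU6050(i2c)   # assumed gyro/accelerometer board
tof = adafruit_vl53l0x.VL53L0X(i2c)   # assumed time-of-flight sensor

root = ET.Element("take")
start = time.monotonic()

try:
    while True:
        # One <sample> element per reading, timestamped relative to the take start.
        sample = ET.SubElement(root, "sample", t=f"{time.monotonic() - start:.4f}")
        gx, gy, gz = imu.gyro            # rad/s
        ax, ay, az = imu.acceleration    # m/s^2
        ET.SubElement(sample, "gyro", x=str(gx), y=str(gy), z=str(gz))
        ET.SubElement(sample, "accel", x=str(ax), y=str(ay), z=str(az))
        ET.SubElement(sample, "distance_mm", value=str(tof.range))
        time.sleep(1 / 60)               # roughly match a 60 fps timeline
except KeyboardInterrupt:
    # Write the whole take out to the microSD card when recording stops.
    ET.ElementTree(root).write("/media/microsd/stabilization.xml")

An import script can then walk those <sample> elements and turn them into position and rotation keyframes for the 3D camera.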
The iOS version is essentially the same thing, only written in Swift and using Apple’s Core Motion framework to tap the iPhone’s built-in gyro and accelerometer, along with the LiDAR sensor on the newer Pro-model iPhones. I’m also looking into bypassing the raw Core Motion data and instead testing the use of RealityKit (Apple’s augmented reality framework) to create the 3D scene, which could record movement as well as depth (focus) information in one fell swoop.
The upside to this app-based approach is, of course, that it completely bypasses the need to build an actual device and instead uses the sensors most people already have in their phones.

