DSP principles for analog video emulation?

Hi all,
I recently discovered video synthesis and immediately started going down the rabbit hole (watching all the YouTube videos I can find of LZX gear, downloading Lumen and learning to patch it, etc.), and I think I’d like to implement my own “analog-like” digital video softsynth to understand the principles. (I did something similar when I first encountered audio synthesis a few years back and I think I got a lot out of it, though after that I ended up buying mostly analog gear :grinning:)

I am doing a PhD in machine vision (basically image processing for robotics) so video synthesis has a natural appeal to me, but I’ve discovered that analog video and analog “images” are rather different from the intensity array digital images I’m familiar with, and most of the resources I’ve found are old-school EE notes on analog circuits.

I’ve been playing around with simple scripts using sine oscillators for RGB and messing around with frequencies to get interesting patterns, but I was wondering if there are any good resources (papers, books, GitHub repos) I could read to implement a basic version of something like Lumen (someone on r/videosynthesis recommended I ask here). Most of the online resources on analog video describe the actual analog processes (scan lines, horizontal and vertical sync, etc.), but emulating them in software seems to be a rather niche topic.

Thanks!

This might be a naive answer (I don’t know much about DSP at all), but I would guess video DSP would be very much like audio DSP. The analog video signal isn’t a 2D (X, Y) signal; it’s still just a 1D signal representing intensity over time*. The only real difference would be how you output it. The scanline starts in the top left corner of the screen, goes across, then restarts at the beginning of the next line, goes across, and so on until it reaches the bottom right of the screen, and then it starts again at the top left**. The trick to making 2D shapes appear in the 1D stream of intensity is synchronizing things/actions to the frame rate, the horizontal rate, or the vertical rate. To make something like Lumen, you’d “just” need a buffer to put the 1D intensity info into and then display that buffer as a 2D image periodically.
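To make that concrete, here’s a rough numpy sketch of the buffer idea (the resolution, frame rate, and the 3 Hz oscillator are just placeholder numbers, not anything standard):

```python
import numpy as np

# Placeholder raster: 320 "pixels" per line, 240 lines, 30 frames per second.
WIDTH, HEIGHT, FPS = 320, 240, 30
SAMPLES_PER_FRAME = WIDTH * HEIGHT

def intensity(t):
    """Toy 'voltage' source: a single sine oscillator driving brightness."""
    return 0.5 + 0.5 * np.sin(2 * np.pi * 3.0 * t)

frame_index = 0
# Sample times for one frame, as if the beam swept the whole raster in 1/FPS seconds.
t = (frame_index + np.arange(SAMPLES_PER_FRAME) / SAMPLES_PER_FRAME) / FPS
signal_1d = intensity(t)                      # the 1D intensity stream
frame_2d = signal_1d.reshape(HEIGHT, WIDTH)   # fold it into scanlines for display
```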

I’m not sure if that’s actually any help at all, but I hope it is! I’m curious to see what you end up doing along these lines.

* (Well, not exactly. There are also color signals modulated into the intensity signal, but I imagine modulation/demodulation is common in audio DSP, so it’s probably not tricky to deal with. Also, you can ignore the modulated color signals and just treat it as a single monochrome signal, at least to start with, to keep things simple. That’s what a B&W TV does when it receives a color composite signal.)
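(If you do want to play with the color part eventually, a toy version of that modulation might look something like the following; I’m hand-waving the exact NTSC burst/phase details and only keeping the “two color components in quadrature on a subcarrier” idea.)

```python
import numpy as np

F_SC = 3_579_545.0   # NTSC color subcarrier, roughly 3.58 MHz

def composite(y, u, v, t):
    """Toy composite encoder: luma plus two color-difference components
    in quadrature on the subcarrier. Real NTSC/PAL add the color burst,
    filtering, and specific gain/phase conventions on top of this."""
    return y + u * np.sin(2 * np.pi * F_SC * t) + v * np.cos(2 * np.pi * F_SC * t)

t = np.arange(0, 10e-6, 1 / (4 * F_SC))   # a few microseconds, 4x oversampled
sig = composite(y=0.5, u=0.1, v=0.2, t=t)
```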

**(That was a simplification for a few reasons. The beam cannot jump instantaneously, so it has to travel from the end of each line back to the start of the next, and from the bottom right of the screen back up to the top left; these retrace movements take time, and the video signal is blanked while they happen. There’s also interlacing, where alternating scanlines are drawn in two separate fields. So truly generating such signals isn’t quite as simple as I explained above, but for your purposes I think that’s how you should simplify it. To see how the beam in a CRT really travels, check out the image that goes with this stackoverflow answer.)
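(If you ever want to fake the interlacing part, weaving two fields into one frame is simple enough in numpy; the sizes here are arbitrary.)

```python
import numpy as np

WIDTH, LINES = 320, 240
# Two fields, each with half the lines, as if captured one field-period apart.
field_a = np.random.rand(LINES // 2, WIDTH)   # stand-in for the "odd" field
field_b = np.random.rand(LINES // 2, WIDTH)   # stand-in for the "even" field

frame = np.empty((LINES, WIDTH))
frame[0::2] = field_a   # field A lands on every other line
frame[1::2] = field_b   # field B fills the lines in between
# The blanked retrace portions are just stretches of the 1D signal that
# never get written into this buffer at all.
```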


I used to write simulations in Adobe Flash for some LZX modules (not the best tool for the job) and we have used some Python scripts too.

You want to process (or generate) your digital image the same way it happens in analog, by iterating through the whole image pixel by pixel, starting at the top left and going to the bottom right, scanline by scanline. You need to think of pixels as increments in both time and image coordinates. An SD pixel is about 70 ns, for example.
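For example, mapping pixel coordinates back to time might look roughly like this (the timing numbers are ballpark SD figures, not something to treat as exact):

```python
import numpy as np

# Ballpark SD timing: ~52 us of active video per line and 720 active samples,
# which works out to roughly 70 ns per pixel; ~64 us per full line with blanking.
ACTIVE_LINE_S = 52e-6
LINE_PERIOD_S = 64e-6
ACTIVE_PIXELS = 720
ACTIVE_LINES = 576

DT_PIXEL = ACTIVE_LINE_S / ACTIVE_PIXELS   # ~72 ns

def pixel_time(x, y, frame_start=0.0):
    """Time at which the scan reaches pixel (x, y), top-left origin."""
    return frame_start + y * LINE_PERIOD_S + x * DT_PIXEL

image = np.zeros((ACTIVE_LINES, ACTIVE_PIXELS))
for y in range(ACTIVE_LINES):
    for x in range(ACTIVE_PIXELS):
        t = pixel_time(x, y)
        image[y, x] = 0.5 + 0.5 * np.sin(2 * np.pi * 1e6 * t)   # 1 MHz "test tone"
```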

This video I made might help; it has some good illustrations:


Thanks! In my initial experiments I had been essentially ignoring time by generating whole frames at 30 Hz (which would all be identical were it not for a small phase offset I added for each frame to simulate the real thing). That way I could generate all the frames “offline” and then ffmpeg them into a video, which was rather far removed from the ultimate goal of a simple synth.
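Roughly, the offline version looked like this (the frequencies are arbitrary placeholders, and I’m assuming imageio for the PNG writing; any image writer would do):

```python
import numpy as np
import imageio.v2 as imageio   # assumes imageio is installed; any PNG writer works

WIDTH, HEIGHT, FPS, N_FRAMES = 320, 240, 30, 90

for i in range(N_FRAMES):
    phase = 2 * np.pi * i / N_FRAMES   # small per-frame phase offset
    xx, yy = np.meshgrid(np.linspace(0, 1, WIDTH), np.linspace(0, 1, HEIGHT))
    r = 0.5 + 0.5 * np.sin(2 * np.pi * 5 * xx + phase)
    g = 0.5 + 0.5 * np.sin(2 * np.pi * 3 * yy + phase)
    b = 0.5 + 0.5 * np.sin(2 * np.pi * 4 * (xx + yy) + phase)
    frame = (np.dstack([r, g, b]) * 255).astype(np.uint8)
    imageio.imwrite(f"frame_{i:04d}.png", frame)

# Then stitch the frames together, e.g.:
# ffmpeg -framerate 30 -i frame_%04d.png -pix_fmt yuv420p out.mp4
```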
Since then, I’ve switched to pygame and a simple nested for loop that simulates the scanline (with the color at every pixel determined by the “voltage” of the sine-wave signal at that time point), and I get results pretty similar to Lumen. However, to get it to run at 30 Hz the “canvas” needs to be pretty small, since a nested for loop for every frame plus pygame is pretty inefficient. I’ll have to rewrite this in C++ eventually…
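In case anyone wants to see it, the per-pixel loop is roughly this shape (small canvas and arbitrary oscillator frequencies; set_at is definitely not the fastest way to draw):

```python
import math
import pygame

WIDTH, HEIGHT, FPS = 160, 120, 30        # small canvas so the Python loop keeps up
DT_PIXEL = 1.0 / (WIDTH * HEIGHT * FPS)  # pretend each pixel is one time step

pygame.init()
screen = pygame.display.set_mode((WIDTH, HEIGHT))
clock = pygame.time.Clock()

t = 0.0
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    for y in range(HEIGHT):              # the simulated scanline
        for x in range(WIDTH):
            # three free-running sine "oscillators" sampled at this time point
            r = 127 + 127 * math.sin(2 * math.pi * 9e3 * t)
            g = 127 + 127 * math.sin(2 * math.pi * 7e3 * t)
            b = 127 + 127 * math.sin(2 * math.pi * 5e3 * t)
            screen.set_at((x, y), (int(r), int(g), int(b)))
            t += DT_PIXEL
    pygame.display.flip()
    clock.tick(FPS)
pygame.quit()
```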


Thanks @creatorlars – your video inspired me to try this out in the first place! I’ll eventually need to figure out how to include those sync pulses…
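My rough mental model of that part so far (ballpark SD numbers, sync tip below the blanking level) is something like:

```python
import numpy as np

# Ballpark SD line timing: 64 us per line, ~52 us of active video,
# ~4.7 us horizontal sync pulse, the rest split between front and back porch.
SAMPLE_RATE = 13.5e6                     # a common SD sampling rate
LINE_S, ACTIVE_S, SYNC_S = 64e-6, 52e-6, 4.7e-6
FRONT_PORCH_S = 1.65e-6

SYNC_LEVEL, BLANK_LEVEL = -0.3, 0.0      # volts; active video sits between 0 and 0.7 V

def build_line(active_samples):
    """Wrap one line of active video (values 0..0.7) with blanking and an hsync pulse."""
    n = lambda seconds: int(round(seconds * SAMPLE_RATE))
    front = np.full(n(FRONT_PORCH_S), BLANK_LEVEL)
    sync = np.full(n(SYNC_S), SYNC_LEVEL)
    back = np.full(n(LINE_S - ACTIVE_S - FRONT_PORCH_S - SYNC_S), BLANK_LEVEL)
    return np.concatenate([front, sync, back, active_samples])

n_active = int(round(ACTIVE_S * SAMPLE_RATE))
active = 0.35 + 0.35 * np.sin(2 * np.pi * np.linspace(0.0, 5.0, n_active))
line = build_line(active)                # one line of "composite" signal, minus vsync
```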
