So, let’s break down all the complex signal flow steps that introduce latency.
For the purpose of illustration, assume a PC with a mouse, video playback software, an SDI video output card, an SDI-to-analog converter, an analog-to-1V LZX TBC module (TBC2), an analog keyer module (FKG3), an analog output encoder module (ESG3), and a consumer-grade LCD video monitor with an analog/YPbPr input.
The latency we are measuring is the time it takes between “click the Play button” and “LCD monitor starts to show the frame.”
Each of the following steps introduces latency:
- The USB peripheral drivers poll the mouse and register a click.
- The software registers the click event and processes it.
- The software starts buffering the video file into memory until it reaches a threshold at which it decides it is safe to start streaming without hiccups.
- The video output card shows the first frame of video.
- The SDI to Analog converter processes the first frame of video and converts it into an analog signal.
- The TBC2 module captures the frame to memory and then waits for the next “house sync” vertical refresh pulse.
- The TBC2 module streams the frame to its 1V RGB outputs. Now we are inside the LZX 1V patch.
- The FKG3 keyer module processes the analog signal, transitioning, for example, between the image and a synthesized pattern.
- The ESG3 analog encoder converts LZX 1V RGB back into viewable analog video signal.
- The analog video signal is captured by the LCD monitor, converted into a digital bitstream, and written to the monitor’s internal frame store.
- The LCD monitor refreshes, and you see the result.
All 11 steps introduce latency. Now, we can start estimating and measuring in context – the latency between two different steps, for example.
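For a rough sense of where the time goes, here’s a back-of-the-napkin latency budget. Every number below is an illustrative placeholder (assuming 60Hz video, so ~16.7ms per frame), not a measurement of the actual hardware:

```python
# Rough, illustrative latency budget for the 11 steps above.
# All values are placeholder estimates (assumptions), not measurements.
STEPS_NS = {
    "1. USB mouse poll":             1_000_000,    # ~1ms polling interval
    "2. software click handling":    5_000_000,    # ~5ms, highly variable
    "3. playback buffering":         500_000_000,  # ~0.5s, highly variable
    "4. SDI output card":            16_700_000,   # ~1 frame at 60Hz
    "5. SDI to analog converter":    16_700_000,   # ~1 frame
    "6. TBC2 capture + house sync":  16_700_000,   # ~1 frame
    "7. TBC2 1V RGB output":         100,          # ~100ns, analog domain
    "8. FKG3 keyer":                 100,          # ~100ns
    "9. ESG3 encoder":               100,          # ~100ns
    "10. LCD capture to framestore": 16_700_000,   # ~1 frame
    "11. LCD refresh":               16_700_000,   # ~1 frame
}

total_ns = sum(STEPS_NS.values())
print(f"total: {total_ns / 1e6:.1f} ms")
for name, ns in STEPS_NS.items():
    print(f"  {name}: {ns / total_ns:.5%} of total")
```

Even with generous guesses, the analog steps (#7 thru #9) vanish into rounding error; the software buffering dominates everything else.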
Steps #7 thru #9 are measured in nanoseconds. In this case, the latency is maybe 100ns? No more than a pixel or two. Each analog op-amp in the signal path adds a latency of around 10ns.
For the sake of proportional context, 100ns is about 10000 times faster than even Step #1 (registering a mouse click at 1ms polling speeds.)
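A quick sanity check on that proportion:

```python
# 100ns of analog-path latency vs a 1ms (1,000,000ns) mouse polling interval.
analog_ns = 100
mouse_poll_ns = 1_000_000
print(mouse_poll_ns // analog_ns)  # → 10000
```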
And for the sake of argument, steps #1 thru #6 and #10 thru #11 have no effect on the output image you will see. There’s no difference if this was a latency of 1 frame or 100 frames, nothing is changing besides the time it takes for you to see it.
So in this example case, the only significant latency factor is the user interface element – the UI response time between click and display. Whether that is relevant, and how much it affects your workflow, is inherently dependent on your process itself. For example, in a pre-rendered mixdown scenario for recapture, it doesn’t really matter at all. In a performance context where you are clicking the mouse in time with a live audio signal, it matters more. As mentioned above, it is the display or projector that is most likely to introduce an annoying amount of latency in context.
Things only get more complicated, or start to affect the resulting image you will see, when we are discussing recursion/feedback. And in that case, “low latency = always good” is not true at all. For example, you want at least one frame of latency in order to build up recursive images that stack on top of each other – if it’s less than that, you get more of an optical smear. Also good, but a different technique. Even delaying the recursion (variable programmed latency – frame delay memory) is an important technique.
The recursion latency and total latency are two different things entirely. Both are relative to different steps in the flow. For example, if we patch the FKG3 output back to one of its inputs, our total latency (click to display) remains constant, but the recursion latency of the FKG3 feedback path is much, much faster (20-30ns-ish.) You’d likely want to get a low pass filter into that feedback loop (i.e., intentionally increase the latency), or take the feedback path thru an additional TBC or frame delay, to get a range of interesting pattern modulations.
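If it helps to see the frame-stacking behavior, here’s a toy numeric model of a feedback path through an n-frame delay. It’s a 1D “frame” of brightness values, not real video, and the mix amount is an arbitrary assumption – just a sketch of the idea, not how any LZX module actually works:

```python
from collections import deque

# Toy model of recursive video feedback through an n-frame delay line.
# A "frame" is a 1D list of brightness values (0.0 = black, 1.0 = white).
# Each pass adds the delayed feedback to the live input, loosely like
# patching a keyer output back to its own input through a frame store.
def run_feedback(frames, delay, mix=0.6):
    """Return the last output frame; delay must be >= 1 frame."""
    width = len(frames[0])
    # Delay line pre-filled with `delay` black frames.
    buf = deque([[0.0] * width for _ in range(delay)], maxlen=delay)
    out = None
    for frame in frames:
        fb = buf[0]  # oldest frame in the delay line
        out = [min(1.0, x + mix * y) for x, y in zip(frame, fb)]
        buf.append(out)  # maxlen auto-evicts the oldest frame
    return out

# A single bright pixel, repeated: with a 1-frame delay the recursion
# stacks it up toward white over successive passes.
pulse = [0.0, 0.3, 0.0]
print(run_feedback([pulse] * 5, delay=1))
```

Raising `delay` spreads the build-up over a longer interval – the “variable programmed latency” idea above – while the total click-to-display latency of the toy model is unchanged.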
For the sake of further discussion, let’s define some more terms:
Frontend software latency. Steps #1 thru #3. Anywhere between a few milliseconds and a few seconds. Want it to go faster? Buy a fancy gamer mouse and optimize your PC (faster RAM, faster hard drive, better software.)
Frontend video latency. The latency between steps #4 and #5. A frame or two, maybe more. A device with direct analog output (rather than going through the SDI-to-analog converter in step #5) might reduce it a little, but probably not by much. Once you’re streaming SDI, the digital devices can communicate with very low latency, and broadcast converters are generally very consistent. Cheap consumer-grade converters from Amazon probably do more in software, have less memory, or are MCU-based rather than FPGA-based, so they will have more latency.
Input TBC latency. 1+ frames. In the case of TBC2, the frame latency is adjustable.
Analog processing latency. Almost instantaneous. Usually less than a microsecond even in the worst case. If an object flew past your face in that amount of time, you’d never know it was there.
Backend / display latency. Could be anywhere from almost instantaneous (a CRT monitor) to several frames (a cheapo projector, or a long video transmission line from the sound booth to front of house in a venue.) Want to decrease this? Buy a display with faster refresh.
Software user interface latency. The sum of all the latency increments above. For example, the interval between clicking the mouse and the frame displayed on the monitor.
Analog/hardware user interface latency. The sum of analog processing latency and backend/display latency. For example, the time between turning a knob on your video synth and seeing the result. The only latency here is going to come from the display you choose.
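Putting those two composite terms together as arithmetic (all category numbers here are illustrative placeholders, assuming 60Hz video):

```python
# Illustrative category estimates in milliseconds; assumptions, not measurements.
frontend_software = 150.0   # steps #1-3: click handling + buffering
frontend_video    = 33.4    # steps #4-5: ~2 frames at 60Hz
input_tbc         = 16.7    # step  #6:   ~1 frame
analog_processing = 0.0001  # steps #7-9: ~100ns total
backend_display   = 33.4    # steps #10-11: up to ~2 frames

# Software UI latency: click to displayed frame (everything summed).
software_ui = (frontend_software + frontend_video + input_tbc
               + analog_processing + backend_display)

# Analog/hardware UI latency: knob turn to displayed result.
hardware_ui = analog_processing + backend_display

print(f"software UI latency: ~{software_ui:.1f} ms")
print(f"hardware UI latency: ~{hardware_ui:.1f} ms")
```

With these placeholder numbers, the knob-to-display path is essentially just the display’s own latency, which matches the point above: in the analog patch, the display you choose is the whole story.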