I’m certainly not trying to say that a frame of latency doesn’t matter. I design latency-free analog video equipment for a living! I’m trying to say that where that frame of latency occurs very much does. I’m sure you can detect the difference between 1 frame of delay and 2 frames of delay – but can you detect the difference between 6 frames of delay and 7 frames of delay? Maybe. Perception of latency works on a roughly logarithmic scale: equal ratios matter more than equal increments. 1 vs 2 frames is a doubling, and huge. 4 vs 5 frames is a much smaller proportional change.
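To make that proportional-step point concrete, here’s a quick arithmetic sketch (assuming ~60 fields per second, so one frame period of roughly 16.7 ms – an illustrative figure, not a measurement of any particular system):

```python
# Adding one frame of delay is always the same absolute step,
# but the proportional step shrinks as the baseline grows.
FRAME_MS = 1000 / 60  # one frame period at 60 Hz, ~16.7 ms

for frames in (1, 4, 6):
    before = frames * FRAME_MS
    after = (frames + 1) * FRAME_MS
    pct = (after - before) / before * 100
    print(f"{frames} -> {frames + 1} frames: "
          f"{before:.1f} ms -> {after:.1f} ms (+{pct:.0f}%)")
```

Going from 1 to 2 frames is a +100% jump, while 6 to 7 frames is only about +17% – the same 16.7 ms lands very differently depending on where it sits.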
“Latency” as a concept, in isolation, is not specific enough to give us the big picture. Maybe with audio it is. Maybe with USB polling of a gamer mouse it is. Maybe with LCD monitor refresh times it is. But in our complex signal processing context, it’s like saying “fruit” as if apples, oranges, and bananas had negligible differences. I don’t really have a subjective opinion about this at all – I’m trying to illustrate that there are always objective physical constants involved, and where they matter in practical terms can be elusive. In the example given, you have much more to gain from optimizing your playback PC and using high quality converters and a good display than from an analog input amp.
I also deeply want to understand whether you are after recursion latency or user interface latency optimization, and whether the interface in question is software or the controls on the modules themselves.
If it is the controls on the modules themselves, just use a CRT for preview and you will have effectively zero latency. Done! Frontend buffering latency isn’t an issue. If you want us to design an effectively latency-free video processing workflow – it’s already there. The latency of the TBC itself only matters in relation to the interface of external gear on the frontend of your system. So if you want to avoid frontend latency, don’t use a frontend video playback interface as part of your performance – that’s where the interface latency would propagate. If you need instantaneous playback of a video clip after you press a button, Memory Palace already provides that kind of functionality by preloading frames into memory.
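The preloading idea can be sketched in a few lines – note that the class and names below are purely illustrative, not Memory Palace’s actual firmware or API:

```python
# Sketch of frame preloading: pay the slow storage read once up front,
# so that triggering playback later is just an index into RAM.
# All names here are hypothetical, for illustration only.

class PreloadedClip:
    def __init__(self, frames):
        # frames: raw frame buffers already read from storage at load time
        self._frames = list(frames)

    def frame_at(self, n):
        # Constant-time lookup: no storage or decode latency at trigger time.
        return self._frames[n % len(self._frames)]

clip = PreloadedClip([b"frame0", b"frame1", b"frame2"])
print(clip.frame_at(4))  # wraps around the clip: b'frame1'
```

The design point is simply that the expensive work (reading and decoding from storage) happens before the performance, so the button press itself costs almost nothing.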
Want optical camera feedback without any frame delay at all? Awesome! But you’re now in experimental territory – we don’t have an out-of-the-box solution for that. In my experiments, though, I’ve gotten along just fine by running analog YPbPr or RGsB directly into the module minijacks, with RCA to 3.5mm converters, fed from a camera with those outputs and a genlock input. SMX3 would be an ideal module for this purpose, since you can adjust colorspace and gain freely. You need a tube camera for this to really pay off, though – otherwise the CCD itself, the video encoder, and the processing going on inside the camera are going to introduce frame delay too! So if this is what you want, find a tube camera and start experimenting. If those experiments go well and you find that you need a formal DC restoration interface to improve on them, talk to us – let’s see what camera you’re using and whether others have the same workflows, and we could justify releasing a module. Heck, even drop by the lab with your tube cam after your Chromagnon arrives and we can take some real measurements of what is actually going on.
Even if I were to bypass the frame memory in TBC2 firmware, you’re not going to get latency-free performance the way you would with an analog signal path. There’s going to be a measurable processing delay through ADC (a video decoder IC, like the ADV7181C), then through the FPGA fabric, and then on to a DAC (a video encoder IC, like the ADV7393).
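A rough budget shows why that pipeline delay is measurable but still in a different class from a full frame buffer. The numbers here are illustrative orders of magnitude, not measured figures for the ADV7181C or ADV7393:

```python
# Illustrative latency budget for a line-based ADC -> FPGA -> DAC path
# versus a full frame buffer. Figures are rough, not measured specs.
US_PER_LINE = 63.6        # one NTSC line period, ~63.6 microseconds
LINES_PER_FRAME = 525     # NTSC frame (both fields)

pipeline_lines = 4        # assumed: decoder + FPGA line buffers + encoder
pipeline_us = pipeline_lines * US_PER_LINE
frame_us = LINES_PER_FRAME * US_PER_LINE

print(f"line-based pipeline delay: ~{pipeline_us:.0f} us")
print(f"full frame buffer delay:   ~{frame_us / 1000:.1f} ms")
```

Even with the frame memory bypassed, a few lines of buffering still add up to hundreds of microseconds – nonzero and measurable, just orders of magnitude less than buffering a whole frame.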