Conversion to LZX RGB format

Hey there,

What do I need to do to convert HD video to the LZX RGB standard? My understanding is that the LZX 1V standard is unique. Before hitting LZX RGB inputs, any signal is going to need to be massaged: voltage scaling, DC restoration, sync strip, etc. I saw in another thread that Cadet III performs this function. But of course I did not see them for sale. What are my options currently?

What is the plan for Gen 3? A decoder module that can accept YUV would be good enough for my purposes at this time, but of course we need to get other formats in as well. I propose separate decoder modules for analog & digital signals. The less complicated the better, for both designers and users of the modules. I’d love to see an analog decoder supporting YUV and VGA. RGBS, RGBHV, and RGsB would be great, but not top priority. A digital decoder would need to support HDMI and SDI. DVI-D would be nice, but not essential.

Thanks!!

Aaron

1 Like

TBC2 is the module we have developed for this purpose.

3 Likes

That would require designing and manufacturing a circuit that performs those functions. There is no standalone Eurorack-format module that does what I think you’re getting at, i.e. an HD component input amp with no intervening frame buffer.

I’m curious to hear about your proposed workflow with these myriad input formats, or your desired outcomes from a video synthesis instrument. It seems like minimal to no frame delay is critical in your process, which I’m not quite understanding due to lack of context. Do you have any current work you could share with us that gives a sense of how you plan to approach the instrument? Some frame delay seems impossible to avoid in my own practice, but I’m not doing anything where user input per individual frame is critical or can’t be compensated for.

5 Likes

I’m pretty sure the Cadet III does not do what you’re looking for. First, it’s SD-only since it uses an LM1881 (as far as I can tell, that’s only for NTSC/PAL, both of which are SD formats). But it’s also not RGB; it just converts a composite signal to a luma (brightness) signal.

The Syntonie VU003 is closer to what you want, since it’s YPbPr, but it too is SD-only because of the LM1881.

1 Like

From the LM1881 datasheet:

The integrated circuit is also capable of providing sync separation for non-standard, faster horizontal rate video signals.

I’m not sure of the range of signals it will decode sync from, but I would think it should handle some HD.

TBC2 looks great, at least for component signals. A post-release firmware update to support VGA resolutions up to 1080i would make it complete. But if I am not mistaken, there is no way to bypass the frame store. Forgive me if you already answered this question on FB. But can TBC2 be used as a simple decoder for synchronous sources?

And of course, TBC2 doesn’t address the need for HDMI and SDI input into an LZX system. This can be handled by external conversion boxes, but this is sub-optimal for many reasons, including portability. Ideally a modular system would have everything needed in one case with a unified architecture, power, etc.

Thanks!

1 Like

The inputs are hardwired directly into the frame buffer, so no. What you are describing is an input amp module that doesn’t exist and would need to be designed and manufactured.

That would be ideal, wouldn’t it? For now, I think you’ll have to use external converter boxes if your workflow necessitates that.

Just circling back to my question above, I’m curious if you could fill us in on your need for near-zero frame delay in the practical context of your body of work, or what you’re hoping to literally do with these devices. It might help folks on the forum give you better advice on what you’re trying to achieve that requires such minimal frame delay, when some frame delay is usually just a factor of life in video synthesis, up to pro broadcast settings.

Not saying there isn’t a need for it if we can understand why.

3 Likes

One of my core intentions is to play the video synth as a real-time instrument, controlled from a PC via MIDI and DC coupled audio CVs. Latency is a huge concern here. It’s already bad enough that the best audio interfaces available can only bring latency down to ~10 ms, and practically speaking it’s usually more like ~20 ms. Then if we’re going through a TBC, that’s adding at least ~30 ms. So at a minimum we’re looking at two frames of delay between hitting a MIDI key and seeing something change on the screen.
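To put rough numbers on that (just back-of-the-envelope; I’m reusing the ballpark figures above, nothing measured):

    # Rough click-to-picture math at ~30 fps; the interface and TBC numbers
    # below are the ballpark assumptions from the paragraph above, not
    # measurements of any particular device.
    frame_period_ms = 1000 / 29.97        # one frame is about 33.4 ms
    audio_interface_ms = 20               # practical MIDI/audio interface figure
    tbc_ms = 30                           # roughly one frame held in the TBC
    total_ms = audio_interface_ms + tbc_ms
    print(f"~{total_ms} ms, or {total_ms / frame_period_ms:.1f} frames of delay")
    # -> ~50 ms, or 1.5 frames: call it two frames before the screen reacts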

Then it gets even uglier when we’re trying to generate musical sound and picture at the same time. If the video is being delayed by TBCs etc. and the audio is not, then there will be an A/V sync issue at the capture stage. That could be dealt with by shifting the audio in post, but that is a solution to a problem that does not need to exist.

1 Like

Yes, that’s what I thought, and that’s why I brought it up. There’s a need for this, at least in my use case described above.

I think people may be too comfortable with frame delay as a “factor of life”. I clearly remember a time when absolutely everything in a studio was locked to house sync. TBCs were expensive. Even VCRs weren’t necessarily going through full frame stores… the old U-Matic decks went through 16-line TBCs that introduced almost no delay.

2 Likes

This sounds like a fun approach, but I wonder if you’ve had the opportunity to attempt this workflow with another analog video synth and had this exact problem occur in a way that made it impossible to navigate. If you haven’t, there might be LZX users in your area with whom you could try this out to see how much of a problem it really creates during the process? It seems like this is a major point of concern for you, and I’d be sad to see you disappointed with gear that doesn’t meet the specs you’d like it to have.

Can we see any of your current portfolio so we can try to make some helpful suggestions with context on what you’re attempting to use this for in more concrete terms? I get that you want to press a key and have something happen with zero delay but I’m having trouble conceptualizing how two frames would actually present an issue. I’m mainly curious as I’ve personally never encountered another video synthesist whose work was frustrated by two frames of delay and just want to better understand the limitations you seem to be butting your head up against.

Cool, maybe this would be a great opportunity to learn how to design a module that fits your needs and share it with everyone else! I had no idea how to design a module a little over a year ago and found the time to make a Quad S-Video Distribution module I wanted with no formal training. Maybe that’s not in the cards for how you want to spend your time, which is totally understandable. If not, maybe you’d want to personally commission someone to research, design, and manufacture it for you?

I don’t know your circumstances, but it’s unlikely that a very niche-case module will appear on the market just from posting an idea about it on a forum – just trying to be real if you’re looking for this module to actually exist. Syntonie’s VU003 is probably a good starting point, but the YUV->RGB conversion into the LZX 0-1V colorspace differs between SD and HD, and the sync stripper (LM1881) may need to be a more modern, adaptable IC like the LMH198x.

Sure, it makes sense that when TBCs were expensive there was more of an emphasis on gear that primarily worked with genlocked inputs. I have ancient gear from the 70s with separate composite sync and composite blanking inputs, even!

Maybe digging into that older gear will get you the result you want? TBC2 is just not the module you’re looking for if you want zero input delay, nor can I think of any TBC that has zero delay, given how they work. I’m not an expert on all the TBCs that have ever been produced, so maybe you have model numbers of examples you could share with us?

2 Likes

Thanks, but that Syntonie module is SD only.

The line TBCs I’m referring to are rare these days, but the concept is that they don’t store the whole frame. You make bidirectional connections between the TBC and the VCR: the TBC replaces the sync coming from the deck, stabilizing the signal, and it also cleans up the raster, performing dropout compensation. That’s an example of how delaying all signals by at least one frame is just unnecessary. It’s convenient to just feed random wild signals into a frame store, but it’s not always the best solution. Pretty soon you’re in a situation where you need to delay the audio too, and things get messy and expensive.
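For a sense of scale, here’s the arithmetic on those 16-line TBCs I mentioned (assuming standard NTSC timing; the figures are my own illustration, not from any particular deck):

    # Why a 16-line TBC adds almost no delay compared to a full frame store.
    # Standard NTSC timing assumed; purely illustrative.
    line_period_us = 63.556                   # one NTSC scan line
    frame_period_ms = 1000 / 29.97            # one full frame, ~33.4 ms
    line_tbc_ms = 16 * line_period_us / 1000  # ~1.0 ms correction window
    print(f"16-line TBC: ~{line_tbc_ms:.1f} ms vs. full frame store: ~{frame_period_ms:.1f} ms")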

As for my own work, you’re welcome to take a look, but what you see there isn’t necessarily indicative of what I am going to try to accomplish in the future. The closest thing would probably be “PSEKELIS”. I created this at CalArts by feeding a prerecorded musical piece into a bunch of gear including a Fairlight CVI, CV-modded Optical Electronics raster processor, Hearne/EAB Videolab II, and ARP 2600.

http://www.dr-yo.com/video_psekelis_hfr.html

2 Likes

I don’t think I can help from here as you probably need to find older rare gear or invent newer gear that does what you want at the price point you’re willing to pay. Good luck!

3 Likes

From the LMH1980 datasheet (emphasis mine):

Advantages of the LMH1980

The LMH1980 offers several advantages over the LM1881, including:

  • Lower operating supply of 3.3 V
  • Wide temperature range of −40°C to +85°C
  • Auto-detection of the input video format using a fixed-value resistor biasing architecture
  • Support for HD tri-level sync separation
  • HSync output and HD detect flag output
  • Smaller package

That’s why I figure the LM1881 isn’t suitable for HD.

1 Like

I’ve never noticed the delay in any kind of appreciable way
and I love working with memory palace + sensory translator
so I guess I just don’t see this as a problem for my own work

what video synthesis equipment are you currently working with?

I am getting back into this field after a long hiatus. But I have experience with the Sandin and Hearne/EAB systems, DVEs, raster manipulation, etc. Plus I was a video editor and videographer, and I’ve been a computer musician and 3D graphic artist for a very long time.

The only reason I am getting back into video synthesis is because LZX is supporting high definition. I would have bought their gear long ago, but I am a snob about HD. :smile:

2 Likes

I understand exactly what you want @dryodryo, and I understand exactly why you want it. I do find that latency becomes more significant for tight, rhythmically oriented patches. I am starting a project with a tap dancer next month, and I know that some elements of this may benefit from a low latency response.

People talk about one or two frames of latency, but this isn’t what I’m seeing from my Memory Palace or Structure. It’s more than that (there is another thread about this). When I plug into a venue’s projector, that adds some more latency. At a certain point, some types of patches no longer work.

My approach to working around this type of thing is to have the digital stuff going into the Cortex and then to add any required snappiness using analogue processing. My desire is to minimise any latency that I can control, in order to give myself more options.

I suspect that if you transfer the audio performance elements into the voltage controlled realm, then you may be able to cut down on some of the latency that is intrinsic to MIDI and audio interfaces. This way, your video synth can work from the exact same triggers as the audio.

I get the impression that low latency input is a relatively niche requirement. I don’t know whether the Syntonie VU003 – COMPONENT2RGB could be updated to use the LMH1980 and add a genlock output on the front panel (and possibly the rear, too). By my thinking, that would provide one component input that sends a genlock signal and could bypass any frame-buffered encoder sections (Chromagnon etc.).

1 Like

Yes, that was me. I think the episodes have all been taken down from the KPSU site.

1 Like

I am working on an infographic that clearly sets out some terms and definitions for video latency, but have not had much time to work on it. I think it may clarify this discussion more, and provide a basis for describing what we’re all trying to accomplish here – there are several very different ways that latency is introduced in the signal path in a complex video processing system, and several very different ways in which that becomes relevant – or not – for various user workflows.

This is a fascinating topic that rarely escapes the world of folks who are designing the circuits themselves. It’s important to understand LZX is a “deconstructed video” format – it lets you do all kinds of things to the signal and interface with it in a manner outside any consumer or broadcast gear.

So it can be very hard to have a discussion on latency outside a very specific signal path (which is why we all keep asking for examples of gear being used and techniques being attempted.) Maybe it is easier if we at least have those details nailed down in context, even if it is only for the purpose of theoretical analysis. That way we can have the discussion using words like nanoseconds and microseconds!

5 Likes

So, let’s break down all the complex signal flow steps that introduce latency.

For the purpose of illustration, we have a PC with a mouse, video playback software, an SDI video output card, an SDI to analog converter, an analog to 1V LZX TBC (TBC2), an analog keyer module (FKG3), an analog output encoder module (ESG3), and a consumer grade LCD video monitor with Analog/YPbPr input.

The latency we are measuring is the time it takes between “click the Play button” and “LCD monitor starts to show the frame.”

Each of the following steps introduces latency:

  1. The USB peripheral drivers poll the mouse and register a click.
  2. The software registers the click event and processes it.
  3. The software starts buffering a video file into memory until it reaches a threshold at which it decides it is safe to start streaming without hiccups.
  4. The video output card shows the first frame of video.
  5. The SDI to analog converter processes the first frame of video and converts it into an analog signal.
  6. The TBC2 module captures the frame to memory and then waits for the next “house sync” vertical refresh pulse.
  7. The TBC2 module streams the frame to its 1V RGB outputs. Now we are inside the LZX 1V patch.
  8. The FKG3 keyer module processes the analog signal, transitioning, for example, between the image and a synthesized pattern.
  9. The ESG3 analog encoder converts LZX 1V RGB back into a viewable analog video signal.
  10. The analog video signal is captured by the LCD monitor, converted into a digital bitstream, and written to the monitor’s internal frame store.
  11. The LCD monitor refreshes, and you see the result.

All 11 steps introduce latency. Now, we can start estimating and measuring in context – the latency between two different steps, for example.
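As a sketch of how such an estimate might be organized (every per-step number below is a hypothetical placeholder, not a measurement of any particular gear):

    # Illustrative latency budget for the example chain above.
    # All values are hypothetical placeholders; substitute your own measurements.
    budget_ms = {
        "1. mouse polling + click":        1,
        "2-3. software event + buffering": 50,
        "4. video output card":            33,
        "5. SDI to analog converter":      33,
        "6-7. TBC2 capture + resync":      33,
        "8-9. analog patch + ESG3 encode": 0.0001,   # ~100 ns: effectively zero
        "10-11. LCD capture + refresh":    33,
    }
    total = sum(budget_ms.values())
    print(f"total: ~{total:.0f} ms (~{total / 33.4:.1f} frames at ~30 fps)")

Laid out that way, the analog portion of the patch disappears into rounding error, which is the point below.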

Steps #7 thru #9 are measured in nanoseconds. In this case, the latency is maybe 100ns? No more than a pixel or two. Each analog op-amp in the signal path adds a latency of around 10ns.

For the sake of proportional context, 100ns is about 10000 times faster than even Step #1 (registering a mouse click at 1ms polling speeds.)

And for the sake of argument, steps #1 thru #6 and #10 thru #11 have no effect on the output image you will see. There’s no difference whether this is a latency of 1 frame or 100 frames; nothing changes besides the time it takes for you to see it.

So in this example case, the only significant latency factor is the user interface element – UI response time between click and display. Whether that is relevant and how much it affects your workflow is inherently dependent on your process itself. For example, in a pre-rendered mixdown scenario for recapture, it doesn’t really matter at all. In a performance context where you are clicking the mouse in time with a live audio signal, it matters more. As mentioned above, it is the display or projector that is most likely to introduce an annoying amount of latency in context.

Things only get more complicated, or start to affect the resulting image you will see, when we are discussing recursion/feedback. And in that case, “low latency = always good” is not true at all. For example, you want at least one frame of latency in order to build up recursive images that stack on top of each other – if it’s less than that, you get more of just an optical smear – also good, but a different technique. Even delaying the recursion (variable programmed latency – frame delay memory) is an important technique.

The recursion latency and the total latency are two different things entirely. Both are relative to different steps in the flow. For example, if we patch the FKG3 output back to one of its inputs, our total latency (click to display) remains constant, but the recursion latency of the FKG3 feedback path is much, much faster (20-30ns-ish). You’d likely want to get a low pass filter into that feedback loop (i.e., intentionally increase the latency) or take the feedback path thru an additional TBC or frame delay to get a range of interesting pattern modulations.
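As a toy numeric sketch of that stacking behavior (a single-pixel gain-loop model of my own, ignoring the spatial offsets that make real feedback interesting):

    # One pixel under feedback with loop gain g, mixed back into a static source.
    # With a one-frame delay in the loop, each frame deposits a new "layer."
    g, src, out = 0.6, 1.0, 0.0
    for frame in range(5):
        out = src + g * out          # the *previous frame's* output feeds back
        print(f"frame {frame}: {out:.3f}")
    # With a near-zero (nanosecond) loop delay there is no per-frame history to
    # stack; the loop settles toward src / (1 - g) within a single frame instead.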

For the sake of further discussion, let’s define some more terms:

  • Frontend software latency. Steps #1 thru #3. Anywhere between a few milliseconds and a few seconds. Want it to go faster? Buy a fancy gamer mouse and optimize your PC (faster RAM, faster hard drive, better software.)

  • Frontend video latency. This is the latency between steps #4 and #5. A frame or two, maybe more. A device that has analog output (rather than going through an SDI to analog converter in step #5) might reduce it a little, but probably not by much. Once you’re streaming SDI, the digital devices can communicate with very low latency, and broadcast converters are generally very consistent. Cheap consumer grade converters from Amazon probably do more in software, have less memory, or are MCU rather than FPGA based, so they will have more latency.

  • Input TBC latency. 1+ frames. In the case of TBC2, the frame latency is adjustable.

  • Analog processing latency. Almost instantaneous. Usually less than a microsecond even in worst case. If an object were to fly past your face at this speed, you’d never know it was there.

  • Backend / display latency. Could be anywhere from almost instantaneous (a CRT monitor) to several frames (a cheapo projector, or a long video transmission line from the sound booth to front of house in a venue.) Want to decrease this? Buy a display with a faster refresh.

  • Software user interface latency. The sum of all the other latency increments. For example, the interval between clicking the mouse and the frame being displayed on the monitor.

  • Analog/hardware user interface latency. The sum of analog processing latency and backend/display latency. For example, the time between turning a knob on your video synth and seeing the result. The only latency here is going to come from the display you choose.

9 Likes

Part 2. Let’s talk about the OP’s suggestion – a direct analog input amp on your video synthesizer instead of an input TBC – relative to the example case described.

Here’s the conundrum.

  1. If you genlock your entire video synth to the incoming video stream, this saves you 1 frame of delay from the TBC. It does not make any difference at all to frontend software latency, frontend video latency, or backend/display latency. So the reality is you are only going to reduce latency at a single step among many equivalent steps; all the other steps are still going to introduce latency. Disadvantages: only one video signal can be input to your system at a time this way. And without a TBC, recursive processes between that video input and your video synth’s output will “smear across” instead of “stack up” in a feedback loop. You are also limited by the stability of this single video source – if it’s not perfect, you have glitches that propagate throughout the whole system.

  2. The other case, as it relates to the example, would be to genlock the video output itself (step #4) so that it’s in time with your system. This requires a device that is capable of that in the first place, and it doesn’t really do anything for you! You’re not escaping the use of a TBC – you’re just moving where it is in the signal path. The video output card is just going to delay itself to match the external timing, just like the TBC2 module would.

So the latency gains of an input preamp are incredibly contextual to what I would consider a marginal use case – and most of the time even an undesirable one, with lots of “ifs” and “buts” about how you can use it. And even in the best case scenario, you’re reducing the entire signal path from a few frames down by one – not eliminating latency entirely.

A list of things you can do to reduce latency in general

  • Optimize the PC that displays the initial video file.
  • Use broadcast grade converters and video interfaces, and not cheap consumer grade ones.
  • Use a broadcast grade display monitor or a fast refresh monitor designed for gamers or others who need a low latency monitoring solution. Or use a CRT.
  • Use a higher frame rate video format (60fps progressive vs 30fps progressive, for example; see the quick arithmetic after this list)
  • Invent time travel and apply negative latency.
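
On the frame rate point, the quick arithmetic (nothing gear-specific here, just frame periods):

    # Each frame of pipeline delay shrinks as the frame rate rises.
    for fps in (30, 60):
        print(f"{fps}p: one frame = {1000 / fps:.1f} ms; a 3-frame pipeline = {3 * 1000 / fps:.0f} ms")
    # 30p: one frame = 33.3 ms; a 3-frame pipeline = 100 ms
    # 60p: one frame = 16.7 ms; a 3-frame pipeline = 50 ms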

Key points…

  • Video processes will always involve some kind of latency, even in analog systems.
  • Low latency video systems involve only 3-6 frames of total latency. That’s fast enough to mash buttons to the beat in your video playback software, output and process with the video synth, and display the results – and the brain doesn’t register anything as too far “off time.”
  • High latency video systems would involve 7-60+ frames of latency and be annoying to use in a live context. This usually comes from a slow display/monitor/projector.
  • Recursion latency is not the same as total latency, and there are many ways to accomplish both ultra fast latency and delayed recursions within the same system – without affecting total latency.
  • Analog latency is not the same thing as total latency – and this is why we use analog systems for video synthesis! Each module adds only an incredibly tiny amount of delay – it’s effectively real time. It doesn’t matter how many modules you patch; the latency stackup is barely present. Whereas a string of digital video processors or mixers, for example, is in most cases going to add at least a frame or two of delay at each step.
9 Likes