Conversion to LZX RGB format

Hey there,

What do I need to do to convert HD video to the LZX RGB standard? My understanding is that the LZX 1V standard is unique. Before hitting LZX RGB inputs, any signal is going to need to be massaged: voltage scaling, DC restoration, sync stripping, etc. I saw in another thread that Cadet III performs this function, but of course I did not see them for sale. What are my options currently?

What is the plan for Gen 3? A decoder module that can accept YUV would be good enough for my purposes at this time, but of course we need to get other formats in as well. I propose separate decoder modules for analog & digital signals. The less complicated the better, for both designers and users of the modules. I’d love to see an analog decoder supporting YUV and VGA. RGBS, RGBHV, and RGsB would be great, but not top priority. A digital decoder would need to support HDMI and SDI. DVI-D would be nice, but not essential.

Thanks!!

Aaron

TBC2 is the module we have developed for this purpose.

It would take a circuit being designed and manufactured that performs those functions. There is no standalone Eurorack-format module that does what I think you’re getting at, i.e. an HD component input amp with no intervening frame buffer.

I’m curious to hear about your proposed workflow with these myriad input formats, or your desired outcomes from a video synthesis instrument. It seems like minimal to no frame delay is critical in your process, which I’m not quite understanding due to lack of context. Do you have any current work you could share with us that gives a sense of how you plan to approach the instrument? Some frame delay seems impossible to avoid in my own practice, but I’m not doing anything where user input per individual frame is critical or can’t be compensated for.

I’m pretty sure the Cadet III does not do what you’re looking for. First, it’s SD-only, since it uses an LM1881 (as far as I can tell, that chip only handles NTSC/PAL, both of which are SD formats). But also it’s not RGB; it just converts a composite signal to a luma (brightness) signal.

The Syntonie VU003 is closer to what you want, since it’s YPbPr, but it too is SD-only because of the LM1881.

From the LM1881 datasheet:

The integrated circuit is also capable of providing sync separation for non-standard, faster horizontal rate video signals.

Not sure of the range of signals it will decode sync from, but it should handle some HD formats, I would think.

TBC2 looks great, at least for component signals. A post-release firmware update to support VGA resolutions up to 1080i would make it complete. But if I am not mistaken, there is no way to bypass the frame store. Forgive me if you already answered this question on FB, but can TBC2 be used as a simple decoder for synchronous sources?

And of course, TBC2 doesn’t address the need for HDMI and SDI input into an LZX system. This can be handled by external conversion boxes, but this is sub-optimal for many reasons, including portability. Ideally a modular system would have everything needed in one case with a unified architecture, power, etc.

Thanks!

The inputs are hardwired directly into the frame buffer, so no. What you are describing is an input amp module that doesn’t exist and would need to be designed and manufactured.

That would be ideal, wouldn’t it? For now, I think you’ll have to use external converter boxes if your workflow necessitates that.

Just circling back to my question above: I’m curious if you could fill us in on your need for near-zero frame delay in the practical context of your body of work, or what you’re literally hoping to do with these devices. It might help folks on the forum give you better advice if we understand what you’re trying to achieve that requires such minimal frame delay, when some frame delay is usually just a fact of life in video synthesis, all the way up to pro broadcast settings.

Not saying there isn’t a need for it if we can understand why.

One of my core intentions is to play the video synth as a real-time instrument, controlled from a PC via MIDI and DC-coupled audio CVs. Latency is a huge concern here. It’s already bad enough that the best audio interfaces available can only bring latency down to ~10 ms, and practically speaking it’s usually more like ~20 ms. Then if we’re going through a TBC, that adds at least another ~30 ms. So at a minimum we’re looking at two frames of delay between hitting a MIDI key and seeing something change on the screen.
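
To put rough numbers on that chain (a back-of-the-envelope sketch; the figures are my own estimates, assuming NTSC-rate frames):

```python
# Rough latency budget from MIDI key-press to visible change (my estimates).
frame_ms = 1000.0 / 29.97   # one NTSC-rate frame period: ~33.37 ms

audio_interface_ms = 20.0   # practical audio/MIDI interface latency (~10-20 ms)
tbc_ms = 30.0               # at least ~30 ms through a frame-store TBC

total_ms = audio_interface_ms + tbc_ms
print(f"{total_ms:.0f} ms = {total_ms / frame_ms:.1f} frame periods")
# 50 ms = 1.5 frame periods -- and since a TBC releases video on frame
# boundaries, the change actually lands on the *next* frame: two in practice.
```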

Then it gets even uglier when we’re trying to generate musical sound and picture at the same time. If the video is being delayed by TBCs etc. and the audio is not, there will be an A/V sync issue at the capture stage. That could be dealt with by shifting the audio in post, but that is a solution to a problem that does not need to exist.

Yes, that’s what I thought, and that’s why I brought it up. There’s a need for this, at least in my use case described above.

I think people may be too comfortable with frame delay as a “fact of life”. I clearly remember a time when absolutely everything in a studio was locked to house sync. TBCs were expensive. Even VCRs weren’t necessarily going through full frame stores: the old U-Matic decks went through 16-line TBCs that introduced almost no delay.
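
To give a sense of scale (a quick calculation at NTSC rates, where one scanline is ~63.6 µs):

```python
# Delay through a 16-line TBC vs. a full frame store, at NTSC rates.
line_us = 63.556           # NTSC horizontal line period in microseconds
frame_ms = 1000.0 / 29.97  # full frame period: ~33.37 ms

line_tbc_ms = 16 * line_us / 1000.0
print(f"16-line TBC: ~{line_tbc_ms:.2f} ms vs. frame store: ~{frame_ms:.2f} ms")
# 16-line TBC: ~1.02 ms vs. frame store: ~33.37 ms -- about 33x less delay
```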

This sounds like a fun approach, but I wonder if you’ve had the opportunity to attempt this workflow with another analog video synth and had this exact problem occur in a way that made it impossible to navigate. If you haven’t, there might be LZX users in your area with whom you could try this out, to see how much of a problem it really creates during the process. It seems like this is a major point of concern for you, and I’d be sad to see you disappointed with gear that doesn’t meet the specs you’d like it to have.

Can we see any of your current portfolio, so we can try to make some helpful suggestions with context on what you’re attempting to use this for, in more concrete terms? I get that you want to press a key and have something happen with zero delay, but I’m having trouble conceptualizing how two frames would actually present an issue. I’m mainly curious, as I’ve personally never encountered another video synthesist whose work was frustrated by two frames of delay, and I just want to better understand the limitations you seem to be butting your head against.

Cool, maybe this would be a great opportunity to learn how to design a module that fits your needs and share it with everyone else! I had no idea how to design a module a little over a year ago and found the time to make a Quad S-Video Distribution module I wanted with no formal training. Maybe that’s not in the cards for how you want to spend your time, which is totally understandable. If not, maybe you’d want to personally commission someone to research, design, and manufacture it for you?

I don’t know your circumstances, but it’s unlikely a very niche module will appear on the market just because an idea about it was posted on a forum. I’m just trying to be real, if you’re looking for this module to actually exist. Syntonie’s VU003 is probably a good starting point, but the YUV->RGB conversion into the LZX 0-1V colorspace differs between SD and HD, and the sync stripper (LM1881) may need to be a more modern, adaptable IC like the LMH198x.
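
To illustrate that SD/HD difference (a sketch, assuming normalized Y'PbPr in and 0-1V RGB out; the coefficients are the standard BT.601 and BT.709 ones):

```python
# Sketch: Y'PbPr -> R'G'B' conversion differs between SD (BT.601) and HD (BT.709).
# Assumes Y' in 0..1 and Pb/Pr in -0.5..+0.5, with RGB out in 0..1
# (which maps directly onto the LZX 0-1V range).

def ypbpr_to_rgb(y, pb, pr, hd=True):
    if hd:   # BT.709 (HD) coefficients
        return (y + 1.5748 * pr,
                y - 0.1873 * pb - 0.4681 * pr,
                y + 1.8556 * pb)
    else:    # BT.601 (SD) coefficients
        return (y + 1.4020 * pr,
                y - 0.3441 * pb - 0.7141 * pr,
                y + 1.7720 * pb)

# Decoding an HD signal with SD coefficients skews hue and saturation:
print(ypbpr_to_rgb(0.5, 0.25, -0.25, hd=True))
print(ypbpr_to_rgb(0.5, 0.25, -0.25, hd=False))
```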

Sure, it makes sense that when TBCs were expensive that there was more of an emphasis on gear that primarily worked with genlocked inputs. I have ancient gear from the 70s with separate composite sync and composite blanking inputs, even!

Maybe digging into that older gear will get you the result you want? TBC2 is just not the module you’re looking for if you want zero input delay, nor can I think of any TBC that has zero delay, given how they work. I’m not an expert on every TBC ever produced, though, so maybe you have model numbers of some examples you could share with us?

Thanks, but that Syntonie module is SD only.

The line TBCs I’m referring to are rare these days, but the concept is that they don’t store the whole frame. You make bidirectional connections between the TBC and the VCR; the TBC replaces the sync coming from the deck, stabilizing the signal. It also cleans up the raster, performing dropout compensation. That’s an example of how delaying all signals by at least one frame is simply unnecessary. It’s convenient to just feed random wild signals into a frame store, but it’s not always the best solution. Pretty soon you’re in a situation where you need to delay the audio too, and things get messy and expensive.

As for my own work, you’re welcome to take a look, but what you see there isn’t necessarily indicative of what I am going to try to accomplish in the future. The closest thing would probably be “PSEKELIS”. I created this at CalArts by feeding a prerecorded musical piece into a bunch of gear including a Fairlight CVI, CV-modded Optical Electronics raster processor, Hearne/EAB Videolab II, and ARP 2600.

http://www.dr-yo.com/video_psekelis_hfr.html

I don’t think I can help from here as you probably need to find older rare gear or invent newer gear that does what you want at the price point you’re willing to pay. Good luck!

From the LMH1980 datasheet (emphasis mine):

Advantages of the LMH1980

The LMH1980 offers several advantages over the LM1881, including:

  • Lower operating supply of 3.3 V
  • Wide temperature range of −40°C to +85°C
  • Auto-detection of the input video format using a fixed-value resistor biasing architecture
  • Support for HD tri-level sync separation
  • HSync output and HD detect flag output
  • Smaller package

That’s why I figure the LM1881 isn’t suitable for HD.

I’ve never noticed the delay in any kind of appreciable way, and I love working with Memory Palace + Sensory Translator, so I guess I just don’t see this as a problem for my own work.

what video synthesis equipment are you currently working with?

I am getting back into this field after a long hiatus. But I have experience with the Sandin and Hearne/EAB systems, DVEs, raster manipulation, etc. Plus I was a video editor and videographer, and I’ve been a computer musician and 3D graphic artist for a very long time.

The only reason I am getting back into video synthesis is because LZX is supporting high definition. I would have bought their gear long ago, but I am a snob about HD. :smile:

You don’t strike me as a snob at all; this is really great stuff! Thank you for sharing!

I understand exactly what you want @dryodryo, and I understand exactly why you want it. I do find that latency becomes more significant for tight, rhythmically oriented patches. I am starting a project with a tap dancer next month, and I know that some elements of it may benefit from low-latency response.

People talk about one or two frames of latency, but that isn’t what I’m seeing from my Memory Palace or Structure. It’s more than that (there is another thread about this). When I plug into a venue’s projector, that adds some more latency. At a certain point, some types of patches no longer work.

My approach for working around this sort of thing is to have the digital stuff go into the Cortex and then add any required snappiness using analogue processing. My aim is to minimise any latency that I can control, in order to give myself more options.

I suspect that if you transfer the audio performance elements into the voltage-controlled realm, you may be able to cut down on some of the latency that is intrinsic to MIDI and audio interfaces. This way, your video synth can work from the exact same triggers as the audio.
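
The wire-level numbers support this (a quick calculation, assuming classic 31250-baud DIN MIDI; interface and driver buffering come on top of this):

```python
# Transmission time of a 3-byte MIDI note-on over classic DIN MIDI.
baud = 31250        # DIN MIDI bit rate
bits_per_byte = 10  # 8 data bits + start + stop bits
note_on_bytes = 3

ms = note_on_bytes * bits_per_byte / baud * 1000
print(f"note-on wire time: {ms:.2f} ms")  # ~0.96 ms
# That is before any OS/driver/audio-interface buffering, which typically adds
# far more. An analogue CV/gate edge, by contrast, arrives in microseconds.
```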

I get the impression that low-latency input is a relatively niche requirement. I don’t know whether the Syntonie VU003 – COMPONENT2RGB could be updated to use the LMH1980 and add a genlock output on the front panel (and possibly the rear, too). By my thinking, that would provide one component input that sends a genlock signal and could bypass any frame-buffered encoder sections (Chromagnon etc.).

Did you by chance do some work with KPSU? I feel like I remember listening to a Dr. Yo show back in 2016. Are any of those episodes still live? I’d love to relive those god damn glory days!

Yes, that was me. I think the episodes have all been taken down from the KPSU site.

I am working on an infographic that clearly sets out some terms and definitions for video latency, but I have not had much time to work on it. I think it may clarify this discussion and provide a basis for describing what we’re all trying to accomplish here. There are several very different ways that latency is introduced into the signal path of a complex video processing system, and several very different ways in which that becomes relevant, or not, for various user workflows.

This is a fascinating topic that rarely escapes the world of the folks who design the circuits themselves. It’s important to understand that LZX is a “deconstructed video” format: it lets you do all kinds of things to the signal and interface with it in ways that lie outside any consumer or broadcast gear.

So it can be very hard to have a discussion about latency outside a very specific signal path (which is why we all keep asking for examples of the gear being used and the techniques being attempted). It may be easier if we at least have those details nailed down in context, even if only for the purpose of theoretical analysis. That way we can have the discussion using words like nanoseconds and microseconds!
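
For a sense of those time scales (a quick sketch; the 1080p60 figures assume standard SMPTE timing with 1125 total lines and a 148.5 MHz pixel clock):

```python
# Rough time scales in video -- why "latency" needs units attached.
hd_pixel_ns  = 1e9 / 148.5e6      # one 1080p60 pixel: ~6.7 ns
hd_line_us   = 1e6 / (1125 * 60)  # one 1080p60 scanline: ~14.8 us
ntsc_line_us = 63.556             # one NTSC scanline: ~63.6 us
frame_ms     = 1000 / 60          # one 1080p60 frame: ~16.7 ms

print(f"pixel: {hd_pixel_ns:.1f} ns | HD line: {hd_line_us:.1f} us | "
      f"NTSC line: {ntsc_line_us:.1f} us | frame: {frame_ms:.1f} ms")
# Analog processing paths add nanoseconds; frame stores add milliseconds.
```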
