Using ESG3, TBC2, or Chromagnon in HD sync modes: Compatibility with earlier LZX modules and instruments

You’re right on the timing details! I was mixed up on the SD vs HD timings in my head. 525 ns is right.

For ESG3, we have the HD pulse width set to 44 cycles of the 74.25MHz or 74.176MHz reference clocks, so ~590ns.
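Just to make the arithmetic explicit, here’s a quick sketch (plain Python, nothing ESG3-specific, using only the numbers quoted above):

```python
# 44 cycles of the HD reference clock (74.25 MHz, or 74.25/1.001 MHz for the
# 59.94-related rates) gives the pulse width quoted above.
for f_hz in (74.25e6, 74.25e6 / 1.001):
    t_ns = 44 / f_hz * 1e9
    print(f"44 cycles @ {f_hz / 1e6:.3f} MHz = {t_ns:.1f} ns")
# Both come out around 593 ns, i.e. the ~590 ns figure above.
```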

That becomes very tricky – the right answer is that you start the ramp on pixel #1, when the AVID region goes high – and that timing varies per video format. So it’s not something you can get a precise reference for from the LMH1980 alone – you’d need to tune the ramp start phase per format to get it perfect (maybe an adjustable monostable delay of HSync with a trimmer?). It’s easier to base this on a sync generator/FPGA that is responsible for all of the timing. This is ultimately why we didn’t go with an analog PLL-based ramp generator for DSG3 – we had an “auto calibrating” ramp circuit working beautifully, but the blanking periods would have had to be manually adjusted.
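To give a feel for how much that ramp start point moves around per format, here’s a rough sketch using nominal SMPTE 296M/274M horizontal counts at 74.25 MHz (these counts are my assumption here; check the timing tables for the exact formats you care about):

```python
# Delay from the HSync leading edge to the first active pixel (sync width +
# back porch), in 74.25 MHz reference-clock cycles, for two common HD rasters.
CLK_MHZ = 74.25
formats = {
    "720p (SMPTE 296M)":      (40, 220),  # (sync, back porch) in cycles
    "1080i/30p (SMPTE 274M)": (44, 148),
}
for name, (sync, back_porch) in formats.items():
    delay_us = (sync + back_porch) / CLK_MHZ  # cycles / MHz -> microseconds
    print(f"{name}: HSync edge to active video ≈ {delay_us:.2f} µs")
# ~3.50 µs vs ~2.59 µs -- hence the per-format ramp start tuning.
```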

That said, all you need for a “working” ramp generator is a stable hsync reset – it doesn’t have to start exactly in the right spot to be patchable!

5 Likes

Hello everybody!
I’d like to connect the Vidiot’s mini jack outputs on the panel (i.e. the diamond output) to the ESG3 as a video source. I set the format switch to PAL.
However, the diamond-shaped image is not synced and it is scrolling left to right.

How can I stabilize it?
thanx! : )

I don’t have either module, but you’d need to connect the sync out from the ESG3 to a sync in on the Vidiot.

As far as I know there’s no sync in on the Vidiot… am I wrong?

I checked the manual – you could try the horizontal sync input (pages 7–8 of the manual, item 18). Worth a try, at least.

Or see page 13!

3 Likes

I believe you can send the sync from the back of the ESG to the camera input of the Vidiot.
Or you can send the Vidiot’s video output (or maybe the Eye output?) to the sync input on the back of the ESG.
Or if you have something connected to the Vidiot’s camera input, you can send the camera output to the ESG.
You’ll have to set the ESG to generate or receive sync accordingly.

5 Likes

Thank you, it works by sending the Vidiot camera output to the ESG3 sync input!

4 Likes

That’s how I sync my system :)

2 Likes

If you are using ESG3’s component output for monitoring, you can also send its composite output to Vidiot’s composite input for sync. That also gives you an optional feedback output from Vidiot, if you want it.

6 Likes

Would using the ESG3 as an output module for Rutt-Etra type stuff make any difference/sense?

I mean using the HQ mode.

Just ordered one in order to get better quality video out from my setup and haven’t looked into the other possible benefits and use cases yet.

It’s been a while, hello everyone!! :)

Welcome back! Are you using any older LZX modules in your vector rescanning patches and want to work ESG3 into the mix?

1 Like

Thanks!!

Yes. I’ve got the usual suspects: VC, Passage, Nav and Shapechanger.

If you are using Visual Cortex as part of your patch, you will be limited to NTSC/PAL when syncing your ESG3. I don’t think you would see any change in your vector rescan workflow by adding ESG3.

1 Like

That makes sense. Thanks! :slight_smile:
Actually, I do have the TBC2, so by using that the only thing coming from the VC would be the ramps.

Well the ramps are synced, so anything out of the Cortex is still limited to NTSC/PAL. However…TBC2 makes its own ramps, so you would have an HD ramp source there. TBC2 can replace Cortex in your rescan patches.

2 Likes

I will definitely give this a try.

It would be interesting to hear from someone who has done this with the new modules about how noticeable the difference is.

TBC2 is good for HD ramps; I have tested this in a vector scan processing workflow.

If you care about the cleanest possible ramps, with no crosstalk, follow this procedure. Crosstalk between ramps will manifest as skewing of the raster; it won’t be perfectly rectangular.

Use one channel of TBC2 for the video image, if any.

Set up the other channel of TBC2 to output ramps. One of them on the Red output, the other on the Blue output. Turn the Green output off entirely… disabled.

But you know what, I got more control by using DWO3 or VU009 sawtooths to generate ramps. If you’re not able to get a single sawtooth cycle out of DWO3, send a static voltage (e.g. Matte) into the CV input and set the attenuverter to negative. This will slow down the oscillator more than you can by just turning the frequency down all the way.

5 Likes

I was wondering about HD vs SD ramps within the context of vector rescanning and lo and behold, it’s actively being discussed here. I’m considering buying a TBC2 for many reasons, one being the ramps.

Right now I use both VC and Diver as ramp generators. I’m doing a lot of Rutt-Etra type stuff with external video/images, but also stuff without external inputs (shapes, patterns, etc. generated within the system). I don’t want to change my workflow for the rescanning portion (I’d like to keep what I’m filming off the X-Y displays as SD, so I can process the rescan video with all of the normal SD LZX modules; the rest of my projects are in SD as well). But if I could increase the resolution of the ramps in my vector patching (using ramps from TBC2), I’m assuming I’d get a lot more detail within what’s displayed on the vector screen itself… am I understanding that correctly? Almost as if the “fabric” that is the ramps is larger, or to put it another way, if the gain wasn’t adjusted on the display, the ramps themselves would be much larger.

I use Navigator pretty early in the chain in nearly every vector patch – would it still work to rotate the TBC2 ramps? Would it need to be synced to TBC2?

Very basic example of what I’m envisioning:
TBC2 (set to an HD resolution)
Encoder A: R (H ramp), B (V ramp), G disabled (as described by dryodryo above)
Encoder B: media loader with a high-def-sized image

Both Encoder A H+V ramps to Navigator (synced to TBC2?)

Navigator outputs to two Cadet crossfaders
Encoder B image mixed in to taste for the displacement effect

Crossfader output to X-Y display
(And run the luma from Encoder B of the TBC2 to the Z input)

X-Y display rescanned by an SD camera
Colorized, keyed to taste, etc., then output by VC

So I guess I’m kinda suggesting an HD path to the X-Y displays, then a separate SD path for rescanning. In my head it makes sense that there’d be a big difference using these ramps instead, since HD video is much larger, but I’m not sure if it translates the way that I think it does.

2 Likes

You could also use Angles for ramps. NGL, the TBC2 ramp randomize function is pretty fun.

Your display’s Z axis may not be able to cleanly resolve HD luma, as I don’t think I’ve seen a vector monitor that can do over 10MHz on Z. The output impedance of your modules also makes a difference in whether the available details are cleanly output or simply low-pass filtered.

If you are rescanning in SD prior to encode/capture, I’m not sure you’ll really be able to tell a difference regardless of how sharp HD ramps/footage appears on the monitor. You’re effectively downscaling at your colorizer/encoder stage with VC.
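To put rough numbers on that downscaling point (assuming a nominal Rec. 601 SD raster versus a 1080-line HD raster):

```python
# Ballpark active sample counts: what an SD rescan/encode can carry vs HD.
sd_w, sd_h = 720, 486     # Rec. 601 NTSC active raster (PAL is 720x576)
hd_w, hd_h = 1920, 1080   # 1080-line HD active raster
print(f"SD active samples: {sd_w * sd_h:,}")                 # ~350k
print(f"HD active samples: {hd_w * hd_h:,}")                 # ~2.07M
print(f"HD/SD ratio: {(hd_w * hd_h) / (sd_w * sd_h):.1f}x")  # ~5.9x
# However sharp the HD ramps look on the vector monitor, the SD camera and
# SD encode cap what actually ends up in the captured footage.
```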

Post some comparisons if you end up going down this route!

1 Like