You’re right on the timing details! I was mixed up on the SD vs HD timings in my head. 525 ns is right.
For ESG3, we have the HD pulse width set to 44 cycles of the 74.25 MHz or 74.176 MHz reference clocks, so ~593 ns.
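As a sanity check on that figure, here’s the arithmetic (a quick sketch – the `pulse_width_ns` helper is just for illustration, not anything from the ESG3 firmware):

```python
# 44 cycles of the HD reference clock, expressed in nanoseconds.
def pulse_width_ns(cycles, clock_hz):
    """Duration of `cycles` periods of a `clock_hz` clock, in ns."""
    return cycles / clock_hz * 1e9

print(round(pulse_width_ns(44, 74.25e6), 1))   # 74.25 MHz reference  -> 592.6
print(round(pulse_width_ns(44, 74.176e6), 1))  # 74.176 MHz reference -> 593.2
```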
That becomes very tricky. The right answer is that you start the ramp on pixel #1, when the AVID region goes high – and that timing is variable per video format. So it’s not something you can get a precise reference for from the LMH1980 alone – you’d need to tune the ramp start phase per format to get it perfect (maybe an adjustable monostable delay of HSync with a trimmer?). It’s easier to base this on a sync generator/FPGA that is responsible for all of the timing. This is ultimately why we didn’t go with an analog-PLL-based ramp generator for DSG3 – we had an “auto calibrating” ramp circuit working beautifully, but the blanking periods would have had to be manually adjusted.
That said, all you need for a “working” ramp generator is a stable hsync reset – it doesn’t have to start exactly in the right spot to be patchable!
Hello everybody!
I’d like to connect Vidiot mini jack outputs on the panel (i.e. the diamond output) as video source for ESG3. I set the format switch to PAL.
However, the diamond shape image is not synced and it is scrolling left to right.
I believe you can send the sync from the back of the ESG to the camera input of the Vidiot.
Or you can send the Vidiot’s video output (or maybe the Eye output?) to the sync input on the back of the ESG.
Or if you have something connected to the Vidiot’s camera input, you can send the camera output to the ESG.
You’ll have to set the ESG to generate or receive sync accordingly.
If you are using ESG3’s component output for monitoring, you can also send its composite output to Vidiot’s composite input for sync. That also gives you an optional feedback output from Vidiot, if you want it.
If you are using Visual Cortex as part of your patch, you will be limited to NTSC/PAL when syncing your ESG3. I don’t think you would see any change in your vector rescan workflow by adding ESG3.
Well the ramps are synced, so anything out of the Cortex is still limited to NTSC/PAL. However…TBC2 makes its own ramps, so you would have an HD ramp source there. TBC2 can replace Cortex in your rescan patches.
TBC2 is good for HD ramps, I have tested this in a vector scan processing workflow.
If you care about the cleanest possible ramps, with no crosstalk, follow this procedure. Crosstalk between ramps will manifest as skewing of the raster, it won’t be perfectly rectangular.
Use one channel of TBC2 for the video image, if any.
Set up the other channel of TBC2 to output ramps. One of them on the Red output, the other on the Blue output. Turn the Green output off entirely… disabled.
But you know what, I got more control by using DWO3 or VU009 sawtooths to generate ramps. If you’re not able to get a single sawtooth cycle out of DWO3, send a static voltage (e.g. Matte) into the CV input and set the attenuverter to negative. This will slow down the oscillator more than you can by just turning the frequency down all the way.
I was wondering about HD vs SD ramps within the context of vector rescanning and lo and behold, it’s actively being discussed here. I’m considering buying a TBC2 for many reasons, one being the ramps.
Right now I use both VC and Diver as ramp generators. I’m doing a lot of Rutt-Etra-type stuff with external video/images, but also stuff without external inputs (shapes, patterns, etc. generated within the system). I don’t want to change my workflow for the rescanning portion (I’d like to keep what I’m filming off the X-Y displays as SD, so I can process the rescan video with all of the normal SD LZX modules; the rest of my projects are in SD as well). But if I could increase the resolution of the ramps in my vector patching (using ramps from TBC2), I’m assuming I’d get a lot more detail within what’s displayed on the vector screen itself… am I understanding that correctly? Almost as if the “fabric” that is the ramps is larger – or, to put it another way, if the gain wasn’t adjusted on the display, the ramps themselves would be much larger.
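For what it’s worth, if the ramps are generated digitally at the format’s pixel clock (an assumption on my part – I don’t know TBC2’s internals), the horizontal ramp would go from 720 steps per active line in SD to 1920 in 1080-line HD. A rough sketch of that granularity difference:

```python
# Horizontal ramp granularity, ASSUMING one ramp step per active pixel.
# Active sample counts/clocks are the standard Rec.601 and 1080-line values.
formats = {
    "SD (Rec.601)": 720,    # active samples per line at 13.5 MHz
    "HD (1080-line)": 1920, # active samples per line at 74.25 MHz
}
for name, steps in formats.items():
    # Each step is this fraction of the full ramp amplitude.
    print(f"{name}: {steps} steps/line, {100.0 / steps:.3f}% of amplitude per step")
```

So the HD ramp would have roughly 2.7x finer horizontal steps – but whether that survives to the vector display depends on the analog path after the DAC.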
I use Navigator pretty early in the chain in nearly every vector patch – would that still work to rotate the TBC2 ramps? Would it need to be synced to TBC2?
Very basic example of what I’m envisioning:
TBC2 (Set to an HD resolution)
Encoder A- R (h ramp) G (v ramp) B (n/a) (as described by dryodryo above)
Encoder B- media loader high def sized image
Both Encoder A H+V Ramps to Navigator (synced to tbc2?)
Navigator outputs to 2 cadet crossfaders
Encoder B image mixed in to taste for displacement effect
Crossfader output to X-Y display
(And run the luma from Encoder B of the TBC2 to the Z input)
X-Y display rescanned by SD camera
Colorized, keyed to taste, etc etc then output by VC
So I guess I’m kinda suggesting an HD path to the X-Y displays, then a separate SD path for rescanning. In my head it makes sense that there’d be a big difference using these ramps instead – HD video is much larger – but I’m not sure if it translates the way that I think it does.
Your display’s Z axis may not be able to cleanly resolve HD luma – I don’t think I’ve seen a vector monitor that can do over 10 MHz on Z. The output impedance of your modules also makes a difference in whether the available detail is cleanly output or simply low-pass filtered.
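A back-of-envelope way to see that limit (rough numbers, assuming a ~25.9 µs active line for 1080-line video and treating one full cycle of the bandwidth limit as one on/off pixel pair):

```python
# Rough estimate of horizontal detail resolvable on a ~10 MHz Z axis
# over one HD active line (1920 samples at 74.25 MHz).
active_line_us = 1920 / 74.25   # active line time in microseconds, ~25.86 us
z_bandwidth_mhz = 10.0
# One cycle at the bandwidth limit = one light/dark pixel pair.
resolvable_pixels = 2 * z_bandwidth_mhz * active_line_us
print(round(resolvable_pixels))  # ~517, well short of 1920
```

So even an optimistic 10 MHz Z input only resolves around a quarter of the horizontal detail in an HD luma signal.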
If you are rescanning in SD prior to encode/capture, I’m not sure you’ll really be able to tell a difference regardless of how sharp HD ramps/footage appears on the monitor. You’re effectively downscaling at your colorizer/encoder stage with VC.
Post some comparisons if you end up going down this route!