Recent LZX entrant asking/musing about a couple things

HD sync isn’t really that much faster than SD sync. Frequency and bandwidth are two different things. If you can get an audio oscillator to lock to SD H/V sync, it seems likely it will also lock to other sync pulses in timings that are just a little bit faster.

EDIT:

Yep, I’ve been able to do this with both the RCA and CV/Gate sync bus versions of Prismatic Ray. My experience has been that PR’s outputs have a perceptibly softer character in 1080i and especially 720p (an inherent low-pass filter is being revealed in those timings) compared to SD timings, but no obvious issues with sync itself.

3 Likes

Ah, interesting, thanks! Makes sense. If/when I go to an HD encoder it’ll be interesting to see how they fare.

I believe H/V sync is likely to be implemented in a TBC2 firmware update. I’ve certainly been harping on it long enough. :slight_smile:

In general, any module that does not require SD sync will work fine in HD timings, with some horizontal blur. That blur is scene-dependent. In other words, with some images you will notice significant blur, with others you may not notice it at all. It all depends on how much detail is present in the horizontal dimension. Fine details are lost, hard edges are softened, but smooth gradients are unaffected. It’s basically a very light slew applied to changes in the brightness of each channel. Try to push a one pixel wide vertical stripe test pattern through, and you’ll get something close to solid gray. But with a normal camera image, it may look perfect.
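
If it helps to picture why the blur is scene-dependent, here is a rough numpy sketch of that slew idea. The filter strength is an arbitrary illustration value, not a measurement of any actual module:

```python
import numpy as np

def one_pole_lowpass(line, alpha):
    """One-pole low-pass (a simple slew) applied along a single scanline."""
    out = np.empty_like(line)
    acc = line[0]
    for i, x in enumerate(line):
        acc += alpha * (x - acc)   # the output chases the input at a limited rate
        out[i] = acc
    return out

width = 64
stripes = np.tile([0.0, 1.0], width // 2)   # one-pixel black/white stripes
gradient = np.linspace(0.0, 1.0, width)     # smooth horizontal ramp

alpha = 0.3  # arbitrary "slew" strength, chosen only for illustration

for name, line in (("stripes", stripes), ("gradient", gradient)):
    filtered = one_pole_lowpass(line, alpha)
    skip = 16  # ignore the filter settling at the left edge
    print(f"{name:8s} input swing {np.ptp(line[skip:]):.2f} -> output swing {np.ptp(filtered[skip:]):.2f}")

# The stripes collapse toward gray, while the gradient passes through almost untouched.
```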

Technically the chips are not fast enough to resolve full HD detail, even though locking to HD timings is not a problem at all.

For reference, pixel clocks:
1080i60 ~62 MHz
720p60 ~55 MHz
480i60 ~9 MHz

If I’m not mistaken, the chips in pretty much all modules top out at 10 MHz.
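
For what it’s worth, those figures work out as active pixel rates (visible width × height × frames per second) rather than the full broadcast pixel clocks, which also count blanking (74.25 MHz for 1080i60/720p60, 13.5 MHz for Rec. 601 SD). A quick back-of-the-envelope check:

```python
# Rough active-pixel rates: visible width x visible height x frames per second.
formats = {
    "1080i60": (1920, 1080, 30),   # interlaced: 30 full frames per second
    "720p60":  (1280, 720, 60),
    "480i60":  (640, 480, 30),     # square-pixel SD assumed, to match the ~9 MHz figure
}

for name, (w, h, fps) in formats.items():
    rate_mhz = w * h * fps / 1e6
    print(f"{name}: ~{rate_mhz:.1f} MHz of active pixels")
```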

2 Likes

I wasn’t aware TBC2 might be getting that in the future–I thought my wallet was safe from it since I’m not focused on processing external images/video!

Regardless, this is good info to know. I’ve been working with gradients up until very recently, so smooth and slew is in my wheelhouse.

1 Like

There’s been no promise of H/V sync front panel output for TBC2… I’m just speculating. But presumably it would not be a difficult feature to add.

BTW, in all of my patches I use a sample and hold circuit to prevent horizontal tearing. I use Pulp Logic tiles because they are simple and inexpensive. Send any video synced oscillator or vertical ramp through a comparator to generate a pulse. That’s the S/H trigger source. Modulation source goes into S/H signal input. Result: modulation value is held for the entire duration of the field/frame.
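
If a numerical picture helps, here is a rough Python sketch of what the hold buys you. The frame rate, line count, and modulation frequency are made-up illustration values, not any real timing:

```python
import math

FRAME_RATE = 60.0        # frames per second (illustrative)
LINES_PER_FRAME = 262    # roughly one NTSC field's worth of lines, illustration only
MOD_HZ = 3.7             # a free-running modulation source, not frame-synced

def modulation(t):
    """Free-running modulation: keeps changing, even in the middle of a frame."""
    return math.sin(2.0 * math.pi * MOD_HZ * t)

for frame in range(3):
    frame_start = frame / FRAME_RATE
    # The comparator pulse fires once at the top of the frame (the S/H trigger),
    # so the modulation is sampled there and that value is held...
    held = modulation(frame_start)
    for line in (0, LINES_PER_FRAME // 2, LINES_PER_FRAME - 1):
        t = frame_start + line / (LINES_PER_FRAME * FRAME_RATE)
        # ...the raw source keeps drifting within the frame (tearing risk),
        # while the held value only steps at frame boundaries.
        print(f"frame {frame} line {line:3d}: raw {modulation(t):+.3f}  held {held:+.3f}")
```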

3 Likes

I think this is proof that the CV/Gate sync bus output works.

2 Likes

Yup, can confirm: you just need to flip the DIP switches on the back of TBC2 to enable H/V sync to be sent to the CV/Gate bus.

4 Likes

I confirmed this earlier this year, in another thread. :wink:

Also, I noted an interesting sync artifact when syncing vertically in HD. My Prismatic Rays are slow enough to start oscillating again after an HD sync pulse that there can be a noticeable space on the left side of the image before the first bar appears.

For me, this just makes the PRs more fun. When a patch makes me notice this effect, I can look for creative ways to use it. That can only be a good thing.

3 Likes

Great info share that should be stickied!

3 Likes

I’ve still not put a visual to the term “tearing” in my brain, but your explanation makes sense. It’s good to know that other modules made for audio can be useful in a video synth context! I’d not thought of using an S/H in video, though again I’d have to try it out to know what it does visually.

I had a Kinks in my video case briefly, but didn’t fully explore it before it was moved back to audio case duties. I liked its first and second function blocks, though I don’t recall what (if any) limitations it had for video.

1 Like

Patch a clock or trigger source into your encoder. You will mainly see the whole screen flash on/off, but when the pulse does not line up exactly with the vertical sync generated by your sync generator, you will very briefly see the flash ‘start’ in the middle of the frame. This is what we mean by “tearing”.

Video may seem continuous but it’s actually discrete moments in time when you consider we’re seeing n frames per second.

2 Likes

Yeah, “tearing” is any instantaneous change in the picture that does not happen during the vertical interval between frames. It manifests as a sharp horizontal line. Commonly seen in fast-moving computer games, where the render update is not synchronized to the display refresh rate. Graphics cards have a setting called “Vsync” in which the rendered data is held in a memory buffer until the next vertical interval. It’s an option because some people want faster updates to rendered frames, others want smooth video with no tearing.

By the way, if you want horizontal bars in an image, resulting from e.g. cranking a modulating audio oscillator up into supersonic video range, then you can’t use the sample/hold technique. It’s always going to limit the modulation frequency to the current frame rate, so you will never get intraframe transitions, i.e. horizontal bars.
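
Rough rule of thumb (ignoring interlacing and blanking): a modulation source at f Hz completes about f divided by the frame rate cycles during each frame scan, and each cycle paints one bar. A quick illustration with made-up frequencies:

```python
FRAME_RATE = 29.97   # NTSC-ish frame rate, used here only as an example

def bars_per_frame(mod_hz, frame_rate=FRAME_RATE):
    """Each full modulation cycle completed during one frame scan draws one bar."""
    return mod_hz / frame_rate

for f in (60, 440, 1000, 5000):
    print(f"{f:5d} Hz modulation -> roughly {bars_per_frame(f):.0f} bars per frame")
```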

Also by the way, we’re starting to see this sort of thing implemented within modules. VH.S Aural Scan has the vsync S/H circuit built into it.

2 Likes

Thank you both for these explanations. I’m at work so can’t try the clock-into-encoder thing, and am having difficulty understanding:

Does this mean that I would use, say, a ramp instead of an oscillator (either audio or video) in order to try the S/H technique? Or is it meant for processing external video?

I find the term “intraframe transitions” a useful way of conceptualizing horizontal bars–thank you!

Aural Scan is a tempting one, for sure. Eventually I’d like to coordinate my audio and video, but feel I have much to learn.

Question for either of you (or anyone, really!): how is the frame sync out on the front of Visual Cortex related to this discussion? I’ve not yet tried plugging it into an audio oscillator to sync it since I was having too much fun with the other outs.

The concept here is to limit the modulation frequency to the frame rate. A free-running oscillator, or any modulation source, can change value within a single frame. If it’s a sudden change, you get a sharp horizontal line. If it’s a gradual change, like a sine wave, you get a gradient.

In the former case, you can put the modulation source through a sample/hold circuit. Whatever value the modulation source has at the beginning of the frame is held until the next frame. No jagged horizontal bar.

In the latter case, e.g. with a high frequency audio oscillator, the horizontal pattern may be desirable. Soft scrolling horizontal bars look cool. If you want that, then you can’t use the S/H method, because that quantizes the modulation so that it can only change between frames, not in the middle of a frame.

Visual Cortex Vsync output should be perfect as the trigger for the S/H. In the absence of a true Vsync, one can generate a pseudo-Vsync from a ramp or video-synced oscillator through a comparator.
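
Here is a bare-bones sketch of that comparator trick. The ramp resolution and threshold are arbitrary illustration values, not any particular module’s levels:

```python
def comparator(signal, threshold):
    """Ideal comparator: output is high whenever the input is above the threshold."""
    return [1 if v > threshold else 0 for v in signal]

# A vertical ramp rises over the frame, then snaps back low. Twenty samples per
# frame is far coarser than real video, but enough to show the idea.
SAMPLES_PER_FRAME = 20
vertical_ramp = [i / SAMPLES_PER_FRAME for i in range(SAMPLES_PER_FRAME)] * 3

# With the threshold set near the top of the ramp, the comparator only goes high
# briefly just before the ramp resets: one narrow pulse per frame, which is
# exactly what the S/H trigger input wants.
pulses = comparator(vertical_ramp, threshold=0.85)
print("".join(str(p) for p in pulses))   # prints 18 zeros then "11", three times over
```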

None of this is directly processing video images; it’s solely about wrangling the low-frequency and audio-frequency sources that feed into video processor modules.

1 Like

Ah, scrolling vs synced audio oscs! That was the distinction I didn’t pick up on; it makes much more sense now. Recently I’ve only used audio oscs synced, if you can believe it. Sync up a PDO, get those self-phase-modulations going (attenuate), and have some squiggly fun (for just one example). And I’ve definitely been on a gradient kick, more sine than tri (though don’t tell my VU009!). Thank you for taking the time with these explanations! I’ve got a few case adjustments to make, which I actually enjoy once I get the time.

2 Likes

Yeah, if you have an audio oscillator that syncs its cycle to an external clock, you probably don’t need the S/H. Just feed it Vsync or pseudo-Vsync with sufficient amplitude for the trigger. But for more complex modulations, S/H is a helpful technique.

1 Like

I’ll definitely be playing with the S/H technique now that I’ve understood it a little better.

It has been interesting to see which audio oscillators take the VC’s V-sync well–some I didn’t expect to, and others I thought would take it well don’t. Some day I will do a proper test of the ones I have, likely using the Vermona Amplinuator to boost the sync if/as needed, since the modules of theirs that I have play relatively well with video.

1 Like

Often you don’t even need to increase the gain of the V-sync–you just need to offset/bias it so the rising edge crosses the voltage threshold the oscillator’s sync input is expecting.
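
A tiny numeric illustration of why the offset alone can be enough (the 1 V pulse and 2.5 V threshold are made-up example values, not any specific oscillator’s spec):

```python
# Made-up example: a 0..1 V sync pulse and a sync input that only registers
# rising edges which cross 2.5 V.
sync_pulse_volts = [0.0, 0.0, 1.0, 1.0, 0.0]
THRESHOLD = 2.5

def edge_detected(pulse, offset=0.0, gain=1.0):
    shifted = [gain * v + offset for v in pulse]
    # True if any rising edge passes through the threshold.
    return any(a <= THRESHOLD < b for a, b in zip(shifted, shifted[1:]))

print(edge_detected(sync_pulse_volts))               # False: 1 V never reaches 2.5 V
print(edge_detected(sync_pulse_volts, offset=2.0))   # True: biased to 2..3 V, the edge now crosses
```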

3 Likes

Ah, interesting! I’ll have the offset option on hand when I give this a shot. I like me an offset module–might test them as well for handling of video signals; I’ve got a handful that can offset an incoming signal.

Bias/offset becomes much more important in video than audio because we’re primarily dealing with DC-coupled signals instead of AC-coupled signals.

If you send offsets into an audio output module, you’re not really going to hear anything exciting. At best, AC coupling will negate any DC offsets you introduce, and at worst, you’re going to force the speaker cone to just sit stuck in one position, which will probably not sound great.

Plug offsets into a video encoder module, and you’re directly altering the black level of each color component.
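
A quick sketch of that coupling difference, with a simple one-pole high-pass standing in for an AC-coupled input (values are illustrative only):

```python
def ac_couple(samples, alpha=0.99):
    """Simple one-pole high-pass, standing in for an AC-coupled input stage."""
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

# A constant offset, like a bias voltage patched straight into an encoder input.
offset = [0.5] * 200

dc_path = offset              # DC-coupled: the 0.5 arrives intact and lifts the black level
ac_path = ac_couple(offset)   # AC-coupled: the steady offset just decays away

print(f"DC-coupled end value: {dc_path[-1]:.3f}")   # 0.500
print(f"AC-coupled end value: {ac_path[-1]:.3f}")   # ~0.067 and still heading toward 0
```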

3 Likes