LZX Gen3 Modular Releases 2021-2022

Yes, you can think of ART3 as a kind of fully voltage controlled SMX3 with a few extras (and 2x2 instead of 3x3). If you have quadrature VCOs, those can be used to generate the rotational offsets – or you can use the Polar2Cart module if you want “Rotation Angle” to be a single video rate parameter.

So to spin some ramps, you need:

  1. A module to generate ramps.
  2. A module to generate Sin/Cos offsets (Quadrature VCO, Mapper, future Polar2Cartesian module discussed here)
  3. A 2x2 four quadrant multiplier matrix (ART3, etc. – see the sketch below)
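
If it helps to see the math behind that list (just a rough sketch with made-up signal names and scaling, not anything from the docs): the Sin/Cos offsets plus the 2x2 four quadrant multiplier matrix amount to a standard 2D rotation applied to the ramp pair.

```python
import numpy as np

# Rough illustration only -- signal names and scaling here are made up, not LZX's.
# x and y stand in for the H and V ramps, centered around 0 (bipolar).
h = np.linspace(-0.5, 0.5, 8)
v = np.linspace(-0.5, 0.5, 8)
x, y = np.meshgrid(h, v)

theta = np.deg2rad(30)                 # rotation angle from a quadrature VCO or Polar2Cart
s, c = np.sin(theta), np.cos(theta)

# The 2x2 four quadrant multiplier matrix: each rotated ramp is a weighted
# sum of both input ramps, with the Sin/Cos values as the weights.
x_rot = c * x - s * y
y_rot = s * x + c * y
```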

For Gen3, we’re splitting multiplier matrix, colorspace converter, and rotation driver into separate modules (Mapper and Navigator are like 2-in-1 functions in this sense.) This makes the initial patch more complex in some cases, but the modules can now be used like building blocks in a lot of new ways. You can patch up some more expanded subsystems in ways that were not possible before. For example, one system could have just a sample of rotational control, while another system could triple up on some things for 3D rotation or parallel YIQ processors, etc.

5 Likes

Any patch tips or conceptual strategies for less experienced Mapper users?

1 Like

There’s a fresh video/stream on Twitch created just over 24hrs ago with an hour of patching goodness :+1:t3:

I’m not sure if this fresh stream would immediately help you, @mrfang, or other Mapper users, but it wouldn’t hurt to check it out.
And the super informative classic “3 Patches” video for Mapper of course.

1 Like

Try making a Double Linear Rainbow as your “Home Row” patch:

H Ramp or V Ramp → Hue input. Hue CV Level fully CW. Hue CV Bias fully CCW.
The ramp’s brightness now shifts the hue angle as it transitions from black (no offset) to white (720 degrees of hue phase shift, two trips around the color wheel).

Saturation & Brightness bias controls: set CW to taste (hyper-saturated rainbow vs. muddy pastel rainbow, etc.).

RGB Out → video output

Now swap out the ramp with Luma video, and you’ve got a 1960s video art style colorizer function.
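
If a software analogy helps picture that mapping, here’s a rough approximation in Python (the function name and parameters are just illustrative, and it ignores the module’s actual CV scaling and transfer curves):

```python
import colorsys

def rainbow_colorize(luma, hue_span_deg=720.0, saturation=1.0, brightness=1.0):
    """Map a 0..1 luma value to RGB by rotating hue with brightness."""
    hue = (luma * hue_span_deg / 360.0) % 1.0   # 0..1 luma sweeps two trips around the wheel
    return colorsys.hsv_to_rgb(hue, saturation, brightness)

print(rainbow_colorize(0.0))    # black input -> no hue offset, start of the wheel
print(rainbow_colorize(0.25))   # a quarter of the luma range -> half a trip around the wheel
print(rainbow_colorize(1.0))    # white input -> two full trips, back where it started
```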

In its relationship to Swatch, Mapper also contains a YUV to RGB function block – but does not contain an RGB to YUV function.
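
For reference, the textbook analog YUV → RGB relationship looks roughly like this (approximate coefficients, and Mapper’s internal scaling may differ):

```python
# Approximate textbook YUV -> RGB coefficients -- not necessarily Mapper's exact internals.
def yuv_to_rgb(y, u, v):
    r = y + 1.140 * v
    g = y - 0.395 * u - 0.581 * v
    b = y + 2.032 * u
    return r, g, b
```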

7 Likes

Another new LZX module before 2022 closes?! I’m in and will find a spot for it!!!
Needing TBC2 soon though :wink:

3 Likes

@creatorlars Is there an exceptionally clever way to reintroduce color information into a rescanned vector signal? Patch the vector rescan camera into the Swatch Y input?

I do not understand ART3, but I’m sure that will come. I didn’t understand DSG3 either until I got hands on experience.

Thank you Sensei

I would modulate the Hue/Angle of the Pol2Cart module (or a Mapper!) with the luma signal from the camera.

With Swatch in general: find creative ways to split the Luma channel into interrelated YIQ components and feed those into YIQ inputs. For example, you could patch the luma into Stairs, and feed any 4x outputs directly into the IQ+ and IQ- inputs. That should make something fun.

9 Likes

Thanks again. I didn’t articulate my question well. Maybe you answered the question I was trying to ask, but I’m not sure.

What I’m contemplating is the possibility of reintroducing the color of the original video source into the rescanned vector graphic, emulating an RGB vector monitor.

It’s not easy to wrap my head around this because I do not fully understand the polar to cartesian concepts. Even articulating the problem is presenting challenges, especially for my currently influenza-afflicted brain.

Oh, I get you now. Hmm. The issue is that to deform/transform the image’s spatial arrangement (which is the point of scan processing), you have to do that to all color components in the image – there wouldn’t really be a way to restore them afterwards, except maybe with some sort of digital algorithm.
Here’s one rather straightforward way to do color rescanning with a little pre- and post-processing of the source material:

  1. Encode source footage such that Red, Green and Blue are split into consecutive frames’ luma channels. The result is “sequenced RGB” frames at one third of the frame rate.
  2. Rescan the XY display as normal luma content, capture the results with a camera that is genlocked to the source.
  3. Decode the “sequenced RGB” back into single images, restoring its original frame rate.

Steps #1 and #3 could probably be done with some ffmpeg scripting.
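
As a sketch of what steps #1 and #3 might look like in software – here in Python/OpenCV rather than ffmpeg, with the file handling, FFV1 codec choice, and frame-rate handling as untested assumptions:

```python
import cv2

def encode_sequenced_rgb(src_path, dst_path):
    """Step 1: write each source frame's B, G, R planes as three consecutive grayscale frames."""
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"FFV1"), fps, size, False)  # grayscale
    while True:
        ok, frame = cap.read()                 # OpenCV frames arrive as BGR
        if not ok:
            break
        for plane in cv2.split(frame):         # B, then G, then R as separate luma frames
            out.write(plane)
    cap.release()
    out.release()

def decode_sequenced_rgb(src_path, dst_path):
    """Step 3: fold every three consecutive rescanned frames back into one color frame."""
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)            # genlocked capture, so same rate as the source
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"FFV1"), fps, size, True)   # color
    while True:
        planes = []
        for _ in range(3):
            ok, frame = cap.read()
            if not ok:
                break
            planes.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        if len(planes) < 3:
            break
        out.write(cv2.merge(planes))           # reassemble in B, G, R order
    cap.release()
    out.release()
```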

1 Like

Thanks, that’s what I thought. But I had to ask!

Even if the color components are in sync, they all have to be transformed the same way. The raster is a spacetime matrix. Mucking with the raster means that although the brightness of the scanning dot is nearly simultaneous with the original video intensity, the dot lands in a different place on the screen.

But then I thought, well, we have the transformed spatial coordinates – the rescanned vector graphic. And we have the original color information, which we can extract using Swatch. Maybe there’s a way to marry the two.

The sequential RGB approach sounds clever, but it would fall down if the vector graphic was animated in any way. I guess one could quantize the animation to frame triads with a V-sync circuit incorporating sample & hold, a 1/3 clock divider, and a synchronous start trigger, or even a sequencer. But I don’t have all of that currently. And anyway it would be pretty unintuitive trying to perform at 1/3 speed.

Lacking a fast RGB vector monitor, or a stroke-to-raster converter, it sounds like digital emulation is the only way to get full color raster manipulation.

Thanks again, happy holidays!

1 Like

Yeah, it’s possible – but you would need to replicate the RGB sequencing process for any modulation sources (record the animation sources as an image, then play them back the same as you are doing the source), or just use static sources (like fixed ramps, etc. – which is fine if you are composing stills).

1 Like

Interesting idea. That’s an awful lot of computer work. IMHO, in this case it’s better / faster / easier to fake it with digital emulation. But this paradigm of prepping video signals in unconventional ways does expand the range of possibilities. Definitely “outside the box” thinking. That’s always welcome, even if it turns out that staying in the box makes more sense in the particular situation at hand.

Thanks as always.

2 Likes

It just depends on the project, I guess! This is a lot less scripting than is involved in a complex animation rigging or game development project, for example. If it gets you the right 5 second recording you’re after, it may be worth the pursuit. Having modern computers and analog video synths existing in parallel to each other is very exciting to me.

3 Likes

Yes! It’s exciting to have 21st century tools available!

It’s also nice to use the tools we have instead of repeated GAS, pun intended.

For IT geeks like me that like to program, it’s fun to build something that works exactly the way I want. I’ve done a lot of programming for fun the past year, building the digital side of the hybrid workflow I have in mind. I’m guessing that it’s also fun for the folks that can get really deep into the hardware as well.

3 Likes

A question about the design language decisions on Swatch – not a criticism, just genuine curiosity. Why are the upper inputs (Y through Q-) positioned on the left side of the module? I’m used to inputs on the right. Does this configuration allow for a particularly useful patch flow with specific modules to take advantage of what appears to be a unique layout in the LZX ecosystem? Thanks for any insight you care to share.

2 Likes

It seemed a little weird to me at first too, but after seeing how the module works, I’ve come round to it. I’m guessing LZX chose to do it this way so signal flow would be from left-to-right in general and so that related I/O on the top and bottom are near each other. The bottom-left (RGB) input leads to the top-left (YIQ) output. The top-right (YIQ) input leads to the bottom-right (RGB) output. And the top-left output (YIQ) output is normaled to the top-right (YIQ) input. So the signal flows up on the left, down on the right, and if the normalization isn’t broken it flows left-to-right on the top. (It’s important to remember that the YIQ input does not lead to the YIQ output.)

If the top part had inputs on the left and outputs on the right, then the bottom parts would be related to the top parts diagonally AND the normalization flow of the YIQ parts would be right-to-left, which I’m guessing seemed weirder to LZX than to do it the way they did (and I’d agree).

Another way they could have laid it out would be RGB Input at top-left, YIQ Output at top-right, YIQ Input at bottom-left, and RGB Output at bottom-right. IMO this would be a reasonable layout signal-flow-wise. But it wouldn’t be possible to lay out cleanly on the grid that LZX has been using for gen3 (look at Swatch next to the rest of gen3, especially Sum/Dist). And you’d probably be stuck with the situation of having the top-most jack of the bottom-left YIQ Input in line (or perhaps almost-but-not-quite in line) with the bottom-most jack of the top-right YIQ Output, which would be unideal.

In other words, the layout makes sense if you consider Swatch as two modules side by side that just happen to have the top (YIQ) parts normaled from left-to-right, and (I think?) it’s the best way it could be laid out given gen3’s established grid.

Anyway, this is just a guess at LZX’s design choices. Not certain it’s right, but it makes sense to me that this would be why.

5 Likes

Swatch has two separate functions: an RGB to YIQ converter (left side) and a YIQ to RGB converter (right side). If you consider that the YIQ out to YIQ in are normalized, then the primary RGB in/out are left/right side as usual. We tried it a few different ways and ended up settling on the configuration that kept the RGB I/O on the bottom and in line with other modules. So you could think of it like RGB in/out, with the top area being the send/return (YIQ out/in).

If you want a cheat sheet that should clear anything up, it’s pretty simple:
RGB in → YIQ out → YIQ in → RGB out
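
And for anyone curious what the two halves actually compute, the textbook NTSC matrices look roughly like this (Swatch’s internal gain, bias, and clipping will differ somewhat, so treat these as approximations):

```python
def rgb_to_yiq(r, g, b):        # the left side of Swatch
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

def yiq_to_rgb(y, i, q):        # the right side of Swatch
    r = y + 0.956 * i + 0.621 * q
    g = y - 0.272 * i - 0.647 * q
    b = y - 1.106 * i + 1.703 * q
    return r, g, b

# With the YIQ out -> YIQ in normalization unbroken, the round trip is (nearly) identity:
print(yiq_to_rgb(*rgb_to_yiq(1.0, 0.5, 0.25)))   # ~ (1.0, 0.5, 0.25)
```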

So I’m imagining a situation where you have a 6U case and the bottom 3U is all RGB left-to-right flow. And the top 3U is vector processors, oscillators, etc. Swatch is in the bottom row, and your YIQ processors are piecemeal, in the top row. YIQ signals leave the RGB path (exiting YIQ outs at the top of Swatch), get modified, then re-enter the RGB path (entering YIQ inputs at top of the Swatch). So RGB flows left to right, while YIQ sends up, then returns down. We tried some variations with arrows and lines illustrating this, but it felt very busy.

Ultimately it was important to keep this module 8HP, but we did not want to break design rules – so we handled RGB IO in the existing left-to-right flow convention, and used the top part of the module to interpret the YIQ components differently. I think it gives the module a unique look, and the inversion of expectations with the YIQ IOs helps underline the identity of the module – you’re supposed to pause and go, “Huh. These IOs work differently from the typical RGB path. Maybe YIQ isn’t just RGB by another name after all.” At least these were my thoughts/justifications!! :slight_smile:

8 Likes

Swatch is perfect. The layout does deviate from convention, but it’s no big deal at all. Just think of the signal flow within Swatch as clockwise, starting from lower left.

I’ve installed mine in the center of the rack, as equidistant from everything as possible, to keep cable lengths and spaghetti clutter to a minimum. Swatch wants to connect to everything. At first I thought it would be most convenient to install it next to the encoder, because generally Swatch will patch directly to the encoder inputs. But then I realized that Swatch is in many ways the centerpiece of the system.

2 Likes

Since Swatch breaks color into 2 dimensions, does controlling it with an X-Y joystick make sense as a single controller to effect 360° color shifting?

Haven’t quite wrapped my head around the possibilities of the module yet and not sure I’ve seen folks do that in any demos (though I haven’t necessarily watched all of them yet).

Oh, another question: in the demos I have seen, it appears that patching into I & Q but not Y produces an image, whereas with the Y input at 0 I would expect black. In the absence of a Y input, is the module normalled to averaging the luma values of the IQ inputs?
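
For what it’s worth, the textbook YIQ → RGB math at least suggests why Y = 0 with nonzero I/Q wouldn’t necessarily be black, and why the joystick idea maps naturally onto hue angle and saturation (approximate coefficients, so this may not reflect the module’s actual behavior):

```python
import math

# Textbook YIQ -> RGB (approximate coefficients; the module's scaling may differ).
def yiq_to_rgb(y, i, q):
    r = y + 0.956 * i + 0.621 * q
    g = y - 0.272 * i - 0.647 * q
    b = y - 1.106 * i + 1.703 * q
    return r, g, b

# An X-Y joystick into I and Q is effectively a polar hue/saturation control:
i, q = 0.3, 0.3
print(math.degrees(math.atan2(q, i)))   # angle in the I/Q plane ~ hue (45 degrees here)
print(math.hypot(i, q))                 # distance from center ~ saturation

# With Y = 0 but nonzero I and Q, the RGB outputs are not all zero, so the result
# isn't black -- once negative values clip, it's a strongly colored image.
print(yiq_to_rgb(0.0, 0.3, 0.3))        # ~ (0.47, -0.28, 0.18): clips to a magenta-ish color
```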