It just depends on the project, I guess! This is a lot less scripting than is involved in a complex animation rigging or game development project, for example. If it gets you the right 5 second recording you’re after, it may be worth the pursuit. Having modern computers and analog video synths existing in parallel to each other is very exciting to me.
Yes! It’s exciting to have 21st century tools available!
It’s also nice to use the tools we have instead of repeated GAS, pun intended.
For IT geeks like me that like to program, it’s fun to build something that works exactly the way I want. I’ve done a lot of programming for fun the past year, building the digital side of the hybrid workflow I have in mind. I’m guessing that it’s also fun for the folks that can get really deep into the hardware as well.
Question about the design language decisions on Swatch, and not a criticism, but a genuine curiosity. Why are the upper inputs (Y through Q-) positioned on the left side of the module? I’m used to inputs on the right. Does this configuration allow for a particularly useful patch flow with specific modules to take advantage of what appears to be a unique layout in the LZX ecosystem? Thanks for any insight you care to share.
It seemed a little weird to me at first too, but after seeing how the module works, I’ve come round to it. I’m guessing LZX chose to do it this way so signal flow would be from left-to-right in general and so that related I/O on the top and bottom are near each other. The bottom-left (RGB) input leads to the top-left (YIQ) output. The top-right (YIQ) input leads to the bottom-right (RGB) output. And the top-left (YIQ) output is normalled to the top-right (YIQ) input. So the signal flows up on the left, down on the right, and if the normalization isn’t broken it flows left-to-right on the top. (It’s important to remember that the YIQ input does not lead to the YIQ output.)
If the top part had inputs on the left and outputs on the right, then the bottom parts would be related to the top parts diagonally AND the normalization flow of the YIQ parts would be right-to-left, which I’m guessing seemed weirder to LZX than the layout they chose (and I’d agree).
Another way they could have laid it out would be RGB Input at top-left, YIQ Output at top-right, YIQ Input at bottom-left, and RGB Output at bottom-right. IMO this would be a reasonable layout signal-flow-wise. But it wouldn’t be possible to lay out cleanly on the grid that LZX has been using for gen3 (look at Swatch next to the rest of gen3, especially Sum/Dist). And you’d probably end up with the top-most jack of the bottom-left YIQ Input in line (or almost-but-not-quite in line) with the bottom-most jack of the top-right YIQ Output, which would be less than ideal.
In other words, the layout makes sense if you consider Swatch as two modules side by side that just happen to have the top (YIQ) parts normalled from left-to-right, and (I think?) it’s the best way it could be laid out given gen3’s established grid.
Anyway, this is just a guess at LZX’s design choices. Not certain it’s right, but it makes sense to me that this would be why.
Swatch has two separate functions: an RGB to YIQ converter (left side) and a YIQ to RGB converter (right side). If you consider that the YIQ out is normalled to the YIQ in, then the primary RGB in/out are on the left/right sides as usual. We tried it a few different ways and ended up settling on the configuration that kept the RGB I/O on the bottom and in line with other modules. So you could think of it like RGB in/out, with the top area being the send/return (YIQ out/in).
If you want a cheat sheet that should clear anything up, it’s pretty simple:
RGB in → YIQ out → YIQ in → RGB out
So I’m imagining a situation where you have a 6U case and the bottom 3U is all RGB left-to-right flow. And the top 3U is vector processors, oscillators, etc. Swatch is in the bottom row, and your YIQ processors are scattered piecemeal in the top row. YIQ signals leave the RGB path (exiting the YIQ outs at the top of Swatch), get modified, then re-enter the RGB path (entering the YIQ inputs at the top of Swatch). So RGB flows left to right, while YIQ sends up, then returns down. We tried some variations with arrows and lines illustrating this, but it felt very busy.
Ultimately it was important to keep this module 8HP, but we did not want to break design rules – so we handled RGB IO in the existing left-to-right flow convention, and used the top part of the module to interpret the YIQ components differently. I think it gives the module a unique look, and the inversion of expectations with the YIQ IOs helps underline the identity of the module – you’re supposed to pause and go, “Huh. These IOs work differently from the typical RGB path. Maybe YIQ isn’t just RGB by another name after all.” At least these were my thoughts/justifications!!
Swatch is perfect. The layout does deviate from convention, but it’s no big deal at all. Just think of the signal flow within Swatch as clockwise, starting from lower left.
I’ve installed mine in the center of the rack, as equidistant from everything as possible, to keep cable lengths and spaghetti clutter to a minimum. Swatch wants to connect to everything. At first I thought it would be most convenient to install it next to the encoder, because generally Swatch will patch directly to the encoder inputs. But then I realized that Swatch is in many ways the centerpiece of the system.
Since Swatch breaks chroma into 2 dimensions (I and Q), does controlling them with an X-Y joystick make sense as a single controller to effect 360° color shifting?
Haven’t quite wrapped my head around the possibilities of the module yet and not sure I’ve seen folks do that in any demos (though I haven’t necessarily watched all of them yet).
Oh, another question: in demos I have seen, it appears that patching into I & Q but not Y produces an image. Whereas, with Y input at 0, I would expect black. In the absence of a Y input, is the module normalled to averaging the luma values of the IQ inputs?
Is Y normalled to 1?
Yes, that would allow you to do IQ color picking and offsets. Color rotation (like continuous hue cycling) will require a rotator module.
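In case it helps to see the difference numerically, here’s a rough sketch (Python, assuming unit-range signals, and not Swatch’s actual circuitry): a joystick patched into I/Q adds fixed offsets, while a rotator applies a 2D rotation to the I/Q pair, which is what continuous hue cycling needs.

```python
import math

def rotate_iq(i, q, angle_rad):
    """Rotate the chroma point (I, Q) around the origin by angle_rad.
    Sweeping angle_rad continuously cycles hue while keeping saturation
    (the radius) constant; a joystick alone only adds fixed offsets."""
    i_rot = i * math.cos(angle_rad) - q * math.sin(angle_rad)
    q_rot = i * math.sin(angle_rad) + q * math.cos(angle_rad)
    return i_rot, q_rot

# Sweep the angle in quarter turns: the point traces a circle of radius 0.3
i, q = 0.3, 0.0
for step in range(4):
    print(rotate_iq(i, q, step * math.pi / 2))
```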
Y input (of the YIQ to RGB side) is normalled to Y output (of the RGB to YIQ side). The RGB inputs are normalled to 0V (Black.)
I can see that I need to take a closer look at how YIQ works.
Edit: Rewrote this reply after the first attempt.
If you are patching from arbitrary sources (oscillators, ramps, keys) directly into the YIQ inputs, it’s possible to generate illegal YIQ colors. Illegal colors can be a good thing! But to generate a legal YIQ source, where IQ are constrained by Y’s amplitude, you need a saturation processor (dual VCA) so that you can multiply Y with IQ. You also need a quadrature source for IQ where the outputs are 90 degrees out of phase, like a quadrature oscillator or polar-to-cartesian function.
Without a YIQ generator, you should go in through the RGB inputs with your image, and then patch your modulation sources (like X/Y from a joystick) into the IQ inputs. The RGB outs will then be the RGB source image, but with IQ distorted by the joystick’s offsets.
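A minimal numeric sketch of both ideas (Python; the names `saturation`, `hue_phase`, `joy_x`, and `joy_y` are illustrative, not Swatch jack labels, and the scaling is only conceptual):

```python
import math

def legal_yiq(y, saturation, hue_phase):
    """A 'legal' YIQ triple: I and Q are a quadrature pair (90 degrees apart)
    scaled by both Y and saturation, so chroma amplitude is constrained by luma."""
    i = y * saturation * math.cos(hue_phase)
    q = y * saturation * math.sin(hue_phase)  # sin is cos shifted 90 degrees
    return y, i, q

def offset_iq(y, i, q, joy_x, joy_y):
    """The simpler patch described above: leave Y alone and add joystick offsets
    to I and Q. Offsets can push the result outside the legal range, which can
    be exactly the effect you want."""
    return y, i + joy_x, q + joy_y
```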
Swatch is a YIQ converter, and the Polar-to-Cartesian module that follows it can be used as a YIQ generator. The rotation/affine module that is also coming can be used as an IQ modulator. Together they can be patched as a comprehensive YIQ manipulator/generator/rotator tool suite. Swatch on its own has lots of tricks and utility, but is not the full picture.
I think I am getting it now.
One wants to think of YIQ as separating the luma and chroma values, wherein the final RGB is always some multiplication of Y times IQ (Y = 0 equals black; Y = 1 equals full saturation or white).
But in actuality, IQ is always (supposed to be) relative to Y (hence the requirement for VCA treatment before input into IQ). And the math behind the YIQ to RGB conversion is addition/subtraction, not multiplication. So you can have a Y value of 0 and still end up with an image, even though logically that doesn’t make sense (under the more conventional HSV way of thinking about color).
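For reference, here’s what the standard NTSC-style YIQ-to-RGB math looks like (Python, with textbook coefficients that vary slightly between references; Swatch’s internal scaling may differ, so treat this as illustrative). It shows why Y = 0 doesn’t force black:

```python
def yiq_to_rgb(y, i, q):
    """Standard NTSC-style YIQ -> RGB: weighted I and Q are simply added to or
    subtracted from Y, so nonzero chroma produces output even when Y = 0."""
    r = y + 0.956 * i + 0.621 * q
    g = y - 0.272 * i - 0.647 * q
    b = y - 1.106 * i + 1.703 * q
    return r, g, b

# Y = 0 but I = 0.3: the result is not black (negative values clip in hardware)
print(yiq_to_rgb(0.0, 0.3, 0.0))  # roughly (0.287, -0.082, -0.332)
```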
…Okay, just noticed that you rewrote your description and made it clearer (and maybe better than what I just said). But I am pretty confident that I get it now.
If you’re wondering why we didn’t combine all this into one big module – we did! That’s what Chromagnon is.
Breaking it up into modules, Swatch has a very special place – it allows Chroma to exist in the same signal path as general use vector processors for shape generation or scan processing. So we can release “generic rotator module” and you won’t need separate variants for chroma rotator, shape rotator, scan rotator, etc.
I will admit that, for my purposes, the original Color Wheel module concept could’ve perhaps been more useful. Or even something stripped back from that, with only RGB ins/outs and then the control over HSV.
But I totally appreciate why breaking it out with more modularity is practically more realistic and functionally more exciting.
Color Wheel’s functions, as well as those of Mapper, Navigator, Polar Fringe, and Shapechanger, are still going to be entirely present in the functional set of these three modules alone – so you may feel differently when you see it all together. Nothing prevents us from doing a Color Wheel module – but modular roadmap-wise, doing the modular functions first means we have a Color Wheel function, and also cover a lot of territory beyond it.
Can we get some of the Swatch demo videos back on-line? The “twitch” content was removed I think and I didn’t get a chance to watch all the content.
The last LZX J. Woods Youtube vid made mention of a Swatch vid on the way I think.
Twitch streams are indeed lengthy and fun, but the archival nature of the Youtube content is nice as a reference manual of sorts.
Yes, Johnny’s working on a video. I also posted some patching tips here.
Thanks for the Swatch vid details! Gonna go back and watch some of the twitch ones and looking forward to Johnny’s video!!!