There was a thread about this on the fb group but I had more thoughts and figured this is a better repository for patch ideas.
For animating colours:
Use a quadrature oscillator (Bajascillator) and mix it in with the RGB source, one phase for each of red, green, and blue. You can mix this with Passage, a BSO crossfade, or a matrix mixer.
If you have a feedback path with delay (MemPal, mixer, etc.), send your colour info through wave folders (Arch, Staircase, or audio wave folders) for that nice deterministic chaotic behaviour. If you have Mapper and an S/H module, you can apply this to Hue, or you can do it on each colour channel.
I have also sent color video feedback through Color Chords and created periodic color changes that way.
If you look up Phil’s Castle lesson 1 in this forum, you can combine these with a posterise effect, which steps your image’s luma into bands and nicely limits you to 8 colours.
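If you want to prototype the banding offline before patching, here’s a minimal sketch of what a posterise stage does to a single luma value. This is a generic model, not a description of the Castle patch itself: luma is assumed normalised to 0–1 and the band count is a free parameter.

```python
def posterize(luma, bands=8):
    """Quantize a 0.0-1.0 luma value into `bands` discrete levels,
    mimicking a posterise effect that limits the image to N steps."""
    if not 0.0 <= luma <= 1.0:
        raise ValueError("luma must be in [0, 1]")
    # Scale up, floor to a step index, then map back to the band's level.
    step = min(int(luma * bands), bands - 1)
    return step / (bands - 1)
```

Run over a whole frame, every pixel collapses onto one of the eight levels, which is what gives the hard colour bands.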
I have been thinking about how to achieve an 8/16-bit-style ‘colour cycling’ effect - explained in a lovely way here: The lost art of color cycling - Animating with color - YouTube
The nearest I can get would be to first apply the above posterise effect, then use the Bajascillator into RGB to move colours, but pass it through a sample and hold first (with a slew option to soften things a bit) to get less linear movements through your hue or RGB. I suppose if you used something like a WMD PDO, you could have much finer control over the colour-cycling palette, because you can also control the degree of phase between its outputs.
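The sample-and-hold-plus-slew idea can be sketched digitally too. This is a hypothetical model (the hold period and slew limit are made-up numbers, not any module’s actual behaviour): a value is captured every few samples and the output glides toward it by a limited step per sample.

```python
def sh_with_slew(samples, hold_every, max_step):
    """Sample-and-hold followed by a slew limiter: the held value updates
    every `hold_every` samples, and the output moves toward it by at most
    `max_step` per sample, softening the stepped transitions."""
    out, held, y = [], samples[0], samples[0]
    for i, x in enumerate(samples):
        if i % hold_every == 0:
            held = x                     # clock tick: capture a new value
        delta = max(-max_step, min(max_step, held - y))
        y += delta                       # glide toward the held value
        out.append(y)
    return out
```

With `max_step` small you get soft ramps between held colours; with it large the output snaps, i.e. plain S/H.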
This will be a fun topic to explore, I really enjoyed that Colour Cycling video when it was shared elsewhere.
Worth noting the Bajascillator outputs are 60 degrees apart (0, 60, and 120), which works well for RGB, whereas most Eurorack quadrature oscillators are 90 degrees apart.
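As a quick software model of that 0/60/120-degree spacing (the rate and scaling here are just illustrative assumptions), three phase-offset sine LFOs mapped into a 0–1 colour range look like this:

```python
import math

def rgb_cycle(t, rate_hz=0.2, phases_deg=(0.0, 60.0, 120.0)):
    """Three sine LFOs at 0/60/120-degree spacing, each scaled into the
    0-1 range used for a colour channel. Returns an (r, g, b) tuple."""
    return tuple(
        0.5 + 0.5 * math.sin(2 * math.pi * rate_hz * t + math.radians(p))
        for p in phases_deg
    )
```

Sweeping `t` walks the three channels through the palette with a constant offset between them, which is the crux of the cycling effect.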
I’ve been meaning to play with the Quadraturia mode on my Ornament & Crime module, which offers wavetable LFOs (based on the Mutable Frames easter-egg mode) with adjustable output level & offset, phase, and waveform.
I’m pretty confident it can hit 0-1V at 60 degrees with the right settings. Will report back with my findings!
that is amazing, thanks for sharing!
Saves me some time; I can just fire up the ’scope and see what setting it takes to get 60-degree phase separation (guessing 84 or 85).
For anyone who’s unfamiliar with it, I’d suggest trying to get the 8HP version - “uO_c” - in your rack. It’s worth it for Quadraturia alone, and everything else (a myriad of functions) is a bonus, especially given you have the ability to scale the LFO outputs without external modules.
Okay, so my Bajaoscillator arrived today. Here are my beginners’ problems which might help anyone else:
VC diamond ramp > doorway for sizing and hard/soft edges > into all 3 passage thru > VC RGB in
Bajaoscillator > all 3 passage in
But I’m running into an issue with my patching logic. The Bajascillator adds amplitude to each RGB channel, which is basically three separate brightness modulations, so I am running into this issue:
If the shape doesn’t have a hard edge (e.g. a diamond ramp via Castle), then the shape grows and shrinks with the amplitude along its gradient, and it’s nearly impossible to balance the RGB amplitudes so that you keep the shape and its gradient stable but change its colour. I’m guessing a Castle shape or soft edge that changes colour needs some other approach?
The only thing I can think of is just to go through Mapper and animate the hue, but maybe there are better ways.
Yes, Mapper is what you want to modulate the chrominance/hue directly without changing the luma/brightness value. Any type of direct RGB manipulation will also involve shifts of, and cross-modulation of, the image’s brightness. Neither approach is really superior (that is, 3-phase displacement vs. polar-to-cartesian shaping); they are just different techniques. Sometimes the RGB cycling approach can lead to different palettes, and the shifts in brightness make the image more dynamic as a result.
Sum each color channel with its own step sequencer clocked by a divided down frame rate output.
Let’s say we are using 3 Befaco Muxlicers (or any other step sequencers) matched up with the R, G, & B out of any full-color image you’re working with in the system. In this patch it’s an external camera feed in through CTBC, but it can be whatever:
You could flip around the inputs into Passage (IN<>Thru) or add another Passage in line so you can invert/bias the sequencers’ outputs to allow for subtraction from the RGB channels coming from your source.
The key input on FKG-3 could be a variety of things (it’s just a H ramp in the example) and is the way to not just have the entire screen tick through the same sequenced palette cycle. I would probably start by putting the Y from CTBC into the Key input so I could constrain the palette cycling to the brightest areas of the external camera input.
EDIT: You could also just use one sequencer and put its output into a Passage to get three simultaneous stepped voltages sitting at different, repeatable amplitudes.
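A toy model of that single-sequencer-into-Passage variant, with the sequence values and per-channel gains as arbitrary placeholders: one stepped voltage is scaled to three different amplitudes, summed into each channel, and clamped (standing in for the video range limits).

```python
def palette_cycle(rgb, step, seq=(0.0, 0.2, 0.45, 0.7), gains=(1.0, 0.6, 0.3)):
    """One stepped sequence scaled to three different amplitudes (as a
    Passage would do), summed into each colour channel and clamped to 0-1."""
    v = seq[step % len(seq)]
    return tuple(min(1.0, max(0.0, c + g * v)) for c, g in zip(rgb, gains))
```

Clocking `step` from a divided-down frame rate gives each channel a repeatable stepped offset, which is the sequenced-palette behaviour described above.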
Another idea to try is to use a quadrature VCO as a UV (chroma) source and then convert UV plus a static Y value to RGB using a YUV-to-RGB matrix. (This is similar to what’s behind the scenes at Mapper’s output stage.) This will give you continuous hue cycling.
Here are some settings for YUV-to-RGB and RGB-to-YUV on the SMX3 (but any 3x3 matrix can probably get you close).
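I can’t vouch for the exact SMX3 coefficient values, but the textbook BT.601 matrices those settings approximate look like this; a round trip should land very close to the original RGB:

```python
def rgb_to_yuv(r, g, b):
    """BT.601-style RGB -> YUV (analog colour-difference form)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)   # scaled B-Y
    v = 0.877 * (r - y)   # scaled R-Y
    return y, u, v

def yuv_to_rgb(y, u, v):
    """Inverse matrix: YUV back to RGB."""
    r = y + 1.140 * v
    g = y - 0.395 * u - 0.581 * v
    b = y + 2.032 * u
    return r, g, b
```

Note that any neutral grey gives U = V = 0, which is why holding Y static and driving U/V from a quadrature source cycles hue without touching brightness.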
Yes. “HSL Amp” is a no-brainer for core gen3 modules. This would be like Mapper but with RGB input as well as output. I think this is a good example of a function that should be wholly represented in a single module – RGB in, RGB out, HSL controls – (but LZX hasn’t done so yet). FKG3 was like this, too (integrated keyer/key generator).
Actually what I was asking for is a conversion module, not an HSL proc amp. I want to take an RGB signal from TBC2 or whatever, convert it to HSL voltages. Send that to other modules, e.g. function generator, summing mixer, etc. Take that back into the conversion module, convert it back to RGB for encoding.
You can do that with 2x SMX3 using the YUV/RGB settings shown above. The Hue/Saturation are encoded as UV quadrature, so any of the system’s existing (or future) 2D processors will work to modify things like Hue (using a rotator function) or Saturation (using a dual VCA/amp). In other words Hue – when patched as a signal on patch cables – is a 2D plane represented by the UV voltages, and you process it using 2D functions rather than singular VCAs, etc.
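In software terms (a sketch of the math, not any particular module’s transfer function), the “rotator” and “dual VCA/amp” operations on the UV pair are just:

```python
import math

def shift_hue_uv(u, v, degrees):
    """Hue shift as a 2D rotation of the UV chroma plane."""
    a = math.radians(degrees)
    return (u * math.cos(a) - v * math.sin(a),
            u * math.sin(a) + v * math.cos(a))

def scale_saturation(u, v, gain):
    """Saturation as a matched gain on both chroma axes (a dual VCA)."""
    return u * gain, v * gain
```

Rotation leaves the vector length (saturation) alone, and matched gain leaves the angle (hue) alone, which is exactly why the two parameters decouple cleanly in UV but not in raw R-Y / B-Y tweaking.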
An HSL amp is effectively a 4-in-1 module: YUV-to-RGB circuit, Polar-To-Cartesian Waveshaper (Hue to UV rotation processor), Dual Amp (UV amplifier), and RGB-to-YUV circuit encapsulated in a 12HP function block. We can include the IO of the YUV-to-RGB amp and the RGB-to-YUV amp on separate outputs so you could use it as a dual converter as well! So maybe what I mean is less “HSL amp” and more “HSL/YUV/RGB toolkit/converter/processor”?
That doesn’t mean a YUV/RGB cross converter wouldn’t be a nice utility function for a single module though. The way the cost of the parts and space required for the circuitry works out, it’s to everyone’s advantage (aka Max Feature to Cost Ratio) if we combine things in these 12HP blocks.
I also have to think about the cost of certain functions when split across different modules. For example if all modules are $399, and it takes two separate modules to accomplish a hue transformation, then the hue transformation feature costs the user $798. Whereas I think maybe the best place to start is to make sure that function (which is a core function I think) can be done in a single module.
I could see some large panel modules of patch-only function blocks including YUV/RGB converters though, a “Patch Programmable Color Lab” kind of thing.
Thanks for that, I always learn so much from you Lars.
The thing is, I want to be able to process hue and saturation separately, with each parameter being its own one-dimensional space. I want to do that processing externally to the conversion module, in other modules. And most importantly, I need it to be intuitive. I guess I could theoretically process an R-Y / B-Y pair to get the same results, but that is not at all how my graphic artist brain works. Yeah, I can read a vectorscope, but that doesn’t mean I know how to adjust R-Y / B-Y values to dial in a particular color. I mean, it’s really not at all intuitive; changing either value will change both the hue and the saturation. So we’re back at a similar issue that we have with pure RGB workflows: we can’t just tweak hue or saturation without affecting the other. They only work as a pair. You can build in some smart functionality to your other modules, creating a hue and saturation “front end interface” to adjust the R-Y / B-Y values under the hood. But that doesn’t help the situation in which, e.g. I want to send just the hue through a function generator and leave the saturation alone.
I totally get your mission to consistently pack $400 worth of functionality into 12HP. It makes a lot of sense, in terms of modular circuit design, front end interface design, and marketing. So a color conversion module could have proc amp controls, that’s fine, but the most important thing for me is to get a proper RGB → HSL and HSL → RGB conversion. Like a bidirectional Mapper. I really don’t care about R-Y / B-Y, that has extremely low utility value for me.
That’s why with analogue you need to process your chrominance (UV, etc) with a quadrature function generator (like the Polar to Cartesian function and Dual VCA in Mapper) rather than a mono function generator. Ultimately what I am describing as HSL amp is exactly what you’re asking for, let me try to explain.
I guess I could theoretically process an R-Y / B-Y pair to get the same results, but that is not at all how my graphic artist brain works.
Part of this is our understanding of graphics vs analogue/video signal paths and functions, and our tendency to not think of “parameter inputs” as also “potentially image rate transformations.” For example, you can’t patch the luma channel of a separate file directly to the “Hue” parameter in Photoshop and expect an image rate colorizer to appear. You can get that transform through a series of steps. But in an LZX system, the HSL inputs on an “HSL amp” are by default an “HSL to RGB converter with adjustment controls.”
So if you want to curve, fold, manipulate the “Hue” of your image in complex ways? That’s all in what you’re patching to the “Hue” input of the “HSL amp.” And that could be anything (a ramp or waveform, another image, the same image’s luma solarized 8 times over, etc.)
HSL “output” is best represented as UV or Sin/Cos quadrature for exactly the reasons you mentioned – we want to process Hue/Saturation using modules designed to operate on a 2D plane rather than as mono functions. Hue does not really exist without Saturation (there has to be at least a constant value to calculate the chrominance of one or the other), whereas U and V do exist, even without each other’s presence.
To sum it up, in patchable analogue, it’s best to think of Hue and Saturation not as a “signal type” or a value – something you patch on a cable – but as parameters describing a transformation of the UV colorspace. So whether we hide those high parts count transformations (Polar to Cartesian, Dual VCA, etc.) behind the panel or give them frontpanel controls, they exist regardless. In the case of standalone HSL to RGB to HSL, etc., it makes the most sense to “patch it out” and give the user access to those expensive functions directly. If modulating/generating Hue as its own channel is the goal, you just go into the Hue input with your alternative “hue source.”
Of course, there is more than one way to solve these problems with circuits. For me the real practical thing is “an HSL converter costs the same amount of big circuits as an HSL generator/converter/processor.” So given what I’d have to charge for the module, the HSL instrumentation frontend just fills out panel space that would otherwise be entirely blank. There’s no way to make the whole circuit “smaller” without compromising other parts of the design standard (bloating power consumption, bloating panel space, deeper mounting depth, etc.).
If I imagine it, the simple ‘artist’ control for this prospective module I’d enjoy would just be some CV control inputs post RGB conversion, and then you could patch in a joystick with e.g. X into Hue/U and Y into Saturation/V? And then converted to RGB on its way out…
A joystick is a great example of how chroma/UV is a 2D plane. If you take XY as UV, then patch those to a YUV-to-RGB matrix function (like the one posted above), your joystick will now work like a chroma value selector without adjusting luminance (Y), which can be a static value or have a different source. Your joystick’s output is “XY/UV”, but you can control Hue/Saturation based on how far you move the joystick from center (Saturation) and the angle of rotation (Hue).
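The same relationship in code form (the axis ranges are assumptions; a real joystick would need offset/attenuation to centre on 0V): angle maps to hue, radius to saturation.

```python
import math

def xy_to_hue_sat(x, y):
    """Read joystick XY as UV: angle of rotation -> hue (degrees),
    distance from center -> saturation."""
    hue = math.degrees(math.atan2(y, x)) % 360.0
    sat = math.hypot(x, y)
    return hue, sat
```

At the centre, saturation collapses to zero and hue becomes undefined, which matches the intuition that grey has no hue.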
I may be wrong, but I think @dryodryo is just trying to exhaust the applications imagined for all of this from their background as a teacher and instructional designer in digital graphics. I don’t mind that, I like the challenge of explaining “how this fits into that” and the context for design decisions. This particular intersection between analogue systems and graphics is incredibly interesting to me, because it’s part of the goal with all this, for the analogue system to be the counterpoint/complement of (and not the equivalent of/replacement for) existing graphics tools.
So, case in point, hue transformation – one simple slider and floating point value in software! And HSL to RGB is usually a very low level presumption in any graphics engines out there. We’re used to this being an effortless operation. It makes sense that being able to separate hue and saturation and process them as unique values should be possible, since that’s exactly what you’d do in a graphics environment, to avoid modulating saturation and luminance and address hue directly.
In analogue though, it’s a Big Circuit, in parts cost, complexity, and footprint. Hue and Saturation aren’t signals; they are complex analogue computing functions/transformations. If you’re gonna have it in there, you want to give the user full control over its inner workings. You want the macro version, with all the details that make the real time analogue circuit powerful in ways the digital graphics system is not. There’s a lot more that an HSL circuit can do in a freeform, wide bandwidth environment than just color conversion! So this informs how we strategize the instrument. In the end I hope that by embracing the form of the circuitry itself, we avoid abstracting away the details that make the analogue workflow unique.
UV/Chroma Transformations (these are 2D functions)
And, personal perspective:
Advantages to RGB colorspace: This is closest to how light works. This is how human eyeballs work. It’s the most natural way to mix two images and, as signals, carry the most relevant information about the color. It’s how you learn about color in the fields of physics or optics.
Advantages to YUV colorspace: This is closest to how artists think, especially if we’re using Hue and Saturation based transformations of the space (it models the concept of a color wheel, something that does not exist in nature). This is how you are taught color balance, paint mixing, value ranges and contrast, etc. in a painting or design class.