Animating and modulating colours

OK Lars, can you explain why I shouldn’t think of HSL as a signal…?

Is it simply easier to create a module that modulates some incoming signal using HSL math? Or, to put it another way, is it harder to convert an RGB triplet signal to an HSL triplet that can be sent to other modules?

The signal flow has major implications for the types of patches/effects/mind blowing moments that can be made. I’m totally fine with the paradigm of “send your signal through this one box and send a complex modulation signal to mess with it”. But I would actually prefer the paradigm of “send your signal through this network of weird boxes that interact with one another in messed up ways”. Honestly I think the latter approach is more desirable because it’s more open to happy accidents.

1 Like

Are you asking me? Thanks, here it is:

The video synth stuff is old, it’s from the 90’s. Scroll down to the bottom of the page.

Here’s one of my favorites, I’m planning a sequel. WARNING stroboscopic flicker / epilepsy trigger

3 Likes

Is it simply easier to create a module that modulates some incoming signal using HSL math? Or, to put it another way, is it harder to convert an RGB triplet signal to an HSL triplet that can be sent to other modules?

Yes. Difficulty isn’t really the question, it’s about complexity and parts count/cost – and more importantly, that those costs would inevitably just restrict you rather than giving you freedom with the signal path. Have you ever run across any example of HSL as a signal rather than a transformation of an RGB/YUV image? There are HSL encoded images, but they are all digital and require specific encoding/decoding algorithms. If you are going to mix/manipulate those HSL image formats, they get decoded to RGB or YUV first, and that’s where the transformation or mixing takes place.

I’ll try! Hopefully I make sense – this would all be better with some formulas, charts, and example images (and a fresh night’s sleep.)

First off, Hue and Saturation are not linear signals like YUV and RGB – they are Angle/Distance transformations of a 2D point. If represented by a linear voltage, how would we map angles (something that can wrap around a centerpoint an infinite number of times) onto a finite voltage range? We’d have to decide on a range, like 0-360 degrees = 0-1V. And that cropped range would ultimately become a confining, annoying pain in the ass.
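To make that concrete, here’s a minimal Python sketch of the cropped mapping (the 0-1V range is just the illustrative choice from above, not a spec):

```python
# Mapping an unbounded angle onto a finite control voltage.
# Assumes the illustrative 0-360 degrees = 0-1V range from above.

def hue_to_voltage(degrees: float) -> float:
    # Anything outside 0-360 has to be wrapped before it can fit the range.
    return (degrees % 360.0) / 360.0

# Wrapping discards the winding count: these are different rotations,
# but they all collapse to the same voltage.
print(hue_to_voltage(10.0))    # 0.0277...
print(hue_to_voltage(370.0))   # 0.0277... -- one full turn later, same voltage
print(hue_to_voltage(730.0))   # 0.0277... -- two turns, same again
```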

[image: The UV plane]

Now any HSL encoded image, when mixed with another HSL encoded image, is going to need special functions/encoders to mix those hue channels. What happens in analogue if we mix two “Hues”? Do they wrap around with a modulus of 360 degrees? Is that a waveshaper? We can’t just sum them together, or that puts us out of range (instead of 90 degrees - 180 degrees = 270 degrees like we’d want, we get “INVALID HUE”.) Do we now need special “HUE INPUT ONLY” inputs on modules with modulus operators integrated? That’s going too far. We want to convert chroma into something that can be patched, mixed, have logic operators, etc.
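A quick sketch of why a plain analog summer can’t do this (degrees only for readability; this is the modulus a hypothetical hue input would have to build in):

```python
def mix_hues(a_deg: float, b_deg: float) -> float:
    # The hypothetical "HUE INPUT ONLY" stage: sum, then wrap modulo 360.
    return (a_deg + b_deg) % 360.0

print(90 - 180)            # -90: "INVALID HUE" from a plain summing amp
print(mix_hues(90, -180))  # 270.0: the wrap-around result we actually want
```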

So it’s much more powerful to place those kinds of functions (Hue adjustment, Saturation adjustment) as part of a Polar-to-Cartesian waveshaper or rotation modulator, since now we can endlessly sum or cycle hue or adjust its rotary range as a part of the function (check out Navigator, it can do X8 rotations along a single 0-1V control input or rotate UV endlessly in either direction.)
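Here’s the rotation approach as a Python sketch (function names are mine, purely illustrative): hue becomes a rotation applied to the UV point, so the angle input is unbounded and “summing hues” is just composing rotations:

```python
import math

def rotate_uv(u: float, v: float, hue_deg: float) -> tuple[float, float]:
    # Hue adjustment as a 2D rotation of the UV point.
    # The angle input is unbounded: no cropped range, no modulus needed.
    a = math.radians(hue_deg)
    return (u * math.cos(a) - v * math.sin(a),
            u * math.sin(a) + v * math.cos(a))

# Rotations compose by ordinary angle addition, so chaining two
# rotators "sums hues" with the wrap-around built in:
u, v = 0.5, 0.0
u, v = rotate_uv(u, v, 90.0)
u, v = rotate_uv(u, v, -180.0)   # net 270 degrees
print(u, v, math.hypot(u, v))    # ~ (0.0, -0.5), saturation still 0.5
```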

Now, if you say “but I don’t care about mixing images properly, I want to mess with them!” Well in that case, nothing’s stopping you – combine those UV channels using any kind of analog transform (like the logic mixes on DSG3) and you will get something experimental, and likely interesting – or even demodulate UV yourself with multiplier modules – you can invent your own colorspaces or palettes. This is going to be much more organic feeling and less frustrating than a cropped angular range out of a converter. Or if you don’t like scientific recipes like HSL at all, just forget the HSL amp and invent your own transformations thru patching the basic ingredients (multipliers and summing blocks.) To put it another way, do you want your experimental chroma to be limited to a single textbook formula? Or do you want to design your own infinite angle extraction techniques from UV?

Then there’s the question of circles and squares. When we rotate hue, that is a sinusoidal transformation. It draws a circle around the center axis. Adjusting saturation changes the length of the line (the distance of the point from center). If the distance stays the same as the point moves, the saturation does not change. If the distance changes but the angle does not, saturation changes and hue does not.
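In Python terms, reading Hue and Saturation off a UV point is just a polar conversion – a sketch using plain atan2/hypot:

```python
import math

def uv_to_hue_sat(u: float, v: float) -> tuple[float, float]:
    # Polar read-out of a UV point: hue is the angle, saturation the distance.
    hue_deg = math.degrees(math.atan2(v, u)) % 360.0
    sat = math.hypot(u, v)
    return hue_deg, sat

# Move along a circle: distance constant -> saturation constant, hue changes.
print(uv_to_hue_sat(0.5, 0.0))   # (0.0, 0.5)
print(uv_to_hue_sat(0.0, 0.5))   # (90.0, 0.5)
# Move along a ray: angle constant -> hue constant, saturation changes.
print(uv_to_hue_sat(0.25, 0.0))  # (0.0, 0.25)
```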

But how do you fit that circle into the full range UV plane? Is it oversized – does the circumference of the circle hit the corner points of the square UV plane? No, we can’t do that: your hue rotations would slide unnaturally, and you would introduce cross modulation with Saturation whenever the UV positions exceeded bounds (which would defeat the point of keeping Hue and Sat on separate signals.)

So what about just putting the circle entirely inside the square? Well that’s much better, and represents what a 2D hue rotation does – but if we convert Hue/Sat in that way to a signal, aren’t we cutting off the whole range of hues near the corner points? (Yeah.)

UV, on the other hand, contains the full picture of chrominance without any discontinuities. It can be converted to RGB and back again without losing any of the colors possible in the RGB colorspace, since it’s a direct linear matrix translation. You can sum two hue angles simply by mixing the UV components.
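As a sketch of that lossless round trip (using BT.601 luma weights, with U and V as the raw B−Y and R−Y differences; real encoders add scaling factors I’m omitting here):

```python
def rgb_to_yuv(r, g, b):
    # Forward matrix: BT.601 luma weights, U/V as raw colour differences.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, b - y, r - y            # (Y, U = B-Y, V = R-Y)

def yuv_to_rgb(y, u, v):
    # Exact algebraic inverse -- nothing is lost in the round trip.
    r = y + v
    b = y + u
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

rgb = (0.8, 0.3, 0.6)
print(yuv_to_rgb(*rgb_to_yuv(*rgb)))  # (0.8, 0.3, 0.6) back, to float precision
```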

So my opinion? If you want to go “ultimate chroma” – yes, including angular displacements, the direct creation of UV thru hue waveshaping, control mixing, combination of opposing angles, etc – then the obvious answer is UV signals + 2D/Quadrature processors. From my perspective, that’s the best of all worlds. A single module which can handle RGB input, RGB output, but still give you HSL transformations in between, or create RGB directly from HSL modulation inputs? You could patch an entire system up to generate that single Hue signal that gets sent into the HSL amp’s modulator. In other words, we don’t need an Angle output to do that – we can just turn any signal into an Angle!

10 Likes

Here’s one of the first experiments I did when we made Mapper (originally Colorspace Mapper).

This patch is incredibly simple! The Luma channel of an external camera is patched directly to Hue angle, and the resulting HSL to RGB transformation is sent to the output after mixing with a single video oscillator. The camera is pointed at the display 90 degrees rotated, to create 2D bars.

Notice that every time the camera lines up with itself, the feedback goes nuts!

Throughout the video, I am doing lots of manual adjustment to Hue modulation depth and angle offset, as well as the final color encoder (in this case the original CVE, which is similar to the new ESG3.) If you encoded hue to a fixed angle instead, you would miss a lot of the tunability required to find some of this video’s best moments.

7 Likes

Thank you again Lars for your very detailed and educational posts. I hope this is not taking you away from more pressing matters.

For my part, I definitely want both precision and crazy unpredictability. But if I could only choose one, it would be precision.

Before I even asked for “HSL as a signal”, I was aware of some of the complications you mentioned. Hue being an angular value, how do we convert that to linear, how do we wrap that value continuously, how do we combine signals, etc. Maybe I underestimated the complexity of the problem as it relates to your existing standard. The modulo math could be built into the HSL input of the conversion module… but if we’re adding Hues externally, we can easily exceed the limits of the +/- 1V LZX standard.

Regarding circles and squares, it was my understanding that a vectorscope shows the entire color space, with a circle indicating the legal range. Anything outside that circle is broadcast illegal. So although one might easily generate a signal with, say, R-Y = 0.5 and B-Y = 0.5, the resulting color/point in the extreme upper right corner is out of the legal range. Maybe that’s not so important nowadays. Certainly the old standards aren’t all relevant anymore, e.g. pedestal at 7.5 IRE seems very quaint and silly. But anyway, I was assuming that “HSL as a signal” would map to the legal range of NTSC, i.e. a Saturation of 1 is coincident with the maximum legal limit, at the edge of the vectorscope circle.
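To put numbers on that corner-point example – a rough sketch, where the “legal” radius is a made-up stand-in, since real broadcast limits vary by standard and by hue:

```python
import math

LEGAL_SAT = 0.5  # hypothetical legal radius -- a stand-in, not a real spec value

def is_legal(u: float, v: float) -> bool:
    # Vectorscope-style check: inside the circle = legal.
    return math.hypot(u, v) <= LEGAL_SAT

print(is_legal(0.5, 0.0))  # True  -- on the circle
print(is_legal(0.5, 0.5))  # False -- the corner point sits at radius ~0.707
```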

In the end, I guess I have to grudgingly accept that an “HSL modulator” module is how it has to be. However, I still think it’s limiting in certain ways. It would be preferable to pipe Hue through some processing stages in the same way we can with Luma. But I get that it’s a big ask, and the constraints of the existing system design make it impractical.

It’s a shame, because although I get your point about doing anything you want on that R-Y/B-Y plane, it’s completely non-intuitive, not at all how artists see and think of color. Sure, the color wheel is a fictional construction, but it’s an extremely useful model that helps us conceptualize and get the results we want.

3 Likes

Right, and that might be the biggest point I’m trying to make. It’s better if the range mapping is user controllable at the point at which it’s actually an angular displacement, and the user can fully control depth and scale of the map based on their input sources.

Because you’re paying the cost, either way. Even an HSL converter without hue modulation/displacement and multiple modulus thresholds is a huge circuit – so in my head, a fixed converter is like having a giant synthesizer function that only loads 1 preset, with no adjustability. Or like driving a bus with no passengers. A lonely ride, even though you bought tickets for all your friends.

And user controllable doesn’t have to mean “imprecise”, and it need not exclude “fixed” either. We can use below-the-panel trimmers and detented potentiometers to get a tunable input scale at the center position. Even if you like a fixed hue angle mapping constant, you’ll be glad the knob is there for some patches.

Well I’d have to think about that. I think that has more to do with chroma bandwidths in CVBS encoding – for RGB native path I don’t know if the colorspace gets limited in some way. I’m not a mathematician! But I understand the practical constraints very well.

And from that perspective it’s to our advantage to make sure anything that is passed as a colorspace signal/triplet can convert to and from RGB losslessly…
So an important question to me for any colorspace conversion is… Can you convert to and from RGB without losing anything across the whole range? In the case of YUV, the answer is yes! I am not sure if that passes through illegal colors or not at the extremes. But there’s enough headroom in the LZX signal path to make it happen even if it does.
The same is not true for an RGB-to-angle function, though.

Another big reason to prefer UV as a transport format is that we’ve already got a whole analogue system based around manipulating 2D planes for ramp and HV waveform sources. So if we are transporting chroma as UV, all of the modules dealing with XY vectors or HV waveshapes are now Chroma processors and Chroma generators too. If we can use all of those same modules for both “chrominance” and “spatial pattern synthesis”, the functional density of the modules practically doubles. We can spend more time on the “general purpose quadrature function generators” than on separate “chroma function generator” and “pattern generator” modules.

2 Likes

I’m with you, but that’s where the HSL Amp comes into play. Think of what I’m proposing this way…

  1. Hue and Saturation are on the knobs & CV inputs.
  2. UV in and UV out are IO options.
  3. RGB in and RGB out are IO options.
3 Likes

At a more meta level, I’d also massively caution (with my day job hat on as a uni academic in art history) against trying to bend the limits of the tools to meet an assertion of ‘how artists think about colour’ as being X or Y, especially for an experimental art such as this where what makes an ‘artist’ and ‘how to think about colour’ (and hell, in this discussion, even what colour is) are wildly up for grabs. You can barely convincingly assert that about professional paid artists in one place or time.

6 Likes

Definitely! There’s a difference between “how art students are taught color theory in a painting class” (what I mean by the advantages of HSL colorspaces) and “how artists think about color” (it’s the goal of the instrument to empower, not dictate, that.)

When we’re turning the RGB knobs and finding the colors that resonate with us, we’re really destroying presumptions and designing our own colorspaces in the end. The maths just have to have a starting place that’s usable (RGB, YUV, HSL, etc.)

But there’s not really anything to debate when it comes to what color is in our medium (video.) It’s RGB or YUV signal values encoded in some way as an electrical signal. So whether or not we do something different with the signals, or interpret their relationships in different ways, we always need to reach an RGB or YUV conclusion. Otherwise it would not be video!

Every color you can generate with video already exists as a point inside this cube:
[image: the RGB color cube]

When we talk about colorspaces, it’s really a discussion of how we’re moving around inside that cube – the path we’re taking – when knobs are turned and values change. Hue selection (HSL) is unique in that you can have circular paths and rotational orbits. RGB moves the point 1 axis at a time. YUV traverses the cube diagonally on some axes.
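To make the “paths in the cube” idea concrete, here’s a sketch that traces a hue orbit at fixed luma and saturation and prints the RGB points it visits (reusing the same simplified B−Y/R−Y matrix from the earlier sketch; purely illustrative):

```python
import math

def hue_orbit(y: float, sat: float, steps: int = 4) -> None:
    # Trace a circular hue path at constant luma/saturation and report
    # where each step lands inside the RGB cube.
    for k in range(steps):
        a = 2.0 * math.pi * k / steps
        u, v = sat * math.cos(a), sat * math.sin(a)
        r = y + v                                   # same simplified
        b = y + u                                   # B-Y / R-Y matrix
        g = (y - 0.299 * r - 0.114 * b) / 0.587     # as the earlier sketch
        print(f"hue {360 * k // steps:3d} deg -> RGB ({r:.2f}, {g:.2f}, {b:.2f})")

hue_orbit(y=0.5, sat=0.2)  # four points on one circular orbit through the cube
```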

And maybe that’s a good point to bring us back to the original idea of this thread: color modulation and animation. Whether we’re talking about an image-rate process like a colorizer, or a sub-frame-rate process like an LFO animating a color channel, it’s all just paths between points in that cube. One is just happening faster than the other.

So RGB, YUV, HSL, whatever crazy matrix you made with a Matrix Mixer, etc – they’re just different modes of transit between the same list of possible RGB destinations.

8 Likes

I don’t think you understand. It’s not a “big ask,” it’s just a misunderstanding about how the math works for all this behind the scenes of graphics software. A fact that’s been hidden from you with code.

Hue is not a signal. Hue is a modifier. Every Hue slider you’ve ever adjusted, every color picker you’ve ever used, every hue value you’ve written in script or code, every hue-based blending mode you’ve applied. Even HSL encoded images. They are all modifiers describing a UV (and ultimately RGB) transformation.

Can we convert UV to a voltage representing the angle of Hue displacement? Yes. It’s still not a signal or a color channel. You’ve only converted the Hue modifier into a voltage describing its effect on the UV plane. This is just encoding the modification of a parameter as a voltage. The voltage retains its association as a “hue” only until you start trying to modify it.

Luma is a signal. Chroma/UV is a signal. Hue is not a signal. You could process the angle of hue displacement in degrees through processing stages, but what comes out the other end isn’t a hue. It only becomes “Hue” relative to the input of an HSL amp, which will always be processing or generating a UV plane.

If you want to “pipe Hue through processing stages” like you’re imagining, you can!! It’s even more powerful than you’re imagining! But it’s called “pipe UV through 2D processing stages, some of which might involve Hue and Saturation modifiers” or it’s called “use an arbitrary voltage to rotate the Hue of a UV source”. If you want rotation (Hue) or depth (Saturation) to be a part of your processing chain, you don’t try to turn those into signals – you just use Hue/Saturation modifiers to process your signal.
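A sketch of that last idea – an arbitrary voltage becoming a “Hue” only at the point where it rotates a UV pair (the LFO here is just a stand-in for any modulation source):

```python
import math

def rotate_uv(u: float, v: float, angle_deg: float) -> tuple[float, float]:
    # The "Hue modifier" in action: a 2D rotation applied to the UV signal.
    a = math.radians(angle_deg)
    return (u * math.cos(a) - v * math.sin(a),
            u * math.sin(a) + v * math.cos(a))

def lfo(t: float) -> float:
    # Stand-in for any arbitrary modulation voltage.
    return math.sin(2.0 * math.pi * t)

u, v = 0.3, 0.1  # the actual signal is the UV chroma pair, not the angle
for t in (0.0, 0.25, 0.5):
    # The LFO output is "just a voltage" until this call interprets
    # it as an angular displacement of the UV plane:
    print(rotate_uv(u, v, 180.0 * lfo(t)))
```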

To make an analogy, X and Y coordinates are signals. But X Scale and Y Scale are not. Those are XY modifiers. You can have a number that represents a scaling operation, but that’s entirely relative to the XY coordinates you’re going to be scaling. The concept of Scale does not exist without XY, the same way the concept of Hue does not exist without UV.

4 Likes

And for anyone tired of this thread, here ya go…

2 Likes

OK, Lars, correct me if I’m wrong, but we’re still not able to separate Hue and Saturation using generic XY or HV processors. You need to convert the cartesian UV coordinates to polar vectors. The “Mapper 2.0” module would do that, obviously, but nothing else would. Any transformation of U or V is going to affect both Hue and Saturation in non-intuitive ways.

And Gavin, I’m also a university level teacher/trainer, in 3D graphics. As I said earlier, the HSL model is a convenient fiction. But it’s ubiquitous, because it’s super useful. I can guarantee you, if you conduct a poll of artists, the number of them who understand how R-Y / B-Y color theory works will be close to zero. I’ve been doing video and computer graphics for 30 years, and I barely understand it.

Tools exist to meet the needs of users, not the other way around. Let’s not make rationalizations for why users should contort their brains, likely unsuccessfully, to accommodate the limitations of tools.

At the end of the day, any comprehensive attempt to modulate color (beyond raw RGB values) that does not incorporate the HSL model on some level is going to create barriers to entry for the overwhelming majority of users. We can argue that until we turn blue (or red, or green) but that won’t change the facts of how all artists have been trained and how nearly all color modulation is performed in hardware and software, dating back to the introduction of color TV.

2 Likes

HAHA! Great title Lars.

When I was watching the Johnny Woods video on SMX3, I realized that this was possible and wondered why he didn’t demonstrate it. SMX3 is a complete color mixer. Add or subtract any color component from any other color component. Frackin’ genius.

4 Likes

Sure we can!

A Saturation Processor is a Dual VCA. FKG3 is a Triple Fader, which is even more than we need to turn it into not just a standalone Saturation proc, but one with some very interesting side effects brought on by its RGB logic modes. And it can also be a UV/Chroma crossfader/mixer/keyer! The RGB out jacks just mean “unmodified.” In other words, if you patch YUV in, you get YUV (not RGB) out. And the RGB key modes become YUV key modes instead.

And a Hue processor is the same thing as an XY Rotator (like Navigator) or Polar To Cartesian Converter (like Mapper). UV in, rotate, UV out, and you change the hue without modifying saturation. The ART3 Affine Morph is perhaps even more crazy in this case, we could use it to multiply UV channels from different sources, or apply a trapezoidal displacement to the entire chroma space, or cycle the hue with a quadrature VCO.
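For instance, “cycle the hue with a quadrature VCO” is just a four-quadrant multiply arrangement – a sketch (module-free, names mine):

```python
import math

def quadrature_vco(t: float, hz: float) -> tuple[float, float]:
    # A quadrature oscillator: cosine and sine outputs, 90 degrees apart.
    a = 2.0 * math.pi * hz * t
    return math.cos(a), math.sin(a)

def rotate_by_quadrature(u, v, c, s):
    # Four-quadrant multiplies summed as a rotation (a complex multiply):
    # the quadrature pair spins the UV point, cycling hue endlessly.
    return u * c - v * s, u * s + v * c

u, v = 0.4, 0.0
for t in (0.0, 0.25, 0.5, 0.75):        # quarter-cycle steps at 1 Hz
    c, s = quadrature_vco(t, 1.0)
    ru, rv = rotate_by_quadrature(u, v, c, s)
    print(f"({ru:+.2f}, {rv:+.2f})  sat = {math.hypot(ru, rv):.2f}")  # sat stays 0.4
```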

All you need to use those modules in those ways is RGB/YUV conversion. That can be programmed manually with SMX3s. So really we could stop there and it’s already within reach.

You can think of the HSL Amp as a combination of all of the above: RGB/YUV conversion + rotator + polar-to-cartesian converter + a dual VCA + YUV/RGB conversion all under the same hood. If you want to think of doing more complex stuff, you get multiples – the complex things you’re doing are parts of the patch between a series chain of HSL amps, for example. It’s exactly what you’re asking for (a comprehensive approach to the HSL colorspace for LZX modular); just not in the form you’re imagining (that Hue/Saturation can be signals on cables.)

Tools exist to meet the needs of users, not the other way around. Let’s not make rationalizations for why users should contort their brains, likely unsuccessfully, to accommodate the limitations of tools.

Eh, we’re just talking about physics here. Rationalization doesn’t enter into it – it’s just the shape of the world before us. With an analogue system we are building a graphics workflow from the bottom up (the user has control over the signal from its raw state and every function in between). So taking graphics concepts like Hue modifiers and applying them downward, without acknowledging their underlying abstractions, would just take us to “fuzzy overpriced Photoshop” rather than an analogue video synthesizer. Those abstractions are actual circuits that must exist physically on a board. Part of the point of all this is to not try to abstract things away. That’s what ultimately reveals some new technique that we’ve lost somewhere along the way in the development history of image editing and software GUIs.

5 Likes

I would love to do that! And/or create some YouTube training videos. But I’m not ready yet.

First of all, I have exactly zero LZX modules in my possession at this time. I’ve got a few thousand dollars of LZX gear on pre-order! :grinning:

Secondly, I feel like I have a lot to learn before presuming to be able to teach anyone else. Take a look at this thread, and others on this forum… it’s all about Lars schooling me! And I’m grateful!!

My rates for graphics training are kind of high. But, A) a webinar would distribute the cost among multiple learners, and B) I am doing this for LOVE and GOODWILL rather than profit, so I feel that steep discounts are in order.

Maybe in the spring or summer, once I’ve had time to work with the tools, we can talk about training.

Thanks for your interest!

2 Likes

OK, but isn’t that an expensive way to do it? I’m thinking a straight-up video rate VCA would be more efficient. I.e. I don’t want to tie up a high level module to do a low level job like modulate saturation.

Great! So we daisy chain U and V signals through these modules, and Bob’s your uncle? Doesn’t seem like there’s much of an issue after all! Except rack space and cash, of course.

That brings up a question. It looks like neither TBC2 nor ESG3 can send/receive YUV signals in LZX format. TBC2 accepts YsUV and provides Y & RGB outputs. ESG3 accepts RGB inputs and provides YsUV outputs. Is it at all possible for a firmware update to add YUV I/O functionality? At least to TBC2? It would be sweet to, once again, not tie up high level, sexy modules like SMX3. I mean, we’re already sending YsUV into TBC2. Can we just not convert it to RGB? Likewise, can we make ESG3 just not convert to YsUV, and merely add sync to an incoming LZX YUV signal???

I respect your design philosophy, but I don’t necessarily agree. The development history of any medium does tend to impose limits on our thinking and processes. However, that same history lets us leverage existing knowledge. Stand on the shoulders of giants, that sort of deal. Ideally, a good design finds the “middle path” between convention and innovation.

By the way, I thought about the fact that, since broadcast video U and V don’t have sync on them in the first place, we could just plug them directly into the system. But then I remembered that the voltage levels aren’t the same. :frowning_face:

I would love to do that! And/or create some YouTube training videos. But I’m not ready yet.

Personally I would enjoy any educational content even just from the angle of what you’d normally consider “advanced graphics and animation for video professionals”, outside of analog tools at all. I am familiar with many software tools you teach that I still have no real practical experience with. But I can also subscribe to get access to some of your courses already. I miss Summer School at the Community TV station.

OK, but isn’t that an expensive way to do it? I’m thinking a straight-up video rate VCA would be more efficient. I.e. I don’t want to tie up a high level module to do a low level job like modulate saturation.

I look at the cost this way – it almost always works out less expensive to have a single module that fits multiple roles well than to have multiple modules that each fit a single role well. So with modules like SMX3 and FKG3, for example, it’s not that you permanently relegate them to “lesser duties” – it’s more that, contextually, every time you patch, you can use them that way without having to buy another module.

To put it another way: The lowest cost system is the one where you can use every module in every patch!

So practical/realistic system example – you’re a video artist who vibes with HSL colorspace. You’ve got an HSL amp and a couple of FKG3s. The HSL amp is your primary Hue/Sat modifier, your system’s “color picker”. But one day, you have a patch where you really want to pre-process Saturation to apply some animated beating, making your chroma hit a peak spot intermittently in a way that makes your feedback look AWESOME. These are the cases where you’re glad for that extra degree of functional versatility that FKG3 can provide, because, thinking outside the box, that module can do so many things. It’s designed that way – “KEYER” is just a prompt, a “step 1”.

Likewise, can we make ESG3 just not convert to YsUV, and merely add sync to an incoming LZX YUV signal???

That’s a bit tricky. Because yes, you can get RGsB out or YPbPr out from ESG3. But both presume an RGB input. If you put in YUV to ESG3, you will get a viewable signal, but the bottom half of the UV signals (the negative portion) will get clipped off by the Blue/Red clipping circuits.

I agree, this would be nice. For TBC2, YUV output is definitely something we can consider. In fact we should be able to do that at zero processing cost in the FPGA by using only the CSC already built into the TBC2’s output encoders (ADV7393.) We could also have an SMX3 style “programmable CSC matrix” as a software function. (The TBC2 procamps right now are on the frontend, but the outputs have a CSC matrix too.)

Like I mentioned up thread, unlike polar-to-cartesian and rotation circuits (big, expensive circuits), a fixed RGB-to-YUV and YUV-to-RGB translation is actually quite cheap. Six opamps (input and output buffers) and some passive resistors are all you need. So on the “low cost utility” angle, bidirectional YUV/RGB conversion is, as it were, a “small ask.” If we can keep things 12HP, it makes everything easier to achieve. So that’s why I was thinking a low cost color utilities module might be worth some further thought. Or a “multi utilities” the way DSG3 is for shape (switch for colorspace converter select, etc.) These kinds of modules are best thought out after we get lots of feedback from patching the first set of Gen3 modules, so there’s lots of room to discuss it.
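Concretely, that fixed translation is just one 3x3 multiply each way – and the “programmable CSC” idea above is the same thing with adjustable coefficients. A sketch with BT.601-style weights, U/V left as raw B−Y/R−Y (scaling omitted, inverse coefficients rounded):

```python
def apply_csc(matrix, triplet):
    # One programmable 3x3 colour-space matrix covers RGB->YUV,
    # YUV->RGB, or any user-invented space.
    return tuple(sum(m * x for m, x in zip(row, triplet)) for row in matrix)

RGB_TO_YUV = [[ 0.299,  0.587,  0.114],   # Y
              [-0.299, -0.587,  0.886],   # U = B - Y
              [ 0.701, -0.587, -0.114]]   # V = R - Y

YUV_TO_RGB = [[1.0,  0.000,  1.000],      # R = Y + V
              [1.0, -0.194, -0.509],      # G (coefficients rounded)
              [1.0,  1.000,  0.000]]      # B = Y + U

print(apply_csc(YUV_TO_RGB, apply_csc(RGB_TO_YUV, (0.8, 0.3, 0.6))))
# ~ (0.8, 0.3, 0.6): a fixed resistor network each way is enough
```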

2 Likes

That’s a great point. But that’s where SMX3 comes in, and exactly why all those knobs go to 2X instead of just 1X gain. So broadcast spec? DC black level perfect? Maybe not… but you can break those rules. The gain doesn’t have to be perfect. Part of what’s great about the format is that sync is going to be stable on the output no matter what you’re feeding into the signal path upstream. All the first tests of video input to the system were done this way – component video directly into the matrix module.

2 Likes

Another point on 75 Ohm video transmission direct to 3.5mm jacks: you’ll already be coming in hot with PbPr. Typically UV are +/-357mV post-termination, but before termination they are +/-714mV. If you go direct to a 3.5mm jack, your input impedance is 100K, not 75R, so you’re not really going to get the typical halving effect. The transmission’s signal integrity may not be ideal, but it will work.
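The arithmetic behind that, as a sketch (ideal source, ignoring cable effects):

```python
def divider(v_source: float, r_source: float, r_load: float) -> float:
    # Simple voltage divider: what the receiving input actually sees.
    return v_source * r_load / (r_source + r_load)

v_unterminated = 0.714  # +/-714 mV PbPr swing before termination
print(divider(v_unterminated, 75, 75))       # ~0.357 V: the classic halving
print(divider(v_unterminated, 75, 100_000))  # ~0.713 V: 100K barely loads it
```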

2 Likes