Animating and modulating colours

We have a very specific philosophy about this, it’s true. The initial premise of this project is that we wanted to pick up the torch of analogue image processing from the point in history when it was put down and abandoned for computer-graphics-based technologies. So it is intentionally an exercise in anachronism (what were they working on, with analog, before the computers took over?), in treasure hunting (where else could they have gone with it, that computers did not go?), and in detail seeking (what did we miss? were there ideas that were only half realized?).

And in that sense, we’re here specifically to find the counter-approaches, the ones that may come from a perspective more rooted in the hardware of the analogue circuitry itself. We want to be informed by the process constraints of analogue computer implementations, and we want the UI to be informed by those constraints as well. For example, we’d much rather explore completely alternative ways to approach chroma processing (like vector processing of UV quadrature signals) than adopt the way graphics software already approaches the problem.

7 Likes

Well, you didn’t stick with that premise with Memory Palace, did you? I mean, it seems like an evolution of the Fairlight CVI. So you’re open to digital gadgets. For me, the question is not so much how exactly the inner guts work. Analog and digital are just tools, and you pick the best tool for the job. The main concern for me is the FLOW of the working PROCESS. 3D animation, for example, is a generally frustrating process due to high complexity combined with low interactivity. It’s not at all like what I envision the perfect video synth experience to be… playing a real-time instrument that has immense flexibility and responsiveness. Yes, you trade off the complexity; you’re going into a simpler realm. So from that point of view, sure, I’m down with the anachronism concept. But you know, a Eurorack module does not have to be analog to be useful, powerful, exceptional.

There you go again, using my super advanced rocketship module to deliver the mail. There are some voltage conversion options out there, I have to do more research, figure out which ones are fast enough. The only fly in the ointment with this plan: the sync-stripped Y channel coming out of TBC2 is going to be delayed relative to U and V. So they would have to go through video delay lines, which it turns out are hella expensive. Or else just live with the effect of the chroma coming in one frame early.

Yes please!!

On the output side, I’d be delighted with a simple voltage conversion and sync insert on Y. Not within your parameters of 12HP/$400. But maybe someone could come up with one. Or maybe you can do a future version of a high level encoder like ESG3 that can handle YUV input.

The prospect of an all YUV workflow is sounding more attractive the more you tell me.

The design concept for the Orion series was similar, but displaced in time. It follows the same ethos of anachronism and treasure hunting. The Fairlight CVI is maybe the best example of what we look for: unique ideas, brilliantly implemented, that were reaching for an evolution of format… but never got a sequel (or maybe the sequel could have been something else, something interesting, but a different path was taken). So the prompt mainly just shifted timelines, from the late 60s thru early 80s to the early 80s thru late 80s (the pre-Video Toaster era): “What if video never became a part of the personal computer, and mid-80s concepts of hardware DVEs and attempts at ‘video instruments’ like the CVI had follow-up instruments?”

None of it has anything to do with analog vs digital, really. Video itself is a hybrid signal, and some of the very first video art used digital techniques and lo-bit computers (Video Weaver, for example). For us it’s more about the idea of revisiting a period in history, observing the state of video technology at that point – looking at the state of things now – and trying to find out what exists in the gap, if there is one. The thesis is based on the idea that technology moved so fast through these years that we must have left some great ideas behind.

In another 10 years, maybe we’ll have wrapped up some of these ideas and be moving on to a different “WHAT IF…”, this time focused on the early evolution of PC graphics cards and the game console wars. I mean, 1990s graphics software has some of the weirdest ideas I’ve ever seen!! I know there are plenty of lost ideas there.

Graphics software evolved to use color pickers and HSL controls pretty much exclusively – especially if you’re writing software code, manipulating colors thru animation, etc., HSL is it! It’s something software excels at. “100x hue sliders with different easing curves and start/stop times” is the kind of scripting trick that got me jobs doing Flash animations in the early 2000s.

But for hardware circuits and patching, and dense, exciting creative systems, the UV quadrature approach seems to me to be the natural answer. It backs off the stuff the computer is best at (HSL and billions of parameters) and doubles down on what modular analog systems are best at (fully interpretive signal paths and instantaneous results, very complex transformations represented as two patch cables instead of a long custom script, etc.).

Analog approaches don’t have to be simple – it would take one hell of a computer to simulate in real time what even a small-sized LZX system is doing (from a CPU’s perspective, every single 3.5mm IO jack in your system is an HD rendering thread!) But analogue approaches do need to be simple to interact with, and digestible enough to reinterpret creatively.

I appreciate this conversation, thank you.

3 Likes

I am really thinking it may be the way to go for many artists. I am into “saturate the heck out of it all, RGB madness, primary colors! palette morphs, rainbow patternmaking psychedelia,” etc., which is more RGB focused. The YUV space feels more subtle in comparison, speaking from how it feels “to put your hands in it.” Having Luma on its own channel prompts you to do a value study of your composition before any color is added. And that separation makes it easier to have color hit only peak spots or intermittent moments with the chroma components.

Also, while UV components do feel a bit awkward for picking a specific color on 2 knobs, what they really excel at is “finding interesting and beautiful gradients.” In a video synth, the latter is often what you’re looking for, more than trying to find emerald green like you would with a color picker tool.

1 Like

Absolutely. Any artist begins in the middle of those positions you describe. How do performance artists think about colour? After all, they’re one of the dominant artist groups who have shaped video art. I don’t think they have an HSV chart in one hand to consult. Any more than painters who once had a class in colour theory do, as you say.

That middle contains tools, too, of course, which have always determined the theory (whether we’re talking paint pigments or CRT screens). It’s great that in designing the tools, you have this understanding and sensitivity to the variety of current art practices outside the dominance of computer graphics (and the blindness and assumptions that can cause) and the actual material history of video art, where the line between tools and art is not a clear technical division at all. The tools were the art. It would be completely incorrect to try to understand, e.g., the Dan Sandin modules some of your modules are based on in terms of Sandin having some kind of consumer/end-user mindset!

During lockdown, I couldn’t get my MA history students into the V&A to open up the computer art archive like I usually would, so I just did a Zoom lecture on early video art organised by techniques and tools, and piped through my Expedition modules with appropriate patches for each artist (Castles for Steve Beck, getting out the oscilloscope for Steve Rutt and Bill Etra). It was surprisingly instructive to do!

Maybe you need to put some fun switches on this module for truly outmoded paths around that colour cube: Cennini, Ladd-Franklin…

3 Likes

The pleasure is all mine, it’s a very stimulating, educational process for me. If I’m able to contribute something to your design process, I’m honored.

WARNING: nerd alert! Regarding analog and digital, rendering threads etc. – and this is veering off topic – on the computer, the bottleneck isn’t image processing. It hasn’t been for a while. Especially with the last few generations of GPUs, we’ve actually far outstripped Moore’s Law. Manufacturers just keep packing more and more cores onto these chips; now it’s in the thousands. (I just looked it up: an RTX 3080 card has 28 billion transistors.) The bottleneck on the PC is storage. Not necessarily storage space, but definitely bandwidth. Your choices are A) compress the hell out of your footage, B) spend a fortune on a RAID, or C) be content with just one or two channels of simultaneous disk I/O. This is going to change with faster PCI bus standards, wider adoption of Thunderbolt, and ultimately even faster forms of SSD tech. But anyway, if you’re just talking about “take this moving image and do something to it in real time”, today’s PCs are quite adequate for that task. I mean, look at all the killer video stuff people are doing with Raspberry Pi, which is a very low end platform from the point of view of the type of work I do.

4 Likes

Exactly – memory bandwidth is the limiting factor. That’s why FPGA-based platforms are so appealing: you can have a dedicated custom “GPU” inside the FPGA maximizing what can be done with the embedded system’s bandwidths in a fixed environment. But even something like Memory Palace or TBC2 maxes out its memory bandwidth at 7-8 concurrent video streams (analogue doesn’t even blink, even well beyond that number).

When I say rendering, I’m of course excluding hardware acceleration or GPUs – and I’m presuming a CPU is treating the stream the way an analogue computer does: as a real-time constant value. I’m also presuming that something is being rendered pixel-per-pixel – modern GPUs accelerate this in all kinds of ways, but they also abstract a lot of behind-the-scenes access to textures and memory. I love letting modern GPUs do their thing – I’m not trying to compete with that.

But as all of us in this thread have agreed, tools inform the output! And art created on a foundation of technical abstractions is going to inform the range of work created with it – even if, on paper, those abstractions are more powerful. You can see the recent trend in gaming towards pixel-art rendering as evidence that even the game industry is recognizing that sometimes you need to get into the point-by-point details to achieve certain artistic aims.

To put it another way, there’s more than one way to make a GPU! The type that dominates PC/console gaming and mobile architectures is highly optimized for those purposes. There are a lot of things they don’t do well. And it’s not that those are “better” things. They are just “interesting” things. And I’m interested!!

4 Likes

I was actually talking about storage I/O bandwidth, but yeah. Memory bandwidth too. Analog doesn’t suffer the latency of things like memory caching, which is great. But on the other hand, if you don’t have any money, you don’t have to pay taxes. :wink:

On the subject of analog computing, this is the area I would like to see developed. ’90s-era graphics oddities are of limited interest to me. I admire the kind of work Joost Rekveld is doing, resurrecting and expanding old analog computers for abstract vector graphics. There’s a deep connection between analog computers and video synths. And yet video synths never developed any higher brain functions. You’re lucky if you get logic gates and basic arithmetic. I would love to see proper patch-programmable video computers… trig functions, imaginary numbers, linear algebra matrices, the works. My math skills are crap, but the availability of something like that would really light a fire under me to do the learning required.

1 Like

Absolutely! You can think of the LZX modular as an analogue computer for RGB graphics at its heart. SMX3 is a linear algebra matrix – the Polar-To-Cartesian and Affine transform circuits we’ve talked about in this thread a bit are powerful analogue implementations of DSP functions used in graphics and GPUs. But there are still many functions and implementations to explore.
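
To make that concrete (a minimal Python sketch, purely illustrative – the function name and parameters are mine, not an LZX API): an affine transform block is just a 2x2 matrix of gains plus an offset applied to the X/Y ramp signals of the raster.

```python
import numpy as np

def affine_2d(x, y, a, b, c, d, tx, ty):
    """Hypothetical helper: a 2x2 gain matrix plus offsets.
    In circuit terms this is four multipliers and two adders
    acting on the horizontal/vertical ramp voltages."""
    return a * x + b * y + tx, c * x + d * y + ty

# Rotate the raster ramps by 30 degrees, with no translation.
theta = np.radians(30)
x, y = np.meshgrid(np.linspace(-1, 1, 720), np.linspace(-1, 1, 480))
xr, yr = affine_2d(x, y, np.cos(theta), -np.sin(theta),
                   np.sin(theta), np.cos(theta), 0.0, 0.0)
```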

A big practical advantage of the video synth over a traditional analogue computer or even an audio synthesizer is that it doesn’t have to be as precise or in “tune” as a computing environment for numbers or music – we can adjust it by feel and find what we want to see. And that makes all the difference in the world, as it’s what puts all of this into the realm of possibility for an individual artist (it’s not cheap of course, but ~$4,000 vs. ~$40,000 is a huge difference when it comes to buying an instrument).

This isn’t nearly capable of processing video, but this project looks all kinds of fun!

2 Likes

“THAT” looks pretty cool. It could potentially be useful for modulation. But again, we need a bank of voltage scalers… it’s +/- 10V.

SMX3 may be a 3x3 matrix by some definition, but if I’m not mistaken there’s no mechanism for performing matrix math. I mean, there are no outputs for individual values, so I don’t see how we could, e.g. multiply two matrices. But I am out of my depth here.

But back to the topic… color. Have you thought about a Y/C patch paradigm? Chroma IS a signal, where saturation is represented by voltage level and hue is represented by phase. I’m thinking that any module with a sync input that can recognize color burst should be able to process saturation and leave hue alone. To modulate hue, there would need to be some mechanism to offset phase relative to the color burst reference. Not sure how that works technically, or if it’s feasible to tweak phase without using delay (different standards use different burst frequencies).

I know this harebrained scheme would result in lower color resolution than UV. But I’m just throwing out ideas. Still clinging to my HSL assumptions. But, to paraphrase Pascal, nature is only first habit.

1 Like

One value is set by the fixed -2V to +2V voltage of the knob position, and the other value is set by what’s patched into the input. It is a fixed matrix, just like a colorspace converter in graphics/DSP terms (which has 9 programmed multiplier coefficients). A version with 18 jacks instead of 9 jacks + 9 knobs is a potential variation on SMX3, and I can imagine that being part of our range of modules. But either way, it’s a 3x3 matrix transform – whether the inputs are from knobs or jacks is a question of the user control implementation. (It’s the same circuit, either way! SMX3 is a fully electronic gain-controlled instrument; there is no passive control or attenuation of the input signals.)
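
To show what those 9 coefficients do (a minimal Python sketch – the standard BT.601 coefficients below are just one example of a fixed matrix, not a claim about SMX3’s internal values):

```python
import numpy as np

# Nine fixed multiplier coefficients, BT.601 RGB -> YPbPr:
# the same shape of operation a 3x3 matrix module performs,
# whether the gains come from knobs or from patched voltages.
M = np.array([
    [ 0.299,     0.587,     0.114    ],  # Y
    [-0.168736, -0.331264,  0.5      ],  # Pb (U-like)
    [ 0.5,      -0.418688, -0.081312 ],  # Pr (V-like)
])

rgb = np.array([1.0, 0.0, 0.0])  # pure red
y, u, v = M @ rgb
print(y, u, v)  # 0.299 -0.168736 0.5
```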

But back to the topic… color. Have you thought about a Y/C patch paradigm? Chroma IS a signal, where saturation is represented by voltage level and hue represented by phase.

Sure! But that is a composite video processor – and that opens up a huge can of worms, design-complexity-wise (and it would limit us to NTSC/PAL only). On the other hand, we can treat YUV and RGB similarly, with the same signal paths and modules.

There’s nothing preventing that by the spec though! It’s just that composite video is passed on RCA connectors and not part of the 1V signal path that RGB/YUV can be.

There’s no such thing as “HD chroma subcarrier”, at least not as far as video standards go. It could be possible, but not practical.

In some cases “YC” video in terms of digital encoding is referred to as Luma/Chroma, like the BT.656 spec with YUV 4:2:2 encoding. But the “C” is actually UV in those cases: the “C” samples alternate between U and V.

The main point is that RGB and YUV are linear voltages. Chroma-modulated subcarrier is not – the hue value is encoded as a phase, and the saturation is encoded as amplitude. Even though the signal is passed as a voltage, neither hue nor saturation can be identified from the value of that voltage at a point in time (you need subcarrier demodulation, aka chroma-to-UV decoding, to do that).
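
A toy Python sketch of what that decoding involves (the sample rate and the crude averaging low-pass are arbitrary choices of mine; only the multiply-by-quadrature-references step is the actual principle):

```python
import numpy as np

fs = 13.5e6            # sample rate (arbitrary for this demo)
fsc = 4.43361875e6     # PAL subcarrier frequency, for example
t = np.arange(0, 20e-6, 1 / fs)
w = 2 * np.pi * fsc

# Encode: chroma(t) = U*sin(wt) + V*cos(wt) -- hue is the phase
# of the combined wave, saturation is its amplitude.
U, V = 0.2, -0.3
chroma = U * np.sin(w * t) + V * np.cos(w * t)

# Decode: multiply by the quadrature references, then average
# (a crude low-pass) to strip the 2*fsc terms.
u_rec = np.mean(2 * chroma * np.sin(w * t))
v_rec = np.mean(2 * chroma * np.cos(w * t))
print(round(u_rec, 3), round(v_rec, 3))  # ~0.2, ~-0.3
```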

2 Likes

I was thinking on this more, and I think stereo audio is a good analogy for UV as Chroma. And a PAN control is a good analogy for hue or saturation controls over UV.

For example, most audio mixer strips don’t have separate Left/Right gain controls. They use a single “PAN” control – this single control is affecting a dual-channel (Left/Right) signal path, even though it is just one knob. That’s directly analogous to a Hue/Sat control in the UV proc/mixer context.

To make another audio analogy, a mastering processor or the audio mixer’s output section isn’t going to have pan controls. It’s going to have the Left/Right gain controls separately. This is analogous to “mastering” your video path by calibrating UV while looking at a vectorscope, etc.
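
Here’s the analogy in a minimal Python sketch (the function names and the constant-power pan law are my choices, just for illustration): one pan knob drives two gains, and one hue knob drives a rotation of the two chroma channels.

```python
import numpy as np

def pan(mono, position):
    """One knob, two channels: constant-power pan law.
    position in [-1, 1]: -1 = hard left, +1 = hard right."""
    theta = (position + 1) * np.pi / 4
    return mono * np.cos(theta), mono * np.sin(theta)

def hue_sat(u, v, hue_shift, sat):
    """One hue knob rotates the U/V pair; one saturation knob
    scales both channels, like a stereo master gain."""
    c, s = np.cos(hue_shift), np.sin(hue_shift)
    return sat * (c * u - s * v), sat * (s * u + c * v)

left, right = pan(1.0, 0.5)                  # mono source, panned right of center
u2, v2 = hue_sat(0.2, -0.3, np.pi / 3, 0.8)  # hue +60 degrees, saturation trimmed
```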

3 Likes

OK, then forget it! Bad idea, but worth asking. I guess. :grinning:

1 Like

Could you possibly add basic definitions for Polar and Cartesian in analogue video use to your vocab list? I grasp them, but barely…

3 Likes

A version with 18 jacks instead of 9 jacks + 9 knobs is a potential variation on SMX3, and I can imagine that being part of our range of modules.

Oh please make it so! All I’ve ever wanted was a VBM with jacks instead of pots :smiling_face_with_three_hearts:

1 Like

Definitely! It’s easier for me, at least, to understand it visually. Here’s an image I borrowed from a CNC website that illustrates it plainly and simply:

[image: the same point plotted in Cartesian (X/Y) and Polar (angle/distance) coordinates]

In the above image, both “Cartesian” and “Polar” are identifying the same point on the graph; they are just recording it in different ways. One with XY values (Cartesian coordinates) and one with Angle/Distance (Polar coordinates).

Now let’s replace “XY” with “UV” and we have different words for the same thing:
Angle = Hue
Distance = Saturation

To relate this to our discussion of chroma, review the “Chroma plane” posted earlier.
Now let’s define a chroma value called “Lavender” in Cartesian coordinates (UV).

Now let’s define the same chroma value “Lavender” in Polar coordinates (Hue/Saturation).

So a “Polar To Cartesian Waveshaper” takes inputs representing Angle (Hue) and Distance (Saturation) and converts them into UV values.

In electronics function blocks, a polar-to-cartesian transform involves…
n * Sawtooth To Triangle waveshapers, where n = ((total span in degrees / 360) + 1) * 2
2 * Triangle To Sine waveshapers
2 * Linear VCAs

This is such a complex circuit that it’s been denoted a “complex function block,” and is therefore worth having its own user-facing implementation (as opposed to just expecting the user to patch it across multiple raw functions). And that’s the point of modules like Mapper (or the HSL amp discussed in this thread).
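
In math terms, the whole chain reduces to generating the sine and cosine of the hue angle and scaling each by saturation (the two linear VCAs). For a full 0-360° span, the formula above gives n = ((360/360) + 1) * 2 = 4 sawtooth-to-triangle shapers. A minimal Python sketch (the axis convention of cos for U and sin for V is my assumption; the real assignment depends on the colorspace):

```python
import numpy as np

def polar_to_cartesian(hue, sat):
    """Hue (radians) and saturation in -> U/V out. The waveshaper
    chain above approximates the sin/cos pair; the two linear
    VCAs are the 'sat *' multiplications."""
    return sat * np.cos(hue), sat * np.sin(hue)

# Sweep hue with saturation held constant: the U/V pair traces
# a circle on the vectorscope (a palette morph).
for h in np.linspace(0, 2 * np.pi, 8, endpoint=False):
    u, v = polar_to_cartesian(h, 0.3)
    print(f"hue {np.degrees(h):5.1f} deg -> U {u:+.3f}, V {v:+.3f}")
```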

8 Likes

I hope you don’t feel like I’m shutting down the brainstorm – cataloging all the possibilities our minds could conjure and looking at the intersections is how we developed all this in the first place! And of course, the answers we ended up with aren’t the only approach. But I really like being able to describe how we ended up where we did, and why, and where there’s still a lot left to consider. 50 years from now, maybe there will be another LZX, and they can read these words and take up where we left off. Or try something totally different and apply these early-21st-century concepts to their holo-projectors and retinal implants.

We’ve talked a lot about how, if you have an FPGA-powered scaler/encoder and scaler/decoder, “what’s in between” doesn’t even have to be a feasible video signal at all. Chromagnon is experimenting with that a lot (non-standard decimated rasters for laser output, etc.).

5 Likes

I’m into it. These first 5 modules have to be super dense and functional as a core, mainly for the practical reason that we really want < $2000 video synths to be a reality. So 3-5 modules have to complete the picture of a full instrument, and there have to be several possible instruments you could make in that price range and size. After that is in place, the following designs don’t have the same constraints!

I want more modules that are “jacks only” in general, though. And some (cheap!) modules that are just “voltage generators” to go with them.

8 Likes

I can imagine a logic module being jacks only. Like a combo of elements from the old Visionary binary modules, but in the Gen3 format.

2 Likes

Absolutely not. You’re just telling me what’s feasible.

Pish posh. LZX is going to outlive all of us!

Regarding control voltage over pots on modules such as SMX3: this is high on my wish list. I’ve got three Bahascillators sitting here crying out to send tri-phase waves into a color component mixer. Is it reasonable to consider a hardware revision of existing modules to add CV expansion headers? Then all one would need is a ribbon cable and some jacks.

3 Likes