How Many Oscillators Do I Need? OR Adventures in Non-RGB Patch flow NOW WITH ANSWER!

I’m going to post some personal thoughts about Video Rate Oscillators here and share some ways that I use them. Purely personal experience and preference.


How many video rate oscillators do you need? What do you want to do? Once you graduate from ramps, oscillators are a powerful source of shape, motion and color creation. The answer for me is ‘as many as possible!’

I have a pretty heavy schedule but I do want to provide a few examples to illustrate my belief. More as soon as I’m back in the studio in the coming weeks.


Here’s a start and a challenge. A non-RGB workflow on the Cortex: R:C:C:R, i.e. RED - COLORIZE - C(omposite) - RED. Patch up some things and see what you get. Two VWGs modulate three PRs (the two main PRs go into Video Soup), plus some Arches, Curtain, Staircase and a lot of crossfaders to add texture and depth:


Hey Bill! Would you mind sharing a few more details about the signal chain? Particularly the R:C:C:R portion. Thank you WIATROB senpai!


I’m imagining that means patching solely into the red inputs on channels A and B while also patching into the colorizer and composite inputs on the Cortex


this is wonderful work and an inspiring challenge
I’ve gotten rid of some oscillators recently (one of the VWGs in your patch I suspect :slight_smile:) because I needed to make room and already have a tiny studio space

one of my favorite little tidbits is turning an extra filter (Curtain) into an oscillator
also if you modulate that self-oscillating Curtain you can get anything from star fields to stable bars
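A digital stand-in for that trick, since it’s easier to show than to describe: a two-pole resonator is the textbook model of a resonant filter, and if you push its pole radius to 1.0 it stops filtering and just rings forever, i.e. the filter becomes an oscillator. This is a rough sketch of the idea, not how Curtain is actually built; the function name and parameters are mine.

```python
import math

def curtain_osc(n=200, freq=0.05, r=1.0, mod=0.0):
    """A two-pole resonator ("filter") with its pole radius r pushed
    to 1.0, the edge of stability, so it rings forever: a filter
    turned into an oscillator.  mod sweeps the center frequency per
    sample, standing in (loosely) for modulating the self-oscillating
    Curtain.  Returns n output samples."""
    y1 = y2 = 0.0
    out = []
    for i in range(n):
        # swept center frequency (cycles/sample)
        f = freq * (1.0 + mod * math.sin(2 * math.pi * i / n))
        # a single impulse kicks the resonator into oscillation
        x = 1.0 if i == 0 else 0.0
        y = x + 2.0 * r * math.cos(2.0 * math.pi * f) * y1 - r * r * y2
        y2, y1 = y1, y
        out.append(y)
    return out

ringing = curtain_osc()            # r = 1.0: sustained oscillation
damped = curtain_osc(r=0.9)        # r < 1.0: an ordinary decaying filter ring
```

With `mod` nonzero the oscillation frequency wanders over the run, which is (very roughly) where the star-fields-to-stable-bars range comes from.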


Correct. Excuse the dust grit.


@7pip See @wednesdayayay comment. I’m not so patient with patch details, but I will add some techniques in the thread before I pull the patch…


The answer to “how many oscillators do I need?”, no matter what your workflow (shape generation, video processing, noise, castles, etc.), is:

4 Minimum.

Moar is better, 12 is best, but four is great. :exploding_head:

BTW, I didn’t know the answer when I started this thread.


Bill is correct. Video synths have an extra dimension that audio does not have, so they like more oscillators.

You can think of oscillator = 1D pattern. The pattern can be either animated (LFO), vertical (audio range) or horizontal. So to make a moving 2D pattern or shape, you need 3 oscillators. That’s the equivalent of a single VCO+LFO monosynth voice in audio world, like the Roland SH-101.
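The 1D-pattern framing above can be sketched in a few lines of Python (my own illustration, not anything from LZX): treat the horizontal and vertical oscillators as phase ramps across the frame, and the LFO as a phase offset that changes from frame to frame. Summing the three phases and shaping with a sine gives a moving 2D pattern.

```python
import math

W, H = 64, 48   # frame size in pixels (arbitrary for the sketch)
t = 0.25        # phase of the "LFO" (the temporal oscillator), 0..1

def pattern(fx, fy, phase):
    """Combine a horizontal 1D oscillator (fx cycles across the line),
    a vertical one (fy cycles down the frame), and an LFO phase offset
    into one 2D frame of 0..1 luma values."""
    return [[0.5 + 0.5 * math.sin(2 * math.pi * (fx * x / W + fy * y / H + phase))
             for x in range(W)]
            for y in range(H)]

frame = pattern(3, 2, t)   # a diagonal pattern that drifts as t advances
```

Advancing `phase` each frame is the animation; zeroing `fx` or `fy` collapses the pattern back to plain bars, which is the "each oscillator is one dimension" point in miniature.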

If you want to make more complex shapes and patterns, an oscillator often works well with a second oscillator as a sub-modulator. So to make a more complex pattern voice, you need 4 to 6 oscillators. This is the equivalent of a 2-3 VCO monosynth voice, like the Minimoog.

If you want multiple moving patterns/elements/shapes, you need multiple voices. So the video equivalent of a 6-voice polysynth like the Roland Juno-6, would have anywhere from 18-32 VCOs!!

Note that in any of the above cases, H/V ramps (like those from Cortex) count as “oscillators.” And of course you could get your LFOs and audio-rate oscillators from the world of audio synthesis.


Great analogy @creatorlars! Due to the nature of perception, relatively simple combinations of waveforms can create quite a bit of sensation in the ear. Of course real-world audio waveforms are much more complicated, but most can be simulated by relatively simple combinations of waveforms. What we see is incredibly complicated, and therefore much harder to simulate.

A combination of fairly simple waves and envelopes can simulate a violin sound, but what combination of waves will it take to simulate the image of a violin?

Certainly it’s not my goal to simulate reality (or I’d be using 3D rendering software) but I do want to create spaces that are visually complex in similar ways. I’ll outline some of those techniques used in the patch above in subsequent posts (for example using attenuated feedback modulation between two oscillators to punch out related ‘shapes’ from each others waveforms).
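The feedback-modulation technique mentioned above can be sketched digitally (my own toy model, not the actual analog patch): two sine oscillators, each phase-modulated by an attenuated copy of the other’s previous output. The attenuator level (`depth`) controls how hard each waveform “punches” shapes out of the other.

```python
import math

def cross_modulated(n=256, f1=5.0, f2=7.0, depth=0.2):
    """Two sine oscillators (f1, f2 cycles over the run), each
    phase-modulated by an attenuated copy of the other's previous
    output sample.  depth is the attenuator: 0.0 means no feedback,
    larger values carve deeper related 'shapes' into both waveforms."""
    a_out, b_out = [], []
    a_prev = b_prev = 0.0
    for i in range(n):
        x = i / n
        a = math.sin(2 * math.pi * (f1 * x + depth * b_prev))
        b = math.sin(2 * math.pi * (f2 * x + depth * a_prev))
        a_out.append(a)
        b_out.append(b)
        a_prev, b_prev = a, b
    return a_out, b_out

a, b = cross_modulated()
```

Because each oscillator’s distortion is driven by the other, the artifacts in the two outputs are related rather than independent, which is the point of the technique.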


Well, I let that patch get away from me. Part of my issue with sharing patch information is that I can’t Keep It Simple. I’ll try to isolate the basic techniques again or record a patch session à la Chris Kanopka’s Video Painting series.

This is where it ended up:


The red-red-colorize-composite… That’s the default patch starting ground in my rack.


@wiatrob Do you have any particular type of “oscillator” in mind, or did you when you started this thread? The types that come to mind for me are

  • Video-rate ramps
  • Video-rate VCO like Prismatic Ray
  • Video-rate LFO like BSO Triple-LFO or Pendulum
  • Audio-rate VCO
  • Audio-rate LFO

Do any of these qualify as 1 of the “4 needed oscillators” to you?


There isn’t really any such thing as an “audio-rate” or “video-rate” LFO.
LFOs are, by definition, low frequency, often going from low-end bass frequencies down to multiple-second or even minute-long period times.

(LZX-style) video-level outputs/inputs is a useful distinction (do I need a translation module? do I get a reduced “useful knob range” on the attenuators?)

“oscillators” is not necessarily the best term here?
Maybe “independent source” or something like it is better.

I think i’d sort them into:

  • Video rate oscillators
  • Noise sources
  • Ramps
  • Audio rate/“Vertical rate” oscillators
  • LFOs (and other slow modulation sources)

I’d argue that a camera/screen feedback loop fits into this somewhere, maybe even replacing at least 2-3 oscillators/“sources” in a complex setup.

Another interesting thing to try is to just patch raw sound into video as vertical modulation. (e.g. have a hard key against a horizontal ramp modulated by something with a clean synth bass.)
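That hard-key idea is easy to show in a toy model (my own sketch, assuming idealized 0..1 signals rather than real LZX voltages): each scanline compares a horizontal ramp, offset by the current audio sample, against a fixed threshold. The key edge then wobbles left and right with the audio waveform.

```python
import math

W, H = 64, 48      # toy frame size
THRESHOLD = 0.5    # hard-key threshold level

def keyed_line(audio_sample, depth=0.3):
    """One scanline of a hard key: a horizontal ramp (0..1 across the
    line), shifted by an attenuated audio sample, compared against a
    fixed threshold.  Output is binary, and the key edge's position
    moves with the audio signal."""
    return [1 if (x / W + depth * audio_sample) > THRESHOLD else 0
            for x in range(W)]

# stand-in for a 'clean synth bass': one sine cycle, one sample per scanline
frame = [keyed_line(math.sin(2 * math.pi * y / H)) for y in range(H)]
```

One frame of this gives a solid shape whose left edge traces the audio waveform vertically down the screen, which is roughly what the patch described above does in hardware.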

I haven’t really started playing with sequencers and controllers in video yet, but they, and anything else that lets you change variables over time, will make for interesting modulation. Don’t forget the “poor musician’s LFO” (i.e. wiggling a knob with your hand).

These are of course rules of thumb. As with audio synthesis, having interesting processing (keys/mappers/colorizers/folders in video; reverb/delay/high-resonance filters and so on in audio) will allow you to get more complexity with less input.


I was speaking purely of video-rate VCOs like the Prismatic Ray. Since much of my work is period-based, lots of LFOs are a necessity. I almost never use LFOs or audio oscillators as signal sources because they don’t sync well enough for my tastes. This is, however, a purely personal preference.


We could call them ‘spam cans’ but they really ARE oscillators by definition. :wink:

This is all my preference and opinion; as @transistorcat said, there is no right or wrong way to do this stuff.

The majority of my body of work is generative visual synthesis - making controlled shapes.

For that purpose I consider only the following as workable PRIMARY signal sources for my workflow:

  • Video rate Oscillators
  • Ramps

Secondary processing is important: modifiers like frequency multipliers (Staircase), operators (Arch), and multipliers (faders, keys, etc.) can combine the two primary signal sources in myriad ways.

Then modulators like LFOs/function generators and sequencers. These are most of the modules I have from the audio world. And MATHS :slight_smile:

While I do use feedback in my work, it’s either pure feedback outside the synth or used as a modulator (both direct and re-entrant*). I don’t find that feedback is controllable enough to be a primary signal source for generative synthesis, so for me it can’t replace any primary sources.

*edit: Direct feedback is patching the output of a module back into one of its modulation inputs, directly or through other modules. Re-entrant feedback is taking the encoded signal back into the synth.