Input / Output off control

Hey all.

A little puzzle I’m wondering if anyone can help with -

What exactly do I need to add to my system to have the VC video input turned off, OR the VC output blacked out, under CV control?

I have Visual Cortex and Sensory Translator, and I’m getting nice effects with just those, mixed with some digital video and CCTV camera input. But I want the video input — a live camera, for example — to be switched off/blacked out unless it’s receiving a signal.

First question: is there any way of doing this with what I have (I can’t think of a way)? If not, what would be the best module to go for next? Doorway? Staircase?

I’m not sure I understand what your signal flow is, it sounds like maybe:

  • Camera -> VC Input Decoder
  • VC decoder -> VC Colorizer/Compositor Ch. A
  • Something else -> VC Colorizer/Compositor Ch. B?
  • VC output encoder -> Display

That is, you are compositing two inputs, and you want the output to be blank when the camera input stops, rather than showing what compositor channel B is being fed? If that’s right, I think you can accomplish this by setting the compositing mode to Multiply with the keyer slider all the way to the left. This should make it so that both channels must have positive luma content at a given point on the screen for anything to show up there. Note that if channel B is in Negative mode, you will get white output in this case instead.
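A rough numeric sketch of why Multiply gating works, treating per-pixel luma as values in the 0.0–1.0 range (this is a conceptual model only, not the actual analog circuit, and the function name is made up):

```python
# Conceptual model of Multiply compositing: the output luma at each pixel
# is the product of the two channel lumas, so a black (0.0) camera feed
# forces the whole output to black regardless of what channel B carries.
def multiply_composite(luma_a, luma_b):
    """Per-pixel multiply of two luma lists, values in 0.0-1.0."""
    return [a * b for a, b in zip(luma_a, luma_b)]

pattern = [0.2, 0.9, 0.5]            # whatever channel B is showing

camera_dark = [0.0, 0.0, 0.0]        # camera disconnected: all black
print(multiply_composite(camera_dark, pattern))   # -> [0.0, 0.0, 0.0]

camera_live = [1.0, 0.5, 0.3]        # camera signal present
print(multiply_composite(camera_live, pattern))   # -> [0.2, 0.45, 0.15]
```

With Negative mode on channel B, the B term would effectively become (1 − b), which is why a dark camera feed then yields white instead of black.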

As far as what to get next, it depends a bit on what you want to do. Both Doorway and Staircase are powerful processors for camera signals. You may quickly find you want some oscillators (Prismatic Ray, Fortress) to create more complex pattern elements than are achievable with VC ramps alone and to modulate CV inputs on other modules, and mixers (Passage) to combine video-rate elements with your envelopes from Sensory Translator.


Hey thanks so much for the advice.
Sorry about the late reply, I’ve been crazy busy with work. Not enough hours in the day!

I’m gonna try this out tomorrow night. Tuesday night is studio night these days. Unfortunately it’s pretty much the only night these days!

Thanks again!

Okay, I’ve tried a few things.

It seems that VC gets the most interesting effects for me when I’m sending signals (from Sensory Translator and an LFO module) to all, or at least most of, the Colourizer/Compositor inputs. If I want to use some external CV to turn ‘off’ the camera, then I need to disable the colourizer/compositor inputs on channel B; then, when the trigger input on the channel selector gets a signal, it will switch on/off.
The problem is that the mix output of VC is then boring compared to when the compositor/colourizer inputs are all getting some signal.
Basically, I want to be able to run VC as I am running it, but to have something switch the final mix output on, only when it receives a signal.
As an aside: even if I just use one channel of VC as my ‘on’ input and use a trigger (an LFO square wave, for example), I notice that the trigger behaves erratically, sometimes leaving channel A active for a moment before switching back to what it should be doing, i.e. alternating between channels A and B at the speed of the LFO. What could be causing this?

Anyway, back to my main problem. In a nutshell, here’s an example of how I typically trigger a video clip in Resolume: audio in > opacity gain of the clip. So my visual plays only when it receives sound in the selected frequency band.
My analog LZX system has Sensory Translator to handle the audio filtering. If you imagine VC as my video clip, what do I need (another module, I imagine) to trigger the ‘opacity’ of the final output of VC?
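The Resolume-style behaviour described above can be sketched numerically, with the audio envelope acting as a per-frame opacity multiplier (purely a conceptual model; the function and threshold are my own inventions, not anything from Resolume or LZX):

```python
# Conceptual model of envelope-controlled opacity: an audio envelope
# (0.0 = silence, 1.0 = full level) scales the frame's luma, so the
# image only appears while the audio is present.
def gate_opacity(frame_luma, envelope, threshold=0.05):
    """Scale a frame's luma values by an audio envelope.

    Envelopes below `threshold` black the frame out entirely,
    mimicking 'visual plays only when it receives sound'."""
    gain = envelope if envelope >= threshold else 0.0
    return [pixel * gain for pixel in frame_luma]

frame = [0.8, 0.4, 1.0]
print(gate_opacity(frame, 0.0))   # silence -> [0.0, 0.0, 0.0]
print(gate_opacity(frame, 1.0))   # loud audio -> full opacity
```

In the analog system, the "gain" stage is what a fader/crossfader module would provide, driven by a Sensory Translator envelope.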

Anyone?!

:sweat_smile:

P.S. I have of course figured out that I can trigger the opacity of my analog input within Resolume, effectively removing the VC from the main composition, but that’s not the same. I want to do this specifically outside the box.

Hm, the tricky part, I think, is if you want to control the output of Visual Cortex specifically, since Cortex kind of expects to be your final output encoder. With a pre-encoder expander you could take the output you like from Cortex and patch it elsewhere, re-encoding it later with a separate output encoder like Cadet II. Within VC, basically you have the crossfader over the channel B inputs, which can act as an opacity control in the center position, since it acts as a full-colour crossfader between channels A and B.

Are you working with a component or composite camera input? If it’s composite, or you only need the luma signal from the camera, you may be able to get the effect you want by patching the luma from the input decoder through a fader (Bridge, Cadet VI) first, and controlling the fader with your envelope from Sensory Translator. Controlling full-color opacity in an analog system requires substantially more involved circuitry than gating a single signal path, but is achievable using either the VC compositor or an advanced compositor like Marble Index.

Are your Animator & Key Generator minipots fully CCW? These set the rise and fall times for the animation voltage. If they are fully CCW, the animation should respond more or less immediately to triggers; otherwise the transition will lag in each direction, depending on the minipot settings. I think of the button / trigger input more like a “take” button on a video production mixer: you trigger it to do a wipe with a constant slope. When using the slider, these minipots act as slew controls for the slider motion.
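The rise/fall behaviour of those minipots can be modelled as a simple slew limiter: each step, the animation voltage moves toward the trigger’s target by at most a fixed rate, and with the rate very high (minipots fully CCW) the transition is effectively instant. A hand-wavy numeric sketch, not the real circuit:

```python
# Conceptual slew limiter: `current` approaches `target` by at most
# `rate` per step, like an animation voltage with rise/fall time.
def slew_step(current, target, rate):
    """Move `current` toward `target`, limited to `rate` per step."""
    delta = target - current
    if abs(delta) <= rate:
        return target
    return current + rate * (1 if delta > 0 else -1)

# Slow slew: a 0 -> 1 "take" wipes gradually over several steps.
v, trace = 0.0, []
for _ in range(5):
    v = slew_step(v, 1.0, 0.25)
    trace.append(v)
print(trace)                       # -> [0.25, 0.5, 0.75, 1.0, 1.0]

# Fast slew (minipots fully CCW): the transition completes at once.
print(slew_step(0.0, 1.0, 10.0))   # -> 1.0
```

The erratic-looking trigger response described earlier is consistent with a slow slew setting: the voltage lags behind the LFO square wave instead of snapping between channels.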


Sorry about my laggy reply!

Okay, so do you mean I can do this with just the Cadet VI fader, or do I need a combination of both Bridge and Cadet VI? It might work for me to patch pre-Cortex like this. Does the VI module just output black on channel B if nothing is patched into it? I could potentially patch luma from the camera into the VI and then into the VC, like you say, but I think that without any camera input the VC will still output a colourful signal. So, no, I don’t think that will work for me.

To give you an idea of my end goal: imagine I’m playing my music live, and my sampler triggers certain video clips (all black and white) as it plays. My modular synth triggers VC parameters via Sensory Translator, making for some colourful shapes that appear underneath masks in the layers of digital video.
Here’s the thing. Because I want to improvise live with my synth, sometimes only occasional notes will play, and sometimes it won’t be playing at all. When you hear the synth, that’s when the colourful shapes should appear. That’s the goal. While I’ve found I can program tracks to mute the VC input to the computer (which is the final output), that’s no use when I want to improvise. If I cut out the synth, the colour input will still be seen. Even if the camera is off, VC will still spit out some form of colour image. I only want colour/VC output when I turn the synth on. Otherwise the system isn’t working as it should.

Do I really need to buy another VC, or the similarly expensive Marble Index (if that would even achieve what I want)?

Anyone, anyone, please feel free to chime in with suggestions.

Yep, all this sounds right. I mentioned Bridge just because it also includes a crossfader. As does Pendulum, come to think of it, but yeah, these are all single-channel faders.

I don’t think either the Visual Cortex or the Marble Index is really designed for this. In particular, the problem if you want the Visual Cortex output specifically is that you can’t really access the output signal: it’s intended as a final compositor that feeds directly to your display or capture device. If you had a signal generated elsewhere in the video synthesizer going to VC channel A, no problem: leave channel B unpatched and trigger a wipe, or patch to the compositor CV input. This does seem pretty achievable with some software or an outboard video mixer that accepts a MIDI trigger or something, though I know you said you don’t want to involve a PC.

If you had some more dramatic processing to apply to your camera feed (the Staircase and Castle modules are great for that kind of thing), I think using VC to “gate” the output is probably pretty satisfactory for a workflow like this.

Thanks, I think I just need to change my way of thinking on this. Of course, I already have a nice gate/fader/switch in the Visual Cortex, so like you say, the thing to do would be to apply some more dramatic processing to the camera feed.
I had gotten too attached to using all of the functions of VC for processing my video feed, when there are obviously much better tools for that. Then I can leave VC to be the master at the end of the chain, like it wants to be.
I think Staircase will be purchased :wink:
Thanks for the help!
