I just got a shiny new Expert Sleepers ES-8 and was thinking … maybe there could be a module in VCV Rack that would convert signals to 0-1V? This could either process signals from my Eurorack or from VCV Rack and then could send them to the CV inputs on my LZX modules.
I don't use VCV (but I might now!) so I'm not 100% sure, but I think you could just use an attenuator, VCA, VC mixer, etc. and just dial back the voltage. It doesn't have to be exactly scaled down.
try it out and post your results! this is a really cool idea.
I had no idea it was so simple! So I bought one. VCV Rack, of course, is free.
The FH-2 (sadly I have the FH-1) can work with 0-1 volts but that’s with MIDI. VCV Rack can pass straight control voltages. I’ve experimented a little and it definitely works, but it’d be nice to get some video voltage specific modules together (for example, a VCV version of Bridge). I’ve been meaning to learn C… well …
any attenuator will do the job perfectly well, either pre- or post-ES8
to be honest you can just stick the outputs from the ES-8 (or pretty much any other eurorack module) straight into LZX modules. They can all handle 5v - they just clip at 1v - so 1v+ in = 1v inside
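For the curious, here's what the two approaches above boil down to numerically. This is just my sketch - the function names are mine, and I'm assuming the LZX input simply clamps to the 0-1V range as described:

```python
def attenuate(v, factor=0.2):
    """Scale a eurorack-level signal (e.g. 0-5 V) down toward 0-1 V."""
    return v * factor

def lzx_clip(v):
    """Model an LZX input that accepts hotter signals but clips to 0-1 V inside."""
    return max(0.0, min(1.0, v))

# A 5 V eurorack signal, attenuated: attenuate(5.0) -> 1.0
# The same 5 V signal patched straight in: lzx_clip(5.0) -> 1.0 (flat-topped)
# A 0.5 V signal passes through unchanged: lzx_clip(0.5) -> 0.5
```

The difference is that attenuation preserves the waveform shape, while clipping flattens everything above 1V.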
btw I currently have a late 2016 13" MBP (i5, escape key model) and it won't run both VCV Rack and Lumen at the same time without serious audio issues - hopefully in July I'll get a 2019 15" MBP i9 - fingers crossed that will run both at once happily
I have yet to try it (in the car with an iPad) but in the RJ Modules set, the "Range" voltage converter should be able to translate any modulation source into LZX land. You give it the expected input range and the expected output range. This would seem to allow compatibility with VCV Rack for control voltages.
A brief update. Yes, it works very well! You can use Range as an alternative to Bridge. There is also a Range LFO and you can dial the voltage in on that too.
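The behavior described above (input range in, output range out) amounts to a linear map. A minimal sketch of the idea, assuming the module does the obvious thing (this isn't RJ Modules' actual code):

```python
def map_range(v, in_lo, in_hi, out_lo=0.0, out_hi=1.0):
    """Linearly map v from [in_lo, in_hi] into [out_lo, out_hi]."""
    t = (v - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

# A bipolar +/-5 V VCV LFO mapped into LZX's 0-1 V range:
# map_range(-5.0, -5, 5) -> 0.0
# map_range( 0.0, -5, 5) -> 0.5
# map_range( 5.0, -5, 5) -> 1.0
```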
Would love to see a VCV Rack module that converts video on the system into LZX compatible signals. I have no idea how video signals become euro-compatible signals, so it’s also possibly an opportunity to dig into that a bit and see what it would look like in code.
I heard that the VCV Rack guy (Andrew Belt) proposed some Rack based “analog” video modules at a talk in SF, but haven’t heard anything since.
Wouldn’t this require your DC-coupled audio interface (ES-8 etc) to be capable of horizontal frequencies (several MHz)? Otherwise I think you could just attenuate LFOs / audio rate signals down to 1V before or after sending them through the ES-8.
No idea! This is where I show that I have zero clue what's happening when I feed an RCA cable into my Vidiot and then take the "eurorack compatible signal" camera out into other eurorack devices to manipulate in weird ways. I believed that the signal going into the Vidiot was different than the signal coming out of the "eurorack video out" jack… but… I have no idea. I'm curious, though I acknowledge I probably don't have the basic knowledge necessary to understand the answer. Isn't it turning video into a 0-1V signal that's different from what you'd pass to any other RCA jack?
Could this be recreated in code in VCV Rack? Load a video, then output luma / RGB where each of the four outputs is a 0-1V signal? My mental model of what that bit of Cortex is doing is to "decode" the video signal into its parts, then output each part as a separate simple wave, with sync on its own channel as well. Analog video signals are luma - color - sync, just all encoded cleverly in an analog signal (Wikipedia is awesome).
I’m positive I’m oversimplifying something or maybe missing some big chunk of concept…but…maybe this is possible in code? Vizzie / Max is doing something or other when they turn video into Max signals…so…seems like something VCV could do, with the bonus of outputting it into modular via ES-8/9. That would be neat.
Yes, this is basically correct. For a composite signal (where all the video information is encoded on a single wire) Visual Cortex basically extracts the sync pulses and averages out the color subcarrier to extract the luma, all done with high frequency analog circuitry. This is basically also what some electronics in the Vidiot do to get a 1V luma signal from a composite camera input. YPbPr decoding is somewhat different, this is more like sync extraction + color space conversion to get Luma + RGB from a Luma/Chroma representation.
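The color space conversion part is just matrix math. Here's a rough sketch of the standard Rec. 601 YPbPr-to-RGB conversion (my sketch of the textbook math, not LZX's actual circuit - and the hardware does this with analog circuitry at video rates, not per-sample code):

```python
def ypbpr_to_rgb(y, pb, pr):
    """Rec. 601 YPbPr -> RGB.
    y is normalized luma (0-1); pb/pr are chroma difference signals (-0.5 to 0.5)."""
    r = y + 1.402 * pr
    b = y + 1.772 * pb
    # Luma is a weighted sum Y = 0.299R + 0.587G + 0.114B, so solve for G:
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

# Zero chroma means a gray pixel: ypbpr_to_rgb(0.5, 0.0, 0.0) -> (0.5, 0.5, 0.5)
```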
However even a single extracted luma signal contains information at frequencies far too high for a CPU or audio DAC to handle: LZX signal paths are designed to handle signals from DC to 10 MHz, as images with a lot of horizontal detail can wind up having frequency content up to several MHz, and there's no way VCV can do real time processing at a sample rate that high. A computer is probably always going to have to deal with emulating analog video behaviors via digital image processing tricks, rather than working with video-rate signals directly. Something like Memory Palace is also working with whole video fields or frames internally so that it can do image processing via matrix arithmetic rather than having to process native analog signals.
Your best bet for getting digital imagery out of the PC and into your LZX system is probably to use an HDMI output and an HDMI to YPbPr converter box, then sending YPbPr to Visual Cortex for decoding into patchable full color video. This works quite well for me! The converter box just shows up to my computer as another display.
Basically computers generally don’t bother with modeling video signals that advance one scanline at a time, they instead operate on whole frames. This is probably also true for whatever representations Max etc use for video signals.
As has been pointed out, it’s a question of sampling rate, bandwidth, and sync. Video just won’t work in an audio signal path. Think about the video card in your PC… they usually have two video outputs. If you want a third or fourth, you need more video cards! And you need PCIe bus sockets to support them.
SD video is sampled with a 13.5MHz or 27MHz clock. Audio sampling is commonly a 44.1KHz - 192KHz clock. Generalizing here, but say we're sampling 13.5MHz video and 44.1KHz audio, both at 24 bits per sample (24-bit RGB, 24-bit audio.) We could run about 300 channels of audio streaming within the same bandwidth as 1 SD video source.
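You can check that "300 channels" figure directly - it's just the ratio of the two sample clocks:

```python
video_rate = 13_500_000  # SD video sample clock, Hz
audio_rate = 44_100      # common audio sample clock, Hz

# Same bits per sample on both sides, so the ratio of clocks is the
# number of audio channels that fit in one SD video stream's budget:
channels = video_rate / audio_rate
print(round(channels))  # 306
```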
In addition to the wider frequency range, if we're going to mix two video sources in relation to each other, every pixel has to be precisely aligned, down to the nanosecond (this is the function of video sync and genlock.) Otherwise when you mix them, the images don't match up. Furthermore, even though video HSYNC is 15,734 Hz (about 15.7 kHz, within audio range)… that frequency would have to be a precise division of your audio sampling frequency in order to produce stable sync for video purposes. 13.5MHz is the standard video sampling frequency because it divides down into clocks suitable for both NTSC and PAL timings.
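That divisibility is easy to verify: ITU-R BT.601 defines 858 total samples per line for NTSC timing and 864 for PAL, both off the same 13.5 MHz clock:

```python
clock = 13_500_000        # Hz, the shared BT.601 sampling clock

ntsc_line = clock / 858   # 858 samples per line (525-line / NTSC timing)
pal_line = clock / 864    # 864 samples per line (625-line / PAL timing)

print(ntsc_line)  # ~15734.27 Hz, the NTSC horizontal line rate
print(pal_line)   # 15625.0 Hz, the PAL horizontal line rate
```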
What’s so powerful about hardware, analog instruments for modular video is that you don’t run out of memory bandwidth. Every single CV input and every single image generator has the same computational power as a CPU crunching 10,357,632 pixels per second!
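In case that pixel count looks oddly specific, it's just the SD active picture times the NTSC frame rate:

```python
# 720 x 480 active pixels per frame, at 29.97 frames per second
pixels_per_second = 720 * 480 * 29.97
print(round(pixels_per_second))  # 10357632
```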
As an interesting addendum to “video is way too fast for audio”, frame rate modulation is kind of the reverse. If you quantized all your CV signals at 29.97Hz (frame rate) then your audio would have zippering and sound bad. But when you do this with video, it looks smooth and fluid, and prevents any mid-frame glitches.
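Frame-rate quantization like that is just a sample-and-hold clocked at 29.97 Hz - a sketch of the idea (my own toy function, not any particular module):

```python
import math

def frame_quantize(cv, t, frame_rate=29.97):
    """Sample-and-hold a CV at the video frame rate.
    cv: a function of time (seconds); t: the current time in seconds.
    Returns the CV's value frozen at the start of the current frame."""
    frame_start = math.floor(t * frame_rate) / frame_rate
    return cv(frame_start)

# A smooth ramp becomes a staircase stepping once per frame:
# within one frame, the output holds steady no matter how t moves.
```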
So: VCV Rack – really powerful for animation, motion, and some audio rate modulation! This could help you get a ton of utility out of your video rig. Save the hardware for the modules that process and generate the video signal.
Also: Check out Diver. It’s specifically meant as an “audio source to video oscillator” conversion tool, allowing you to make 2D video waveforms out of an audio source.
I’ve made some VCV Rack audio modules, and back around 0.5 I started planning out a video toolkit. The trouble is that the one-sample-at-a-time audio calculation engine is completely inappropriate for video. It would require some rather significant new hooks in the engine to create a reasonable video path, and I was pretty sure Andrew wouldn’t take that patch back then. It might be worth revisiting now that 1.0 is stable. Would be curious to hear more about what he said at that talk!
This thread has been a great read…thanks everyone!
Unfortunately I wasn't there in person, just paraphrasing what I heard second hand. This thread has been really interesting, now I'm really curious to figure out what he meant as well.
Maybe it was an off the cuff remark, or he’s got a different path in mind? Maybe something like Lumen?
This is really amazing. Thanks for the perspective.
Total tangent: Have you ever looked at the guts of a Fisher Price PXL-2000? I had one when I was a kid. I know the cassette tape moved through that thing tremendously fast and the quality was still very very low. Still, given where this thread headed, I’m impressed they were able to get any sort of video on a cassette tape. If you folks ever work on a “record video to cassette tape” machine, I’ll preorder