Using LZX video signals for audio… I’m curious and will explore this as I build up my rack
Has anyone here played around with listening to their video?
If so, do you have any advice for optimising the signal before it leaves the rack?
Critically, techniques for getting a 1V unipolar signal up to 5V bipolar and removing the DC offset
I'm then interested in whether you used any filtering or frequency division to shape the sound…
In a week or two I'll be answering my own questions and will reply with my experiences, but I thought it'd be interesting to spark some discussion and learn whether it's something others have explored - it'll inform which audio modules I fit in a video-focused rack
Pretty much any outboard audio mixer will handle the removal of DC offset as a matter of course, though you will get some big pops if you plug stuff in with the gains up.
A 1Vp-p signal is plenty hot for typical audio mixers.
I would be worried about large-amplitude content above a couple of kilohertz frying tweeters. It might be helpful to keep an oscilloscope on the output to catch potentially destructive signals before they hit the speakers.
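If you end up doing any of this conditioning digitally rather than through a mixer, the same two jobs are a scale/offset (1V unipolar to roughly ±5V is just y = 10x − 5) plus a DC-blocking high-pass, with a gentle low-pass on top for the tweeters. Here's a minimal Python/NumPy sketch, assuming you've already captured the 0–1V signal into an array at audio rate - the cutoff values are arbitrary starting points, not anything LZX-specific:

```python
import numpy as np

def condition_video_for_audio(x, sr=48000, hp_cutoff=20.0, lp_cutoff=8000.0):
    """Turn a sampled 0..1 V video signal into something speaker-safe.

    x         : numpy array of samples assumed to sit in the 0..1 range
    sr        : sample rate of the capture in Hz
    hp_cutoff : DC-blocking high-pass cutoff in Hz
    lp_cutoff : low-pass cutoff in Hz to keep harsh HF away from tweeters
    """
    # Scale 1V unipolar to ~10Vpp bipolar: y = 10*x - 5
    y = 10.0 * np.asarray(x, dtype=float) - 5.0

    # One-pole DC blocker: out[n] = y[n] - y[n-1] + R*out[n-1], with R just under 1
    R = 1.0 - (2.0 * np.pi * hp_cutoff / sr)
    blocked = np.zeros_like(y)
    prev_in = prev_out = 0.0
    for n, s in enumerate(y):
        prev_out = s - prev_in + R * prev_out
        prev_in = s
        blocked[n] = prev_out

    # One-pole low-pass to tame large-amplitude content above a few kHz
    a = 1.0 - np.exp(-2.0 * np.pi * lp_cutoff / sr)
    lp = np.zeros_like(blocked)
    acc = 0.0
    for n, s in enumerate(blocked):
        acc += a * (s - acc)
        lp[n] = acc

    # Normalise to +/-1.0 before sending it to a soundcard / DAW
    return lp / max(np.max(np.abs(lp)), 1e-9)
```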
I'm also interested in the simplest possible processing path from HDMI / composite video to any audible signal, even if the result isn't interesting as-is
The main goal of the experiment is to mirror shifting psychedelic patterns in sound and to process the audio until it's interesting
If I can somehow set up feedback loops back into the system, all the better.
Very easy to hear a composite video signal - just use an RCA to 3.5mm (or whatever audio jack you're plugging into) adapter! You won't hear the higher-order harmonics that you'd see as an encoded video signal on the screen, but anything in the composite video signal below 20kHz will be audible.
Analog composite video is only about 1Vpp, so it's very unlikely you'd hurt a speaker by plugging it straight in, but try it at your own risk!
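If you want to see what's actually sitting below 20kHz before committing speakers to it, a quick spectrum check on a line-in recording works. A rough sketch, assuming you've already recorded the composite signal to a WAV via a sound card (the filename is made up); expect peaks at the frame rate (50/60Hz) and around the line rate (~15.6kHz PAL / ~15.7kHz NTSC):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

# Hypothetical capture of a composite signal through a sound card line input
sr, x = wavfile.read("composite_capture.wav")
x = np.asarray(x, dtype=float)
if x.ndim > 1:
    x = x[:, 0]              # keep one channel if the recording was stereo
x /= np.max(np.abs(x))       # normalise

# Power spectrum of the audible band: the strongest peaks should land at the
# frame rate and at the horizontal line rate plus its sidebands
f, pxx = welch(x, fs=sr, nperseg=8192)
audible = f < 20000
loudest = sorted(zip(f[audible], pxx[audible]), key=lambda p: -p[1])[:10]
for freq, power in loudest:
    print(f"{freq:8.1f} Hz   {10 * np.log10(power + 1e-20):6.1f} dB")
```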
@killumina The other way around yields much more usable results - unless you're feeding the video into something like TouchDesigner (or VCV Rack when my new plugins finally drop), where you can read the color/luma values at various X/Y positions in the video stream and do something in audio land with that info.
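For anyone curious what that looks like outside of TouchDesigner, the idea is just sampling pixel values per frame and mapping them to audio parameters. A rough Python/OpenCV sketch of the same idea - the probe positions, capture device index, and the luma-to-pitch mapping are all arbitrary illustrations, not anything from the plugins mentioned above:

```python
import cv2

# Relative X/Y positions to sample luma from in each frame (arbitrary choices)
PROBES = [(0.25, 0.5), (0.5, 0.5), (0.75, 0.5)]

def luma_probes(frame, probes=PROBES):
    """Return the 0..1 luma value at each probe position in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    return [gray[int(y * (h - 1)), int(x * (w - 1))] / 255.0 for x, y in probes]

cap = cv2.VideoCapture(0)    # assumed: a capture device fed with the video output
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Map each probe's luma to a pitch (100-900 Hz here) that could drive oscillators
    freqs = [100.0 + 800.0 * l for l in luma_probes(frame)]
    print("  ".join(f"{f:6.1f} Hz" for f in freqs))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```

From there you'd send those values out over OSC/MIDI or synthesise with them directly, which is essentially what reading luma in TouchDesigner and driving audio with it comes down to.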