Lars Condensation Wiki (info by module & topic)

#1

                   

    The 48HourVideo boys have been collecting notes from the community to make better sense of analog video synthesis. We want to share our stash of Lars’s communiques, and would love your help continuing to build this community resource. With this Wiki post, anyone at Trust Level 1 or higher can add their own edits. Just hit the Edit button on the relevant post and add your info! Mind the formatting.

    The content is currently focused on the Expedition and Orion series, so we especially need help with relevant knowledge regarding Cadet and Castle systems. Team VideoSynth LETS GO! :smiley:


-----------------Information by Module:----------------------


Expedition Series Modules

    

Arch

Re: Draw circle with ramps
You can transform the linear ramps into logarithmic (parabola) using the bottom right section of Arch. You’d need a log circuit each for horizontal and vertical, and then mix the results.
(CKB, How to draw make a circle with ramps)

3 of the 4 functions in Arch are patched out utility block versions of some of the Shapechanger circuitry!
(CKB, How to spin a shape)

Bridge

As far as tricks go, you can use the Multiples sections in amplifier mode as a soft keyer with 20% softness. Add in a variable subtracting voltage with Passage before passing the mix to the Multiple/amplifier to achieve threshold modulation. Also great if you just want to boost the contrast of something. The fader section is great for doing a complex blend between all kinds of things. Try feeding a variety of shapes and waveforms into the A, B, and CV inputs. Same with the mixer section.
(CKB, Interesting techniques for Mapper)

Re: Use as a Multiplier
“Crossfading between inverted and non-inverted versions of an identical signal will create a 4-quadrant multiplier. You can think of a VCA/2-quadrant multiplier like an attenuator and a Ringmod/4-quadrant multiplier like a voltage controlled attenuverter! (If that helps.)
(Mult a signal to mixer Subtract in and Fader B in. Patch mixer out to Fader A in)”
(FB, https://www.facebook.com/photo.php?fbid=1771271306257471&set=gm.885787248259263&type=3&theater&ifg=1)
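
Editorial note: for anyone who wants the math spelled out, here is a minimal numeric sketch (plain Python, names and values made up for illustration) of why crossfading between a signal and its inverse acts as a 4-quadrant multiplier: the output equals the signal times (1 - 2k), so the effective gain sweeps from +1 through 0 to -1 as the crossfade control k goes from 0 to 1.

    # Sketch only: crossfade between x and -x, driven by a control k in [0, 1].
    # Equivalent to a four-quadrant multiply of x by (1 - 2k).
    def crossfade(a, b, k):
        """Linear crossfader: k=0 gives all A, k=1 gives all B."""
        return (1.0 - k) * a + k * b

    def four_quadrant(x, k):
        """Crossfade between non-inverted and inverted copies of the same signal."""
        return crossfade(x, -x, k)

    for k in (0.0, 0.25, 0.5, 0.75, 1.0):
        # Effective gain sweeps +1 -> 0 -> -1 as k goes 0 -> 0.5 -> 1.
        print(k, four_quadrant(1.0, k), 1.0 - 2.0 * k)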

Re: Sensory Translator
“… With Sensory Translator, it works great if you use an envelope output from ST into the CV input on the Bridge fader – then patch video rate/shape elements into the A and B inputs for a crossfade.
(CKB, Interesting techniques for Mapper)

Color Chords

Re: Opacity control
   The opacity slider is not a level control. The level controls are the red, green and blue knobs for each channel (turn them all down if you want to null your input.) The opacity control is like a transition between additive and alpha channel blending.
   With opacity at zero, everything is additive (like a normal matrix mixer.) That is, layered inputs will stack on top of each other, each one adding to the brightness of the channel. With opacity at 100%, the related input mono source is subtracted from the lower channel mono sources before a 0V clipping circuit that is between the mono input and the RGB gain controls. It’s a bit of analog math to allow you to segment and transpose shapes before colorization.
   So think of Color Chords like an odd duck sitting somewhere between a colorizer, matrix mixer, and analog computing functions. It’s meant to be fed a bunch of overlapping patterns and shapes. Marble Index is what you want, if you want a full RGB compositing environment with voltage controlled opacity and masking.
(CKB, Color Chords 0% Opacity rationale)
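
Editorial note: a rough numeric sketch of how we read the opacity description above (Python, one mono value per layer; the subtraction/clipping structure is assumed from the quote, not taken from the schematic):

    # Rough sketch of the blending described above (one mono value per layer,
    # layers ordered bottom -> top). Structure assumed from the quote.
    def color_chords_mix(layers, opacity):
        out = 0.0
        for i, src in enumerate(layers):
            above = sum(layers[i + 1:])        # sources in the channels layered above
            shaped = src - opacity * above     # at 100% opacity, upper layers punch out lower ones
            out += max(shaped, 0.0)            # 0V clip before the RGB gain controls
        return out

    print(color_chords_mix([0.6, 0.8], opacity=0.0))  # additive stack: 1.4
    print(color_chords_mix([0.6, 0.8], opacity=1.0))  # alpha-like: max(0.6 - 0.8, 0) + 0.8 = 0.8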

Re: Mapper:
   … In the pre-processor case, it allows you to define a background image in Hue/Saturation/Brightness colorspace and then layer RGB elements on top of it. This would be ideal in cases where you want the dynamics and characteristic look of hue/saturation modulation, but want a little more precise control over layered elements on top. So good for compositing different things that you don’t necessarily want to all “melt together” into the same shape.
   In the post-processor case, Mapper converts your Color Chords’ RGB controls into HSY controls instead. When you are adding multiple things together in a hue-based colorspace, they tend to all melt together and form transitions between different hues as you begin adding and layering. So it would be good for a case where you’re trying to form a composition that looks more like one big complex organism rather than layered or overlapping elements.
   A third way would be to patch Color Chords into channel A of the Cortex compositor and Mapper into channel B, and use Visual Cortex to mix the RGB outputs of both modules with each other. Especially interesting if you’re feeding Mapper and Color Chords with the same mult’ed elements – lets you crossfade between two wildly different variations of the same composition.
(CKB, Interesting techniques for Mapper)

Re: Failed startup, first production batch
In the very first production run of this module, a troublesome -5V regulator was installed that causes a problem with the module failing to start up on powerup. It needs to be replaced with the right part, but sometimes the issue is fixed with a quick power cycle of your case. If it’s a recurring problem, definitely submit a repair request on our website and we’ll get that fixed for you.
(CKB, Color Chords 0% Opacity rationale)

Curtain
Doorway

   There are many ways to use this module. Electrically, it is a voltage controlled high gain amplifier with clipping and offset, which controls a voltage controlled crossfader. The outline function also turns it into a variant of a full wave rectifier.
    In video/broadcast terms, it is a key generator and a monochromatic keyer (it has a single channel crossfader, not an RGB crossfader.) When fed ramps and gradients, this makes it a wipe generator, where the threshold controls the wipe transition like on a video mixer or console. If fed luma video as the source, it is a luma key generator.
   In synthesis terms, it is a waveshaper and crossfader. It can change the shape of input waveforms in various ways, and then use those waveforms to crossfade between two sources.
   In the system use case, typically you send some waveforms or mixes of them to this, and use it to generate a hard edged shape or key pattern. That pattern can then fade between two other patterns, so it lets you quickly build up complex arrangements. You then either mix those arrangements directly to RGB video (by patching them to color inputs or mixes) or use them as modulation for compositors or colorizer offsets.
(CKB, How Is Doorway Used?)
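
Editorial note: a minimal sketch of the chain described above, assuming a simple gain/threshold/clip key driving a linear crossfade (the gain and threshold values are illustrative, not calibrated to the hardware):

    # Minimal sketch of the key-generator-into-crossfader chain described above.
    def clip01(x):
        return min(max(x, 0.0), 1.0)

    def key(source, threshold, gain=20.0):
        """High-gain amplifier with offset and clipping -> key signal."""
        return clip01(gain * (source - threshold))

    def doorway(source, background, foreground, threshold, gain=20.0):
        """Crossfade from background to foreground wherever the key is high."""
        k = key(source, threshold, gain)
        return (1.0 - k) * background + k * foreground

    # A horizontal ramp keyed at 0.5 behaves like a wipe across the frame.
    ramp = [i / 9 for i in range(10)]
    print([round(doorway(x, 0.0, 1.0, threshold=0.5), 2) for x in ramp])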

… Hands down favorite for getting multiples of is Doorway. That module is full of tricks – the background/foreground inputs being used to join together keys is so powerful, and the key to unlocking complex shapes and figures outside of the standard quadrilaterals.
(CKB, https://community.lzxindustries.net/t/doubling-up-on-modules)

Liquid TV
Mapper

Mapper loves it if you pre-modulate the amplitude of whatever you’re feeding to hue – the Bridge fader section and Doorway can be great for this!
(CKB, Interesting techniques for Mapper)

Re: Color Chords
   … In the pre-processor case, it allows you to define a background image in Hue/Saturation/Brightness colorspace and then layer RGB elements on top of it. This would be ideal in cases where you want the dynamics and characteristic look of hue/saturation modulation, but want a little more precise control over layered elements on top. So good for compositing different things that you don’t necessarily want to all “melt together” into the same shape.
   In the post-processor case, Mapper converts your Color Chords’ RGB controls into HSY controls instead. When you are adding multiple things together in a hue-based colorspace, they tend to all melt together and form transitions between different hues as you begin adding and layering. So it would be good for a case where you’re trying to form a composition that looks more like one big complex organism rather than layered or overlapping elements.
    A third way would be to patch Color Chords into channel A of the Cortex compositor and Mapper into channel B, and use Visual Cortex to mix the RGB outputs of both modules with each other. Especially interesting if you’re feeding Mapper and Color Chords with the same mult’ed elements – lets you crossfade between two wildly different variations of the same composition.
(CKB, Interesting techniques for Mapper)

Marble Index

Re: Cortex Colorizer VC Reproduction:
Passage -> Marble Index is the most literal route to go here. This gives you a brightness/contrast control over each color, and the ability to send a single input to all 3x outputs, each with their own bias/gain controls, the ability to sum in another voltage (i.e. brightness modulation) on each color separately, and then advanced composites/blending, fader controls (via Marble Index’s compositors.)
(CKB, Colorizer on Visual Cortex)

Re: Cortex:
I would say that Marble Index + Visual Cortex would be a more natural expansion for compositing/keying now that Marble Index is out, unless you needed the dual video output encoders that double Cortex would give you.
(CKB, Doubling Up On Modules)

   Marble Index is more about voltage controlled mixing between sets of RGB inputs. On top of this, Marble Index has a LOT more circuitry related to complex blending modes – so it’s kind of more like an analog computer designed to achieve compositor math typically found in computer graphics. I’m not sure if anything like it really exists… trying to do something totally new. You can think of Marble Index more like a video mixer, except instead of A/B banks you have 3 layers, and instead of an A/B fade control, you have 2 opacity controls.
   You can read more about the math behind blending modes here: https://en.wikipedia.org/wiki/Blend_modes Most of the modes described on this page can be achieved (with analog circuitry) using different combinations of modes on Marble Index.
   Likewise, most of the Porter-Duff compositing methodologies are achievable using a pair of Doorway keyers to manipulate alpha channels (which then get passed along to control the Opacity CV inputs on Marble Index.) http://ssp.impulsetrain.com/porterduff.html
   Marble Index […] is really the final piece of ‘glue’ between all of the color mixer/manipulation modules we’ve designed for the system so far, and can function in harmony with Passage, Mapper, Color Chords, Cortex compositor, etc to enable a huge variety of different workflows that have complex hierarchies and full expandability without any “end points.” Cortex is designed to be the system’s primary RGB/color module in under 3U systems, but as systems expand, the color complexity can be relegated to other modules and Cortex then becomes like a final output processor, where it’s easy to do ‘fade out to black/color’ or output effects (like Negative, really handy for external feedback loops).
(CKB, Alpha compositing with Marble Index)
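
Editorial note: the blend modes and the Porter-Duff “over” operator referenced above reduce to simple per-pixel math. A quick sketch with values normalized to 0..1 (textbook formulas, not a model of the analog circuitry):

    # Per-pixel math behind a few of the blend/compositing modes referenced above
    # (values normalized to 0..1).
    def multiply(a, b):
        return a * b

    def screen(a, b):
        return 1.0 - (1.0 - a) * (1.0 - b)

    def over(fg, fg_alpha, bg):
        """Porter-Duff 'over': foreground laid over background with alpha."""
        return fg * fg_alpha + bg * (1.0 - fg_alpha)

    print(multiply(0.5, 0.5), screen(0.5, 0.5), over(1.0, 0.25, 0.0))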

Navigator

Re: Spinning shapes
   The missing ingredient [to spin ramp shapes from Cortex with Navigator] is an H/V mirroring (saw-to-triangle) circuit. If you want to rotate a 2D shape, you need to rotate the ramp waveforms, THEN perform H/V mirroring/saw-to-triangle conversion on both ramps, and then you would be ready to mix or combine them in some way to create a 2D shape key.
   Navigator includes the facility for mirroring, but its mirrors are placed BEFORE rotation, in the signal path – this is great for quadrilateral/bilateral mirroring when feeding Shapechanger, but does little if you want to create a rotated shape without the Navigator+Shapechanger combo.
   To make myself doubly clear, (1) The Navigator+Shapechanger combo gives full 2D shape generation/rotation (in the spinning the shape clockwise/ccw sense.) (2) If you want to do this with Navigator alone, you need a way to (a) mirror your HV ramps post-Navigator, (b) mix/combine them in some way, (c) generate a key of that mix, such as with Doorway or Cortex. FYI, Staircase is five mirror circuits in series! So a pair of them could mirror H/V and then keep going…
   In fact, Navigator+2xStaircase makes for a pretty interesting pattern/geometry voice in its own right. These kind of relationships are what the XY series modules are all about!
   When imagining patches, it may also be helpful to think of Navigator as positioning/rotating a camera, rather than the shape itself.
   … Everything derived from the ramps which have been rotated will have their own geometries rotated identically. Say you have a Navigator with HV outs multed to 2x Shapechangers. One creates an ellipse, another a squiggle. When the Navigator rotates the ramps, both shapes will rotate, as if a camera pointed at them is spinning.
   For me, a video synthesizer “voice” consists of 2 components: one for spatial parameters (position and rotation) and one for geometry (width/height, curve). Think of it like the VCF-to-VCA relationship in a monosynth. Usually the VCF is before the VCA, but you can also do it the other way around to create different and interesting results. Navigator+Shapechanger present to you a holistic approach to the spatial+geometry processor concept, but there are other ways to patch up the essential pieces of ramp processing based shape generation.
(CKB, How to spin a shape)
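
Editorial note: a toy walk-through of the recipe above (rotate centered ramps, mirror them saw-to-triangle, combine, then key), using standard 2D rotation and absolute-value mirroring. The grid size, threshold, and the max() combine are illustrative choices, not a model of any particular module:

    import math

    # Sketch of the recipe above: rotate centered H/V ramps, mirror them
    # (saw -> triangle), combine, then key. Ramps normalized to -0.5..0.5.
    def rotate(h, v, angle):
        """Standard 2D rotation of the ramp pair."""
        return (h * math.cos(angle) - v * math.sin(angle),
                h * math.sin(angle) + v * math.cos(angle))

    def mirror(x):
        """Saw-to-triangle style fold about the ramp's center."""
        return abs(x)

    def shape_key(h, v, angle, threshold=0.25):
        hr, vr = rotate(h, v, angle)
        combined = max(mirror(hr), mirror(vr))   # one way to mix the mirrored ramps
        return 1 if combined < threshold else 0  # hard key -> a rotated square

    # H and V would normally come from ramp generators; a small grid stands in here.
    for y in range(9):
        print("".join("#" if shape_key((x - 4) / 8, (y - 4) / 8, math.radians(30)) else "."
                      for x in range(9)))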

Navigator rotates the XY dimensions of the “scene” that the shape is created within.
(CKB, How to spin a shape)

Passage

General Info
Passage (which is really just an evolution from the Visionary series “Triple Video Processor”) is another good one to double up on. It’s a great way to get anything into RGB and also just functionally dense for utility mixing.
(CKB, Doubling Up On Modules)
https://github.com/lzxindustries/documentation/tree/master/Navigator
Don’t forget that Passage is an RGB processor/mixer too. You can feed that into the Color Chords or Cortex Compositor, or to post-process the outputs of Color Chords/Mapper to mix in additional RGB signals, modulate RGB, etc.
(CKB, Interesting techniques for Mapper)

Re: Cortex Colorizer VC reproduction
Passage -> Marble Index is the most literal route to go here. This gives you a brightness/contrast control over each color, and the ability to send a single input to all 3x outputs, each with their own bias/gain controls, the ability to sum in another voltage (i.e. brightness modulation) on each color separately, and then advanced composites/blending, fader controls (via Marble Index’s compositors.)
(CKB, Colorizer on Visual Cortex)

Pendulum
Polar Fringe

Re: Joystick damping grease for looseness
Alright folks, the solution I’m suggesting here is using Nyogel 767a damping grease. A small swath along the inside of both tracks of the joystick has improved tactile performance dramatically. Smooth travel and solid hold.
(CKB, Jonah, Polar Fringe loose joystick)

Prismatic Ray

General Info
If you modulate Prismatic Ray’s Pedestal control with audio, you will get waveform generation as well (Put Prismatic Ray in fastest frequency mode and up the gain a bit.)
(CKB, Question, how to get high frequencies for horizontal frequency domain)

Re: Highest Freq vs. Video Wave Generator
… Prismatic Ray has a lower max frequency range. The priority was to set the minimum point of the sweep at about the same point as H ramp and V ramp, and then maximize the sweep from there. The reason VWG got much higher was the presence of a frequency doubler on the triangle core’s output. Staircase has 5 such circuits and can work on a whole 2D pattern, so if you have one of those, try post processing the Prismatic Ray outs with it to get into some tiny details kind of territory!
(CKB, Prismatic Ray frequency range)
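
Editorial note: one way to picture the doubler/mirror stage mentioned above is folding a centered triangle about its midpoint and rescaling, which doubles the cycle count. A numeric sketch only, not the actual circuit:

    # Fold a centered triangle about its midpoint and rescale: the waveform
    # repeats twice per original period. Numeric sketch only.
    def triangle(phase):
        """Unit triangle wave, 0..1 output over phase 0..1."""
        p = phase % 1.0
        return 2.0 * p if p < 0.5 else 2.0 * (1.0 - p)

    def doubled(phase):
        """Full-wave 'mirror' of the centered triangle -> twice the frequency."""
        t = triangle(phase)
        return abs(2.0 * t - 1.0)   # fold about 0.5, rescale back to 0..1

    samples = [i / 16 for i in range(17)]
    print([round(triangle(p), 2) for p in samples])
    print([round(doubled(p), 2) for p in samples])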

Re: Vertical sync unstable via rear panel
Change the Gate/Trigger switch to Gate mode. It’s best to use the Trigger mode only when sync’ing from something like a video camera or other source, as it shortens the pulse width of the sync reset.
(CKB, Prismatic Ray - Rear panel RCA V-sync twitchy)

Sensory Translator

General info
Best places to patch those Sensory Translator outputs: key threshold CV ins, any amplitude/VCA/fade/gain function CV, any CV input on Staircase. And for more complexity, mix them with something else using Passage before doing any of the above (negative bias is good so that something sits “below black” until the frequency band envelope is active). Try patching the frequency band outs so that they “reveal” things inside your patch.
(CKB, Interesting techniques for Mapper)

Re: Comparison, vs. Diver:
   Sensory Translator vs. Diver: They are both tools that are great for audio visualization, but are entirely different functionally.
   Diver is more like a sampler/buffer that can transpose audio waveforms across the dimensions (horizontal and vertical) of the video image. Think of it like a 1D to 2D converter, or a video oscillator that can capture the waveform of an audio input. So this would be used to actually take the shape of an audio waveform and make it visually viewable, freezable, and 2-dimensional. In context of the whole system, that’s quite powerful because all the analog shape modules (Staircase, Navigator, etc) work as video rate waveshaping processors for horizontal and vertical waveforms, meaning Diver can feed audio into a patch in such a way that all your resulting pattern generation is based on the original audio itself. Say you were using this with a track, it would work best if sent something simple – like a bassline or the kick track, or whatever you want to add specific visual emphasis to.
   Sensory Translator is an analog FFT decoder and envelope follower, essentially. This splits audio into 5 different frequency bands and then applies low frequency amplitude following to each. So the results could be patched to different destinations that will then animate in time with the presence of each frequency component. So this is good for sending in a full mix or track, and splitting it up so that you can have different frequencies affect parts of your video patch differently.
   The two would play well together because you could use the envelope band outputs from Sensory Translator to do things like modulate the phase inputs on Diver. Audio-visualization works best in my opinion, when you are doing at least 2 different things that interact, so these two modules are a powerful combination.
(CKB, Should I start out with Visual Cortex, Vidiot or the DIY Cadet modules?)
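
Editorial note: conceptually, the band-split-plus-envelope-follow behavior described above can be sketched as band filters followed by rectify-and-smooth stages. The toy version below uses crude one-pole filters and made-up band edges purely to illustrate the signal flow, not the module’s actual filter design:

    import math

    # Toy illustration of band-split + envelope-follow: crude one-pole filters
    # and rectify-then-smooth followers. Band edges/coefficients are made up.
    def one_pole_lowpass(x, cutoff_hz, sr):
        a = math.exp(-2.0 * math.pi * cutoff_hz / sr)
        y, out = 0.0, []
        for s in x:
            y = (1.0 - a) * s + a * y
            out.append(y)
        return out

    def crude_bandpass(x, lo_hz, hi_hz, sr):
        low = one_pole_lowpass(x, hi_hz, sr)          # keep content below the upper edge
        lower = one_pole_lowpass(low, lo_hz, sr)      # content below the lower edge
        return [a - b for a, b in zip(low, lower)]    # difference ~= band between the edges

    def envelope(x, sr, smooth_hz=10.0):
        return one_pole_lowpass([abs(s) for s in x], smooth_hz, sr)  # rectify, then smooth

    sr = 8000
    audio = [0.5 * math.sin(2 * math.pi * 80 * n / sr) +
             0.5 * math.sin(2 * math.pi * 2000 * n / sr) for n in range(sr)]
    for name, (lo, hi) in {"low": (20, 200), "mid": (200, 1200), "high": (1200, 3500)}.items():
        env = envelope(crude_bandpass(audio, lo, hi, sr), sr)
        print(name, round(env[-1], 3))   # each band's follower tracks its own energy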

Re: Bridge
Oh and with Sensory Translator, it works great if you use an envelope output from ST into the CV input on the Bridge fader – then patch video rate/shape elements into the A and B inputs for a crossfade.
(CKB, Interesting techniques for Mapper)

Shapechanger

Re: Spinning shapes
(1) The Navigator+Shapechanger combo gives full 2D shape generation/rotation (in the spinning the shape clockwise/ccw sense.) (2) If you want to do this with Navigator alone, you need a way to (a) mirror your HV ramps post-Navigator, (b) mix/combine them in some way, (c) generate a key of that mix, such as with Doorway or Cortex. FYI, Staircase is five mirror circuits in series! So a pair of them could mirror H/V and then keep going…
(CKB, How to spin a shape)

Re: Gamma circuit
My gamma circuit (as implemented in Shapechanger) uses two LT1256. One as a power of 2 multiplier (expo curve), and the other to crossfade between expo and log curves. Log curve is created by subtracting the expo curve from the input signal at a gain of 2X (using an opamp.) This design allows for a mathematically accurate sweep (with video rate CV) between expo, linear, and log curves.
(CKB, Future Cadet Modules?)
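
Editorial note: with the input normalized to 0..1, the math above works out to expo = x², “log” = 2x − x², and a crossfade between them that passes exactly through linear at the midpoint. A quick numeric check (the LT1256 implementation details are not modeled):

    # Numeric check of the curve family described above (input x normalized 0..1).
    def gamma_sweep(x, k):
        """k=0 -> expo curve, k=0.5 -> linear, k=1 -> log-like curve."""
        expo = x * x
        log_like = 2.0 * x - expo        # input at 2x gain, minus the expo curve
        return (1.0 - k) * expo + k * log_like

    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(x, [round(gamma_sweep(x, k), 3) for k in (0.0, 0.5, 1.0)])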

Re: Memory Palace
Shapechanger went through seven prototypes! It’s one of my favorites. And being able to map textures to its output dimensions with Memory Palace is an example of what’s so exciting to me about the hybrid analog+digital contexts the Orion series is adding to the system.
(FB, https://www.facebook.com/groups/lzxindustries/permalink/1143338379170814/)

Staircase

General info
I would personally recommend Staircase as the single module to give you the most geometric complexity for your budget possible out of a minimal rig.
(CKB, Getting more shapes from Visual Cortex)
 
If you’ve got a small system, Staircase is the one to double up on. That module definitely gives you the most geometric density for your dollar and HP space, and multiples simply expand upon each other.
(CKB, Doubling Up On Modules)

Visual Cortex

Re: Vector scanning with VC
Visual Cortex alone is enough to display video on a vector monitor! You need H and V ramp waveforms (for X and Y), and a DC coupled luma signal (for Z.) Cortex can provide all these ingredients, but the fun really starts when you start processing the signals.
(CKB, Visual Cortex alone enough to display video on a vector monitor?)

Re: Cortex Colorizer VC reproduction
    Passage -> Marble Index is the most literal route to go here. This gives you a brightness/contrast control over each color, and the ability to send a single input to all 3x outputs, each with their own bias/gain controls, the ability to sum in another voltage (i.e. brightness modulation) on each color separately, and then advanced composites/blending, fader controls (via Marble Index’s compositors).
    The ‘Spectrum’ mode is something special that only appears in Cortex. It is a waveshaping based Colorizer. You could think of Staircase as the more advanced version of it, if feeding the Staircase outputs into RGB.
(CKB, Colorizer on Visual Cortex)

Re: Sync generator creative use
    Great question! The short answer: not really. But you still want to have them available. The syncs themselves are never viewable, because they are active only during the portions of the video signal that aren’t displayed.
    Some exceptions and experiments:

  • You can run the syncs thru audio filters to get blurs and streaks that leak into the display area.
  • You can patch the syncs to audio oscillators to see if they behave in a stable manner when used as synchronized video oscillators.
  • With logic modules, such as the LZX Castle series, syncs are often used as the clock or reset inputs to counters, and come into more practical use there.

    The Frame clock output can be used as a modulation source to force an interlaced ‘double image’ type behavior! You could also send it to a clock divider and use the divisions for animation timing that is quantized to the frame rate count.
(CKB, Creative ways to use the sync generator section of Visual Cortex)

Re: Marble Index
I would say that Marble Index + Visual Cortex would be a more natural expansion for compositing/keying now that Marble Index is out, unless you needed the dual video output encoders that double Cortex would give you.
(CKB, Doubling Up On Modules)

Re: with Memory Palace & TBC2
Visual Cortex is analog RGB video mixer/final output, Memory Palace is digital framestore effects and preview output, and TBC2 gets 2x video sources into the system (in addition to the one component source on Visual Cortex). There’s something to be said for the condensed workflow of something like the V4 as outboard gear, but there’s also a universe of exploration and textural quality to explore that you’d get from the modular workflow. If you then add Escher Sketch to Cortex+TBC2+Palace, it’s essentially a modular Fairlight CVI-inspired instrument. A specific arrangement we’ve been building towards for a very long time, as the CVI really embodies the idea of a conceptual “Video instrument” that we’ve held close to our design goals since the beginning.
(FB)

 
See next posts for:

  • Post#2: Orion Series Modules, Vidiot
  • Post#3: Information by Topic

 


#2

-----------------Information by Module:----------------------


Orion Series Modules
Diver

Re: Diver vs. Sensory Translator:
   Sensory Translator vs. Diver: They are both tools that are great for audio-visualization, but are entirely different functionally.
   Diver is more like a sampler/buffer that can transpose audio waveforms across the dimensions (horizontal and vertical) of the video image. Think of it like a 1D to 2D converter, or a video oscillator that can capture the waveform of an audio input. So this would be used to actually take the shape of an audio waveform and make it visually viewable, freezable, and 2-dimensional. In context of the whole system, that’s quite powerful because all the analog shape modules (Staircase, Navigator, etc) work as video rate waveshaping processors for horizontal and vertical waveforms, meaning Diver can feed audio into a patch in such a way that all your resulting pattern generation is based on the original audio itself. Say you were using this with a track, it would work best if sent something simple – like a bassline or the kick track, or whatever you want to add specific visual emphasis to.
   Sensory Translator is an analog FFT decoder and envelope follower, essentially. This splits audio into 5 different frequency bands and then applies low frequency amplitude following to each. So the results could be patched to different destinations that will then animate in time with the presence of each frequency component. So this is good for sending in a full mix or track, and splitting it up so that you can have different frequencies affect parts of your video patch differently.
   The two would play well together because you could use the envelope band outputs from Sensory Translator to do things like modulate the phase inputs on Diver. Audio-visualization works best in my opinion, when you are doing at least 2 different things that interact, so these two modules are a powerful combination.
(CKB, Should I start out with Visual Cortex, Vidiot or the DIY Cadet modules?)

Diver is coming out soon, and is an alternate concept [for audio visualization]. It samples audio rate signals but plays them back at video rate speeds (525 or 625 times faster!) This module can be thought of as an audio to horizontal/vertical converter.
(CKB, Question, how to get high frequencies for horizontal frequency domain)

Escher Sketch

   Escher Sketch is a control voltage generator module, similar to a joystick or a trackpad. Its function is to produce control voltages based on XY positions, Pressure, Velocity (movement speed), and gate voltages derived from these parameters, or manually triggered. It is the first ingredient in a video drawing/painting workflow.
   
A video painting workflow needs:
(1) Position Control aka “The Paintbrush”. This is Escher Sketch, but you could also control position with LFOs, joysticks, anything that generates CV.
(2) Shape/Brush Generator aka “The Paint/Texture”. This can be a function of frame memory (i.e., Memory Palace can work this way) but you can also use analog textures, patterns, and other shape generators (like Navigator + Shapechanger) to generate brush shapes.
(3) Frame Memory aka “The Canvas”. This would be a module such as Memory Palace, which has keying based input control.
   
A basic patch for video painting:
– Escher Sketch X Out into Memory Palace X VC In
– Escher Sketch Y Out into Memory Palace Y VC In
– Escher Sketch Pressure Out into Memory Palace Opacity VC In
– If desired, analog shape/texture/key pattern into Memory Palace Alpha In
– The pressure of Escher Sketch now controls the transparency of lines being drawn into the buffer.
– The “Zoom” on Memory Palace controls the size of the brush as applied to the dimensions of the screen.
– The Hue/Saturation/Brightness on Memory Palace control the color of the brush shape.
   
And here are the answers to your questions:
   If combined with Memory Palace (for frame memory and positioning) you could use the Vidiot to generate analog textures that could be used as brush shapes. Vidiot could also control the color of the brush using its RGB outs.
   You need frame memory (Memory Palace) in order to do video painting. You can use the Escher Sketch XY outputs to control any voltage controlled parameter in the system, but Memory Palace is the paper onto which images are drawn and stay on the screen.
   There is a constrain mode that will snap your movements to 45 degree angle increments while holding the button. If you want to draw freehand geometric shapes, that can be done by combining it with Memory Palace’s functions for mirror, tiling, etc. Memory Palace has a mode that applies the effects only to the incoming image stream. So you can for example mirror your painting movements without changing what’s already in the buffer.
   I’m going to need more information to answer this question. Escher Sketch itself is quite simple in concept, all the complexity and different drawing modes/limitations come into play when you combine it with Memory Palace and other modules in a video painting application patch.
   Memory Palace is currently the only option for frame memory and has been designed specifically with this purpose in mind. We’ve discussed a smaller buffer module, but it will likely be grayscale only and optimized for in-patch-use rather than as the primary interface for a video painting instrument.
(CKB, What can be used with the Escher Sketch and how does it function without a screen memory module?)
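
Editorial note: a toy model of the painting patch described above, where pressure scales the opacity at which each brush stamp is blended over what is already in frame memory. Names and behavior are illustrative, not the Memory Palace firmware:

    # Toy model of the painting patch above: pressure scales the opacity at
    # which each brush stamp is blended over the existing frame memory.
    def paint(canvas, brush_alpha, brush_color, pressure):
        """One 'frame' of drawing into a 1D strip of pixels."""
        return [(1.0 - a * pressure) * c + (a * pressure) * brush_color
                for c, a in zip(canvas, brush_alpha)]

    canvas = [0.0] * 8                       # empty frame memory
    stamp  = [0, 0, 1, 1, 1, 0, 0, 0]        # analog shape patched into the Alpha input
    canvas = paint(canvas, stamp, brush_color=0.9, pressure=0.5)   # light touch
    canvas = paint(canvas, stamp, brush_color=0.9, pressure=1.0)   # full pressure
    print([round(p, 2) for p in canvas])     # stamped pixels accumulate toward the brush color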

Memory Palace

General info
   You still need connection to EuroRack rails. Using the external 12V connection is for cases that might otherwise be strained on power load – when used, the EuroRack power draw drops from +450mA/-40mA to +/-40mA. If you have a EuroRack power supply that takes a 12VDC brick (like Malekko Power) you can use the same brick to power the Memory Palace and your Euro supply (Vessel makes this easy; with its internal DC barrel connector, you can just use a 1 ft 2.1mm barrel cable, like those used on guitar pedalboards.) So essentially it’s the same deal as with Liquid TV; a way to offload the excess current taken up by the displays and digital circuitry.
(CKB, Memory Palace Specs & Feature List)
 
   12V 1A are the recommended specs, 2.1mm barrel plug (center positive.) It’s a common power supply for cameras, LED lights, small displays, CCTV, etc. I’d just pick up something with decent reviews and a reputable brand. With the Vessel case, we ship it with an 8.34A 12V power brick (plenty of extra current on top of what the euro supply draws) and there’s an internal DC jack for chaining power to modules like Memory Palace and Liquid TV. You could do something similar with a DIY install if the PSU in your case supports 12V input.
(FB)
 
   All the analog components – mostly video op-amp buffers and low frequency ADCs – are running off +/-5V rails generated by ultra high PSRR LDOs straight from the EuroRack +/-12V rails (which are hopefully pretty clean already.) In addition to that, there are digital 5V and 3.3V switching supplies derived either from the EuroRack +12V or the External +12V DC Power Supply. These two supplies feed the Zynq system-on-module and video encoder/decoder circuits, LED display controller, OLED display controller, DVI transmission interface, etc. Within some of these circuits, there are additional LDOs for local 1.8V supplies. Definitely the most complex power supply structure we’ve had to design for an LZX product so far.
(FB)
 
Just a note about Memory Palace’s outputs, since I think some are curious and in some cases confused. The DVI output is a raw data dump of the contents of video frame memory, unmodified and preserved. We felt like including it because that is, in and of itself, interesting. The analog RGB and composite video outputs use a video encoder chip from Analog Devices which is probably the nicest chip on the market for this purpose, including oversampling and filter shaping of the raw data to high end DACs. An analogy would be that the DVI out is like playing your guitar into a nice clean DI box, whereas the analog outs are like playing it through a high end tube amplifier. Both have uses and different tonal flavors, but the analog RGB outs should probably be considered the primary outputs.
(FB)
 
It’s best to think of Memory Palace as an expansion module to the modular video synth system that starts with Visual Cortex as the core module. That’s the intended usage of the design. There are ways to use it standalone (just feed oscillators and audio into the RGB inputs, or get a couple video oscillators like Prismatic Ray as your sources, or use stills) or with monochromatic video sources (Cadet 1+Cadet 3, Vidiot, etc), but to get external video in, you’ll want Visual Cortex or TBC2.
   That said, this stuff is meant for experimentation… you can patch your external RGB or YPbPr video source directly in with some adapters. You just won’t be properly DC restored or synchronized. You could have fun experimenting though.
   To be clear, we don’t have our format of 1V patchable signals because we’re trying to be proprietary or something – it’s so that you save money and have more expandability in the long run! If every module had external video inputs, they’d all be a lot more expensive.
(CKB, Composite to RGB conversion?)

Re: Cortex pairing
I would say the best way to use Memory Palace with Cortex is inside the Cortex compositor. You could use it to key your original source (dry) over your modified image (wet, from Memory Palace out.) You can also use it to layer analog gradients and solarization back in with your Memory Palace output (looks wonderful!) Or to introduce a secondary feedback loop around Memory Palace using the analog compositor.
(CKB, Connecting Visual Cortex to Memory Palace)

Re: Comparison, vs. Cortex/Vidiot
   Memory Palace is not meant to be used as a standalone unit – generally when we do EuroRack modules, the design decisions relate to the context of a larger system (we have a pretty expansive vision of how it all fits together.) That said, you could do an effects processing system with just the TBC2 + Memory Palace that would be awesome and do a wide range of effects, especially if you have lots of external modulation options. This may be the best approach if processing and a small initial system is your goal. You will need TBC2 to get external video into the system’s patchable workflow, and then Memory Palace provides all the effects and dedicated video outs.
   If you’re into the more analog processing techniques like colorization, solarization, RGB and wipe offsets, etc Visual Cortex by itself goes a long way as a processor as well.
(CKB, Should I start out with Visual Cortex, Vidiot or the DIY Cadet modules?)

Re: Visual Cortex+TBC2
Visual Cortex is analog RGB video mixer/final output, Memory Palace is digital framestore effects and preview output, and TBC2 gets 2x video sources into the system (in addition to the one component source on Visual Cortex). There’s something to be said for the condensed workflow of something like the V4 as outboard gear, but there’s also a universe of exploration and textural quality to explore that you’d get from the modular workflow. If you then add Escher Sketch to Cortex+TBC2+Palace, it’s essentially a modular Fairlight CVI-inspired instrument. A specific arrangement we’ve been building towards for a very long time, as the CVI really embodies the idea of a conceptual “Video instrument” that we’ve held close to our design goals since the beginning.
(FB)

Re: Alpha channel
   It’s analogous to the Opacity CV inputs, not the background input. In a frame buffer you can think of it like a linear read/write mask. It controls the transparency at which the incoming video feed or still is mixed with what’s already in the frame storage. Or say, in the video painter use case, if you wanted to use an analog shape you’ve synthesized as the “brush”, you’d patch the shape into Alpha. The color of the brush would then be patched into RGB.
   Or if you’re familiar with Shapechanger, you could feed the Stencil input into Alpha and the Gradient Out into some color channel inputs.
   Why this is so exciting: In most frame buffer/video mixer feedback effects we’re used to seeing, we’re seeing a luma or chroma keyer in the path. This typically means that brighter colors are written into the buffer and darker colors are not (with the luma key). Or a color range is written and others are not (with the chroma key.) With an ALPHA input, it’s like a 4th color channel just for transparency! So it doesn’t care what colors you’re writing into the buffer, it allows you to write darker and brighter or multi-spectrum colors however you like. In the context of your existing video synthesizers, this is powerful stuff and results in feedback geometries and video paintings that show a much greater dynamic range.
   Expedition series is all about creating complex linear analog gradient patterns. Alpha is one of the most exciting places to patch them.
(FB, https://www.facebook.com/photo.php?fbid=10218051305459388&set=gm.1134642673373718&type=3&theater&ifg=1)

TBC2

General info
Part of the hardware behind it is to be able to support non-standard input formats and add more via firmware updates in the future. I definitely can try to make sure we have full 240p support (NES, SNES) at launch, as I’ve got those here to test with.
(FB)

Re: with Memory Palace + Visual Cortex
Visual Cortex is analog RGB video mixer/final output, Memory Palace is digital framestore effects and preview output, and TBC2 gets 2x video sources into the system (in addition to the one component source on Visual Cortex). There’s something to be said for the condensed workflow of something like the V4 as outboard gear, but there’s also a universe of exploration and textural quality to explore that you’d get from the modular workflow. If you then add Escher Sketch to Cortex+TBC2+Palace, it’s essentially a modular Fairlight CVI-inspired instrument. A specific arrangement we’ve been building towards for a very long time, as the CVI really embodies the idea of a conceptual “Video instrument” that we’ve held close to our design goals since the beginning.
(FB)

 

Vidiot Modular Standalone

General info
The PSU has a set of 5 international adapter plugs with it, and the Australian plug is included in the set. So it’s fully international.
(FB)
 
   Using [Noise] directly patched to modulate the keyer thresholds can create a nice textured look. I really love how a little bit mixed into the luma processor looks! But as far as the visual effect goes, generating snow/static is exactly what it does… it’s not for all patches.
   You might get some interesting random fluctuation if you patch [Noise] to the audio input on the rear using the envelope follower out. Even though it’s a stereo input jack, it shouldn’t be a problem.
(FB)

Re: Sync with Cadet1
… Vidiot also contains a dedicated sync output jack. This jack loops the input sync thru to it if the loopthru switch is ON. But if the loopthru is set to OFF, it outputs Vidiot’s native composite sync.
So either…
  • Vidiot Sync (or Composite or Luma) Out to Cadet 1 Sync In
  • Cadet 2 Video Out to Vidiot Sync/Luma In
Option #1 (Vidiot sync to Cadet sync) uses the dedicated sync connections and allows you to easily slave the entire rig to another source (via Vidiot’s sync/luma input). In this rig, Vidiot replaces the need for Cadet 2 and Cadet 3, which is nice.
… Vidiot’s sync generator, color encoder (output from colorizer), and VCOs all come from the Cadet schematics with only a few changes.
(CKB, Vidiot & Cadet sync)

Editorial note:
A very small number of units may need an internal cable flipped if normal troubleshooting does not resolve display issues. It has also been noted that our unit has the “Loop” switch installed backwards.


See next post for:

  • Information by Topic

#3

See previous posts for Info by Module


-------------------Information by Topic:------------------------


Audio

If you modulate Prismatic Ray’s Pedestal control with audio, you will get waveform generation as well (Put Prismatic Ray in fastest frequency mode and up the gain a bit.)
(CKB, Question, how to get high frequencies for horizontal frequency domain)

Sensory Translator vs. Diver
They are both tools that are great for audio visualization, but are entirely different functionally.
    Diver is more like a sampler/buffer that can transpose audio waveforms across the dimensions (horizontal and vertical) of the video image. Think of it like a 1D to 2D converter, or a video oscillator that can capture the waveform of an audio input. So this would be used to actually take the shape of an audio waveform and make it visually viewable, freezable, and 2-dimensional. In context of the whole system, that’s quite powerful because all the analog shape modules (Staircase, Navigator, etc) work as video rate waveshaping processors for horizontal and vertical waveforms, meaning Diver can feed audio into a patch in such a way that all your resulting pattern generation is based on the original audio itself. Say you were using this with a track, it would work best if sent something simple – like a bassline or the kick track, or whatever you want to add specific visual emphasis to.
    Sensory Translator is an analog FFT decoder and envelope follower, essentially. This splits audio into 5 different frequency bands and then applies low frequency amplitude following to each. So the results could be patched to different destinations that will then animate in time with the presence of each frequency component. So this is good for sending in a full mix or track, and splitting it up so that you can have different frequencies affect parts of your video patch differently. The two would play well together because you could use the envelope band outputs from Sensory Translator to do things like modulate the phase inputs on Diver. Audio visualization works best, in my opinion, when you are doing at least 2 different things that interact, so these two modules are a powerful combination.
(CKB, Should I start out with Visual Cortex, Vidiot or the DIY Cadet modules?)

    

Capture, Display, Record, Output:
Re: 1 volt signal capture

    LZX 1V signals lack the elements required by video displays and capture interfaces, such as embedded video sync pulses, proper blanking and clipping of voltage levels below black and above white.
    In order to capture or display these signals as valid video, they must be patched into a video encoder module such as Visual Cortex’s Compositor/Output Encoder, Vidiot’s Colorizer, or the Cadet II RGB Encoder. The exception to this rule is the Liquid TV module, which allows you to view any LZX 1V signal directly on its display via its three preview channels. In the case of Vidiot, you could convert its RGB signals to Component video using the Visual Cortex module.
(CKB, How do I record LZX 1V patchable signals using video capture devices?)

Re: BMD Intensity Shuttle

… When connected to your computer you should be able to set up HDMI monitoring of the analog capture. Reliability as a live converter may be problematic. The Ambery.com device linked below is the best looking budget conscious dedicated upscaler we’ve found for Visual Cortex so far. http://www.ambery.com/dvihdtovgadv.html
(CKB)

… Avoid the USB version of the shuttle unless you can confirm your chipset. We have the Thunderbolt2 models and use them with a desktop PC and Thinkpad laptops. They are trouble free but sometimes we have to reset the format settings after plugging them in.
(CKB, Capture and recording interface recommendations)

Re: Converting and upscaling

I run two dedicated [Blackmagic Design] Analog-to-SDI -> [Blackmagic Design] UpDownCross converters into two of the TV Studio HD SDI inputs. Not cheap, but it lets me mix the output of the two Visual Cortex modules directly with HDMI/SDI sources before capture.
(FB)

Re: Projector use

Some projectors contain analog component or composite inputs, but they often use low quality analog-to-digital converters. For the best possible performance, we recommend a high quality Analog to HDMI upscaler. This will allow you to send a very high quality HDMI signal from your system to the projector or any display with HDMI input.
(CKB)

Re: SD card straight-to recording device

Blackmagic Video Assist and Blackmagic Video Assist 4K. You’ll need an Analog-to-SDI converter to go along with them, but they’re great.
(FB, https://www.facebook.com/groups/lzxindustries/?multi_permalinks=1133181870186465&notif_id=1549293013989835&notif_t=group_highlights)

    

Design (System, Philosophy, History)
LZX design inspiration

Visual Cortex is analog RGB video mixer/final output, Memory Palace is digital framestore effects and preview output, and TBC2 gets 2x video sources into the system (in addition to the one component source on Visual Cortex). There’s something to be said for the condensed workflow of something like the V4 as outboard gear, but there’s also a universe of exploration and textural quality to explore that you’d get from the modular workflow. If you then add Escher Sketch to Cortex+TBC2+Palace, it’s essentially a modular Fairlight CVI-inspired instrument. A specific arrangement we’ve been building towards for a very long time, as the CVI really embodies the idea of a conceptual “Video instrument” that we’ve held close to our design goals since the beginning.
(FB)

Expedition

A lot of the work on Expedition series is from looking at the way computer graphics evolved from analog computing, and then figuring out ways to do the next level of analog by emulating a lot of the computer graphics workflows. For example, Shapechanger + Navigator perform a lot of the same math you’ll find in any 2D graphics engine, only they do it with analog computing circuitry. Marble Index (and Cortex Compositor) perform a lot of the same blending modes math you’ll find behind Photoshop or more robust digital compositing suites, only they likewise do it with analog circuitry. Most of the design challenge has been on how to take these functions and present them in a patchable hardware way, UI-wise, and design analog circuits that do the math.
(CKB, How to spin a shape)

The system design intent is you have Modules That Generate/Colorize RGB (Passage, Mapper, Color Chords, Staircase) and you patch these into -> Modules That Composite/Mix RGB signals (Marble Index, Visual Cortex).
(CKB, Colorizer on Visual Cortex)

Re: Doubling up on Modules
   Passage (which is really just an evolution from the Visionary series “Triple Video Processor”) is another good one to double up on. It’s a great way to get anything into RGB and also just functionally dense for utility mixing.
   One of the big tenets of the Expedition series designs is that I tried to make each module stand as an independent block, rather than being reliant on other modules to make the most of it. What that means for system design is that there aren’t many scenarios where you hit a dead end. Most modules process linear waveforms and can be chained forever. It also means that once you find a module you’re really communicating with and loving a lot, going double/triple/quad will usually work out great. So I’d really encourage you to just find the module you’re having the most fun with, and get a second one.
   Some of the Expedition modules will scale best with the reintroduction of a TBC into the system so you can start processing multiple external feeds (Marble Index, especially.)
   But my hands down favorite for getting multiples of is Doorway. That module is full of tricks – the background/foreground inputs being used to join together keys is so powerful, and the key to unlocking complex shapes and figures outside of the standard quadrilaterals.
   If you’ve got a small system, Staircase is the one to double up on. That module definitely gives you the most geometric density for your dollar and HP space, and multiples simply expand upon each other.
(CKB, Doubling Up On Modules)

Re: AC/DC switches
   To put it another way, turning AC mode on is removing any slow moving content from your modulation source. It’s a high pass filter with its cutoff at frame rate.
   With a positive DC biased signal (like 0-1V video) the signal is always additive. If you add it to a mix, the mix becomes brighter. If you subtract it from a mix, the mix becomes darker. You are modulating brightness.
   When you AC couple this signal, rising voltages will add, but falling voltages will subtract, since the relative DC offset (gray level) is gone. This gives you behavior where the modulation source adds brightness when it rises and subtracts brightness when it falls. In other words, you are now modulating contrast.
(CKB, A/C - D/C switches on Shapechanger and Navigator)
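
Editorial note: a small numeric sketch of the AC/DC distinction above. Subtracting a slow running average (the DC estimate) from a unipolar signal leaves a bipolar signal that swings both ways around the mix, which is why it reads as contrast rather than brightness modulation. The one-pole coefficient below is illustrative:

    # Sketch of the AC/DC distinction above: subtract a slow running average
    # (DC estimate) from a unipolar 0..1 signal to get a bipolar result.
    def ac_couple(signal, coeff=0.95):
        avg, out = 0.0, []
        for s in signal:
            avg = coeff * avg + (1.0 - coeff) * s   # slow-moving DC estimate
            out.append(s - avg)                     # high-passed (AC coupled) result
        return out

    unipolar = [0.5 + 0.5 * ((i % 20) / 19.0) for i in range(200)]   # repeating 0.5..1.0 ramp, never negative
    ac = ac_couple(unipolar)
    # The DC version only ever adds to a mix; the AC version adds AND subtracts.
    print(round(min(unipolar), 2), round(min(ac[100:]), 2))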

Re: Video synth ‘voice’
For me, a video synthesizer “voice” consists of 2 components: one for spatial parameters (position and rotation) and one for geometry (width/height, curve). Think of it like the VCF-to-VCA relationship in a monosynth. Usually the VCF is before the VCA, but you can also do it the other way around to create different and interesting results. Navigator+Shapechanger present to you a holistic approach to the spatial+geometry processor concept, but there are other ways to patch up the essential pieces of ramp processing based shape generation.
(CKB, How to spin a shape)

History

    When we started we had no idea if there would be a market for any of this stuff; it started as a DIY project. We started selling the modules to finance the effort because we felt it was worthwhile. Many people told me it would not work out. The reason we’ve grown as a company is because we’ve tried to just respond organically to the needs of the community, rather than build a bunch of product and hope to market and sell it.
    The year we started was the year they stopped analog broadcasts in the states, and at that point I knew it was only a matter of time until interest in the analog video signal and its creative idiosyncrasies started to rise. I never would have predicted the huge surge in interest surrounding collecting CRTs for gaming, or the retro futuristic movement in games. That’s an interesting phenomenon as well.
(CKB, How many LZX video synths exist in the world?)

Philosophy

The LZX Patchable Video Standard: Universal 1V native video signal processing in all modules, completely compatible with Eurorack 5V & +/-5V signals. Follow the link for extensive details.
(CKB, The LZX Patchable Video Standard)

We know this stuff is very expensive for most of you. Just know that you are getting what you pay for with LZX stuff. Everything is priced on direct multiples of component cost and assembly complexity (that is, we don’t upcharge anything just because it took longer to think of or design.) Just compare the price tag of Memory Palace + TBC2 to the Roland V4EX or other broadcast equipment with similar specs – and realize they are producing it at a much larger scale, with much more vast resources – if you want a basis for comparison. We don’t intend for anyone to be able to drop several grand on modules on the first day. We encourage you to start small, add modules 1-2 at a time as you can, and learn the new possibilities of each addition exhaustively before continuing to grow. It’s meant to be a long and satisfying journey.
(CKB, Should I start out with Visual Cortex, Vidiot or the DIY Cadet modules?)

    With Expedition series we tried to thoroughly analyze analog image processors, audio synthesizers and classic video synthesizers on a circuit wise and functional level, expand on those ideas with our own, and put them within the context of a next generation video instrument.
    With Orion series, the approach is the same, it’s just that we’re focusing on a different category of device (and implied era of the 80s and 90s). So that encompasses everything from early DVE units like the Ampex ADO, dedicated video computers like Quantel Paintbox or the Amiga Video Toaster, hybrid devices like the Fairlight CVI, and also just early computer graphics and computer game consoles/devices as well.
    So Expedition is a shot at what analog video could have been if it had progressed from the 70’s and into the 80’s without a digital video revolution transforming the market.
    And [Memory Palace] is a shot at what digital video (with analog interfacing) could have been if it had progressed into the late 90’s without a software video revolution transforming the market.
    In both cases, they’re conscious experiments in retro-futuristic possibilities.
(CKB, System build examples)

   We still have several concepts in development for Expedition Series. So it’s not over, however I would say the Expedition Series is “complete,” in that all the major components and functional blocks of a holistic system are now in place.
   The distinction between the two series is related more to instrumentation concept and design language. Expedition Series is all about dense, highly functional analog modules that expand upon each other organically and have a specific focus on high speed analog wave shaping techniques and multipliers. It is an attempt to present our version of a “next generation” analog video synthesis system, that builds upon the legacy instruments, but also has a lot of new processing concepts we haven’t seen in analog video synthesis instruments before.
   Orion Series is part of the same system with Expedition, it’s not a separate synthesizer. It just focuses on something else: dedicated digital video hardware processing and generation, with a focus on hands-on-performance as a primary goal.
   Since they exist within the same landscape there won’t be Orion “versions” of Expedition modules or vice versa. Whenever we have an idea that is implemented better with an analog circuit at its core functionality, we will likely make a new Expedition module, and if it is an idea better implemented with digital processing and memory, we will make it an Orion module.
(CKB, Expedition series done?)

… It all starts with knowing that everything is a voltage control input in the system.
More voltage into oscillator frequency control = higher frequency on modulated oscillator.
More voltage into color encoder’s red channel = brighter red on your TV.
More voltage into an A to B crossfader CV (like Bridge, etc) = more of B and less of A mixed to the output.
Really fast voltage = camera image showing a dense pattern.
Really slow voltage = a low frequency oscillator doing a sine wave sweep for swaying motion.
Audio signal (medium frequency voltage) = horizontal displacement across the image
Voltage is a universal language, especially in the LZX system where low frequencies and camera images share the same patch points and signal spaces.
Once you get your head wrapped around this, it’s quite easy to understand that every signal in your system is just fluctuating voltage levels across time. What makes the patch isn’t the content of the signals themselves; it’s what they’re patched into.
(Unknown)

System building, for starters

What you can do with [Visual Cortex+Prismatic Ray+Color Chords+Staircase+Bridge] vs. a video mixer + Vidiot is vastly different and the modules certainly take you into some territory that’s hard to replicate otherwise. But Vidiot is essentially a module inside its own case, too, so you could expand from that instead of starting with Cortex (any of the modules that don’t require sync are great for this… Pendulum, Staircase, Doorway, etc.) I would just find a spot to dip your toe in, and then grow in the direction you find most interesting.
(CKB, Should I start out with Visual Cortex, Vidiot or the DIY Cadet modules?)

I would personally recommend Staircase as the single module to give you the most geometric complexity for your budget possible out of a minimal rig. [Editorial Note: Would still require a Visual Cortex to operate.]
(CKB)

System building, general

    Passage (which is really just an evolution from the Visionary series ‘Triple Video Processor’) is another good one to double up on. It’s a great way to get anything into RGB and also just functionally dense for utility mixing.
    One of the big tenets of the Expedition series designs is that I tried to make each module stand as an independent block, rather than being reliant on other modules to make the most of it. What that means for system design is that there aren’t many scenarios where you hit a dead end. Most modules process linear waveforms and can be chained forever. It also means that once you find a module you’re really communicating with and loving a lot, going double/triple/quad will usually work out great. So I’d really encourage you to just find the module you’re having the most fun with, and get a second one.
    Some of the Expedition modules will scale best with the reintroduction of a TBC into the system so you can start processing multiple external feeds (Marble Index, especially.)
    But my hands down favorite for getting multiples of is Doorway. That module is full of tricks – the background/foreground inputs being used to join together keys is so powerful, and the key to unlocking complex shapes and figures outside of the standard quadrilaterals.
    If you’ve got a small system, Staircase is the one to double up on. That module definitely gives you the most geometric density for your dollar and HP space, and multiples simply expand upon each other.
(CKB, Doubling Up On Modules)

Visual Cortex + TBC2 + Memory Palace gives you a 3 input video mixer with a wide range of both analog and digital effects, mixing and pattern generation capabilities.
(FB)

  Video synths have an extra dimension that audio does not have, so they like more oscillators. You can think of oscillator = 1D pattern. The pattern can be either animated (LFO), vertical (audio range) or horizontal. So to make a moving 2D pattern or shape, you need 3 oscillators. That’s the equivalent of a single VCO+LFO monosynth voice in audio world, like the Roland SH-101.
  If you want to make more complex shapes and patterns, oscillators often work well with a second oscillator as a sub-modulator. So to make a more complex pattern voice, you need 4 to 6 oscillators. This is like the equivalent of a 2-3 VCO monosynth voice, like the Minimoog.
  If you want multiple moving patterns/elements/shapes, you need multiple voices. So the video equivalent of a 6-voice polysynth like the Roland Juno-6, would have anywhere from 18-32 VCOs!!
  Note that in any of the above cases, H/V ramps (like those from Cortex) count as “oscillators.” And of course you could get your LFOs and audio rate oscillators from the world of audio synthesis.
(CKB, How Many Oscillators Do I Need? OR Adventures in Non-RGB Patch flow NOW WITH ANSWER!)

    

Eurorack power/bus/case

General info

10mA per HP on both +12V and -12V is the recommended power budget for the Expedition series modules.
(FB)

Vessel

With the Vessel case, we ship it with an 8.34A 12V power brick (plenty of extra current on top of what the euro supply draws) and there’s an internal DC jack for chaining power to modules like Memory Palace and Liquid TV.
(FB)

 

External References, Links

Books:

Compositing:

Video capture/converters/upscalers:

Non-Lars links:

    

Guides

#4

Just a note to everyone: This is a Wiki post, edit at your leisure! :smiley:
