As a less technically minded LZX user, I use TouchDesigner a lot with LZX, and I know other folks do too. I've found it the most accessible and most 'open' system from my perspective, and it already pairs well with LZX.
Free to use
It uses Python (and GLSL if you want), so I'd guess that interfaces with Pynq, and you can approach TD in a script-y way as well as a node-based way.
Lots of easy realtime I/O options ready to go - CV, OSC, audio, MIDI, Kinect, laser, projection mapping, all sorts.
Lots of basic functions already there - e.g. Perlin, simplex, and other noises (for VanTa above). Image analysis across multiple frames is accessible for a noob like me and is one of the things I already do in it, along with fancier motion detection. There are also lots of free 'nodes' people have already made to do other stuff - e.g. import OpenGL shaders, create mandelbulbs, whatever.
Mac and PC compatible (though Macs being non-NVIDIA now means it's the weaker option for some things).
Lots of existing community and development stuff to connect to for future possibilities.
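To make the multi-frame image analysis point concrete: the core of basic motion detection is just differencing successive frames, which TD gives you for free (via operators such as the Cache and Composite TOPs). A minimal sketch in plain Python, using nested lists of grayscale values as stand-in frames - no TD required:

```python
# Toy frame-differencing motion detector (pure Python, frames as 2D lists
# of grayscale values 0..255). In TD this would be a couple of TOPs.

def motion_amount(prev_frame, curr_frame, threshold=16):
    """Fraction of pixels whose brightness changed by more than `threshold`."""
    changed = total = 0
    for prev_row, curr_row in zip(prev_frame, curr_frame):
        for p, c in zip(prev_row, curr_row):
            total += 1
            if abs(p - c) > threshold:
                changed += 1
    return changed / total if total else 0.0

still = [[100] * 4 for _ in range(4)]
moved = [[100] * 4 for _ in range(2)] + [[200] * 4 for _ in range(2)]

print(motion_amount(still, still))   # 0.0 - nothing moved
print(motion_amount(still, moved))   # 0.5 - bottom half changed
```

The returned fraction is exactly the kind of single control value you could then route out of TD as CV or OSC.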
My use cases/problems solved/imaginary possibilities:
- I pipe stuff from TD into LZX and back into TD, with Blackmagic converters in between. I've used it to create complex realtime sources in TD from sound modulation, OSC, or whatever, and sent these (alongside CV impulses/LFOs for timings) to LZX - e.g. I use it as a noise generator looping into CV with LZX, though obviously TD's noise output is video, not CV. And I've sent LZX back into TD to process LZX images/CV impulses - for example, turning nice ramp patterns into 3D shapes or making them part of more complex graphic outputs. The possibilities for me are about moving from 2D into 3D rendering in realtime, about general complexity, and about being able to interact through OSC, Kinect, etc.
- I'm starting to explore machine learning as more accessible tools for it appear (including via TD). This is a big, growing area, and I expect it would be a primary use case by the time such a module was ready for the light of day.
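For anyone curious what the OSC side of the interaction above actually involves: an OSC message is just a small binary UDP packet. Here's a hedged sketch, using only the standard library, of encoding a single-float message (the `/lfo1` address is made up; in practice you'd more likely use a library like python-osc, or TD's built-in OSC In/Out operators):

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_float_message(address: str, value: float) -> bytes:
    """Encode one OSC message carrying a single 32-bit float."""
    return (osc_pad(address.encode("ascii"))   # padded address string
            + osc_pad(b",f")                   # type tag: one float argument
            + struct.pack(">f", value))        # big-endian float32 payload

msg = osc_float_message("/lfo1", 0.5)
# To actually send it:
# import socket
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ("127.0.0.1", 7000))
```

A hypothetical module with an OSC input would only need to parse packets like this to expose laptop-side controls as CV.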
The problems solved are basically that it would make things more self-contained in the rack, and more immediate/performable than going back and forth to the laptop, while offering more creative openness and complexity than exists right now. I didn't get an Erogenous Tones Structure because its use of shaders as outputs seemed a bit basic and limited compared to what I could already do with TD and a circuit of CVs/Blackmagics.
I'd like to be able to take a TD project made on a laptop and load it into a module's bank of projects/patches, having set the project's CV and video inputs/outputs to match the module's, so I can then perform it as a patch within my LZX patch. I guess the alternative to putting TD into a module, as suggested above, is something like a video-rate Expert Sleepers-style converter to facilitate more laptop-LZX communication.
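Just to sketch what that patch-loading idea might look like in the simplest terms: the jack names and ranges below are entirely invented, but the shape of the problem is a lookup table mapping a project's named controls to the module's jacks, plus a little scaling (I'm assuming LZX's nominal 0-1 V CV range here):

```python
# Hypothetical patch descriptor: map a TD project's named controls to the
# module's CV input jacks. All names here are invented for illustration.
patch_map = {"noise_speed": "cv_in_1", "brightness": "cv_in_2"}

def cv_to_normalized(volts: float, cv_range=(0.0, 1.0)) -> float:
    """Scale and clamp an incoming CV (assumed 0-1 V, LZX-style) to 0..1."""
    lo, hi = cv_range
    return min(max((volts - lo) / (hi - lo), 0.0), 1.0)

print(patch_map["noise_speed"])   # cv_in_1
print(cv_to_normalized(0.25))     # 0.25
print(cv_to_normalized(1.5))      # 1.0 (clamped to range)
```

The point is only that the per-patch routing information is tiny, so storing it alongside each project in a module's bank seems very plausible.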
Wednesdayyayay, there's a simple TD ASCII noise tutorial here where you can swap in whatever symbols you like. You could also pipe video rather than noise into it: https://youtu.be/uTXZJrtjGV8