Scriptable Video Module

@creatorlars mentioned on the FB group that there are plans to develop/release a “video scriptable module” and that we can start the discussion about it. So I wanted to seed the topics that I think are relevant here; feel free to comment on them, add new ones, or whatever you feel.

1. GUI: Regarding the medium of programming, there are (at least, but not limited to) two mainstream approaches: “boxes” and “wires” (a la Max), or traditional plain text. I personally prefer just a text editor, but that’s because I’m a developer and can handle it; boxes-and-wires are used almost universally in the software industry to lower the entry bar for users of a platform. The problem is that creating a UI with draggable elements and connections that is intuitive and easy to use is not a trivial or small task, meaning that developing this eurorack module together with its own full-fledged boxes GUI is not viable as a product. A good alternative could be to integrate the module into an existing software platform (maybe the Axoloti Patcher, or even Max?). In that case I would always try to go with open source (avoiding Max).

2. Programming Languages: as for the PL, I suggest going with a domain-specific one, like Processing. I would totally avoid using Python the way Critter & Guitari did on the ETC (and I love Python; I program in Python almost every day at work…).

3. Hardware interface/uploading sketches: I would definitely want a direct USB-serial connection to upload the sketch to the module, or similar. An alternative could be an SD card that you swap, but that would mean that, as a programmer, I would need a “renderer” on the computer.

4. GPU/OpenGL/GLSL: If the module has a GPU we could even do high-quality shaders and other advanced video effects; that would be amazing.

5. Mac, Windows, Linux: as a Linux user I suffer when software is Mac-only :(, but this is something I cannot complain about too much, as there are many reasons why developers (like me) choose proprietary platforms over open source; mainly, those platforms are more common among the users you want to reach with the software you are creating. Still, there are multiplatform options.

6. Controls: standard controls for digital modules would be nice:

  • assignable controls mapped to variables inside the sketch
  • encoders with push buttons rather than pots for more software flexibility, each one with a CV input + attenuator
  • 1 or 2 toggle buttons
  • maybe an OLED? to select from a menu which program to load/save, etc…

Please comment, even if you don’t know how to program (or aren’t interested in it), because maybe the final module will be a hybrid that allows non-developer users to use it and be happy with it, while leaving all the depth for more advanced users to explore.


7. Live Performance: how could this be achieved for live performance:

  • with a laptop connected through a cable or WiFi (I really don’t like the idea of the module having WiFi…)
  • no laptop: saved renders/shaders/rasters/scripts on an SD card, loaded on the fly

I completely forgot to mention something regarding point 3 that I think is relevant: there is very good JavaScript support for USB-serial transmission.

This means that if there are no plans to go with an existing platform (Axoloti, Max, Pure Data, etc…), it would be feasible to make a simple HTML+JavaScript GUI as a browser plugin (Firefox/Chrome) to upload data to the Scriptable Module. There are good examples of this in drone racing configurator software like Cleanflight and Baseflight, which let you upload configuration/software to an external device via USB-serial, and that’s very good multiplatform integration.
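To make the idea a bit more concrete, here is a toy sketch in Python of what the host side of such an upload could look like: split the sketch into framed chunks with a CRC so the module can verify each packet. Everything here (packet layout, names, chunk size) is hypothetical; in a browser this would ride on the Web Serial API, on a desktop on something like pyserial.

```python
import struct
import zlib

CHUNK_SIZE = 256  # hypothetical payload size per packet


def frame_packets(sketch: bytes, chunk_size: int = CHUNK_SIZE):
    """Split a sketch into [seq:u16][len:u16][payload][crc32:u32] packets."""
    packets = []
    for seq, offset in enumerate(range(0, len(sketch), chunk_size)):
        payload = sketch[offset:offset + chunk_size]
        header = struct.pack(">HH", seq, len(payload))
        crc = struct.pack(">I", zlib.crc32(header + payload))
        packets.append(header + payload + crc)
    return packets


def verify_packet(packet: bytes) -> bool:
    """Module-side check: recompute CRC32 over header + payload."""
    body = packet[:-4]
    (crc,) = struct.unpack(">I", packet[-4:])
    return zlib.crc32(body) == crc
```

A real protocol would also need ACK/retry handling, but the point is that the transport itself (Web Serial, pyserial, or a USB CDC endpoint on the module) is the easy part.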

This could also potentially be useful for upgrading the firmware of the new Orion series from the front USB of the module.


I would prefer a “boxes and wires” GUI, because I’m used to that type of programming.
That would make it usable for a larger group of users too. I really like the Axoloti / Patchblocks software!

Would this module spit out analog video?

Digital effects would be cool!


I don’t know if it will have digital effects… it will probably be for generating rasters (or vector graphics); I really don’t know.

I totally understand your appeal to boxes-and-wires.


I honestly love how the Shbobo Shnth handles this:
you can write your code in a text editor if you want and it works perfectly,
or you can use the app and program with nesting modules, which I find easier than boxes and wires.

as someone who has only ever programmed for fun I much prefer boxes and wires to text

this is a module I’ve wanted for a long time

wouldn’t the Erogenous Tones Structure module cover some of this territory?

and I’m not really familiar with it at all, but it seems like a standalone version, although it is Python based


Thanks for opening this discussion! Mainly I want to be a spectator at this point, and just get my finger on the pulse of what you all want or are imagining.

My thought is more of a devboard (like Raspberry Pi or Arduino), but with perhaps a module version, of what is currently the Memory Palace/TBC2 platform. There are several devboards out there with video in and out, but it is pretty much always HDMI only and the boards with more than a single input are few. What I think we could offer is the devboard with at least two external video input channels and an output that runs synchronous to the inputs. This means you have proper analog genlocked I/O, and the device can then be modular and expandable. We would put into place all the basic infrastructure in the FPGA for time base correction/frame sync, frame buffering, and all the drivers. So it would effectively be a user scriptable video mixer/effects processor. Add MIDI controller and code to make pretty much whatever dedicated machine you wanted.


TBH it’s fine if it’s a standalone; that means the board can be bigger if the design needs it, I can save some HP in my briefcase, and I can bring the board to gigs if I want.

The primary things I’m interested in knowing right now are: What are the use cases for you? What problems does this solve? What can you do with this that you can’t do with existing gear? How far is your imagination taking you when considering a scriptable environment?

We haven’t made the decision yet, but we may also open source the software portion of the Orion modules (the VHDL/FPGA portion would be fixed.) This would make each module in the series a reprogrammable video DSP focused on whatever video processing is going on in the FPGA core (in the case of Memory Palace, for example, there is a hardware accelerated texture-mapping unit, colorspace converter, chroma/luma keyer and alpha compositor.)

All of this stuff requires, at the least, clean code and documentation on our part. So we need to get through our first release cycles and let the firmware mature first. After that though, we may have a really great platform to springboard off of.

Check out the Pynq board… it’s a Python scriptable environment for Zynq (the architecture we’re using.) One option Ed’s brought up is to fork the Pynq platform and then implement device drivers for it for our various FPGA/DSP cores. That would provide a good basis for further extension.

Another selling point of a devboard is that it could be used as an OEM base for other people’s video synth/video mixer products – the same way we use the Raspberry Pi for our Andor 1 media player.


I’m reading up on it and I’m already in love with the idea; it seems like a very powerful and dev-friendly platform.

The idea of embedding a Jupyter notebook on the Cortex-A9 (Linux) is amazing.

Regarding use cases, here is what I can think of:

  1. Machine vision (artificial vision and machine learning) to track objects in the input video and provide that information as analog signals to the modular, or to stick a tracking point or object at that position in the video, in real time!

  2. Video effects: writing filters like in ffmpeg (like this or this, videos I made), but again in real time.

  3. Complex/dynamic key generators, for video mapping but in real time?

Some of this can be done on computers, but only as a batch process (non-real-time). Most of it falls in the research areas of computer vision / computer science and isn’t very artsy. Or maybe it is! :slight_smile:
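To make use case 1 concrete, here is a toy sketch (pure Python, with a hypothetical voltage scaling) of turning a tracked feature into CV: find the centroid of pixels above a luma threshold and map its x/y position to a 0–5 V control signal.

```python
def track_bright_centroid(frame, threshold=200):
    """frame: 2D list of 0-255 luma values. Returns the (x, y) centroid
    of above-threshold pixels in normalized 0..1 coordinates, or None
    if no pixel exceeds the threshold."""
    xs, ys, n = 0.0, 0.0, 0
    h, w = len(frame), len(frame[0])
    for y, row in enumerate(frame):
        for x, luma in enumerate(row):
            if luma >= threshold:
                xs += x
                ys += y
                n += 1
    if n == 0:
        return None
    return (xs / n / (w - 1), ys / n / (h - 1))


def to_cv_volts(norm, v_range=5.0):
    """Map a normalized 0..1 position to a hypothetical 0-5 V CV value."""
    return norm * v_range
```

On a real board the per-pixel loop is exactly the part you would push into the FPGA fabric; the Python side would just read back the centroid once per frame and write the CV DAC.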

Here are some examples of video effects scripted through filters in ffmpeg; oioiiooixiii’s work is very interesting: FFmpeg


I work a lot with TouchDesigner, Quartz Composer and Isadora, so my dream module would be something that includes a visual programming language. That would make it much more accessible to people from a non-programming background.



My point of view:
What I like about eurorack is its simplicity/ergonomics: one module / one function, and patch cables.
The best scriptable module I know is a laptop. If I’m missing a module, it’s something like the Expert Sleepers ES-8 (CV <> USB) with video-rate frequency ability, to interface LZX with, say, Pure Data or Jitter…


I partially agree with you, but there is stuff that is too complex to write for a GPU or would run too slowly on a CPU, so having an FPGA on board is a completely different thing. It is basically reprogrammable hardware: you can change the hardware (make the physical logic gates change) based on your source code (known as VHDL). This is super powerful for highly concurrent, real-time applications that, as I said before, can be solved on a laptop but only as a batch process, not in real time.


Right, I understand.
But what I’m doing with Jitter is enough for me: it’s real time, and an interfacing module would be great.

As for me, I’d like a video effects unit that can do the things a Korg Entrancer can do, but in full PAL format.

It would be great to be able to edit weird digital functions and then use them on analog video.
Automation also comes to mind.

(I’m an HPC / Scientific R&D person IRL.)

I’m happy to do anything up to and including writing code in Vivado (but I suppose that depends on the licensing…). I don’t care so much for visual programming for something like this, personally, but I understand the appeal as it is a natural analogue for patching. I would worry about the cycles eaten up by a framework that made things easier. I enjoy hybrid programming models - and I think the Zynq type parts are really exciting in that way.

The part of a video processing module that I would find the most daunting (and thus the greatest value add) is all of the integration and testing of the peripherals (jacks and thus ADCs, encoders, display, etc.) that are connected to the SoM.


That’s definitely the value add from our perspective too. There aren’t any existing Zynq or FPGA+MCU boards out there that provide extensive interfacing to analog video encoder/decoder peripherals, so that makes it hard to tinker.

While a visual programming language would be awesome, and it may even be possible to port existing ones to run on this board, we’re thinking more at the primitive level of just the platform with core drivers – so any visual programming language would probably be another layer on top of that, and on the user application side. Maybe there’s an OEM/devboard that’s more barebones and a visual scripting/reprogrammable interface module is something else in the Orion Series lineup. All the “Orion Class III” modules (currently TBC2 and Memory Palace) use the core architecture of Zynq SoM + loads of video related hardware peripherals.


I’d be pretty excited about any kind of devboard version of the MP/TBC2!

Pretty agnostic on language but I’d hope there would be both a high level and a low level way to engage with it. I have a C&G ETC and while I think they did a great job with that platform, PyGame doesn’t cover everything I want to do with a programmable system. I’d be interested in doing more demoscene-style low-level programming, ideally with a mix of CPU and GPU, 3D geometry, etc. I’ve also been daydreaming lately about a FPGA RNN/GAN implementation, so you could train models on a PC and then apply them with low latency in hardware.

My #1 request, though, would probably be a video module with a long sample buffer (whether an official Orion module or the devboard) that can participate in the monome i2c serial messaging system (teletype/ansible/mannequins/ER-301/Matrixarchate/etc.) I’d be most interested in some ways of scripting loop points on a sampler to recreate Raphael Montanez Ortiz style chopping & sliding buffer effects.


For me the most interesting point right now would be a Perlin/simplex noise generator. It is an essential part of my digital work, and I would love to see how a staircase performs as an fBm. Lars asked me about this idea long ago and I’ll be happy to either take on the challenge of developing it myself or help others with the task.
A second point would be to gather in one module a few of the effects present in devices like the Korg Entrancer: flip-flop, slit-scan, pixelate, etc. In the direction of what Memory Palace will be able to do, but in a smaller form factor.


I’d like to see something that reacts to the video input and gives correlated CV/video outputs

so we could, say, get a static CV amount (if the picture is static) correlated to the amount of green in the picture

building patches that react to the video output is something I’m interested in

I’m going to continue thinking of other things and re-read this thread, but I really enjoy the idea of programming some video synth tools
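That green-extraction idea is simple enough to sketch in a few lines of Python (the 0–1 V output range is a hypothetical LZX-style patch level): average the green channel over the frame, so a static picture yields a static CV.

```python
def green_amount(frame):
    """frame: 2D list of (r, g, b) tuples with 0-255 channels.
    Returns the mean green level normalized to 0..1."""
    total, count = 0, 0
    for row in frame:
        for (_, g, _) in row:
            total += g
            count += 1
    return total / count / 255.0


def to_cv(norm, v_max=1.0):
    """Map a normalized 0..1 amount to a hypothetical 0-1 V CV range."""
    return norm * v_max
```

Run once per frame, this is exactly a "video to CV" follower; swapping which channel is summed (or summing a keyed region only) gives the correlated outputs described above.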