Let's Discuss Visual Storytelling + Video Synthesis: Patching Event Based Narratives

“Scene level control” is a must. But it's also tricky in a patchable context. One option is to treat each individual connection as a node in the scene, in which case you end up needing a crossfader with as many channels as there are connections in order to morph between settings. But this quickly leads us back to the superiority of a digital system in this regard, where the cost of adding another parameter group is negligible.

So that’s a puzzle. I love giant modular systems, but I don’t think you should need a giant modular system to introduce higher order sequencing and scene control. Whatever we come up with for animation needs to be at least as accessible as the other components of the system. A powerful scene transition should be possible with only a couple of modules.

Pushing all of this into a purely digital control environment could diminish the appeal of using a real-time analog computing environment with a universal voltage standard for creating images, and get us thinking more about the results (frames) than about the individual components of shape and color which compose them (and that, to me, is video synthesis!). There’s a place for it all, of course.

But there are two ways we’re trying to combat this design challenge. One is the concept of “embedding a transformation into the image itself.” If we can embed the spatial and scaling context of a transformation into XY vector images (aka HV ramps), and use those as the source to generate a pattern (using a separate module!), then we are able to transfer an entire group of parameter settings (x, y, width, height, lens distortion, etc.) using only two cables! Now a scene transition is much easier: to change scenes we can just crossfade between HV source ramps representing two separate compositions of spatial coordinates.
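To make the idea concrete, here's a minimal sketch in code. The scene functions, their particular transforms, and the soft-circle pattern are all hypothetical stand-ins; the point is just that blending two coordinate ramp pairs morphs every embedded parameter at once.

```python
def scene_a(u, v):
    # hypothetical composition A: shape centered left of screen, unity scale
    return (u - 0.5, v)

def scene_b(u, v):
    # hypothetical composition B: shape centered right, zoomed in 2x
    return (u * 2.0 + 0.5, v * 2.0)

def crossfade(u, v, t):
    # blend the two HV ramp pairs voltage-wise; position, scale, and any
    # other parameters embedded in the ramps all morph together
    ax, ay = scene_a(u, v)
    bx, by = scene_b(u, v)
    return (ax * (1 - t) + bx * t, ay * (1 - t) + by * t)

def pattern(x, y):
    # the separate pattern generator just reads whatever coordinates arrive
    # on its two input cables -- here, a soft circle for illustration
    return max(0.0, 1.0 - (x * x + y * y) ** 0.5)
```

Only the two "cables" into `pattern` change during the fade; the pattern generator itself never needs to know a scene change is happening.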

The second way is to make every generator also a processor for other generators. For example, if one module is creating a “space,” then that space itself can be patched into another module and further modified, in a hierarchy of vector modifiers. So now if you crossfade the frontend parameters of the “parent module,” you create a cascading effect through every derivative branch of the modulation tree.
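A sketch of that hierarchy, with hypothetical `scale` and `rotate` modules: each one takes a coordinate space (modeled as a function) and returns a new one, so spaces chain into a tree where any change to a parent flows through every child.

```python
import math

def scale(space, k):
    # a "module" that processes another module's space by scaling it
    return lambda u, v: tuple(c * k for c in space(u, v))

def rotate(space, angle):
    # a "module" that processes another module's space by rotating it
    def out(u, v):
        x, y = space(u, v)
        c, s = math.cos(angle), math.sin(angle)
        return (c * x - s * y, s * x + c * y)
    return out

identity = lambda u, v: (u, v)
parent = scale(identity, 2.0)        # the "parent module" creating a space
child = rotate(parent, math.pi / 2)  # a derivative branch: inherits the parent's scale
```

Changing `k` on the parent rebuilds the space that every derivative branch reads from, which is the cascading effect described above.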

So this is exciting stuff for me, but it describes more the structure of a patched transformation space: it’s what some of our new modules (ART3 especially) are designed for, and it’s the lineage of thinking behind modules like Navigator and Shapechanger. I think there is room for a “spatial generator/processor” that is digital in nature, allowing a large variety of user-saved presets and fluid interpolation across their settings. TBC2’s internal generators will be an interesting place to start experimenting with this.

As for adding movement to all of that, we haven’t gone very far yet. Navigator is a ramps rotator and Diver is a ramps positioner, but there’s nothing yet for “the scene”: coordinated motion across many objects on a singular timeline.

But I think we approach it as a hierarchical transformation as well. To start, we need a master transport control of some kind: a “low frequency ramp source” whose voltage represents percentage of timeline length. That single low frequency ramp/envelope then becomes the timeline for any number of sub-ramps or envelopes created through waveshaping (easing curves! ideally an analog bezier implementation) and subdivision through wavefolding. Modifications of the overall “timeline ramp” cascade to every subdivision. We can stretch or modify the time scale of the entire parented animation simply by changing the frequency of our “low frequency transport ramp.” If we want to change the easing curve of the entire animation, we just bend the waveshape of that envelope.
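The transport/easing/subdivision relationship can be sketched in a few lines. All the function names here are hypothetical; smoothstep stands in for the analog bezier shaper, and the modulo stands in for wavefolding:

```python
def transport(t, length):
    # master transport: a 0..1 ramp over the timeline length (in seconds);
    # changing `length` stretches the entire parented animation
    return min(max(t / length, 0.0), 1.0)

def ease(phase):
    # easing curve applied to the whole timeline (smoothstep as a stand-in
    # for an analog bezier shaper); bending this bends every child ramp
    return phase * phase * (3.0 - 2.0 * phase)

def subdivide(phase, n):
    # wavefolding-style subdivision: n sub-ramps derived from one parent ramp
    return (phase * n) % 1.0
```

Every child ramp is computed from the parent's output, so the child never needs its own clock: `subdivide(ease(transport(t, length)), 4)` retimes itself automatically when `length` or the easing curve changes.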

To state it a bit more simply: we start with one envelope generator. You can play/pause/stop that envelope with buttons. You can scrub it. You can change its overall speed and FM that speed. It shows you a frame count. It feels like a sequencer. Its output, though, is just an analog linear ramp: a voltage representing the timeline position itself. Next, perhaps we patch this ramp into a “3-act waveshaper.” This is like a very slow colorizer, except that instead of color bands, you have time segments as subdivisions of the transport ramp.
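A sketch of that hypothetical 3-act waveshaper: like a colorizer's bands, the timeline is cut into segments, and each segment re-ramps 0..1 locally so it can drive its own patch.

```python
def three_act(phase, bounds=(1 / 3, 2 / 3)):
    # split a 0..1 transport ramp at two boundary "thresholds";
    # returns (act_number, local 0..1 ramp within that act)
    b1, b2 = bounds
    if phase < b1:
        return 1, phase / b1
    if phase < b2:
        return 2, (phase - b1) / (b2 - b1)
    return 3, (phase - b2) / (1.0 - b2)
```

Moving the `bounds` is like moving the band edges on a colorizer: the acts get longer or shorter while the master transport is untouched.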

Perhaps “Act 1” is patched to slide a shape across the screen. “Act 2” is then patched to a module that makes the shape jump in scale, and “Act 3” fades the whole thing to black. Now we have a 3 part animation. The master transport ramp can start/stop/pause it, speed it up, slow it down, warp its timing, etc without touching the “3-act” controls at all.
How does it go from there? Another 3-act waveshaper. But instead of the transport ramp as its source, we use the Act 1 output from our first module. Now Act 1 has 3 more acts of its own, and those subdivisions drive details of the animation. Maybe as the shape slides from left to right, it now wiggles up and down, then slows slightly, then bounces.
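A self-contained sketch of the nesting (names hypothetical, equal act lengths assumed for brevity): the second stage just reads the first stage's local ramp as its own timeline.

```python
def act_of(phase, n=3):
    # split a 0..1 ramp into n equal acts: (act_number, local 0..1 ramp)
    i = min(int(phase * n), n - 1)
    return i + 1, phase * n - i

phase = 0.2                          # master transport at 20% of the timeline
act, local = act_of(phase)           # -> Act 1, 60% of the way through it
sub_act, sub_local = act_of(local)   # nested stage: sub-act 2 of Act 1
```

Pausing, scrubbing, or reshaping the master transport retimes both layers at once, because the only clock anywhere is the one ramp at the top.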

You get the idea.

So, I guess my current thoughts for an “animator” module lie in this area: a “clockless” timeline and sub-segments created through waveshaping techniques, inspired by the Scanimate. Perhaps also an “event trigger” module designed more for singular events/envelopes that happen at time points (comparator thresholds) along the timeline. Needless to say, I think we’ll be doing at least 2 modules that fit together in this kind of workflow for Gen3 at some point. But this thread wasn’t really started to workshop product designs so much as to sit down and hear all your thoughts on the subject. It’ll be a minute before we can concentrate on these designs, so I want to listen and get the pulse of the community, and give myself some time to think on it from different perspectives.
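For what it's worth, the comparator-threshold event idea above can be sketched too (all names hypothetical): a trigger fires once, on the rising edge, as the timeline ramp crosses a set voltage.

```python
def make_trigger(threshold):
    # comparator with edge detection: fires exactly once as the
    # timeline ramp rises past `threshold`
    state = {"high": False}
    def step(phase):
        was_high = state["high"]
        state["high"] = phase >= threshold
        return state["high"] and not was_high
    return step

fire_at_half = make_trigger(0.5)
events = [t / 10.0 for t in range(11) if fire_at_half(t / 10.0)]
# one event, at the 50% mark of the timeline -- scrubbing the transport
# backward and forward re-arms it naturally
```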
