Let's Discuss Visual Storytelling + Video Synthesis: Patching Event-Based Narratives

When LZX first started, I was approaching video synthesis not as an electronics engineer, but as a would-be filmmaker and game designer at the turn of the millennium: the era of web-based interactive animation and MiniDV short films. It was an era before universities had “New Media” departments (in the modern sense). The interactive fiction computer games of the 1980s and 1990s were waning in favor of larger-budget titles and rendering-heavy 3D graphics, and YouTube and web video were still in their infancy.

I was attracted to video art tools not just for their aesthetics, but for the potential of a modular instrument to produce emergent narratives.

Video is a medium that has been tightly constrained by the concepts of linear versus nonlinear editing tools. Whether we’re talking about edit decision lists, splicing 16mm film, or dragging around ingested clips in Adobe Premiere or Final Cut Pro, these techniques are all based on assembling an intentional sequence of frames.

So while it may seem like a dichotomy such as nonlinear vs. linear editing covers all the territory, I feel like it doesn’t – not even close. Any attempt at uncovering an emergent, live narrative in cinema usually ends up reduced to choose-your-own-adventure experiments such as Netflix’s Black Mirror: Bandersnatch, or to VJ tools focused on sequencing the playback of prepared clips.

If you wonder why we have iterated through so many generations of modules and products, it’s largely in pursuit of this concept, of a video machine capable of not just beautiful imagery, but also engaging narratives full of rising and falling tension and structured, event-based acts. A machine the video synthesist uses in a manner more like navigating one’s own subconscious while dreaming – an environment full of the unexpected, yet personal and familiar. A place only the individual artist could have found, yet a place impossible to plan or curate in advance. I feel like, as human beings, the hardware control surface – the place you put your hands – the steering wheel – is essential for the cybernetic relationship required to accomplish this.

In approaching the designs of Chromagnon and Gen3 modular, I’m attempting to create a toolkit that encapsulates our work to date and gets us closer to a holistic creative environment than ever before. But there are some missing pieces we haven’t touched yet – mostly related to motion, nested animations, and sequencing. I want to address those in a way more integrated and intuitive than just clip sequencing and interesting LFO waveshapes. I think there’s something more there, and that to find it we need to think outside the box of both the synthesizer world and the videography/editing world.

I miss making these discussions part of the community. It’s why, I hope, you’re here on this forum – to be a component of the process of exploration and development we’ve dedicated ourselves to.

So let’s get some talking done, let’s discuss. What are your thoughts on this topic? How are you already using your video art tools this way? What works? What reads for you, in the work of others? What experiences do you have personally that could provide further background for this discussion? What’s still out there? What should “narrative” modular tools look like, and how should we be interacting with them?

Some of you are highly proficient animators and video editors and digital designers – what is your relationship with a tool more immediate and instantaneous, even, than pen and paper or rapidly assembling ingested camera footage?

Some of you have decades of experience in live television, stage direction and improv – what is your relationship towards a more introspective tool like a video art machine, where you are both improvising director and conjurer of the world on screen?

Some of you are genius polyrhythm programming modular synth musicians that craft melodies from the intersection of maths and probability – how do you make videos the same way, but without just directly applying the same techniques used in composing songs or ambient soundscapes?

Maybe you have no experience at all with video, or even audio, beyond video art hardware? What do you see in this discussion?

Thanks for reading, I am excited to dig in.

25 Likes

Lot of thoughts on this but immediately I would say that my interest in all of this is to unhinge the dominance of singular ‘narratives’ and open up new avenues of understanding (itself another term for narrative).

I also think that the most dominant narrative in our lives is how we are presented with the idea of ‘our-selves’ via television and film, which is the most dominant world-building engine of the 20th century.

I do think there is something interesting about tinkering with that engine and these tools (LZX and other) let us disrupt the trend towards singularity.

It’s also just a lot of fun!

5 Likes

Oops. I think I missed your question. Please continue to make cool modules and instruments that help do the above!

5 Likes

I’ll try to answer quietly, bringing my own experience from when (it feels like a century ago) I was trying to survive mostly on live gigs using my generative visuals made with VVVV.
At that time I was creating visual processes like particles, moving geometries, weird shader feedbacks, audio reactivity, and geometry deformation, everything controlled with a lot of knobs (a BCR2000), which gave me the ability to change a parameter slowly, switch a scene/effect with a toggle, or sweep a value, all with custom ranges on the knobs (I used a custom patch in Max/MSP and MIDI NRPN).
This is to say that I always worked by preparing “scenes” that blended into each other, like multiple render targets textured onto surfaces that worked both as screens and got deconstructed into other shapes. I’m realizing that the most valuable thing for me is controlling the signal path.
I don’t have a big LZX system and have only been actively experimenting with analog synthesis for less than two years; in the last weeks I played a lot with re-routing the CVBS signal between the mixer (with keyer), the Vidiot, the Structure, and some glitch FX.
I feel that I will probably invest more in an RGB workflow with routing capabilities, and probably with more than one video source.
Routing capabilities should probably be controllable not only with switches and pots.
I’d also like to see something to process the video buffer; I don’t have an MP but I think it’s essential in such a flow (unluckily, Structure doesn’t output 1V).

In both studio recordings and live I feel that signal routing is essential to explore and create variety.

This is my stream of consciousness.
Bye

6 Likes

Thank you for introducing this topic Lars! Something I’ve been exploring the last couple years is how to make my live visuals relate and react to the music in the surrounding environment (either a band or a DJ) as much as possible, but by my own hands rather than automating too much with LFOs or envelope followers. While those tools do a great job of syncing synthesized visuals on a granular beat-by-beat basis or maybe across a couple measures in the music, I’ve been trying to develop patches on my LZX system that can more broadly follow the narrative of the music on a higher level, that I can react to in a hands-on interactive way in realtime. I want to follow the dynamics and moments of tension and release as much as possible in the moment as a performer reacting to the music, rather than just the technology having all the fun of doing the reacting.

Admittedly I still feel I have a long way to go with realizing this on my current LZX setup, and in parallel to that I’ve also been developing a setup based on the liquid light shows of the 60s, which provide a very hands-on, organic, spontaneous, and precise method of relating visuals to music in the moment. When I’m working the liquid plates it also feels like I’m playing something akin to an actual visual instrument in the performing sense.

On my LZX setup I can create synthesized visuals with plenty of modulation, but in a way they still feel somewhat static, in that you can tell there isn’t a human behind it all reacting as dynamically as I would like. I’d ultimately like to interface with my modular in the same way that I interface with my liquid light plates. Right now I’m sending a camera feed from my liquid plates into the modular, which acts as a video processor, and that is working really well; but with the right kind of control and precision over control voltages without the camera feed (maybe something like a giant Escher Sketch), I think the video modular could take things way further on its own.

6 Likes

Not at all! This is a very open ended prompt.

2 Likes

Fascinating topic.

Approaching video synthesis from a purely tool-based point of view, there are no narrative limits. I.e., we can process any imagery and edit it in post to create whatever narrative we want. All kinds of happy accidents can happen along the way. That is one of the great strengths of video synthesis: its ability to surprise us with new imagery.

But if you want to talk about video sequencing, things get more complicated. It’s harder for an algorithm to generate interesting, unexpected patterns in time than in space. That added dimension of time means the system needs to work much harder/smarter. The problem of generating coherent narrative is non-trivial, and really can only be done by a sentient being that has subjective experience. Only then can the being actually model that experience to such a high degree of fidelity that it is accepted as genuine by another sentient being.

And so, moving generative art is pretty solidly in non-narrative territory. Machines have still not learned how to compose a story. Even the tech giants have not cracked this problem, despite the efforts of some very smart people and some very powerful hardware. So it’s probably a lost cause to hope that video synths can create narratives unaided by humans. The best we could do these days is program a complex graph of logic events that generates a pseudo-narrative structure. But seed that with some human intuition and you might have something. I.e., the human’s choice of clips/images to be fed into that logic graph would have a huge impact on its viability as a narrative.

When you start asking what makes something an interesting narrative in the first place, it turns out that it’s not the same for everyone. Sure, there are common threads… but people are wonderfully unique. Subject matter plays a massive role. Characterization is huge. A video synth can’t be expected to venture in this area. But from a purely structural point of view I think there are things that can be done.

If you’re looking at ideas for new modules, I would love to see something inspired by Buchla’s Source of Uncertainty. I am going to pick up one of the Tiptop clones when it’s available this spring. But that is just the jumping off point. Fractal algorithms are perfect for this. Multiparameter feedback with multiple recursion frequencies and amplitudes. Feedback channels going into one another. All of this to generate not just visual patterns but TIME patterns. When I have free time myself I am going to see what I can come up with on the computer using some visual programming environment like Resolume Wire or TouchDesigner.
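
To make the TIME-pattern idea concrete, here is a quick toy sketch in Python/numpy (everything here is arbitrary and illustrative, not any particular module): two LFOs whose frequencies are pushed around by each other’s outputs, so the triggers one of them emits stay related but never strictly periodic.

```python
import numpy as np

# Toy sketch of cross-parameter feedback between two LFOs: each
# oscillator's instantaneous frequency is nudged by the other's output,
# so trigger timing wanders instead of repeating a fixed period.
sr = 1000              # control-rate samples per second (arbitrary)
f1, f2 = 0.20, 0.07    # base frequencies in Hz
fb12, fb21 = 0.5, 0.3  # cross-feedback depths

ph1 = ph2 = 0.0
y1 = y2 = 0.0
events = []
for n in range(sr * 60):                # 60 seconds of control signal
    prev_y1 = y1
    ph1 += 2 * np.pi * f1 * (1 + fb21 * y2) / sr
    ph2 += 2 * np.pi * f2 * (1 + fb12 * y1) / sr
    y1, y2 = np.sin(ph1), np.sin(ph2)
    if prev_y1 <= 0 < y1:               # upward zero crossing of LFO 1
        events.append(n / sr)           # treat it as a transition trigger

print(np.round(np.diff(events), 2))     # intervals: related, never identical
```

In hardware terms that is nothing exotic: two LFOs cross-patched into each other’s FM inputs, plus a comparator on one of the outputs.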

Approaching video synthesis from the perspective of music, things get really interesting, I think. Instrumental composition is the original abstract pseudo-narrative. It’s all about patterns of tension and resolution, and yet it’s generally non-representational. There are exceptions, such as some Romantic composers, or musique concrete that incorporates sound effects. But by and large, instrumental music not only evokes a mood but also shifts that mood over time – all without words, pictures, or really representing anything but itself. This is my inspiration, an art that exists in its own right, makes reference to nothing outside itself, and yet somehow manages to tap into human perception and cognition to generate meaning.

9 Likes

I agree about the liquid light shows - playing it like an instrument. I have also done that as part of a live gig with a band. Perhaps in time I can learn to play an LZX modular synth like an instrument, creating the visuals in response to music and with the music rather than in mechanical sync.

4 Likes

I dream of a “macro controller” and event sequencer that many signals can be routed through and manipulated by, with set “scenes” that can be programmed and then either switched through or played.

The closest example I can draw from is lighting controllers - where the scenes are programmed ahead of time and played live with the music or sequenced as part of show automation.

As a step towards this idea of macro control, once the video rack is built I’ll be working on a 6U skiff that has all my modulation on hand, 9 signals at a time to be piped through a Doepfer A-138E and then via a Worng Vector Space out to the rack.
One turn of each crossfader will sweep through 3 different CV signals, and when it hits the Vector Space it gets combined/manipulated into a dense source of 17 related signals.

The dream would be this level of control with added matrix routing and additional automation that could be programmed as a scene, with each scene cut or blended via a button, a crossfader, or an event sequencer.
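
In software terms, the core of that dream is just scene morphing. A minimal Python sketch of the idea, with made-up parameter names (nothing to do with any real module):

```python
# Each "scene" is a snapshot of many CV values; one position control blends
# between adjacent scenes, and a sequencer or crossfader could drive it.
scenes = [
    {"hue": 0.0, "key_level": 0.2, "ramp_freq": 0.5},   # scene A
    {"hue": 0.7, "key_level": 0.9, "ramp_freq": 4.0},   # scene B
    {"hue": 0.3, "key_level": 0.5, "ramp_freq": 1.5},   # scene C
]

def morph(position):
    """position sweeps 0..len(scenes)-1, blending adjacent scenes linearly."""
    i = min(int(position), len(scenes) - 2)
    t = position - i
    a, b = scenes[i], scenes[i + 1]
    return {k: (1 - t) * a[k] + t * b[k] for k in a}

print(morph(0.25))   # mostly scene A with a little scene B
print(morph(1.5))    # halfway between scenes B and C
```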

5 Likes
Audio influences

I’m not going to talk too much about audio but from a young age I’ve considered my audio output to be my journal. I create snapshots in time of my feelings to come back to at a later date. This has acted as a means of time travel for me throughout the years.

I also used to play lots of bass, and that practice got me thinking about music in a geometrical way from an early age. I used to have bass tabs hung up on my wall where the geometry was outlined for songs that I found particularly pleasing/interesting (I’m looking at you, Primus). At this point I play piano and apply this same geometrical thought process to that practice.

Nichole and I have been together for almost my entire audio journey. However, they haven’t always been a part of that process. We did play in a fully improvisatory 4-piece band for several years when we lived in Indianapolis. This ended up being a huge boost of confidence for me.
MFT :: Japatucky
“wonderful dreams of you” is a good example of the kind of things we did at the time.

These concepts of saving snapshots of time for later, seeing through the medium to its internal joke/structure, and full-body improvisation have deeply influenced the path we’ve taken toward our current workflows.

This would be a quick way of explaining these concepts that grew in my mind over the last 19 years of doing this without getting too far into the weeds.


Theatre influences

One of the biggest influences on our output/process this past decade has been our work together in/on our community theatre. We started this theatre together in 2013 with $500 from our bank account when we were dirt poor, and we do every single aspect of production. When you do it all in a theatre, that looks like costumes, set building/design/dressing, sound design, painting, lighting design, props, poster design, etc., all coming together at a specific time. Having big ideas at the start of a project and then slowly coming to terms with the reality of your situation/time frame as you get closer to opening night.

We do 6 shows a year, each running for two weekends of performances.

This process of building up a larger output through careful work on individual parts and then piecing them together and letting them speak/breathe on a stage is absolutely the way we are approaching things at the moment.


Information influences

One of our big pushes over the last couple of years for video synthesis has been information spreading. This came about from not being able to find a place where someone could find all the pertinent information about what we do in one spot. If nothing else it allows us to understand old workflows/ideas, but it also lets other people come up more easily and hopefully contribute their own “ah-ha’s”. All glory to the wondrous humming CRT god.


Clip show introduction

In fact…
we just recently started sorting all the “clip show” videos that we’ve been saving as we found datamoshing videos. This will end up being a full-length release.

These clips are things that needed more attention than a simple music-video-length datamosh. It could be that I saw narrative potential or that they elicit a certain feeling/time/space.

We are in the process of getting our very first new home and have had video things boxed up for the move for a while. That has given us time to explore non-hardware video synthesis, mostly through ffmpeg/TouchDesigner.

Experiments with video interpolation/slit scanning/shape generation/projection mapping/physical object & light manipulation/lumia/datamoshing/feedback video looping etc. have been piling up. Some are short form, some long, but all are about the process and evolution of technique over time.


This picture is the main folder overview for the release as it stands.
The naming convention is very specific and just as improvisatory as the artistic creation itself. In parentheses are the themes for the pieces. Lots don’t have final names because the big work hasn’t been started on them yet; they are still amorphous.

Some themes are combinatory, some single; most are conceptual (water) rather than workflow-based (projection mapping). Lots of audio has already been pulled and more will be made during/after.

One thing to note is that it is OK to be a bit cheeky, if only for your own enjoyment. For instance, for the water theme I’ve got plenty of regular water clips/imagery, but I also have an unreleased Shbobo Shnth tutorial that talks specifically about the “water” opcode. I never released it because I didn’t close my mail app and a sex toy email came through. You can hear my train of thought coming to an abrupt halt as I realize this.


Trusting your carefully crafted instinct, the narrative becomes emergent as you gather these seemingly disparate pieces together. If it doesn’t, you don’t have the right pieces/order/theme: throw it away and start again. This might mean putting things in the trash, or it might mean putting the original videos back into the “pick me” pool to be sorted again at a later date. Don’t be afraid to lose things in the wash. The act of letting go can become a catalyst for understanding how to move forward.


Workflow TLDR

This is where we are at currently with our workflow:
  • Collecting pieces over time that represent snapshots of video synthesis growth & reprocessing in current workflows.
  • Collecting pieces that represent the “current” in your life, which can mean many things to many people.
  • Writing about those workflows and sharing the “how”.
  • Sitting on those pieces and letting them roil/boil in the back of your mind.
  • Organizing them into a loose theme.
  • Final composite/reprocessing.

It becomes a collage of yourself over a given period of time.

Here is a very short example from one of the pieces as it stands.


How does this work in modular?

We would need a module that could save clips/frames to nameable folders, and then the ability to CV-address the where/who/how of at least 2 (3 or 4 would be better) of those clips at once. Since I’m assuming each “track” wouldn’t necessarily have its own output (that would be rad though), I would love to see an assignable “aux” LZX RGB output (assignable enough to say R channel from clip 1, G channel from clip 2, B channel from clip 3) in addition to a regular main RGB out.

I’d like to be able to connect that module to a computer over USB and see it, at the very least, as a drive of those videos that could be taken and used however you like in a traditional editing environment.

Being able to freeze-frame per track with scrubbing (coarse, fine) would be a huge win.

Being able to easily resample the output as a new track would be huge as well.

8 Likes

in order to ask a machine to tell a story
we need the machine to be able to describe video

Color

  • R%, G%, B%, white%, black% modulators

Frame

  • frame difference% modulator (fixed or variable delay time for differencing)
  • motion vector modulators (even just 4 quadrants, with phase-modulated ramps “showing” the movement direction per quadrant)
  • scene change detection modulator (could be based on frame difference, but with a threshold for saying the scene has changed)

Having modulations based on patchable video would be unbelievably fun.
You could probably just super-slew a G channel to get an approximation of this effect, but it certainly wouldn’t be the same.

Having your machine speak back these details about your video, so that they can end up controlling themselves or being controlled by us, would go a long way toward making a generative narrator “sequencer”.
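
To make the “describe the video back to itself” idea concrete, here is a rough software sketch in Python/numpy on grayscale frames. All the function names are mine, not any module’s firmware, and the quadrant version only measures difference energy per corner rather than true motion direction:

```python
import numpy as np

# Rough sketch of the proposed modulators; a real module would run this
# per field in hardware with at least a field/frame of delay memory.
def frame_difference(prev, curr):
    """0..1 modulator: mean absolute difference between two frames."""
    return float(np.mean(np.abs(curr.astype(float) - prev.astype(float)))) / 255.0

def scene_changed(prev, curr, threshold=0.25):
    """Gate output: high when the frame difference jumps past a threshold."""
    return frame_difference(prev, curr) > threshold

def quadrant_activity(prev, curr):
    """Crude per-quadrant 'motion' modulators: difference energy in each corner."""
    h, w = curr.shape
    quads = {"TL": (slice(0, h // 2), slice(0, w // 2)),
             "TR": (slice(0, h // 2), slice(w // 2, w)),
             "BL": (slice(h // 2, h), slice(0, w // 2)),
             "BR": (slice(h // 2, h), slice(w // 2, w))}
    return {name: frame_difference(prev[ys, xs], curr[ys, xs])
            for name, (ys, xs) in quads.items()}

# Toy frames: a bright square that moves between frames.
a = np.zeros((240, 320), dtype=np.uint8); a[40:80, 40:80] = 255
b = np.zeros((240, 320), dtype=np.uint8); b[60:100, 60:100] = 255
print(frame_difference(a, b), scene_changed(a, b), quadrant_activity(a, b))
```

The scene-change gate is just the frame-difference modulator plus a comparator threshold, exactly as described in the list above.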

2 Likes

Yes! “Eyes on the screen, hands on the knobs.”

It’s a big ergonomic difference between using a video vs. an audio instrument – with audio you can stare at what your hands are doing while listening; with video, it’s more important that you can manipulate the patch blind, while staring at the screen to see the results. With the Gen3 and Visionary series, the knobs are all evenly spaced across modules – this is so that you can spread your fingers out and count them with your fingertips without looking, like piano keys, regardless of what order you are putting the modules in.

11 Likes

These replies bring me great joy and I really appreciate the way all of you are engaging this subject! I’m going to be replying with my thoughts as I can, and I’ll likely be re-reading this thread for a while.

To further some of my own thoughts – they echo this sentiment:

Namely, a structural point of view – the dispersion of dramatic energy across a narrative arc, rather than signifiers such as language or characterization. I’m interested in the structure of dramatic tension – how do we turn that into a patchable signal? What does that signal interact with exactly? Can pure “narrative signaling” trigger a reaction in the viewer, even if what we see on screen is not representative of a real object or person? Can that become compelling enough to engage a viewer’s inherent understanding of visual metaphor, such that concepts like characters and confrontations are not only imagined, but communicated by the artist with clarity? How do we give the user total control, while also introducing enough complexity for emergent outcome?

If modular synthesis has taught me anything, it is that we don’t need to make this complicated. The complication comes from the interaction between simple signal sources and modifiers.

So, I theorize there are some universal human visual cues. To give an example, almost any one-year-old child, from any culture or time or place, will react with surprise and laughter at the game of peek-a-boo.

And peek-a-boo is something we can dissect – both from a narrative perspective (sequence of events) and a visual perspective (what we see on screen).

  1. There is intentional concealment of a hidden object of interest. It is obvious to the viewer there is something exciting being hidden from them. In fact, the viewer knows what is there already!
  2. There is the rising, sustained tension of not knowing exactly when the hidden object will be revealed.
  3. The reveal comes suddenly, with great visual impact. It’s full of detail and motion, interaction. To a baby, this is comedy gold! Laughter ensues.
  4. Before the viewer’s eyes have time to fully process the reveal, it’s concealed once again, and we repeat from step #1. Only this time, the dramatic impact is amplified. The indeterminate waiting time between concealment and reveal becomes even more tense. The performer can push the waiting time further and further – a choice which only further excites the viewer.

Peek-a-boo is a simple example, but an easy one to apply. We have a video synth pattern on screen, but are only seeing one component of it. A slight visual tease of an overall shape or texture. After a random length of time, we open the Keyer or whatever is concealing it, showing something with more color, or more geometric complexity, or both. Then we hide it again. You don’t even need a special modulator for this – just your fingers.
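
If you did want to automate the timing instead of using your fingers, the whole game reduces to a couple of numbers. A toy sketch in Python, purely to illustrate the structure (nothing here is a real module):

```python
import random

# Toy sketch of the peek-a-boo structure as a gate generator for a keyer.
# Each round waits an unpredictable time before the reveal, the reveal is
# brief, and the waiting window stretches so the tension grows per round.
def peekaboo(rounds=6, base_wait=2.0, stretch=1.4, reveal=0.5, seed=1):
    rng = random.Random(seed)
    t, events = 0.0, []
    for i in range(rounds):
        t += rng.uniform(base_wait, base_wait * stretch ** i)  # indeterminate wait
        events.append((round(t, 2), "REVEAL"))    # open the keyer
        t += reveal
        events.append((round(t, 2), "CONCEAL"))   # hide it again before it is fully processed
    return events

for when, action in peekaboo():
    print(f"{when:6.2f}s  {action}")
```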

And from here, there are countless places to go.
What if you have two peek-a-boo games going on at the same time in overlapping intervals?
What if you slightly alter the revealed “object” each time you reveal it?
What if one time you reveal it and it’s drastically different? Woah, big narrative moment!

Peek-a-boo is just a start, though – what other “narrative structure” games can we play like this?

10 Likes

Like others here, I have enjoyed using modular video to push away from narrative. I had to struggle with editing my video synth stuff without turning it into too much of a narrative, and have tried to increasingly resist that.

What you describe feels like a parallel problem to modular audio synths. You can write techno on them, and meandering ambient, very easily. Music that has a strict narrative, that suddenly changes its global structure at agreed points (chord sequence shifts, time signatures) like a rock song does, is much harder to do in audio modular without some high-end sequencer action.

I would like to think about a module capable of global shifts while the smaller patterns of a patch continue the same - which gives you song sections in the same ongoing key, or book chapters with the same progressing characters, etc. Switching the solarise or negative on my Visual Cortex would be a basic example of this, but it’d be nice to have those kinds of overarching aesthetic aspects transition in a less on/off way, or to imagine some new global parameters which could sit to one side of a patch.

In terms of narrative aspects, the other thing, aside from being ‘global’ (or at least multiple) parameters, is that they are slow parameters. I see a lot of folks, myself included, who are fans of very slow oscillators in video synths. I think this is because they have a more narrative purpose, providing slow changes that are less immediately perceived. And those oscillators function as one of those global parameters. I find these more useful than a sequencer for this sense of narrative change, but maybe a very slow sequencer would be the answer!

This is different to the way I also sometimes think about narrative progress, which is approaching it in terms of ‘scenes’ or switching between saved patches. That’s more what you get with VJ software, I think, or the kind of narrative block created by a mixer wipe. The video synth is less amenable to this really, and it would be more about an individual operator’s patching strategy than anything actually embedded in the modules.

7 Likes

In terms of tools for this:

I think that utility modules are always useful for this sort of thing, both in audio and in video.

Passage was a great module for this; hopefully there will be a future module in this vein - maybe with CV control as well as knobs.

Switching modules are also a great addition - the Doepfer A-151 works at video rates.

Macro controllers - more advanced than what’s available at the moment - possibly something that could work on a timer, triggers, CV, and possibly a dial - possibly an advanced version of Mutable Instruments Frames - add a switch for range, and it could easily be sold to both audio and video users.

This would feed brilliantly into the proposed video player - maybe something like a more advanced Music Thing Modular Radio Music - an LZX TV/video module? - where you can switch channels (clips), scrub forwards and backwards, play forwards and backwards, and position within the clip, all under CV control - maybe one that can do both component and LZX-standard video output.

9 Likes

OK, when I was talking about structure I should have said temporal structure. I think we can definitely create patches that perform chaotic transitions/events/timings that will keep the viewer more engaged. I.e. fractal/recursive variation in event timing. Unpredictable, surprising, and yet not unfamiliar. It should not be that difficult to sort this out, given the tools we have today.

But when you start talking about programming the peekaboo game – again, things get more complicated. The machine doesn’t have a sense of what sights are interesting. I guess you could build face recognition into a module but that doesn’t seem like a good idea to me. We’re back at needing human experience and intuition, at least in the choosing of clips/images, to generate a meaningful narrative. If the peekaboo program works I guess you could also program editing/compositing patterns for other scenarios, e.g. suspense.

However, I would be quite happy with a high level event generator with multiple LFO frequencies, variable feedback, cross-parameter feedback, and an easy-to-use system to store presets.

Thanks

4 Likes

Agreed – and I am talking more about the human performance element than the idea of programming intelligence into the patch – anything from guiding the frequencies of the modulators by hand, manually controlling the knobs with a steady hand, or transitioning between higher level parameter groups and scene programs.

To put it another way – can you perform an animated film on your video instrument in the manner a musician performs songs on a musical instrument?

I feel like there’s something in that space that isn’t directly analogous to anything already existing in the world of audio synthesis or in digital media – I feel like the Scanimate took a big step in that direction – frame counters and low frequency ramp transformations, and the idea of a trained animation team at the console. It was kind of like a band, rehearsing then performing an animation. (At least that’s how I imagine it in my head.)

6 Likes

As many of you have touched on directions which I like, I’ll try to keep my addition brief.

I come from a modular synth environment, making electronic minimal dub and techno music of many kinds, plus sound plays with human voices which I also record and direct.
I started video synthesis with minimal knowledge of what is generated in what way, and had an almost 10 year background with audio synths and modular synthesis.

Video synthesis is quite tricky though. I have two approaches: 1) the ‘painting’ approach, where no animation is present and there are no transitions, just final results (and versions of them);
2) or I have movement on the screen - sometimes I animate other people’s works (paintings, analog photography, etc.).

I relied on generative patterns in modular synthesis in two ways:
a) audio-based events (envelope followers)
b) generating events with gesture (knobs, theremin, etc)
As these are the sources, the results would usually be aa) tonal or amplitude changes bb) rhythmic variations (clock divisions & multiplications, resets & combinations) & nothing else.
I think I never really generated ‘melodies’ for example.

A generative approach usually comes in handy in a third case, that is, interactive installations, where people with no background knowledge can control the instrument via theremin and their own voice. So generativity for me means either “spicing something up” or creating an environment for “interactivity”.

Bottom line: generativity for me is a means of achieving a result I had wanted as a person - the narrative vision had always been there somehow beforehand.

One more thing: a generative result can be “happy accidents” too (e.g. having an originally unintended color result in patching a multi-output video module into a mixer) - but this is mostly “getting lost in the maths of things” - I am a great Serge-enthusiast - you are left alone with functions and invited to do whatever you want with them instead of other synth modules which put you in a narrative tonal and functional context. Yet, I also like LZX’s naming convention in some of the series - they brought a lot of “Aha! I get it now!” moments in many of my workshops & performances.

I believe “event-based” narratives do not have to be very complex either. I like starving “the kitchen of the mind” & making it cook something wonderful with limited “resources” (i.e. visual inputs). Some of the users who saw my pictures came up with interesting reactions, defining the “genre” of the “movie” they were looking at (be it static video art or animated):

“Western” - (Arch & Topogram)
“Video Game”, “Sci-fi” - (Castle series)
“Retro” - (more “basic”, Cadet-based patches)
“Horror” “Suspense” (patches with feedback & darker colors)

You see, these are already “narratives” in themselves too! :slight_smile:

5 Likes

Yes, I’m with you. This is what I was talking about regarding a high-level event generator. The user specifies base timing frequencies for LFOs, Gaussian noise functions, and random events. Each oscillator/operator has controls for frequency and amplitude of feedback, as well as routing and amplitude of cross-parameter feedback. The system outputs gates/triggers and continuous waveforms to effect transitions. This would be the brain for other modules that actually perform the transitions; basically you need a multichannel sequencer/switcher.

Regarding continuous transitions such as dissolves and wipes, sine waves are a good start. Slow out / slow in, one of the Disney 12 principles of animation, is built into a sine wave. But any animator will tell you that the default B-spline curve is boring, artificial, mechanistic, and barely better than a linear transition. So an event generator should have some mechanism for waveshaping in order to de-couple the slow-out / slow-in from the overall transition time. In animation terms, extend or contract the Bezier tangent handles.

And not just that, but for quality animation, the slow-out / slow-in needs to be asymmetrical. Sometimes that asymmetry needs to be VERY EXTREME. The canonical example is a camera zoom. Screen space is exponentially more “sensitive” than a linear parameter such as focal length. To get a zoom that looks linear and natural to the audience, the function curve needs to have a very short slow-out of the initial value, and an extremely long slow-in to the final resting value.
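
As a toy example of that asymmetry, here is one possible curve in Python. The formula is my own invention, not a standard: one exponent shapes the slow-out, the other shapes the slow-in, and the defaults mimic the zoom case (very short slow-out, extremely long slow-in):

```python
# Toy asymmetric slow-out / slow-in curve (illustrative formula only).
# p_out shapes how quickly the value leaves its start; p_in shapes how
# gently it settles into its end.  Equal values give a symmetric S-curve;
# a small p_out with a large p_in gives the "zoom" feel described above.
def ease(t, p_out=0.2, p_in=5.0):
    """t in 0..1 -> eased value in 0..1."""
    a = t ** (1 + p_out)            # departure curve (slow-out)
    b = 1 - (1 - t) ** (1 + p_in)   # arrival curve (slow-in)
    return (1 - t) * a + t * b      # crossfade between the two halves

for t in (0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0):
    print(f"{t:4.2f} -> {ease(t):.3f}")
```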

This is definitely an area that is ripe for development. Video hardware has a problem: the automatic transitions all look artificial. This is why I’ve always preferred manual transitions with a T-bar, rather than that awful “TAKE” button. I presume some current high-end switchers and DVEs have more options for transition waveforms, but I’d wager that these are still pretty lackluster compared to the human touch. An event generator module could improve on this by modulating the base transition wave.

6 Likes

Great reading through this, thanks for the stimulating discussion all. Still letting this soak in and don’t have a ton to say yet. But…

Shifts of scale and zoom seem to be similarly related to peek-a-boo, in terms of processes which can reveal things that were hidden prior to a shift in perspective. If one is presented with a field of dots seen from a distance, the movement of approaching the field for a closer look could have a sense of drama, perhaps even more so when pushing past or through the field to view things from the other side. Maybe on the other side they’re not dots at all but triangles. Human vision can really only see one direction at a time, yet we “know” that there are things happening outside of our current gaze. We can see something different by shifting our gaze. Part of an artistic process can be playing with those known expectations (often binaries: either/or, front/back, top/bottom) and using the expectation to emphasize an unexpected result that isn’t constrained by perceived binaries… not this/that but something else. I think for me, confounding dualities has lots of application in modular synthesis, where often things seem, at a very basic level, off/on. This is where things get interesting to me with more chaotic circuits that are not so predictable in that way. Another layer here that I appreciate with sources of uncertainty is setting up systems where the duality of me/instrument is undermined. Nothing I like more than not being able to see/feel my hand at work.

6 Likes