What you’re discussing in your reply to Agawell would indeed require some manipulation in a digital processing suite to prepare it for the video synth system. Lynda has an excellent Adobe AE VFX course that teaches multiple methods for generating moving masks. And you’re right about the crux of the issue: saving the transparency data after the treatment. QuickTime MOV and some other codecs/containers are capable of storing an alpha channel, but I’m not sure how that information would be conveyed via a media player into Visual Cortex/Vidiot/TBC2. Maybe the TBC2 will allow opacity-channel output via the 1V DC Luma output? Maybe the Luma output already serves that purpose in a sense and I’m a total scrub? All very possible.
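If it helps to make the alpha layer concrete: it’s just a fourth grayscale plane riding along with the RGB data, so splitting it out into a standalone mask is trivial once you have decoded frames. A minimal numpy sketch (synthetic frame data, not a real MOV decode; the function name is mine):

```python
import numpy as np

def alpha_to_luma_mask(rgba_frame):
    """Split an RGBA frame into its color planes and a grayscale mask.

    The alpha plane becomes a full-range luma image that a downstream
    keyer could read as per-pixel opacity.
    """
    rgb = rgba_frame[..., :3]
    mask = rgba_frame[..., 3]  # 0 = fully transparent, 255 = fully opaque
    return rgb, mask

# Synthetic 2x2 frame: left column opaque red, right column transparent black.
frame = np.zeros((2, 2, 4), dtype=np.uint8)
frame[:, 0] = [255, 0, 0, 255]
rgb, mask = alpha_to_luma_mask(frame)
```

Rendering that `mask` array out as its own grayscale video is essentially the "second synchronized file" approach.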
That’s what leads me to the conclusion of a luma or chroma treatment for the background, which you can then process for keying inside the LZX system with any variety of modules. To use the alpha-channel approach, you’d want a second decoder input playing a grayscale mask layer in sync with the main video. That luma data could then drive various keyers as opacity data.
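The "luma data as opacity" idea amounts to the standard compositing equation, out = mask·fg + (1 − mask)·bg, applied per pixel. A rough software sketch under that assumption (names are mine, not any LZX API):

```python
import numpy as np

def key_composite(fg, bg, mask):
    """Composite foreground over background using a grayscale mask
    as per-pixel opacity: out = a*fg + (1 - a)*bg, with a = mask/255."""
    a = mask.astype(np.float32)[..., None] / 255.0
    out = a * fg.astype(np.float32) + (1.0 - a) * bg.astype(np.float32)
    return out.round().astype(np.uint8)

# Toy 1-pixel frames: a full-white mask passes the foreground through,
# a mid-gray mask blends the two roughly 50/50.
fg = np.full((1, 1, 3), 200, dtype=np.uint8)
bg = np.full((1, 1, 3), 100, dtype=np.uint8)
opaque = np.full((1, 1), 255, dtype=np.uint8)
half = np.full((1, 1), 128, dtype=np.uint8)
```

A hardware keyer is doing the analog equivalent of this with the 0–1V luma signal standing in for `mask`.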
I’d imagine the quality of your final output will depend on the processing work you do to prepare the masked video file. After that, there are multiple hard and soft keyers available in LZX modules to achieve different looks for your foreground. Visual Cortex is hard-keyer only, which might look a bit jagged or obvious for things like the hair on a person’s head. Soft keyers allow for partial transparency at the edges, giving a more natural look for photorealistic masked objects over a non-original background.
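The hard-vs-soft distinction can be sketched numerically: a hard key is a binary threshold on the mask luma, while a soft key ramps between two thresholds so edge pixels get partial opacity. A toy example (the threshold values are arbitrary, chosen just for illustration):

```python
import numpy as np

def hard_key(luma, threshold=128):
    """Hard keyer: binary cut at the threshold (jagged edges)."""
    return np.where(luma >= threshold, 255, 0).astype(np.uint8)

def soft_key(luma, lo=96, hi=160):
    """Soft keyer: linear ramp between lo and hi, so edge pixels
    get partial transparency (smoother look for e.g. hair)."""
    ramp = (luma.astype(np.float32) - lo) / float(hi - lo)
    return (np.clip(ramp, 0.0, 1.0) * 255).round().astype(np.uint8)

# A gradient across a mask edge: the hard key snaps each pixel to
# 0 or 255, while the soft key leaves intermediate opacity values.
edge = np.array([0, 64, 128, 192, 255], dtype=np.uint8)
```

On a real edge like hair, those intermediate values are what sells the composite; the binary version is where the jaggedness comes from.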