I am attempting to troubleshoot a signal flow that allows two external sources, via two Visual Cortexes, to be patched throughout a system. I have introduced a sync generator and achieved the stated goal with genlocked external playback devices. However, when I introduce a genlocked camera, the output is not as expected.
Do any of you video friends use an external playback device and a genlocked camera in your workflow? Or perhaps you use two genlocked cameras with a sync generator? I would appreciate any feedback on the following signal flow sketches:
There’s a lot going on here. I will assume that the AJA is sending SD sync to the LZX, and that the KiPros and camera are outputting NTSC. Can you start by describing what DOES work (does a single KiPro sync, does the camera sync, are the two Cortexes in sync with internal signals)? Have you tried it with a composite signal? Have you tried having one synced Cortex drive the second Cortex? (I have a pretty complicated sync chain, and this is how mine syncs.) Also, when you say the output is ‘not as expected,’ what specifically are you seeing - the output of the second Cortex scrolling horizontally? EDIT: what is ‘cam out’ in the second diagram? (To be truly stable in my system, the only signal the decoder can trust is a sync signal; this precludes using it as a component input, though.)
The following configurations allow me to patch the decoded external input of each VC in a two VC configuration throughout the system without sync issues:
- Sending sync from the AJA GEN10 to two KiPros and two VCs but using the KiPros in playback mode only.
- Sending sync from the AJA GEN10 to a single KiPro and the XF105 and monitoring the XF105 output via the KiPro directly. (Pictured below.)
I have not tried any configuration with composite signals only. Which decoder switch setting do you advise, and do you deliver sync on the front or rear input when driving a Cortex with a Cortex? When I say the output is ‘not as expected,’ it generally looks like this:
These artifacts are not scrolling, but the horizontal banding visible in the bottom-left quadrant above is mobile and seems to follow dark values on the left side of the screen. The XF105 displays a successful genlock indicator in the viewfinder, and I have tried correcting the phase adjustment in the genlock settings at maximum values. Phase adjustment does alter the horizontal alignment, but not enough to bring the system’s output into line with the source.
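For a rough sense of scale (a back-of-envelope sketch using standard NTSC numbers, not anything measured from this rig): the genlock H-phase control is working within a single scanline, which is why a camera can report a good genlock yet still land visibly offset from the rest of the system.

```python
# Back-of-envelope NTSC horizontal timing (standard values, not measured
# from this setup). The H-phase adjustment operates within one scanline,
# so this is the entire window it can possibly cover.
line_rate_hz = 4.5e6 / 286           # NTSC horizontal rate ~= 15,734.27 Hz
line_period_us = 1e6 / line_rate_hz  # one scanline ~= 63.56 microseconds
print(round(line_period_us, 2))
```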
I generally use the front-panel Y input for sync, as it’s the most flexible in my three- or four-case setup, but it’s functionally equivalent to the rear input when switched to ‘external decoder.’ Let’s step back before getting back into the details - what’s your intended use case? The photos suggest the two-Cortex system would be used as a live video switcher, which raises the question: why not use a video switcher, and use the LZX as effects on the other inputs of the switcher?
I would like to feed an external source via a KiPro into one VC, then rescan its output and process that rescan throughout the system via the second VC.
Recently, I have been sending an external source to the system while monitoring that source on a preview monitor. I have been rescanning the preview monitor with a defocused XF105 to generate a bloom. I am keying and filtering the luma signal of the defocused rescan camera and would like to composite the external source and its rescanned counterpart within the system.
The stroke in the following images was created by rescanning the external source via the preview monitor on the left, but the compositing was achieved outside of the system.
Thanks for the explanation - the result does look pretty cool (and with the rescan I can see how it adds something to the key that you wouldn’t get from a production mixer). IIUC, what you want to do with the LZX involves a re-entrant signal (the external signal gets processed in a loop to generate the stroke, then goes to the other Cortex to be composited). I’m not sure how this will affect your sync chain. I’ve experimented with this before, using my output mixer to sync a second Cortex, and it worked in some cases [Sync → Cortex 1 sync; mixer → Cortex 2 sync] [Signals → Cortex 1 and Cortex 2]. So you might be able to get creative by treating this as TWO synced systems rather than one. You are in a brave new world here, so I wish you luck! Please do share if you get it working. (The cheater way to do this is to use your rescanned signal as a DSK/matte source, as you have in the example above; that still seems to me to be the way to get the desired effect without having to mix multiple generations of technology from the SD/HD worlds.) EDIT: or add a TBC for the re-entrant signal, as there are going to be sync delays between all these devices.
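To put a rough number on those sync delays (a sketch with assumed latencies - the one-frame-per-device figure and the device count are guesses for illustration, not measurements): each frame-buffering device in the re-entrant path can add on the order of one NTSC frame.

```python
# Hypothetical delay budget for the re-entrant loop. Assumes each
# frame-buffering device (KiPro, TBC, etc.) adds roughly one frame of
# latency - an assumption for illustration, not a measurement.
frame_rate = 30000 / 1001                 # NTSC ~29.97 fps
frame_ms = 1000 / frame_rate              # ~33.37 ms per frame
buffering_devices = 2                     # e.g. KiPro + TBC (assumed count)
loop_delay_ms = buffering_devices * frame_ms
print(round(frame_ms, 2), round(loop_delay_ms, 2))
```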
Yeah, I’m also thinking TBC.
Though a genlocked camera should be equivalent to a TBC’d camera. Stabs in the dark: I’d alternately try making the AJAs and then the camera the sync source. Heck, I’d drive the whole shebang from one of the Cortexes and see what happens!