Why do some LZX modules need sync and others don’t?
For example, I get that a Prismatic Ray or a Visual Cortex needs sync: the PR produces video signals, and so does the Visual Cortex. But why does the Navigator need it, and not the Shapechanger?? I’m confused!
I don’t mean this as a dig at the design practices, not at all! I just want to understand better how sync works, and whether I might be making a mistake by putting some modules in a second case.
The reasons I know of why modules might need sync are:
To synchronize a clock or oscillator. (Such as with a video oscillator or a digital module)
To know when the sync/blanking interval occurs, in order to blank the normal signal during that time. (Such as in an output encoder)
To synchronize changes to be made in between lines or frames instead of in the middle of lines/frames. (Such as with a digital module)
To know when a line/frame begins in order to generate ramps based on that. (Such as with the Cadet ramp generator)
To know when a line/frame begins, in order to generate output at that moment. (Such as with a digital module)
Note: that might not cover everything, and some of those reasons are just different ways of looking at the same thing. Some modules might use sync for one of those reasons, while others might use sync for multiple reasons.
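To make the ramp-generator case concrete, here’s a toy sketch (not LZX firmware, and all timing numbers are made-up assumptions) of how a horizontal ramp can be derived from line-start sync: the ramp counts up during the line and resets every time sync fires, which is what keeps it locked to the display instead of free-running.

```python
SAMPLES_PER_LINE = 64  # hypothetical pixel-clock ticks per video line

def horizontal_ramp(lines=2):
    """Yield a 0.0..1.0 ramp that resets at each line-start sync pulse."""
    for _ in range(lines):
        # The line-start sync event is modeled by restarting this inner loop.
        for t in range(SAMPLES_PER_LINE):
            # Without sync the ramp would free-run and drift relative to
            # the raster; the per-line reset keeps it phase-locked.
            yield t / SAMPLES_PER_LINE

ramp = list(horizontal_ramp())
# The ramp restarts at 0.0 exactly on each sync boundary.
assert ramp[0] == 0.0 and ramp[SAMPLES_PER_LINE] == 0.0
```

The same reset-on-sync idea applies vertically for frame ramps; only the event driving the reset changes.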
My guess for the Navigator is that it uses sync to blank the signal during the sync/blanking interval, in case any of the transformations spill into that time. Just a guess, based on what it does; I could be wrong. Or maybe the anchor functionality needs sync? Now that I’m trying to intuit what it uses sync for, I’m really curious what the correct answer is!