There’s a lot of ground to cover here. I’ll assume, @DesertMuseum and @48HourVideo, that you’ve done the basic homework on why NTSC/PAL is interlaced and the associated challenges of de-interlacing.
I tend to look at the capture workflow from the perspective of A) ‘what looks best for my intended audience’ (that audience includes me with my artistic intent) and B) ‘what device will this be displayed on.’
A) What looks best is a bit subjective, but for me it is the highest resolution format that does not contain egregious conversion or motion artifacts.
B) The answer is in almost all cases: a modern progressive scan device (but sometimes a CRT).
If I’m going to a stack of CRTs I won’t necessarily de-interlace or upscale.
Otherwise it’s all de-interlaced.
I have a couple of different studio capture workflows. Most common is NTSC Y/C from a mixer, or NTSC component out of the Vidicraft Cortex, into a hardware scaler (scaling to 720p or 1080i), then into a BM capture device via HDMI or SDI. Sometimes for NTSC I’ll capture straight to the BM Intensity Pro 4K.
This then becomes the archival material. I use Final Cut to master the files and convert 1080i to 1080p if needed.
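If you’re not in Final Cut, the same 1080i-to-1080p step can be done on the command line with ffmpeg’s yadif deinterlacer. This is just a sketch of the equivalent conversion, not my actual workflow; the file names and the ProRes codec choice are placeholders to adjust:

```shell
#!/bin/sh
# Sketch: 1080i -> 1080p with ffmpeg (alternative to mastering in Final Cut).
# File names are hypothetical examples.
IN="capture_1080i.mov"
OUT="master_1080p.mov"

# yadif mode=0 emits one progressive frame per interlaced frame (29.97i -> 29.97p);
# mode=1 would double the rate to 59.94p instead. parity=auto lets ffmpeg
# detect the field order from the stream.
FILTER="yadif=mode=0:parity=auto"

# Only run the conversion if the source file is actually present:
if [ -f "$IN" ]; then
  ffmpeg -i "$IN" -vf "$FILTER" -c:v prores_ks -profile:v 3 -c:a copy "$OUT"
fi
echo "$FILTER"
```

If you want smooth 59.94p for motion-heavy material, switch to `yadif=mode=1` and accept the doubled frame rate.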
Live I will run NTSC straight from the main mixer to my projectors (XGA) to minimize latency. If I have to integrate with a high def workflow I will use a scaler (or scaling mixer) and deliver 720p or 1080p. Haven’t ever delivered higher than 2K.
A note on scalers and converters: Many do a decent job of converting formats, but none except the very high end do a great job at making up pixels that aren’t there to begin with.
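To make that concrete: even a good software upscale is only interpolating the pixels that are there. Here’s a hedged ffmpeg sketch of taking an SD NTSC capture to 1080p (file names hypothetical); Lanczos is one of the better-regarded resampling kernels, but it still can’t recover detail the source never had:

```shell
#!/bin/sh
# Sketch: upscaling a 4:3 NTSC SD capture to pillarboxed 1080p.
# File names are hypothetical examples.
IN="ntsc_480i.mov"
OUT="upscaled_1080p.mov"

# Deinterlace first, then scale the 4:3 image to 1440x1080 with Lanczos
# resampling, pad to a centered 1920x1080 frame, and declare square pixels.
FILTER="yadif=mode=0,scale=1440:1080:flags=lanczos,pad=1920:1080:(ow-iw)/2:0,setsar=1"

# Only run if the source file is actually present:
if [ -f "$IN" ]; then
  ffmpeg -i "$IN" -vf "$FILTER" -c:v prores_ks -profile:v 3 -c:a copy "$OUT"
fi
echo "$FILTER"
```

Order matters here: deinterlace before you scale, or the scaler will smear the two fields together.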