Interlace or DeInterlace on Capture (and associated methodology)

Wondering if anyone has any experience or advice (pros/cons/etc.) on capturing your video projects in an interlaced or deinterlaced format. I know this can be done in software, but I am hoping to set up something along the lines of capturing the interlaced output from the LZX Cortex into a Blackmagic capture device.

Also, if anyone has any notes to offer on the potential benefits of keeping your master project files interlaced or deinterlaced for future projection, playback, etc.?


I’m curious to learn more about this as well. On the way up, the analog SD NTSC gets converted to SDI at 525i, with 59.94 fields/29.97 frames per second. With our Atomos converter/upscaler, we found the best upscaling settings for our displays retained that default SDI interlacing and multiples of the frame rate.
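For reference, those nominal "59.94/29.97" numbers come from the actual NTSC rate of 30000/1001 Hz, with two fields per interlaced frame. A quick arithmetic sketch:

```python
# NTSC SD timing: the "29.97" frame rate is actually 30000/1001,
# and interlaced video carries two fields per frame.
frame_rate = 30000 / 1001          # ~29.97 frames per second
field_rate = 2 * frame_rate        # ~59.94 fields per second

lines_total = 525                  # scan lines per frame (incl. blanking)
lines_per_field = lines_total / 2  # 262.5 lines per field

print(f"{frame_rate:.5f} fps, {field_rate:.5f} fields/s, "
      f"{lines_per_field} lines/field")
```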

Definitely curious to know what impact interlacing may have on storage, hadn’t considered it. Thanks for the thought @DesertMuseum :ok_hand:
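On the storage question, a rough back-of-the-envelope: deinterlacing to the full field rate (59.94p, one progressive frame per field) doubles the stored frame count versus keeping the capture interlaced at 29.97, while deinterlacing to 29.97p stores the same amount. A sketch with hypothetical uncompressed 8-bit 4:2:2 SD numbers:

```python
# Rough uncompressed storage comparison (hypothetical example numbers):
# 720x486 NTSC SD, 8-bit 4:2:2 averages 2 bytes per pixel.
width, height = 720, 486
bytes_per_pixel = 2
frame_bytes = width * height * bytes_per_pixel

interlaced_fps = 30000 / 1001        # 29.97i: one woven frame per period
deint_full_fps = 2 * interlaced_fps  # 59.94p: every field becomes a frame

gb, hour = 1000**3, 3600
interlaced_gb_hr = frame_bytes * interlaced_fps * hour / gb
deint_gb_hr = frame_bytes * deint_full_fps * hour / gb
print(f"interlaced: {interlaced_gb_hr:.1f} GB/hr, "
      f"deinterlaced to 59.94p: {deint_gb_hr:.1f} GB/hr")
```

In practice a codec's compression ratio will dominate the raw numbers, but the 2x relationship between 29.97i and 59.94p holds either way.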


There’s a lot of ground to cover here. I’ll assume @DesertMuseum and @48HourVideo that you’ve done the basic homework on why NTSC/PAL is interlaced and the associated challenges of de-interlacing.

I tend to look at the capture workflow from the perspective of A) ‘what looks best for my intended audience’ (that audience includes me with my artistic intent) and B) ‘what device will this be displayed on.’

A) What looks best is a bit subjective, but for me it is the highest resolution format that does not contain egregious conversion or motion artifacts.

B) The answer is in almost all cases: a modern progressive scan device (but sometimes a CRT).
If I’m going to a stack of CRTs I won’t necessarily de-interlace or upscale.

Otherwise it’s all de-interlaced.
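As a concrete illustration of what a deinterlacer has to do, here is a minimal "bob" deinterlace in Python/NumPy: split each interlaced frame into its two fields and line-double each back to full height, yielding two progressive frames. (This is a naive sketch; real deinterlacers use motion-adaptive interpolation, and hardware scalers do this for you.)

```python
import numpy as np

def bob_deinterlace(frame: np.ndarray, top_field_first: bool = True):
    """Split an interlaced frame into two fields and line-double each,
    returning two progressive frames (naive 'bob' deinterlacing)."""
    first = 0 if top_field_first else 1
    out = []
    for start in (first, 1 - first):
        field = frame[start::2]                # every other scan line
        doubled = np.repeat(field, 2, axis=0)  # line-double to full height
        out.append(doubled)
    return out

# Toy 4-line "frame": even lines from field A (0s), odd lines from field B (1s).
frame = np.array([[0, 0], [1, 1], [0, 0], [1, 1]], dtype=np.uint8)
a, b = bob_deinterlace(frame)
print(a.shape, b.shape)  # both (4, 2): one full-height frame per field
```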

I have a couple of different studio capture workflows. The most common is NTSC Y/C from a mixer or NTSC component out of the Cortex into a hardware scaler (scaling to 720p or 1080i), then into a BM capture device via HDMI or SDI. Sometimes for NTSC I’ll capture straight to the BM Intensity Pro 4K.

This then becomes the archival material. I use Final Cut to master the files and convert 1080i to 1080p if needed.

Live, I will run NTSC straight from the main mixer to my projectors (XGA) to minimize latency. If I have to integrate with a high-def workflow I will use a scaler (or scaling mixer) and deliver 720p or 1080p. I haven’t ever delivered higher than 2K.

A note on scalers and converters: Many do a decent job of converting formats, but none except the very high end do a great job at making up pixels that aren’t there to begin with.


The only downside of always deinterlacing (and upscaling) during capture that I have ever personally found is that sometimes I prefer the look of interlaced stills. (Which isn’t even a problem, really, because I almost always prefer a photograph off a CRT over any digital still, interlaced or not.)