Incorporating SD and HD systems

now that even more people will be combining two different resolution systems together, I think it would be good to start discussing the creative possibilities of that kind of workflow.

so a good starter question is: how are you thinking about scaling SD to fit with HD?

I’m assuming the following for these examples:
SD = 640 x 480
HD = 1920 x 1080

ffmpeg snippets are included in case you want to try it yourself.




no scaling, pad to 16:9 screen center

ffmpeg -i soul.mp4 -vf "scale=-1:480:flags=lanczos,setsar=1,pad=1920:1080:(ow-iw)/2:(oh-ih)/2" soulU.mp4


scale up to 1280 x 960 then pad to 16:9 screen center

ffmpeg -i a.mp4 -vf "scale=-1:960:flags=lanczos,setsar=1,pad=1920:1080:(ow-iw)/2:(oh-ih)/2" aU.mp4


scale up to 1440x1080 then pad to 16:9 screen center

ffmpeg -i a.mp4 -vf "scale=-1:1080:flags=lanczos,setsar=1,pad=1920:1080:(ow-iw)/2:(oh-ih)/2" aU.mp4


640x480 cropped to 640x360, then upscaled to 1920x1080 fullscreen

ffmpeg -i b.mov -vf "crop=640:360,scale=1920:1080:flags=lanczos,setsar=1" bU.mp4


there are plenty of other options; I’d love to hear any thoughts.

you also have the other direction for creativity…

HD>SD

1920x1080 > center crop @ 640x480

1080 center crop

ffmpeg -i a.mov -filter:v "crop=640:480" -c:a copy aC.mov


1920x1080 > 4 quadrants @ 640x480 centered

ffmpeg -i gc.mov -filter_complex "[0]crop=640:480:320:60[tl];[0]crop=640:480:960:60[tr];[0]crop=640:480:320:540[bl];[0]crop=640:480:960:540[br]" -map "[tl]" topleft.mp4 -map "[tr]" topright.mp4 -map "[bl]" bottomleft.mp4 -map "[br]" bottomright.mp4


Since TBC2 will allow for upscaling, I’m wondering what kind of options we will have.


The V440 has been a great playground for SD<>HD workflows

https://www.instagram.com/p/CmmTFrMJIun/

this is a way of combining SD and HD together that we hadn’t tried much before this week. We got a bunch recorded and are going to send it back through the system when TBC2 lands!

5 Likes

My videos are usually totally abstract patterns of light, so I just use “scale=hd1080” to do it. Then I can trivially mirror vertically and horizontally to get higher resolutions without compromising quality.
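
For anyone who wants to try the same trick, here’s a rough sketch of how that mirroring could look as a single filtergraph — not the poster’s exact chain; the filename is a placeholder and I’ve spelled the target size out as 1920:1080:

ffmpeg -i abstract.mp4 -vf "scale=1920:1080:flags=lanczos,setsar=1,split[a][b];[b]hflip[bf];[a][bf]hstack,split[c][d];[d]vflip[df];[c][df]vstack" abstract_3840x2160.mp4

The hstack doubles the width to 3840 and the vstack doubles the height to 2160, so a 1080p render comes out as a mirrored UHD frame.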

While that requires a moderately complex filterchain, it’s obviously cheating. :) If I were working with a camera and familiar objects, like your examples, your filterchains would be very useful.

I do sometimes use other people’s videos just for testing filters. BTW, I find the fillborders filter to be useful, particularly for cleaning up the edges of a capture from my VC and ESG. So ffmpeg always plays a role in my videos, no matter how simple.

Thanks.

PS: here’s the kind of filterchain I use for capture:

-filter_complex "pan=stereo|c0=c0|c1=c1" -filter_complex "setpts=PTS-STARTPTS, fillborders=mode=smear:left=18:top=18"
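
To show where those options would sit in a full command, here’s a purely hypothetical capture invocation — the v4l2/ALSA devices and the codec choices are placeholders, not the poster’s actual setup:

ffmpeg -f v4l2 -i /dev/video0 -f alsa -i hw:1 -filter_complex "pan=stereo|c0=c0|c1=c1" -filter_complex "setpts=PTS-STARTPTS, fillborders=mode=smear:left=18:top=18" -c:v ffv1 -c:a pcm_s16le capture.mkv

The pan graph picks up the audio input and the setpts/fillborders graph applies to the video, so the two -filter_complex graphs don’t interfere with each other.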

4 Likes

One of the things I like doing with the quadrant outputs is to process them separately and then recombine into a full image. When I’m back at my main computer I can post a version I did using regular and predictive datamoshing on different quadrants.
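
For anyone who wants to try it before then, here’s one possible sketch of the recombine step, using the quadrant filenames from the crop command earlier in the thread (any per-quadrant processing would happen before this; with 640x480 quadrants the result is a 1280x960 frame):

ffmpeg -i topleft.mp4 -i topright.mp4 -i bottomleft.mp4 -i bottomright.mp4 -filter_complex "[0:v][1:v]hstack[top];[2:v][3:v]hstack[bottom];[top][bottom]vstack" recombined.mp4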

1 Like

Speaking of… Quadrants 2022-11-30 UHD2160

1 Like

yes!!! now this is what I’m talking about. what a cool way to think about upscaling

I’ve got to give this a shot

I’d like to try a 2x1 version: two 960x540 videos stacked on top of each other on the left side, with one 1920x1080 layer under the whole thing showing through on the full right side.
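
Something like this might do it — just a sketch, with placeholder filenames, assuming the two left-side clips get scaled down to 960x540 and the background layer is already 1920x1080:

ffmpeg -i left_top.mp4 -i left_bottom.mp4 -i background.mp4 -filter_complex "[0:v]scale=960:540,setsar=1[lt];[1:v]scale=960:540,setsar=1[lb];[lt][lb]vstack[left];[2:v][left]overlay=0:0" layout2x1.mp4

The vstack builds the 960x1080 left column and the overlay pins it to the left edge, leaving the right half of the background visible.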

1 Like

Here’s a simplified version of the bash script I use to generate quadrant videos. PROCESS is the variable for any additional processing. QUADLEN is DUR/4 where DUR is the total duration of the input video. I think you can guess the rest.

Do not use the trim filter for this job! It will consume too much memory and fail.

nice $FFMPEG -hide_banner \
-ss $START -i "$INPUT" \
-ss $(($START+$QUADLEN)) -i "$INPUT" \
-ss $(($START+$QUADLEN*2)) -i "$INPUT" \
-ss $(($START+$QUADLEN*3)) -i "$INPUT" \
-filter_complex "
[0:v] $PROCESS hflip [t1]; \
[1:v] $PROCESS copy [t2]; \
[2:v] $PROCESS hflip [t3]; \
[3:v] $PROCESS copy [t4]; \
[t1] [t2] hstack [h1]; \
[t3] [t4] hstack, vflip [h2]; \
[h1] [h2] vstack
" \
$POSTOPTS \
$OUTPUT
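
In case it helps anyone adapt it, here’s one guess at how the variables above might be set before the call — the ffprobe duration lookup and the encode settings are my assumptions, not what the original script does:

FFMPEG=ffmpeg
INPUT="input.mov"
OUTPUT="quadrants.mp4"
START=0
# total duration in whole seconds (assumed helper; any way of getting DUR works)
DUR=$(ffprobe -v error -show_entries format=duration -of csv=p=0 "$INPUT" | cut -d. -f1)
QUADLEN=$(($DUR/4))
# optional per-quadrant filtering; leave empty or end with a trailing comma so it chains into hflip/copy
PROCESS=""
# example encode settings; substitute your own
POSTOPTS="-c:v libx264 -crf 18 -an"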

1 Like

So I have been thinking about this from an LZX hardware perspective and I think I have a workflow that could work.

First, you essentially need two dedicated systems (SD and HD) that you would like to composite together at some point. For this workflow I am going to make the assumption that we want everything to end up HD, which means the SD signal will need to be upconverted at some point.

HD signal flow: this needs to include a TBC (serving as HD house sync for this example) and an ESG. Add whatever you want that handles HD in between. We will use the first channel of TBC2 to bring in an external source (camera or whatnot).

SD signal flow: this could all be handled from a VC, or a Cadet system would work if you have one. Basically an SD decoder and encoder with fun stuff in between.

How to upconvert SD: well, you could take the component output (or composite/S-Video) out of the VC and use channel two of TBC2 to upconvert your SD signal flow. Then your SD and HD are all HD, and you could use a combination of SMX3 and/or Marble Index and/or FKG3 to composite the two signals before they eventually reach the ESG for encoding.

@creatorlars does that all make sense? Since I don’t have TBC2 yet, there may be some assumptions above.

3 Likes

That makes perfect sense to me. I’m doing this with 2 12U cabinets. Cab1 is Gen2 & some Gen1 modules, with a VC. Cab2 is pure Gen3 and HD. So when I get a TBC2, I’ll feed one channel with the VC component output. That’ll leave the other channel free for whatever.

For budget reasons, I’ll be unable to fill Cab2. It’ll exist only to upscale from SD and add an HD layer over whatever Cab1 produces. Some modules in Cab2 can process HD signals, but I’ll have no way to sync any of my PR modules to HD. My budget won’t support a second TBC2, even if I sell my VC.

I don’t know what my “endgame” will be, but it won’t include all the Gen3 modules. I’ll need to be very selective. So, no duplicate modules. I could make an argument for more than one DWO, but that will require sacrificing something else.

Anyway, the one DC Distro I have looks like it’ll support everything I can put into Cab2. 5A will be plenty for me.

YMMV.

Sounds like a good plan!

2 Likes

The Roland V-440HD, V-40HD, and V-4EX are all good options for mixing SD and HD; there are probably other brands that can do it too, maybe a Panasonic (AG-HMX100, etc.). Note the V-4EX only has 720 and 1080 input on ch4, and it mixes in SD then upscales again to your output setting. And the V-440HD only supports 1080i (but its keyer is much better than the V-40’s).

For those reasons I got a V-40 recently, as more and more 1080p devices are on the way, and it supports component, composite, and HDMI input. Wish there was a mixer with the effects of the V-4EX, a built-in screen, and support for all those input types plus 1080p output.

Been beta testing Gravity Waves and it’s awesome, but it’s not going to replace a V-4EX. One workaround could be using a newer Roland V-8HD HDMI mixer with a bunch of converters, but for some odd reason they no longer have mirror mode effects, and it would be a lot of cables, converters, and power supplies back there depending on how many SD or HD component inputs you need.

2 Likes

The Roland VR-50HD and VR-4HD also have component and composite inputs to supplement their mostly HDMI inputs, so they are interesting options, but they also take up a lot of space with their audio mixing panels.

1 Like

I’m wondering whether it would cause any damage or reduction in working life to have HD sync signals going through SD modules?

Could I safely set up one long sync chain and simply ignore the SD only modules when I choose to work in HD?

Definitely no electronic harm in sending HD sync through modules designed before HD sync was available. Likewise, you should be fine leaving your HD sync patched through your older modules when the SD-optimized modules are not in use.

7 Likes