[WIP] Blur/Sharpen

Dring Dring… any news…?
I need a simple sharpen effect; it works wonders for camera feedback squids. I use an old cheap Hama video processor just for the sharpen knob, but it’s big and heavy. I dream of having only that knob in a small but central place, but reverse-engineering the schematic is hard (it’s totally analog, but the PCB design is strange).
Any tips…?

You can get close, though it’s not the same as an isotropic blur: use a video crossover at the h-sync rate, apply separate vertical and horizontal filters, and then combine the results. This is what War of the Ants does to control the noise’s vertical and horizontal blur separately.
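
The idea above can be sketched in software as a separable blur: one 1D lowpass pass across each scanline, then another down each column. This is a minimal illustration with a small grayscale frame stored as a list of rows; the kernel and frame values are made up for the example, not taken from any actual module.

```python
def blur_1d(samples, kernel):
    """Convolve one scanline with a 1D kernel, clamping at the edges."""
    half = len(kernel) // 2
    out = []
    for i in range(len(samples)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - half, 0), len(samples) - 1)
            acc += w * samples[j]
        out.append(acc)
    return out

def separable_blur(frame, kernel):
    """Horizontal pass over each row, then vertical pass over each column."""
    rows = [blur_1d(row, kernel) for row in frame]
    cols = [blur_1d(list(col), kernel) for col in zip(*rows)]
    return [list(row) for row in zip(*cols)]  # transpose back to rows

kernel = [0.25, 0.5, 0.25]  # simple 3-tap lowpass
frame = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
blurred = separable_blur(frame, kernel)
```

Because each axis is filtered independently, you could use different kernels for the horizontal and vertical passes, which is exactly the separate-blur-per-axis control described above.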

What’s needed for spatial blur and sharpening algorithms is a frame-buffer-based convolution matrix filter, where each pixel is modified according to the values of the 8 pixels surrounding it. A couple of scanline buffers would do it if you were okay with some phase slip.
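
A minimal sketch of that 3x3 neighborhood operation, assuming a grayscale frame as a list of rows. Each output pixel is a weighted sum of itself and its 8 neighbors; the sharpen kernel here is a common illustrative choice, not a specific module’s coefficients.

```python
# Classic 3x3 sharpen kernel: boosts the center, subtracts the neighbors.
# It sums to 1, so flat areas pass through unchanged.
SHARPEN = [
    [ 0, -1,  0],
    [-1,  5, -1],
    [ 0, -1,  0],
]

def convolve3x3(frame, kernel):
    h, w = len(frame), len(frame[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    # clamp coordinates at the frame edges
                    sy = min(max(y + dy, 0), h - 1)
                    sx = min(max(x + dx, 0), w - 1)
                    acc += kernel[dy + 1][dx + 1] * frame[sy][sx]
            out[y][x] = acc
    return out
```

Note the inner loops only ever read from rows y-1, y, and y+1, which is why a couple of scanline buffers are enough in hardware: you never need the whole frame at once, just three lines of it.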

2 Likes

the sharpen module design is not ready yet.
I kept adding features and ran out of space on the PCB.

I may just do a simple design, something to have ready to use with feedback, as you say.
thanks for reminding me :slight_smile:

1 Like

The Teletect Dual Enhancer is a sharpening function (a high-pass filter at a fixed cutoff with adjustable gain).

You could also use an IP Differentiator (multiple fixed-cutoff high-pass filters summed into one output) and sum its non-inverted output with the source using any ol’ wide-bandwidth mixer.
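
The high-pass-plus-dry idea can be sketched in one dimension: derive an edge signal as the difference between the source and a lowpassed copy, then sum it back into the dry signal with an adjustable gain. The filter coefficient and names here are illustrative assumptions, not taken from either module’s schematic.

```python
def lowpass(signal, a=0.5):
    """One-pole lowpass: y[n] = a*x[n] + (1-a)*y[n-1]."""
    y, out = signal[0], []
    for x in signal:
        y = a * x + (1 - a) * y
        out.append(y)
    return out

def sharpen(signal, gain=1.0):
    """Sum the high-pass (source minus lowpass) back with the dry signal."""
    lp = lowpass(signal)
    return [x + gain * (x - l) for x, l in zip(signal, lp)]

step = [0, 0, 1, 1, 1]
enhanced = sharpen(step, gain=1.0)
```

Running a step edge through this produces the characteristic overshoot at the transition while leaving flat areas untouched, which is what the sharpen knob’s gain control adjusts.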

You can have the PCBs made yourself for either of these. Unfortunately, some unscrupulous folks are building and selling them on Reverb, which discourages R&D from actual designers, so I’d recommend not buying from them and not enabling behavior that’s detrimental to the video synth community.

1 Like

the basic version I have ready in Eagle has a high-pass filter with switchable ranges, followed by a summing mixer.
manual control over the amount of edge signal, dry signal, and edge gain.
input & output jacks

ps: I have a few leftover Teletect Dual Enhancer PCBs/panels (I ordered 5 and only need 2 of these),
pm me :slight_smile:

2 Likes

I would rather have this than a one-frame delay from a frame buffer. But that’s just me.

If you’re doing convolution kernels already, doesn’t that mean you can run shader code? At that point, “blur” is an app on a graphics engine, rather than a module per se.

1 Like

Not necessarily. Convolution is just (weighted) averaging of some values in an array, while “shaders” (as the term is commonly used) implies implementing OpenGL or something similar. It’s possible to implement convolution without implementing shaders, and the simpler implementation means it can run on simpler hardware (as Lars noted, a few scanline buffers) than you’d need for shaders.

3 Likes

Just one last note on this, since this isn’t the right thread for it: what LZX is interested in doing with digital infrastructure is RTL-based implementations of functions that are typically found in software shaders accelerated by GPUs. An RTL-based implementation on an FPGA isn’t as generically powerful as an NVIDIA GPU; it’s more like a custom mini GPU designed to run one specific real-time process, more analogous to DVEs or game consoles from the ’80s and ’90s with custom ASICs than to modern PC GPU environments. That’s the difference between Memory Palace and any software/GPU-based device that emulates something similar, for example. A convolution matrix kernel, for instance, is a great candidate for a real-time FPGA module (but also something you could do with shaders and GPUs). Neither method is “superior”; they’re just different approaches to instrument design.

4 Likes