Project 2

Fun with Filters and Frequencies

Overview

In this project, we explore two apparently unrelated concepts: filters and frequencies!

As we progress through this project, we will see that these are actually closely related ideas.

My recreation of the oraple

Part 1.1: The Finite Difference Operator

Sometimes we want a way of quantifying where the largest changes in an image occur. If images were continuous, we might consider taking the derivative. However, since an image is discrete, the best we can do is take differences between nearby pixel values.

One way of doing this is through the finite difference operator. By convolving the image with the mask [1, -1], we get a new matrix representing the differences in the horizontal direction. Convolving with its transpose gives the differences in the y direction.

Photo 1 Original Image
Photo 2 dx
Photo 3 dy

By combining the outputs of the two masks pixel-wise as sqrt(dx^2 + dy^2), we can produce a gradient magnitude image, which represents the discrete gradient at every point in the image.

Lastly, we may find it useful to apply a threshold to the gradient magnitude image to produce an edge image with only 2 states: 255 for an edge, 0 for not an edge.
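
As a rough sketch in Python (assuming a grayscale float image in [0, 1]; the threshold value here is illustrative, not the exact one used for the results below):

    import numpy as np
    from scipy.signal import convolve2d

    def edge_image(im, threshold=0.25):
        # Finite difference operators: [1, -1] and its transpose.
        dx = np.array([[1.0, -1.0]])
        dy = dx.T
        gx = convolve2d(im, dx, mode="same", boundary="symm")  # x differences
        gy = convolve2d(im, dy, mode="same", boundary="symm")  # y differences
        mag = np.sqrt(gx**2 + gy**2)  # gradient magnitude at every pixel
        return np.where(mag > threshold, 255, 0).astype(np.uint8)  # binarize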

Photo 1

Gradient Magnitude Image

Photo 2

Edge Image

The filter is effective at outlining strong edges such as the man against the sky and the tripod, while cutting out irrelevant noise, such as blades of grass.

Part 1.2: Derivative of Gaussian (DoG) Filter

Often, very small localized areas of change can be misclassified as edges. Additionally, you can see areas in the man's coat above where the edge is very thin or even broken in places. It would be better if our edge detection method were a little more robust to noise.

We can use a Gaussian filter to blur the image, which reduces the effect of very localized noise:

Photo 1

Gaussian blur applied to original image

Photo 2

Edge detection applied to blurred image

Now we can see that the edges are much thicker and more robust compared to the original edge image. The only place where the man's coat is not highlighted as an edge is where the background blends perfectly into it, such as the bottom left corner of his coat. There are very few false positives.

There is one last modification we can make to create the Derivative of Gaussian filter. Because convolution is associative, we can apply the Gaussian blur directly to the dx and dy filters instead of to the image. It is more efficient to do it this way, as convolving the small dx and dy filters with the Gaussian kernel is much cheaper than blurring the (potentially very large) image.
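
A sketch of this folding trick in Python (the kernel size, sigma, and input filename are placeholder assumptions, not the exact values used here):

    import cv2
    import numpy as np
    from scipy.signal import convolve2d

    # The 2D Gaussian is the outer product of a 1D Gaussian with itself.
    g1d = cv2.getGaussianKernel(ksize=11, sigma=2.0)
    G = g1d @ g1d.T

    dx = np.array([[1.0, -1.0]])
    dy = dx.T

    # Fold the derivative into the small Gaussian kernel once...
    dog_x = convolve2d(G, dx)
    dog_y = convolve2d(G, dy)

    # ...so each direction needs only a single convolution with the image.
    im = cv2.imread("cameraman.png", cv2.IMREAD_GRAYSCALE) / 255.0  # hypothetical file
    gx = convolve2d(im, dog_x, mode="same", boundary="symm")
    gy = convolve2d(im, dog_y, mode="same", boundary="symm")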

The x and y DoG filters, as well as the result of using them for edge detection, are below.

Photo 1

X direction DoG

Photo 2

Y direction DoG

Photo 3

Result of using DoG for edge detection - identical to the image from above

Part 2.1: Image "Sharpening"

If Gaussian filters blur images, acting as low-pass filters, then it is intuitive that subtracting the result of a Gaussian filter from the original image acts as a high-pass filter.

Adding these high frequencies back to the image can make it appear "sharper". We call this the unsharp mask filter: sharpened = original + k * (original - blurred). The k value below is the multiplicative factor used when adding the high frequencies back to the original image.
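
A minimal sketch of the filter, assuming float images in [0, 1] (the sigma here is illustrative):

    import cv2
    import numpy as np

    def unsharp_mask(im, k=1.0, sigma=2.0):
        # Low-pass with a Gaussian, subtract to isolate the high
        # frequencies, then add k times those frequencies back on.
        low = cv2.GaussianBlur(im, (0, 0), sigma)
        high = im - low
        return np.clip(im + k * high, 0.0, 1.0)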

Photo 1

k = 0 (Original)

Photo 2

k = 1

Photo 3

k = 5

For this image, it worked reasonably well. You could argue that the k = 1 image looks better than the original.

Now, let's try adding our own blur and then attempting to recover the image.

Photo 1

Original

Photo 2

Blurred

Photo 3

Restored

For this image, it also works reasonably well. You can make out a little more detail in the restored image than in the blurred one, particularly in the leaves on the tree, but it is far from perfect.

Photo 1

k = 0 (Original)

Photo 2

k = 1

Photo 3

k = 5

For this image, the unsharp mask filter was not very effective. Even the k = 5 image is still very blurry, yet it is even more uncomfortable to look at. The unsharp mask filter is definitely not magic.

Part 2.2: Hybrid Images

By adjusting cutoff frequencies, it is possible to merge two images together such that the result looks like one image up close and a different image from further away. This works because the high frequencies dominate when they are visible, but as the viewer loses the ability to make out fine details, the low frequencies take over. Therefore, by averaging the result of a low-pass filter on one image with a high-pass of another, you can create a hybrid image.
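
A minimal sketch of this recipe, assuming float images (the two sigmas are hypothetical tuning knobs, adjusted per image pair rather than the exact cutoffs used for these results):

    import cv2

    def hybrid_image(im_far, im_near, sigma_low=8.0, sigma_high=4.0):
        # Low-pass keeps what you see from far away; high-pass keeps
        # the fine detail you see up close.
        low = cv2.GaussianBlur(im_far, (0, 0), sigma_low)
        high = im_near - cv2.GaussianBlur(im_near, (0, 0), sigma_high)
        return (low + high) / 2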

If you have trouble seeing the result, try opening the image in a new tab and zooming in and out.

Photo 1

Tiger (Low Pass)

Photo 2

Jaguar (High Pass)

Photo 3

Tiguar (Hybrid Image)

Here is the Fourier magnitude plot for each of the following images:
Photo 1 Jaguar
Photo 2 Tiger
Photo 3 Jaguar Filtered
Photo 4 Tiger Filtered
Photo 5 Hybrid Image
As you can see, values near the origin of the filtered jaguar plot are attenuated, whereas values far from the origin in the filtered tiger plot are attenuated. The combined plot shows characteristics of both of the filtered plots.
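
For reference, a log magnitude plot like these can be computed along these lines (a sketch, assuming a grayscale input):

    import numpy as np

    def fourier_magnitude(im):
        # Log magnitude of the centered 2D FFT; the small constant
        # avoids taking log(0).
        return np.log(np.abs(np.fft.fftshift(np.fft.fft2(im))) + 1e-8)
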
Photo 1

Smiley (Low Pass)

Photo 2

Orange (High Pass)

Photo 3

Smorange (Hybrid Image)

This is an example of a failed hybrid image:
Photo 1

Apple (Low Pass)

Photo 2

Another Apple (High Pass)

Photo 3

Apple ^ 2 (Hybrid Image)

I suspect that it failed because the Apple (company) logo does not have a high enough diversity of frequencies. As such, it is always more visible than the fruit, even from a distance.

Part 2.3: Gaussian and Laplacian Stacks

Laplacian stacks are a way of separating an image's frequencies into different layers. Each layer is the difference between two adjacent levels of a Gaussian stack (the image blurred repeatedly, without downsampling), and the bottom of the Gaussian stack is kept as the final layer so that the layers sum back to the original image. They are a prerequisite to the blending we will do in the next section.
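
A minimal sketch of the stack construction, assuming float images (the level count and sigma are illustrative):

    import cv2

    def laplacian_stack(im, levels=5, sigma=2.0):
        # Gaussian stack: blur repeatedly, never downsample.
        gstack = [im]
        for _ in range(levels - 1):
            gstack.append(cv2.GaussianBlur(gstack[-1], (0, 0), sigma))
        # Each Laplacian layer is the difference of adjacent Gaussian
        # levels; keeping the bottom of the Gaussian stack as the last
        # layer means the layers sum back to the original image.
        lstack = [a - b for a, b in zip(gstack, gstack[1:])]
        lstack.append(gstack[-1])
        return lstack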

Photo 1 Level 1
Photo 2 Level 2
Photo 3 Level 3
Photo 4 Level 4
Photo 5 Level 5 / Bottom of Gaussian stack
Photo 6 Reconstructed Apple - 5 previous levels summed together
Photo 1 Level 1
Photo 2 Level 2
Photo 3 Level 3
Photo 4 Level 4
Photo 5 Level 5
Photo 1 Level 1
Photo 2 Level 2
Photo 3 Level 3
Photo 4 Level 4
Photo 5 Level 5 / Bottom of Gaussian stack
Photo 6 Reconstructed Orange - 5 previous levels summed together
Photo 1 Level 1
Photo 2 Level 2
Photo 3 Level 3
Photo 4 Level 4
Photo 5 Level 5

Part 2.4: Multiresolution Blending (a.k.a. the oraple!)

Sometimes, we would like to merge images together so they look coherent. However, as you can see below in the space image, simply stitching the images together makes the seam very apparent.

Instead, we can do something called multiresolution blending. The key idea is that we blend the images' frequency bands at different rates - high frequencies transition over a narrow region around the seam, while low frequencies transition over a wide one. This retains the structure of both images, while tricking the eye into believing they look coherent.
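
A minimal sketch, reusing the laplacian_stack sketch from Part 2.3 and assuming single-channel float images with a float mask in [0, 1] (for color, apply the mask per channel):

    import cv2

    def multires_blend(im1, im2, mask, levels=6, sigma=2.0):
        # Laplacian stacks of both images, Gaussian stack of the mask:
        # the mask gets blurrier at coarser levels, so low frequencies
        # blend over a wider seam than high frequencies.
        l1 = laplacian_stack(im1, levels, sigma)
        l2 = laplacian_stack(im2, levels, sigma)
        mstack = [mask]
        for _ in range(levels - 1):
            mstack.append(cv2.GaussianBlur(mstack[-1], (0, 0), sigma))
        # Blend each frequency band separately, then sum the bands.
        return sum(m * a + (1 - m) * b for m, a, b in zip(mstack, l1, l2))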

Photo 1 Image 1
Photo 2 Image 2
Photo 3 Filter
Photo 1 No Blending
Photo 2 Multiresolution Blending
Photo 1 Image 1
Photo 2 Image 2
Photo 3 Filter
Photo 1 No Blending
Photo 2 Multiresolution Blending
Here is the stack for the blending of these two images:
Photo 1 Layer 1
Photo 2 Layer 2
Photo 3 Layer 3
Photo 4 Layer 4
Photo 5 Layer 5
Photo 6 Layer 6
Photo 1 Image 1
Photo 2 Image 2
Photo 3 Filter
Photo 1 No Blending
Photo 2 Multiresolution Blending
This image doesn't look quite as good as the first two. I suspect it is because one of the pictures is very clearly in the foreground, and it actually doesn't matter that the lighting is different, as this is what you would expect from two objects that are very far apart. However, you can see that the poor cropping job is much less noticeable in the right image, especially around the satellite dish in the top left and around the chin of the face.