## Difference of Gaussians

While reading about image recognition algorithms, I learned about a method of band-pass filtering I hadn't seen before. The Difference of Gaussians method can be used to band-pass filter an image quickly and easily. Instead of convolving the image with a band-pass kernel, the Difference of Gaussians method uses two low-pass filters and subtracts one result from the other.

You start by blurring the image with a Gaussian kernel, then subtract that blurred image from a second, less blurred version of the original. The result is an image containing only features between the two blur levels. The two levels of blur used in the subtraction step can be varied to set different band-pass limits.
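The blur-and-subtract step is short enough to sketch directly. This is a minimal version assuming `scipy` is available; the function name and the sigma values are my own illustrative choices, not from the original post.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma_low=1.0, sigma_high=2.0):
    """Band-pass filter `image` by subtracting a more-blurred copy
    (sigma_high) from a less-blurred copy (sigma_low)."""
    less_blurred = gaussian_filter(image.astype(float), sigma_low)
    more_blurred = gaussian_filter(image.astype(float), sigma_high)
    return less_blurred - more_blurred

# A flat image has no features in any frequency band, so its
# Difference of Gaussians is essentially zero everywhere.
flat = np.ones((32, 32))
print(np.allclose(difference_of_gaussians(flat), 0))  # True
```

Widening the gap between the two sigmas widens the pass band; moving both sigmas up together shifts the band toward lower spatial frequencies.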

This method works well for edge detection because subtracting the less blurred image cuts down high-frequency noise, so noise in the image doesn't get treated as an edge. Apparently there are common blur levels that cause the Difference of Gaussians method to approximate the response of ganglion cells (light-sensing nerve clusters in the eye) to light that falls on or near them.

## Why Transform?

I've just recently had an epiphany about signal processing. It's kind of embarrassing that it's taken me so long to realize this, but all the transforms that I've been doing in classes are just to make the signal separable from the noise in my data.

That seems pretty simple, so let me back up and explain why it took me so long to realize this. I've been taking signal processing classes off and on for about five years now. The classes have mostly focused on a few transforms (Fourier and wavelet, mostly) and how they can be used to filter an incoming signal. We've made low-pass filters, high-pass filters, and everything in between. It was never quite clear to me why you use the transform, though. You can just do everything in the time domain.

I didn't put too much thought into that because computations can be easier to do in the frequency domain. Convolution in the time domain corresponds to multiplication in the frequency domain. It can be faster to do some calculations in the frequency domain because of that correspondence. I understood that, and thought that I was using some transforms that brilliant people had invented just to speed up their computations. I had no intuition for how they could have developed the transform. How could they have known the transform would make calculations faster? I put it down to Laplace and Fourier just being more brilliant than me.
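That correspondence is easy to check numerically. Here's a quick sketch with plain numpy: circular convolution computed directly in the time domain comes out identical to an inverse FFT of the product of the two FFTs (the signal contents are arbitrary examples).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
h = rng.standard_normal(8)

# Circular convolution computed directly in the time domain, O(n^2).
direct = np.array([sum(x[k] * h[(n - k) % 8] for k in range(8))
                   for n in range(8)])

# The same result via multiplication in the frequency domain,
# O(n log n) thanks to the FFT.
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

print(np.allclose(direct, via_fft))  # True
```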

What I've recently come to realize is that, while Laplace and Fourier were indeed brilliant, their transforms serve a different purpose altogether. The speed up that I got in filter calculations is almost an afterthought to the real purpose of using a transform.

Filters only let through the frequencies that you want. This is obvious when you see plots of filters in the frequency (Fourier) domain. I was clear on this from the outset. You use the Fourier transform to select frequencies, gotcha.

For some reason, this knowledge didn't generalize like it should. I went around saying to myself that filters select different frequencies, and that convolution in time was multiplication in frequency, but I didn't get that this was the whole point of the transform in the first place. Noise in the time domain is hard to separate from a signal, but in the frequency domain it can be very easy to separate.

And that is the key behind transforms. The real reason you do the transform isn't so that you can do fast multiplication instead of slow convolution. The real reason to transform a signal to a new domain is because the new domain can make the parts of the signal you're interested in easier to separate from everything else. That just happens to make the calculations faster too.

This separability comes up in all kinds of signal processing, pattern recognition, and machine learning. A transform may help anywhere you want to separate one type of thing from another. Making it easier to separate the wheat from the chaff is why you would calculate features before feeding your data into machine learning algorithms.

My understanding of signal processing now revolves around three steps.

1. Transform the incoming data so that the components you're interested in are easy to separate from the components you're not (separate the signal from the noise).
2. Do whatever calculations you need to in order to get the output that you want.
3. Transform the output to the domain you need it in; the new domain is usually, but not always, the same as the domain the data had in the first place.
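The three steps above can be sketched end to end with a noisy sine wave. In the time domain the noise is mixed right in with the signal; after an FFT, the signal lives in one bin and the noise is spread across all of them, so "separating" is just zeroing bins. The frequencies and cutoff here are illustrative choices of mine, not from the post.

```python
import numpy as np

n = 1024
t = np.arange(n) / n
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 5 * t)            # the tone we want
noisy = signal + 0.5 * rng.standard_normal(n)  # buried in noise

# Step 1: transform to a domain where signal and noise separate.
spectrum = np.fft.rfft(noisy)

# Step 2: do the calculation -- here, a crude low-pass that keeps
# only bins at or below 10 cycles per record.
spectrum[11:] = 0

# Step 3: transform the output back to the original (time) domain.
recovered = np.fft.irfft(spectrum, n)

# The residual error is small compared to the raw noise level.
print(np.max(np.abs(recovered - signal)))
```

Step 2 is trivial here on purpose: once the transform has done its job, the "hard" part of filtering is a slice assignment.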