ironman

Not strictly a mathematics question, but why does the derivative measure the edges of the image? Is it because that's where the image contains the highest levels of "change" (with regard to pixel location)?

MySteak

Yeah. The way you usually take the 'derivative' of an image is to take a matrix that looks something like:

[1 1 1 ; 0 0 0 ; -1 -1 -1]

and convolve it with your image matrix. In convolution, you slide this 3x3 matrix over every 3x3 patch of the original image, and the output pixel at the center of the patch becomes the sum of the products of each image pixel with the corresponding entry of the 3x3 matrix. (Strictly speaking, true convolution flips the kernel first and this describes cross-correlation, but for this kernel the flip only negates the result.)

Essentially, after convolving with this matrix you get a new image in which each pixel is the sum of the three pixels above it minus the sum of the three pixels below it in the original. The pixels with the largest absolute value are therefore the ones where the values above and below differ the most, i.e. horizontal edges. To get vertical edges, you convolve the image with the transpose of the given matrix. A small sketch of this is below.
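
Here's a minimal sketch of that filter in Python with NumPy and SciPy (the function names and the boundary mode are just illustrative choices, not anything from the slide):

```python
import numpy as np
from scipy.ndimage import correlate

# The kernel from above: [1 1 1 ; 0 0 0 ; -1 -1 -1]
KERNEL = np.array([[ 1.,  1.,  1.],
                   [ 0.,  0.,  0.],
                   [-1., -1., -1.]])

def horizontal_edges(image):
    """Each output pixel = (sum of the 3 pixels above) - (sum of the 3 below)."""
    # correlate() applies the kernel exactly as written; true convolution would
    # flip it first, which for this kernel only changes the sign of the result.
    return correlate(image.astype(float), KERNEL, mode='nearest')

def vertical_edges(image):
    """The transpose of the same kernel picks out vertical edges."""
    return correlate(image.astype(float), KERNEL.T, mode='nearest')

# Edge strength is the absolute response; a simple edge map just thresholds it:
# edges = np.abs(horizontal_edges(img)) > some_threshold
```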

ambientOcclusion

I noticed that not all the lines in the plaid came through. The colors are pretty evident to my eye, but was the algorithm that was used "colorblind", so to speak, and only looked at a grayscale version of the image? Or was it just that those edges didn't have a derivative larger than whatever threshold value was chosen? A combination of both, perhaps?

keenan

@ambientOcclusion Hey, sharp eye. Yes, that's true: there were some other filters applied to the image before we took the derivative. That's a pretty common story for real-world edge detection filters, i.e., you don't just take the difference of adjacent pixels directly, but try to apply a filter that gets rid of the unimportant stuff. (These operations can still be linear, but are not always!)
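
For concreteness, a typical real-world pipeline looks something like the sketch below: collapse to grayscale, smooth, take derivatives, then threshold. This is a hedged illustration of that general recipe, not the actual filters used to make the slide image; the Gaussian sigma and threshold are made-up values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, correlate

KERNEL = np.array([[ 1.,  1.,  1.],
                   [ 0.,  0.,  0.],
                   [-1., -1., -1.]])

def edge_map(rgb, sigma=2.0, threshold=30.0):
    # 1. The "colorblind" step: collapse color to luminance, so only brightness
    #    changes (not pure hue changes) can produce edges.
    gray = rgb.astype(float) @ np.array([0.299, 0.587, 0.114])
    # 2. Smooth to suppress noise and fine texture (still a linear filter).
    smooth = gaussian_filter(gray, sigma=sigma)
    # 3. Derivatives in both directions, combined into a gradient magnitude.
    dy = correlate(smooth, KERNEL, mode='nearest')
    dx = correlate(smooth, KERNEL.T, mode='nearest')
    mag = np.hypot(dx, dy)
    # 4. Nonlinear step: keep only pixels whose gradient exceeds the threshold.
    return mag > threshold
```

Under a pipeline like this, both of the effects you guessed at show up: a plaid stripe that differs mainly in hue (but not much in luminance) can vanish at step 1, and a weak luminance edge can still be dropped at step 4.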