I was confused about how this works until I realized that a derivative measures the change at a point. So at places where the image's color changes from very light to very dark or vice versa, the change is large, the norm of the derivative is large, and the pixel shows up as white on the processed image.
Right, exactly. Another confusing thing here is that we have a color image, but a black and white "edge map." We'll spend a lot of time talking about color in this class, but for now it's perhaps easiest to imagine that the input image is also somehow converted to black and white, so that you can just take the usual derivative.
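To make that concrete, here is a minimal sketch with NumPy, using a tiny synthetic light/dark image in place of a real photo (the image values and the 0.25 threshold are just illustrative choices, not anything canonical):

```python
import numpy as np

# A tiny synthetic grayscale "image": left half light, right half dark
# (intensity values in [0, 1]).
img = np.ones((8, 8))
img[:, 4:] = 0.0

# Finite-difference approximation of the partial derivatives.
# For a 2D array, np.gradient returns the derivative along rows
# and along columns.
dy, dx = np.gradient(img)

# The norm of the derivative (the gradient magnitude) is large
# exactly where the intensity jumps from light to dark.
magnitude = np.hypot(dx, dy)

# A crude "edge map": white (True) only along the light/dark boundary.
edge_map = magnitude > 0.25
```

Here the edge map is True only on the two columns straddling the light/dark boundary and False everywhere else, which is the black-and-white edge picture in miniature.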
The idea of taking the derivative of an image is fascinating to me. You mentioned in your previous comment that the image is converted to black and white before taking the derivative. Is it possible to take the derivative of a color image directly? Would the end result also be in color? And by taking the derivative, from my understanding, we are measuring how abruptly shades change in the image. How would an edge between two colors with very different hue or saturation, but similar brightness, show up in the resulting picture?
Sure, there are lots of ways to take a derivative of an image. In terms of color, one way to think about it is to separate out into several color channels (e.g., red-green-blue (RGB) or cyan-magenta-yellow-black (CMYK)), viewing each as a black-and-white image. Then you still have the question of which derivative to consider: do you take the gradient? Or some other combination of directional derivatives? Or what? Lots of possibilities; Wikipedia has a decent introduction to edge detection. In fact, working in standard RGB or CMYK color spaces may not be the wisest idea, since human visual perception works in a very different way. We will spend plenty of time talking about color perception and color spaces later on in the course.
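One quick sketch of the channel-separation idea, again with NumPy and a synthetic image. It also illustrates the earlier question about hue edges: the red and green halves below have identical brightness under a naive grayscale (plain channel mean, chosen here just for illustration), so the grayscale derivative misses the edge entirely, while per-channel derivatives recover it. Taking the maximum over channels is just one of the many possible combinations mentioned above:

```python
import numpy as np

# Synthetic RGB image: left half pure red, right half pure green.
# Very different hue, but identical naive brightness: every pixel
# averages to 1/3 across its three channels.
img = np.zeros((8, 8, 3))
img[:, :4, 0] = 1.0   # red channel on the left
img[:, 4:, 1] = 1.0   # green channel on the right

# Naive grayscale (plain channel mean) is constant, so its
# derivative vanishes and the edge is invisible.
gray = img.mean(axis=2)
gy, gx = np.gradient(gray)
gray_mag = np.hypot(gx, gy)

# Treating each channel as its own black-and-white image and
# combining the per-channel gradient magnitudes recovers the edge.
channel_mags = []
for c in range(3):
    dy, dx = np.gradient(img[:, :, c])
    channel_mags.append(np.hypot(dx, dy))
color_mag = np.max(channel_mags, axis=0)
```

The grayscale magnitude is zero everywhere, while the per-channel magnitude lights up along the red/green boundary, so this kind of iso-brightness hue edge survives only if you keep the channels separate (or, better, work in a perceptually motivated color space, as we'll discuss later).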
Thank you so much for your reply! Looks like there's a lot to explore. I'll let you know if something further comes up.