In my AP CS class in high school one of our assignments was coming up with an algorithm to find edges based on the difference between the average intensity of pixels, like this.
pw123
This might be more of a CV question, but is this how edge detection algorithms work, by measuring the norm of the derivative?
jacheng
This reminds me of computer vision, where an assignment was to identify the contours in an image using edge kernels. From a graphics perspective, it seems like you could represent an image with a 'smaller' norm using fewer bits, so it would be easier to compress?
marshmallow
Why/how does measuring the norm of the derivative capture edges?
xiaol3
It's interesting to think that images can be measured in a mathematically concrete way.
rgrao
@marshmallow, an edge in an image is often characterized by a sharp change in pixel intensity (either a transition from high to low, or from low to high). As a result, the derivative or gradient of the pixel intensity values provides a good measure of the edges, especially at locations where it is largest. You would see that in computer vision, edge detection filters are often just simple differencing operations along the X and Y coordinate axes of the image, and numerical/discrete differencing is an approximation of the derivative.
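A minimal sketch of the differencing idea (assuming a grayscale image stored as a 2D float array): difference along each axis, then take the norm of the gradient at every pixel.

```python
import numpy as np

def gradient_magnitude(img):
    # Forward differences along x and y (the last row/column stays zero).
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]   # horizontal intensity change
    gy[:-1, :] = img[1:, :] - img[:-1, :]   # vertical intensity change
    # Per-pixel norm of the gradient: large where intensity changes sharply.
    return np.sqrt(gx**2 + gy**2)

# A tiny image with a vertical edge: left half dark, right half bright.
img = np.zeros((4, 4))
img[:, 2:] = 1.0
edges = gradient_magnitude(img)
# The response is nonzero only along the column where intensity jumps.
```

Thresholding `edges` at some value would give a crude edge map, much like the averaging-and-differencing assignment described above.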
marshmallow
@rgrao makes sense! Thanks so much for explaining!
evannw
Cool seeing CV example of norm! Enjoyed seeing linear algebra properties applied to different mediums.
L100magikarp
There are also some tricks that make edge detection work even better than plain differencing filters (e.g., the Sobel filter, which smooths perpendicular to the differencing direction to reduce noise sensitivity). Since it's usually desirable to get thin lines, non-maximum suppression is used to find the center of the edge where the intensity change is largest. There also tends to be a lot of noise in real-world images, so hysteresis is a technique that connects nearby detected edges. These ideas help make the Canny edge detector so effective.
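To illustrate the Sobel part of that pipeline, here is a rough sketch (the 3x3 kernels are the standard Sobel kernels; the `conv2` helper is a hypothetical name for a simple valid-mode correlation, not a library function):

```python
import numpy as np

# Sobel kernels: differencing along one axis, smoothing along the other.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

def conv2(img, k):
    # 'valid' 3x3 correlation: slide the kernel over every interior pixel.
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * k)
    return out

# A 5x5 image with a vertical edge between columns 2 and 3.
img = np.zeros((5, 5))
img[:, 3:] = 1.0

gx = conv2(img, sobel_x)
gy = conv2(img, sobel_y)
mag = np.hypot(gx, gy)   # gradient magnitude; peaks at the edge
```

Non-maximum suppression and hysteresis would then operate on `mag` (and the gradient direction from `gx`, `gy`) to thin and link these responses, as in Canny.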
Shell
Oh, this reminds me of the pixels hw from 122, where we had to alter an image, and one task was to find the areas where the difference between neighboring pixels was greater than a given threshold.
rlpo
Is measuring the norm of the derivative of an image equivalent to a high-pass filter?
diegom
I loved this example and I couldn't help but wonder if more specific features such as corners lead to better ways of measuring a picture's norm.
mkmm
It's interesting when CV-based examples are used, since CV and graphics fit together nicely as somewhat inverse problems.