It looks like there are lots of ways to take the norm of an image. In what scenarios is it useful to look at the norm of an image and is it immediately clear which norm to use?

siliangl

Why does the darker picture have smaller norm than the brighter one?
Is it because the black color has smaller norm than white?
I have this confusion because I think both of them are high definition pictures and would have similar norms.

keenan

@siminl We gave one example of a different image norm on this slide, with the idea that (perhaps) you might be more interested in the edges in an image than the absolute color values, since the presence of many edges could indicate that there's interesting stuff going on in the image. Incorporating derivatives (such as those used for edge detection) into a norm is quite common in signal processing and numerical analysis, since two signals that are close in value but different in (spatial) derivatives can have very different behavior. In general, it takes some thought to decide which norm is most appropriate in which setting; this is just something you build up more intuition for as you see more examples and applications.
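To make the edge-based idea concrete, here is a small numpy sketch (a toy example of my own, not from the slides) comparing a plain L2 norm of pixel values with a norm of the discrete spatial gradient. The two images below are constructed to have identical value norms, yet completely different gradient norms:

```python
import numpy as np

# Two toy 8x8 grayscale images with the SAME L2 norm of pixel values:
# one is constant (no edges), the other a high-contrast stripe pattern.
flat = np.full((8, 8), np.sqrt(0.5))  # constant gray, no edges
stripes = np.zeros((8, 8))
stripes[:, ::2] = 1.0                 # sharp edge at every column boundary

def value_norm(img):
    """Plain L2 norm of the pixel values."""
    return np.sqrt(np.sum(img ** 2))

def gradient_norm(img):
    """L2 norm of the discrete spatial gradient (forward differences);
    large where the image has edges, zero for a constant image."""
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    return np.sqrt(np.sum(dx ** 2) + np.sum(dy ** 2))

print(value_norm(flat), value_norm(stripes))        # equal by construction
print(gradient_norm(flat), gradient_norm(stripes))  # zero vs. large
```

So under the gradient-based norm, the "boring" constant image is essentially zero, while the stripe image is large, even though a plain value norm cannot tell them apart.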

keenan

@siliangl As discussed above, and on this slide, there are a lot of different norms you could use, depending on the context. Here, for instance, the rough idea is indeed that if the numerical value used to represent pixel intensity is proportional to the number of photons hitting the sensor at that location, then the bottom image will have much larger norm than the top image. If on the other hand you pick a norm based on some other information, the relative magnitudes of the two norms could be reversed.

because I think both of them are high definition pictures and would have similar norms

Not sure what you mean here about 'high definition.' If you're referring to the image resolution (i.e., the number of pixels in the horizontal and vertical direction), then yes it could be that they have the same resolution. But the number of pixels is more like the dimension of the vector space, rather than the norm on that vector space. I.e., images of equal resolution can be expressed using the same number of coordinates.
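To illustrate the distinction between dimension and norm, here is a tiny numpy sketch (toy values of my own): two images of identical resolution live in the same vector space, i.e. have the same number of coordinates, yet can have very different norms.

```python
import numpy as np

# Two images with the SAME resolution -- the same number of coordinates,
# i.e. vectors in the same vector space -- but very different norms.
dark   = np.full((4, 4), 10.0)
bright = np.full((4, 4), 200.0)

print(dark.size == bright.size)                       # same dimension: True
print(np.linalg.norm(dark) < np.linalg.norm(bright))  # different norms: True
```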

sjip

@siliangl I think the norm is calculated based on the RGB values of all the pixels in the image. A black pixel has an RGB value of (0,0,0), while a white pixel has (255,255,255). Since all the pixels in the image of the sun have higher RGB values than those in the image of the cave, the norm of the former image is higher.

But if what I am saying is right, then the norm of a completely white image should be higher than the norm of the image of the sun.
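Following sjip's reasoning with a toy sketch (assuming a plain Euclidean norm over 8-bit RGB values; the specific numbers are made up): under that norm, an all-white image does indeed bound every other 8-bit image from above.

```python
import numpy as np

def image_norm(img):
    """Euclidean (L2) norm over all RGB channels of all pixels."""
    return np.sqrt(np.sum(np.asarray(img, dtype=float) ** 2))

# Toy 2x2 RGB images with 8-bit values (0-255), per sjip's reasoning.
white  = np.full((2, 2, 3), 255)   # every pixel fully white
bright = np.full((2, 2, 3), 230)   # bright sun image, not fully saturated
dark   = np.full((2, 2, 3), 40)    # dark cave image

# Under this norm, no 8-bit image can exceed the all-white one.
print(image_norm(dark) < image_norm(bright) < image_norm(white))  # True
```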

@sjip Not if you encode the image in a high dynamic range format. ;-)
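A sketch of the HDR point (toy numbers of my own): in a high-dynamic-range format, pixel values are stored as floats that can far exceed the display's "white" level, so an image of the sun can have a larger norm than an all-white image stored in a clamped low-dynamic-range format.

```python
import numpy as np

def image_norm(img):
    """Euclidean (L2) norm of the pixel values."""
    return np.sqrt(np.sum(np.asarray(img, dtype=float) ** 2))

# LDR: values clamped to [0, 1]; "white" is the brightest representable value.
ldr_white = np.full((4, 4), 1.0)

# HDR: floating-point radiance values can far exceed the display white point;
# 1000.0 here is a made-up stand-in for a direct view of the sun.
hdr_sun = np.full((4, 4), 1000.0)

print(image_norm(hdr_sun) > image_norm(ldr_white))  # True
```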