Has there been an attempt to capture the direction/concentration of these edges when computing the inner product of images? You could imagine a similar count of strong outlines for images that are radically different, and I was wondering what research has gone into finding more representative inner products.
It seems like the derivative norm would be pretty high for random static, which isn't really "interesting stuff"; I'd say the Scottie picture is interesting because it has some sharp edges, but enough patches of sameness to keep the image coherent. Is there any kind of norm for images that captures this balance?
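The intuition in this question is easy to check numerically. Below is a minimal sketch (names and sizes are illustrative, not from the lecture) that compares a simple finite-difference "derivative norm" for random static versus an image that is flat everywhere except one sharp edge — the noise image does indeed score far higher, even though it contains no coherent structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "derivative norm": forward differences in x and y,
# then the square root of the total squared gradient energy.
def derivative_norm(img):
    dx = np.diff(img, axis=1)  # horizontal differences
    dy = np.diff(img, axis=0)  # vertical differences
    return np.sqrt((dx ** 2).sum() + (dy ** 2).sum())

static = rng.random((64, 64))   # random static in [0, 1]
edges = np.zeros((64, 64))
edges[:, 32:] = 1.0             # one sharp edge, flat elsewhere

# Noise changes at every pixel, so its derivative norm dominates
# the single-edge image despite being "uninteresting".
print(derivative_norm(static) > derivative_norm(edges))  # → True
```

This is exactly why a raw derivative norm alone can't capture the edges-plus-sameness balance the question describes.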
Would these examples calculate the inner product over the functional representation or an approximate vector representation of signals and derivatives? Are there specific applications where you would want to use functions vs vectors?
Is there a way to "decide" which norm is best to use besides just experimenting with many different norms to see what results they give?
Is this derivative the same as the mathematical derivative? If so, with respect to which variable is it taken?
How does the derivative of an image manage to "wipe away" a lot of the details and just capture things like texture/brightness? Is it because, in a mathematical sense, the derivative represents the rate of change in a given direction?
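One way to make the "wipe away" behavior concrete: treat the image as a function of its pixel coordinates and approximate the partial derivatives with finite differences. In the sketch below (a toy example, not from the lecture), constant patches differentiate to zero, so only the jump survives:

```python
import numpy as np

# For a discrete image I[y, x], "the derivative" usually means the pair
# of partial derivatives with respect to the pixel coordinates x and y,
# approximated by finite differences (np.gradient uses central differences).
img = np.zeros((5, 5))
img[:, 3:] = 10.0          # flat region, then a sharp jump in brightness

dy, dx = np.gradient(img)  # partials w.r.t. row (y) and column (x)

# Flat regions map to 0; dx is nonzero only around the jump,
# which is how differentiation discards constant detail.
print(dx)
```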
Are these kinds of straightforward algebraic rules (e.g. differentiate the image, then take the norm) still used in graphics? It feels like the standard "modern approach" would be to just throw machine learning at the problem.
Do many linear algebra libraries work well with flexible or customized inner products?
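For what it's worth, most general-purpose linear algebra libraries don't expose a dedicated "custom inner product" API; the usual pattern is to compose one from matrix products, e.g. a weighted inner product ⟨x, y⟩_M = xᵀMy for a symmetric positive-definite M. A minimal NumPy sketch (the weight matrix here is purely illustrative):

```python
import numpy as np

# Weighted inner product <x, y>_M = x^T M y, built from plain matmuls.
# M must be symmetric positive-definite for this to be a valid inner product.
def weighted_inner(x, y, M):
    return x @ M @ y

M = np.diag([1.0, 2.0, 3.0])    # illustrative SPD weight matrix
x = np.array([1.0, 0.0, 1.0])
y = np.array([1.0, 1.0, 1.0])

print(weighted_inner(x, y, M))  # 1*1 + 2*0 + 3*1 = 4.0
```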
What optimization on the linear algebra side can be done in real-time video processing?
What does "derivative" refer to in this context, and how is it defined mathematically?