I feel confused about where the g comes from in this function. There is only one argument, f, taken by F.

Dalyons

I also am confused about g. I get that we define the L2 gradient to be for all functions u, so does that mean here that our gradient would be in terms of f, g, and an arbitrary function u? Or is g our u for this example?

YutianW

Is g a free parameter here? Or is it considered a 'fixed' value here?

ShallowDream

I understand that this is important for optimization, but what does the gradient of a function of a function represent? Is there an intuitive explanation of what it is?

Midoriya

Why are we adding a little bit of g to f each time? Wouldn't it be better to add a little bit of (g - f) to f each time?

Mogician

Do we have any alternatives for determining how well two functions are "aligned"?

air-wreck

We've seen that grad <x,y> = y for both the Euclidean inner product and the integral inner product that we defined on functions. Does this hold in general for arbitrary inner products as well?
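The identity grad ⟨x,y⟩ = y discussed above can be checked numerically. The sketch below (a hypothetical illustration, not from the original notes) uses central finite differences to approximate the gradient of F(x) = ⟨x, y⟩ for (a) the Euclidean dot product and (b) a discretized version of the L2 inner product ⟨f, g⟩ = ∫ f g dt on a grid:

```python
import numpy as np

def finite_diff_grad(F, x, h=1e-6):
    """Central-difference approximation of the gradient of F at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (F(x + e) - F(x - e)) / (2 * h)
    return g

rng = np.random.default_rng(0)
n = 50
y = rng.standard_normal(n)   # the fixed function/vector g in the thread
x0 = rng.standard_normal(n)  # the point f at which we differentiate

# (a) Euclidean inner product: grad_x <x, y> = y.
grad_euclid = finite_diff_grad(lambda x: x @ y, x0)
assert np.allclose(grad_euclid, y, atol=1e-4)

# (b) Discretized L2 inner product: <f, g> ~ sum_i f_i g_i * dt.
# Finite differences recover the Euclidean gradient y * dt; rescaling
# by the metric weight dt recovers the L2 gradient, which is again y.
dt = 1.0 / n
grad_l2 = finite_diff_grad(lambda f: np.sum(f * y) * dt, x0)
assert np.allclose(grad_l2 / dt, y, atol=1e-4)
```

Note that the gradient is always taken with respect to the same inner product used to define F: for any inner product ⟨·,·⟩, the differential of F(x) = ⟨x, y⟩ in direction u is ⟨u, y⟩, so the gradient (the unique vector representing that differential in the same inner product) is y.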
