In the second step of the Gram-Schmidt algorithm, what is meant by "subtract any component of the 1st vector from the 2nd one"? According to the pictures below, does it mean subtracting the projection of the 2nd vector onto the normalized 1st vector from the 2nd vector?
johncena
Yes, the wording may be a bit unclear. To obtain the second basis vector, one should remove any "influence" of the first basis vector; that influence is exactly the projection of the second vector onto the first. Subtracting this projection leaves a vector that is orthogonal to the first vector.
keenan
Right, the language is indeed a bit murky. With reference to the picture, it's saying to remove the component of $\mathbf{u}_2$ in the direction of $\mathbf{e}_1$ from $\mathbf{u}_2$ (which is exactly how $\tilde{\mathbf{u}}_2$ is defined).
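In symbols (using the subscripted notation from the general formulas later in the thread), that second step reads
$$ \tilde{\mathbf{u}}_2 := \mathbf{u}_2 - \langle \mathbf{u}_2, \mathbf{e}_1 \rangle \mathbf{e}_1, \qquad \mathbf{e}_2 := \tilde{\mathbf{u}}_2 / |\tilde{\mathbf{u}}_2|. $$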
ninkamat
If I am understanding the fourth point correctly: in the case of a three-dimensional space and three vectors, would you subtract the projection of the third vector onto each of the two previously established orthonormal basis vectors? Is that correct?
qiuyij
Yes, that's correct.
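For concreteness, here is the three-vector case worked through by hand, using the example vectors from the WolframAlpha link a few posts below (the numbers work out cleanly):
$$ \mathbf{u}_1 = (1,1,1), \quad \mathbf{u}_2 = (2,1,0), \quad \mathbf{u}_3 = (5,1,3) $$
$$ \mathbf{e}_1 = \mathbf{u}_1/|\mathbf{u}_1| = \tfrac{1}{\sqrt{3}}(1,1,1) $$
$$ \tilde{\mathbf{u}}_2 = \mathbf{u}_2 - \langle \mathbf{u}_2, \mathbf{e}_1 \rangle \mathbf{e}_1 = (2,1,0) - (1,1,1) = (1,0,-1), \qquad \mathbf{e}_2 = \tfrac{1}{\sqrt{2}}(1,0,-1) $$
$$ \tilde{\mathbf{u}}_3 = \mathbf{u}_3 - \langle \mathbf{u}_3, \mathbf{e}_1 \rangle \mathbf{e}_1 - \langle \mathbf{u}_3, \mathbf{e}_2 \rangle \mathbf{e}_2 = (5,1,3) - (3,3,3) - (1,0,-1) = (1,-2,1), \qquad \mathbf{e}_3 = \tfrac{1}{\sqrt{6}}(1,-2,1) $$
Note that the third vector needs two projections subtracted, one per previously established basis vector.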
ooCast
I think explicitly showing how to find the third vector of the orthonormal basis (the figure might get complicated, but the equations are clear) would help readers better understand the process, which then extends to higher dimensions by induction. Alternatively, sigma notation could be used here.
By the way, if anyone would like to verify their calculations, this might come in handy: http://m.wolframalpha.com/input/?i=gram+schmidt+{{1,1,1},{2,1,0},{5,1,3}}&js=off
keenan
For the third one you have
$$ \tilde{\mathbf{u}}_3 := \mathbf{u}_3 - \langle \mathbf{u}_3, \mathbf{e}_2 \rangle \mathbf{e}_2 - \langle \mathbf{u}_3, \mathbf{e}_1 \rangle \mathbf{e}_1 $$
then
$$ \mathbf{e}_3 := \tilde{\mathbf{u}}_3 / |\tilde{\mathbf{u}}_3|. $$
(Can you come up with a general expression and/or algorithm?)
@keenan, I think the third basis vector should be $$\mathbf{e}_3 = \tilde{\mathbf{u}}_3/|\tilde{\mathbf{u}}_3|$$
Gram-Schmidt process: $$\mathbf{e}_1 := \mathbf{u}_1/|\mathbf{u}_1|$$
For $i = 2, \ldots, n$: $$\tilde{\mathbf{u}}_i := \mathbf{u}_i - \sum_{j=1}^{i-1} \langle \mathbf{u}_i, \mathbf{e}_j \rangle \mathbf{e}_j$$ $$\mathbf{e}_i := \tilde{\mathbf{u}}_i/|\tilde{\mathbf{u}}_i|$$
keenan
Thanks—fixed!
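As a minimal sketch in code, assuming NumPy (the function name `gram_schmidt` and the `eps` tolerance for detecting dependent inputs are my own choices, not from the thread):

```python
import numpy as np

def gram_schmidt(vectors, eps=1e-12):
    """Classical Gram-Schmidt: orthonormalize `vectors`, following the formulas above."""
    basis = []
    for u in vectors:
        u = np.asarray(u, dtype=float)
        # u_tilde = u_i minus the sum of its projections onto e_1, ..., e_{i-1}
        u_tilde = u - sum(np.dot(u, e) * e for e in basis)
        norm = np.linalg.norm(u_tilde)
        if norm < eps:  # nothing left: u was (numerically) dependent on the basis so far
            raise ValueError("input vectors are linearly dependent")
        basis.append(u_tilde / norm)
    return basis

# The example from the WolframAlpha link above:
for e in gram_schmidt([[1, 1, 1], [2, 1, 0], [5, 1, 3]]):
    print(e)
```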
mattlkf
Why is this not a good algorithm when we have nearly parallel vectors? I was wondering if it had something to do with loss of precision when we subtract the component of one vector from another nearly-parallel vector and get a very very small result, but I'm not sure.
keenan
Cancellation is one issue. Another is that rounding errors in the sum may result in a vector that is not orthogonal to the ones that precede it; Wikipedia provides a pretty nice description, plus a discussion of alternatives.
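To make this concrete, here is a sketch of the "modified" variant that the Wikipedia article describes: each projection is subtracted from the running remainder rather than from the original $\mathbf{u}_i$. The nearly parallel test vectors and the perturbation size `d` are my own construction:

```python
import numpy as np

def modified_gram_schmidt(vectors, eps=1e-12):
    """Modified Gram-Schmidt: project out each e_j from the running remainder."""
    basis = []
    for u in vectors:
        u = np.asarray(u, dtype=float)
        for e in basis:
            u = u - np.dot(u, e) * e  # uses the *updated* u, not the original
        norm = np.linalg.norm(u)
        if norm < eps:
            raise ValueError("input vectors are linearly dependent")
        basis.append(u / norm)
    return basis

# Three nearly parallel vectors:
d = 1e-8
E = np.array(modified_gram_schmidt([[1, d, 0], [1, 0, d], [1, d, d]]))
print(E @ E.T)  # should be close to the 3x3 identity
```

Feeding these same inputs to the classical version above yields an $\mathbf{e}_3$ that is far from orthogonal to $\mathbf{e}_2$: the two tiny terms in $\langle \mathbf{u}_3, \mathbf{e}_2 \rangle$ cancel exactly in floating point, so no $\mathbf{e}_2$ component gets removed at all. The modified version stays close to orthonormal.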