Slide 41 of 55
dchen1

So how do computers usually solve these big systems of linear equations? Is it literally just making a big matrix and solving it that way? I would imagine that because of matrix multiplications it would be around O(n^3). Is there some kind of structure of these equations that allows for more efficiency?

keenan

@dchen1 Yep, there are (at least) two ways to solve large linear systems on a computer:

  1. [Direct] Express your linear system as a matrix, and analyze the matrix to obtain a solution (e.g., perform some kind of factorization like Cholesky, LU, or QR, then apply backsubstitution).

  2. [Iterative] Provide the computer with a black box routine that effectively implements matrix-vector multiplication. Internally, this routine does not have to actually build a matrix (for instance, if you're multiplying by a diagonal matrix, you could just scale each element of the vector by the appropriate diagonal value). The computer can then run an iterative method like the conjugate gradient method to find a solution.
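To make the direct approach concrete, here is a minimal sketch in Python/NumPy: factor a small symmetric positive-definite system with Cholesky, then recover the solution by forward and back substitution. The 2×2 system is just an illustrative stand-in for the large systems discussed above.

```python
import numpy as np

# A small symmetric positive-definite system A x = b
# (a stand-in for the large systems that arise in practice).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# Factor A = L L^T (Cholesky).
L = np.linalg.cholesky(A)

def forward_substitute(L, b):
    """Solve L y = b for lower-triangular L."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_substitute(U, y):
    """Solve U x = y for upper-triangular U."""
    n = len(y)
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# Solve L y = b, then L^T x = y.
y = forward_substitute(L, b)
x = back_substitute(L.T, y)
print(np.allclose(A @ x, b))  # True
```

In a real solver you would call an optimized (and, for large problems, sparse) factorization routine rather than hand-rolling the substitution loops; the loops are spelled out here only to show what "backsubstitution" means.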
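And here is a sketch of the iterative approach: a bare-bones conjugate gradient solver that only ever touches the matrix through a black-box `matvec` routine. The diagonal example at the bottom is the one mentioned above — the "matrix" is never built; each entry of the vector is just scaled.

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    """Solve A x = b given only a black-box routine matvec(v) = A v.
    Assumes A is symmetric positive-definite."""
    x = np.zeros_like(b)
    r = b - matvec(x)      # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Example black box: multiply by a diagonal matrix without ever
# building it -- just scale each entry by the diagonal value.
d = np.array([4.0, 2.0, 8.0])
b = np.array([8.0, 2.0, 16.0])
x = conjugate_gradient(lambda v: d * v, b)
# x ≈ [2, 1, 2] (i.e., b / d)
```

Production code would use a library solver (with preconditioning) rather than this textbook loop, but the structure is the same: the solver never needs the matrix itself, only the ability to apply it.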

In both cases, it's extremely important to take sparsity into account. In the first case, you'll want to use a sparse matrix data structure, like the one we used to encode incidence matrices when we talked about data structures for meshes. In the second case, you only need to perform multiplication by nonzero values. For many of the linear systems that arise in graphics, you can avoid $O(n^3)$ complexity---and even get pretty close to $O(n)$ for things like solving equations involving the Laplace matrix (super common in graphics, as you'll see in the lecture on PDEs!).
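As a small illustration of exploiting sparsity, here is a hypothetical matvec for the graph Laplacian of a path graph: each output entry only touches a vertex and its neighbors (the nonzeros of that row), so the whole product costs $O(n)$ even though the dense matrix would be $n \times n$.

```python
import numpy as np

def laplace_matvec(u):
    """Apply the graph Laplacian of a path graph to u in O(n),
    without storing an n x n matrix: row i only involves vertex i
    and its neighbors, i.e., the nonzero entries of that row."""
    n = len(u)
    out = np.zeros(n)
    for i in range(n):
        deg = (i > 0) + (i < n - 1)  # number of neighbors
        out[i] = deg * u[i]
        if i > 0:
            out[i] -= u[i - 1]
        if i < n - 1:
            out[i] -= u[i + 1]
    return out

# Sanity check against the dense Laplacian of the same graph.
n = 6
L = np.diag([1.0] + [2.0] * (n - 2) + [1.0])
L -= np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
u = np.arange(n, dtype=float)
print(np.allclose(laplace_matvec(u), L @ u))  # True
```

Plugging a matvec like this into an iterative solver is exactly how the near-$O(n)$ behavior mentioned above becomes possible: the per-iteration cost scales with the number of nonzeros, not with $n^2$.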