In this algorithm, to calculate the distance between points and the camera we use a simple subtraction. This gives us vectors from the camera to points on the cube, which we can use to project the points into 2D. How might we get a 2D projection of the cube from a different angle, such as a camera at (5,5,5) pointing towards the origin? Would we simply find the vectors from that camera to each of the points with a basis change?
@acpatel You're totally on the right track: to get a more interesting arrangement of the cube and the camera, you have to apply more general transformations of the data, such as rotations. You'll see some of this discussed in your linear algebra review, and even more in class when we talk about various kinds of geometric and viewing transformations.
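To make that concrete, here's a small sketch (not a reference implementation, just one common convention) of the basis-change idea from the original question: build an orthonormal "look-at" basis for a camera at (5,5,5) aimed at the origin, rotate the points into that basis, and then divide by depth. The function names `look_at` and `project` are made up for this example.

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build an orthonormal camera basis; rows are right / up / backward,
    so the camera looks down its own -z axis (a common convention)."""
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    return np.stack([right, true_up, -forward])

def project(points, eye, target):
    """Subtract the camera position (the 'simple subtraction' from the
    original algorithm), rotate into camera coordinates, and do a
    perspective divide by depth."""
    R = look_at(eye, target)
    cam = (points - eye) @ R.T          # world -> camera coordinates
    return cam[:, :2] / -cam[:, 2:3]    # divide x, y by distance along view axis

# Corners of the unit cube, seen from a camera at (5,5,5) looking at the origin
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
pts2d = project(cube, eye=np.array([5.0, 5.0, 5.0]), target=np.zeros(3))
```

So the answer to the original question is essentially yes: the subtraction still gives you camera-to-point vectors; the new ingredient is the rotation that re-expresses those vectors in the camera's own basis before projecting.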
@keenan Thanks for the response! Is it more common to have a fixed camera with transformed geometry data, or fixed geometry data and a moving camera? Or a mix of the two? It seems from your response that the math may be simpler by transforming the data.
@acpatel Good question. Short answer: there's really no difference! Or more precisely: applying a transformation to the geometry is equivalent to applying the inverse transformation to the camera (and vice versa). For instance, there's no way* to tell whether I moved my geometry forward and kept the camera fixed, or kept the geometry fixed and moved my camera backward. In some graphics APIs, this transformation is even called the "modelview" transformation, since it captures both effects.
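The "moved geometry forward vs. moved camera backward" example above can be checked numerically. This is just a toy sketch with made-up values, using the fact that what the camera ultimately sees is the difference vector `p - eye`:

```python
import numpy as np

p   = np.array([1.0, 2.0, 3.0])    # some point on the geometry
eye = np.array([0.0, 0.0, 5.0])    # camera position
t   = np.array([0.0, 0.0, -2.0])   # "forward" by 2 units (camera looks down -z here)

# Option 1: transform the geometry, keep the camera fixed
seen_move_geometry = (p + t) - eye

# Option 2: apply the inverse transformation (here, -t) to the camera
seen_move_camera = p - (eye - t)

# Both produce the identical camera-to-point vector
```

The same identity holds for rotations and any other invertible transformation, which is exactly why a single "modelview" matrix can capture both effects at once.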
(*On a tangential but extremely interesting note: this same sort of principle applies in physics, where it's called Galilean invariance, and is sort of a lead-in to Einstein's theory of relativity...)