Slide 39 of 64

By doing the subtraction, we are treating (x, y, z) as the vector starting at the origin (0, 0, 0) and pointing to the vertex. So why do we assume the distance between the projected image and the origin (the camera) is 1?
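A minimal sketch of the step being asked about (my own toy function, not the course's code), assuming the image plane sits at distance 1 in front of the camera. With that convention the "focal length" is 1 and the projected point is simply (x/z, y/z); a plane at any other distance d would just scale the whole image by d.

```python
def project(vertex, camera):
    """Project a 3D vertex onto the plane z = 1 in front of the camera."""
    x = vertex[0] - camera[0]   # translate so the camera sits at the origin
    y = vertex[1] - camera[1]
    z = vertex[2] - camera[2]
    return (x / z, y / z)       # similar triangles: intersect the ray with z = 1

# A vertex twice as far away lands at half the offset on screen.
print(project((2.0, 2.0, 2.0), (0.0, 0.0, 0.0)))  # (1.0, 1.0)
print(project((2.0, 2.0, 4.0), (0.0, 0.0, 0.0)))  # (0.5, 0.5)
```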


Can we also achieve this more easily by multiplying the points by a projection matrix?
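Yes, this is the standard trick in homogeneous coordinates; here is a hedged sketch (the exact matrix used in the lecture may differ). The matrix copies z into the w component, and the final "perspective divide" by w is what performs the division by z.

```python
import numpy as np

# Toy perspective matrix: identity in x, y, z, but w_out = z.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],   # w_out = z, so dividing by w divides by z
])

v = np.array([2.0, 2.0, 4.0, 1.0])   # homogeneous point (x, y, z, 1)
clip = P @ v                          # matrix multiply -> (2, 2, 4, 4)
ndc = clip / clip[3]                  # perspective divide -> (0.5, 0.5, 1, 1)
print(ndc[:2])                        # same result as computing (x/z, y/z)
```

Real pipelines fold more into this matrix (focal length, aspect ratio, near/far clipping), but the divide-by-w step is the same.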


Why do we divide by z?

If the camera is oriented differently, say rotated 90 degrees, wouldn't dividing by z give an unintended result?


Are similar algorithms used to render graphics in things like video games, presumably with more complex parameters such as camera angle?


How do we extend this method to draw curves instead of straight edges?


I felt that this is similar to ray tracing. The calculation seems relatively easy, so why is real-time ray tracing advertised by Nvidia as something really hard?


Doesn't this algorithm flip the image, just as we saw the top of the tree at the bottom of the pinhole camera on the previous slide? Would a more practical algorithm also multiply everything by -1 in step 2 in order to correct the image orientation?


If we had a second cube on the other side of the camera, would both cubes show up? It seems like the camera is looking in the -z and +z directions at the same time.
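A quick numeric check of this question (my own snippet): a point behind the camera (negative z) still produces finite screen coordinates after the divide by z, but with both signs flipped, which is one reason real pipelines clip geometry with z <= 0 before the divide.

```python
def project(vertex):
    """Divide-by-z projection with the camera at the origin looking down +z."""
    x, y, z = vertex
    return (x / z, y / z)

print(project((1.0, 1.0, 2.0)))    # in front of the camera: (0.5, 0.5)
print(project((1.0, 1.0, -2.0)))   # behind the camera: (-0.5, -0.5), a mirrored ghost
```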


How do we adapt this to handle various focal lengths?
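One common way (a sketch of my own, assuming the image plane is moved from distance 1 to distance f): the screen point becomes (f*x/z, f*y/z), so a larger focal length f zooms in.

```python
def project_f(vertex, camera, f):
    """Project onto the plane z = f in front of the camera."""
    x, y, z = (vertex[i] - camera[i] for i in range(3))
    return (f * x / z, f * y / z)

print(project_f((1.0, 1.0, 2.0), (0.0, 0.0, 0.0), 1.0))  # (0.5, 0.5)
print(project_f((1.0, 1.0, 2.0), (0.0, 0.0, 0.0), 2.0))  # (1.0, 1.0), doubled
```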


What actually happens if we put the camera inside the object? Say we set c = (0, 0, 0)?
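A toy check of this edge case (my own snippet): with c = (0, 0, 0) the subtraction is a no-op, and the projection is still fine for most vertices, but any vertex lying in the camera plane (z = 0) makes the divide-by-z step blow up.

```python
def project(vertex, camera=(0.0, 0.0, 0.0)):
    """Divide-by-z projection; undefined for vertices at the camera's depth."""
    x, y, z = (vertex[i] - camera[i] for i in range(3))
    return (x / z, y / z)

print(project((1.0, 1.0, 2.0)))   # fine: (0.5, 0.5)
try:
    project((1.0, 1.0, 0.0))      # vertex exactly at the camera's depth
except ZeroDivisionError:
    print("a vertex at z = 0 cannot be projected")
```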


How does this algorithm deal with spherical objects (or anything with no distinct vertices)?


What are the axes of the graph representing? Are they the x and y coordinates?


By subtracting camera c from vertex (X, Y, Z) to get (x, y, z), do we assume that (x, y, z) is in the camera's coordinate system (instead of an external reference frame)?