Take, as an example, a computer program that tries to predict how a scene will look from a certain perspective. This program would most likely want to use every model it can in order to make the graphics look realistic. However, there is a limit to how realistic it can be if certain "good" models are missing. As an extreme example, if the program has no model for lighting, the image will probably not look realistic... How do we continuously learn about new models? Which other areas of study contribute to computer graphics?
Given that we have a limited amount of computing resources, if we try to incorporate every model we can, the graphics may be very realistic; however, the approach itself may not be practical... (the rendering would be too slow)
So there must be a priority ordering among the different models when it comes to rendering, and such an ordering seems like an important subject that must be continuously improved. How was this order defined traditionally? How are we looking to improve it in the future? (Is this something we will cover in the class?)
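One way to make the idea of "priority among models" concrete is a budgeted selection: rank candidate effects by realism gained per unit of compute cost, and enable them greedily until the per-frame budget is spent. The sketch below is purely illustrative; the effect names, realism scores, and costs are invented, and real renderers use far more sophisticated scheduling than this greedy heuristic.

```python
# Hypothetical sketch: choosing which rendering models to enable under a
# fixed per-frame compute budget. All names and numbers are made up.

def pick_models(models, budget_ms):
    """Greedily select models with the best realism-per-cost ratio
    until the frame-time budget is exhausted."""
    ranked = sorted(models, key=lambda m: m["realism"] / m["cost_ms"],
                    reverse=True)
    chosen, spent = [], 0.0
    for m in ranked:
        if spent + m["cost_ms"] <= budget_ms:
            chosen.append(m["name"])
            spent += m["cost_ms"]
    return chosen, spent

# Invented example effects with invented realism/cost numbers.
models = [
    {"name": "direct lighting",     "realism": 10, "cost_ms": 2.0},
    {"name": "shadows",             "realism": 6,  "cost_ms": 3.0},
    {"name": "global illumination", "realism": 8,  "cost_ms": 12.0},
    {"name": "depth of field",      "realism": 2,  "cost_ms": 4.0},
]

# ~16.6 ms budget corresponds to a 60 fps target.
chosen, spent = pick_models(models, budget_ms=16.6)
print(chosen, spent)
```

With these made-up numbers, the expensive global illumination model gets skipped even though it adds a lot of realism, because it would blow the budget; this is exactly the kind of trade-off the question above is asking about.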