Slide 44 of 63

What are some examples of when we might want to capture only a small slice of the light field? Would it only be for efficiency or approximation reasons?


Are the bigger cameras on phones shifting to light-field designs? I remember iPhones can adjust the focus after the picture is taken.


If the object is too close, will some of the cameras capture nothing, or just noise? If so, how can we address this issue when combining the information?


Are there any simple ways to capture bigger parts of the light field?


Cost must be a factor with these multi-lens cameras. What does it cost to build them?


How exactly is the camera on the left capturing a bigger slice of the light field than a standard camera? Do the newer smartphones with more cameras essentially do the same thing?


The "re-focusing" is quite amazing. I'm wondering whether these techniques have been used in real applications like phones.


What are the cameras on smart phones doing with light field nowadays?


Is there a minimum number of additional photos/angles that a light-field camera must capture of a scene to do a proper refocusing?


How many images are usually combined in such a process?
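To make the refocusing questions above concrete, here is a toy sketch of the classic shift-and-add approach: each sub-aperture view sees a point at a disparity proportional to its aperture offset, and summing the views after shifting them by a chosen slope brings one depth plane into focus. Everything here (the 3x3 view grid, the single-point scene, the `alpha` parameter) is an illustrative assumption, not any particular camera's pipeline.

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-add refocusing over a grid of sub-aperture views.

    views: dict mapping (u, v) aperture offsets to equal-shape 2D images.
    alpha: slope selecting the synthetic focal plane; each view is
           shifted by (alpha*u, alpha*v) pixels before averaging.
    """
    acc = None
    for (u, v), img in views.items():
        shifted = np.roll(img, (round(alpha * u), round(alpha * v)), axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    return acc / len(views)

# Toy light field: a point at disparity `depth` appears shifted by
# depth*(u, v) pixels in the (u, v) sub-aperture view.
depth = 2
base = np.zeros((16, 16))
base[8, 8] = 1.0
views = {(u, v): np.roll(base, (depth * u, depth * v), axis=(0, 1))
         for u in (-1, 0, 1) for v in (-1, 0, 1)}

in_focus = refocus(views, alpha=-depth)  # disparity cancelled: point stays sharp
out_focus = refocus(views, alpha=0)      # no correction: point smears into a blur
print(in_focus.max(), out_focus.max())   # sharp peak vs. averaged-down peak
```

Even this 3x3 grid of nine views is enough to refocus the toy scene, which hints at the earlier question: more views give smoother synthetic blur, but a handful already lets you pick the focal plane after capture.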


Why would a small slice of the light field ever be advantageous over capturing the bigger slice and recombining after? Is it just due to efficiency that we wouldn't always need this bigger slice and it would be a waste to always capture it?