jzhanson

I can imagine that a depth buffer would look very similar to a diff view of two pictures of the same scene taken from very slightly different angles/positions --- this reminded me of Stereo Magnification: Learning View Synthesis using Multiplane Images, which was featured in the SIGGRAPH 2018 Technical Papers preview we watched in class.

I skimmed the paper and it seems like they do exactly that: they take two images of the same scene that are slightly apart (gathered from real estate YouTube videos using a semi-automated visual SLAM pipeline) and pick out layers of depth from the image, "stacked" on top of each other just like in the depth buffer above. They then use these layers to generate new views, partly via alpha blending/compositing with weights output by their model.
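
To make the "stacked layers" idea concrete, here is a minimal sketch (not the paper's actual code) of compositing a multiplane-image-style stack of RGBA depth planes back-to-front with the standard alpha "over" operator. In the paper the per-layer colors and alphas come from the network; here they are just random arrays for illustration, and the function/array names are my own.

```python
import numpy as np

def composite_mpi(rgb_layers, alpha_layers):
    """Composite depth planes back-to-front with the 'over' operator.

    rgb_layers:   (D, H, W, 3) array, ordered back (farthest) to front (nearest)
    alpha_layers: (D, H, W, 1) array of per-layer alphas in [0, 1]
    Returns an (H, W, 3) composited image.
    """
    # Start from a black background behind the farthest plane
    out = np.zeros(rgb_layers.shape[1:], dtype=np.float32)
    for rgb, alpha in zip(rgb_layers, alpha_layers):
        # 'over': the nearer layer covers what is behind it by its alpha
        out = rgb * alpha + out * (1.0 - alpha)
    return out

# Toy example: 4 depth planes of a 2x2 image with random contents
D, H, W = 4, 2, 2
rgb = np.random.rand(D, H, W, 3).astype(np.float32)
alpha = np.random.rand(D, H, W, 1).astype(np.float32)
image = composite_mpi(rgb, alpha)
print(image.shape)  # (2, 2, 3)
```

The nice part (and presumably why the layered representation works for view synthesis) is that you can reproject each plane into a slightly different camera before compositing, so nearer planes shift more than farther ones, giving parallax for free.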