Slide 44 of 46

Is there a case in which this detail loss from repeated resampling can be useful? For example, you could take the basic shape of an animal from this over-resampled cow and then use it as a base for a different animal (although I am not sure you would want to).


@mzeitlin You are suggesting that downsampling can act as a low-pass filter, which is absolutely true. It is fairly common in geometry processing (especially in a real time context) to downsample the geometry and replace the high-frequency detail with a texture map (which may get interpreted as the surface normals, or perhaps as a local perturbation of the surface, i.e., a displacement map). In a sense, one is "downsampling then upsampling," i.e., downsampling to get the base domain, then upsampling to render the displacement map. But that's about as far as it goes in terms of classic graphics algorithms. More recently, this kind of thinking may show up in the learning context, e.g., an autoencoder would effectively downsample then upsample in order to learn a low-dimensional shape representation. I've seen this kind of network used for point clouds, but not (to my knowledge) for meshes.
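The "downsampling acts as a low-pass filter" point is easy to see on a toy 1D signal (a sketch I'm adding here, not from the lecture): average adjacent samples to downsample, then linearly interpolate back up, and the high-frequency detail disappears while the base shape survives.

```python
import numpy as np

# Toy "surface" signal: a low-frequency base shape plus
# high-frequency detail that alternates +/-0.2 at the samples.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
base = np.sin(x)                  # low-frequency base shape
detail = 0.2 * np.cos(32.0 * x)   # high-frequency detail
signal = base + detail

# Downsample by averaging adjacent pairs (a crude low-pass filter)...
coarse = signal.reshape(-1, 2).mean(axis=1)
xc = x.reshape(-1, 2).mean(axis=1)  # midpoint of each averaged pair

# ...then upsample back by linear interpolation: "downsampling then upsampling."
recon = np.interp(x, xc, coarse)

# Away from the boundary, the reconstruction tracks the base shape,
# not the original detailed signal: the detail has been filtered out.
interior = (x >= xc[0]) & (x <= xc[-1])
print(np.abs(recon - base)[interior].max())    # small: base shape survives
print(np.abs(recon - signal)[interior].max())  # ~0.2: detail is gone
```

The pair-averaging here plays the role of mesh decimation, and the lost `detail` is exactly what a displacement map would store and re-apply at render time.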


Just as a side note, the technique of mapping data to a lower dimension and then reconstructing it is widely used in computer vision and machine learning. Many generative models, such as the VAE (variational autoencoder), are based on this idea.
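For concreteness, here is the linear version of that idea, a PCA-based "autoencoder" in NumPy (my own sketch, not from the thread; a real VAE adds nonlinearity and a probabilistic latent space, but the encode-to-low-dimension / decode-to-reconstruct structure is the same):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data that genuinely lives near a 2-D subspace of R^10.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 10))
data = latent @ mixing + 0.01 * rng.normal(size=(500, 10))

# "Encoder": project onto the top-2 principal directions;
# "decoder": map the 2-D codes back to R^10.
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)
encode = lambda d: (d - mean) @ vt[:2].T
decode = lambda z: z @ vt[:2] + mean

codes = encode(data)   # low-dimensional representation, shape (500, 2)
recon = decode(codes)  # reconstruction in the original space

# Reconstruction error is tiny because the data is (almost) 2-dimensional.
print(codes.shape, np.abs(recon - data).max())
```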


@anonymous_panda Yes, though it is not clear how to best implement a VAE for geometry, since, unlike images, not all shapes can be stored as a signal on the same domain (e.g., a single long vector containing all the pixel values). Some people use point clouds with a fixed number of points, but then have to deal with permutation invariance and the lack of any kind of spatially adaptive sampling. You can parameterize a surface over an image but then have to deal with distortion as well as parameterization invariance. Or you can use a voxelization on a 3D grid but now you’re using O(n^3) storage and computation to do something that should cost O(n^2). No easy answer for neural nets on geometry, unfortunately.
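The permutation-invariance issue mentioned above for fixed-size point clouds has a standard workaround in the PointNet style: apply a shared per-point map, then pool with a symmetric operation (max, sum, ...) so the result cannot depend on point order. A minimal sketch, with random placeholder weights standing in for a learned network:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=(3, 16))  # shared per-point map: 3-D point -> 16-D feature

def cloud_feature(points):
    # points: (n, 3) array; the output must not depend on row order.
    per_point = np.maximum(points @ w, 0.0)  # shared layer with ReLU
    return per_point.max(axis=0)             # symmetric (max) pooling

cloud = rng.normal(size=(100, 3))
shuffled = cloud[rng.permutation(100)]

# Flattening the array would be order-dependent; the pooled feature is not.
print(np.allclose(cloud_feature(cloud), cloud_feature(shuffled)))  # True
```

This buys invariance but gives up exactly the spatially adaptive sampling mentioned above: every point is treated identically, with no notion of where detail is concentrated.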