Exercises 11 (Digital Geometry Processing)

Meshes and Matrices

Our last lecture provided an introduction to the problem of geometry processing: how can we think of a shape as a signal, and how can we translate operations on common signals like images (upsampling, downsampling, compression, etc.) into analogous operations on geometry?

So far we've performed all of these operations via direct manipulation of mesh data structures, but a lot of real-world systems for geometry processing are based on translating between meshes and matrices. Much as graphics processing units (GPUs) enable one to take advantage of an existing, high-performance implementation of the rasterization pipeline, translating meshes into matrices enables one to take advantage of existing, high-performance libraries for numerical linear algebra (that have already been vectorized, parallelized, etc.) rather than optimizing each geometry processing algorithm from scratch.

We already saw in the previous lecture how the mesh connectivity itself can be encoded as a matrix, using for instance signed incidence matrices. But we can also store mesh attributes in matrices and vectors---and then use matrix algebra to implement geometry processing operations. Let's give it a try.
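
As a tiny concrete sketch of this translation, here is one way to set up a data vector in Python with numpy. The mesh, vertex names, and values below are made-up placeholders for illustration; they aren't tied to any particular library:

    import numpy as np

    # Step 1: assign each mesh vertex a unique, consecutive index.
    # (The names here are hypothetical; any 1-to-1 assignment works.)
    vertex_index = {"apex": 0, "a": 1, "b": 2, "c": 3, "d": 4}

    # Step 2: store one scalar per vertex in a column vector b,
    # placing each value at the row given by that vertex's index.
    values = {"apex": 5.0, "a": 1.0, "b": 2.0, "c": 3.0, "d": 4.0}
    b = np.zeros((len(vertex_index), 1))
    for v, i in vertex_index.items():
        b[i, 0] = values[v]

    # b is now ready for matrix algebra (averaging, smoothing, ...).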

  1. The basic idea of a matrix is that it's an indexed array of values, i.e., we access data by specifying a row and column index. Hence, to store data from our mesh in a matrix, we need to assign a unique index to each mesh element. Depending on the task, a "mesh element" could be a vertex, an edge, a face---or maybe something more exotic like a halfedge or a triangle corner. The order of these indices doesn't matter, as long as there is a one-to-one correspondence between elements of the mesh and indices in the matrix. Just as a warm-up, assign consecutive vertex indices to the mesh depicted below, starting at 0.

    [Figure: Triangulation of a pyramid with a square base]

  2. Suppose now that we have some data associated with each vertex---for instance, the numbers depicted in the figure below. Write out a column vector b that encodes this data, using the same indexing scheme as in the previous question.

    [Figure: Vertex values on a triangulation of a pyramid with a square base]

  3. Using a standard halfedge mesh data structure (like the one in Scotty3D), what's a short snippet of code that assigns a unique index to each vertex? Remember that order doesn't matter.

  4. OK, so far so good: we've assigned indices to the vertices and can store data according to this indexing scheme. Now let's see how we can translate some basic formulas into matrix operations. Suppose, for instance, that we want to compute the average of all of our vertex values, i.e., we want to evaluate the formula

    $a = \frac{1}{n} \sum_{i=1}^{n} b_i$

    where in this case n is the number of vertices in the mesh. How can we write this expression as a matrix-vector product Ab with some matrix A? What are the dimensions of this matrix, and what are its entries?

  5. Suppose that instead we want to compute the average value in the neighborhood of each vertex. More precisely, for each vertex i we want the quantity

    $a_i = \frac{1}{\deg(i)} \sum_{ij} b_j$

    where the sum is taken over all edges ij touching vertex i, and the degree deg(i) is the number of such edges (equivalently: the number of neighboring vertices). Write this expression as a matrix-vector product Ab (for a different matrix A than in the previous question).

  6. Suppose we now want to store an RGB color at each vertex, rather than a single number. What should the dimensions of b be?

  7. Suppose we evaluate the expression

    $a = A^k b$

    where A is the matrix that computes the average in each vertex neighborhood, the superscript k means that we multiply k times by A, and b is the matrix representing our vertex colors. What do you think a will look like as k increases?

  8. Imagine that b now encodes the XYZ positions of the vertices, rather than RGB colors at each vertex. What do you expect to see if we again perform k multiplications of b by A, and use the resulting values a as the new vertex positions?

  9. Now let's compute a slightly different quantity: the difference of each value b_i from the average value at the neighboring vertices:

    $a_i = b_i - \frac{1}{\deg(i)} \sum_{ij} b_j$

    Write this expression in terms of the averaging matrix A from question 5 and one other matrix. The resulting matrix is known as the graph Laplacian. (A small numerical sketch of A and the graph Laplacian appears after this list.)

  10. Imagine that we have a very high-resolution mesh, and the vector b encodes a greyscale image on the mesh. If we apply the graph Laplacian from the previous question to b, what do you think you will see?
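
To make the matrices in questions 4-9 concrete, here is a hedged numpy sketch for the pyramid above. The edge list assumes one particular triangulation of the square base (split along the diagonal between vertices 1 and 3) and one particular indexing; adapt both to your answers from question 1:

    import numpy as np

    # Assumed connectivity: vertex 0 is the apex, vertices 1-4 are
    # the base corners, and the base square is split along (1, 3).
    edges = [(0, 1), (0, 2), (0, 3), (0, 4),
             (1, 2), (2, 3), (3, 4), (4, 1),
             (1, 3)]
    n = 5

    # Adjacency matrix: M[i, j] = 1 whenever edge ij exists.
    M = np.zeros((n, n))
    for i, j in edges:
        M[i, j] = M[j, i] = 1.0

    # Neighborhood-averaging operator (question 5): dividing row i
    # by deg(i) makes (A @ b)[i] the mean of b over i's neighbors.
    deg = M.sum(axis=1)
    A = M / deg[:, None]

    # Graph Laplacian (question 9): a = b - A b = (I - A) b gives the
    # difference of each value from its neighborhood average.
    L = np.eye(n) - A

    b = np.array([[5.0], [1.0], [2.0], [3.0], [4.0]])
    print(A @ b)  # neighborhood averages
    print(L @ b)  # deviations from the neighborhood average

    # Questions 7-8: repeated averaging flattens the data; with XYZ
    # positions in b, all vertices are pulled toward a single point.
    a = b.copy()
    for _ in range(50):
        a = A @ a
    print(a)      # nearly constant across vertices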

Flattening Meshes

[Figure: Triangle mesh of the head of the Stanford bunny, with its single boundary loop highlighted]

So far, when we've talked about texture mapping, we assumed that we were given texture coordinates for each vertex (a.k.a. "UV coordinates"). But where do UV coordinates come from in the first place? The previous question hints at one possible use case for the graph Laplacian. Another, very different application (and a big part of geometry processing!) is computing texture coordinates for a mesh. In particular, we'll look at how to compute UV coordinates via a so-called Tutte embedding.

If we inspect some existing UV coordinates, we can make a simple observation: each vertex looks like it's placed somewhere in the "middle" of its neighbors. We've already seen what happens if we average 3D vertex positions (XYZ) repeatedly, so why not do the same thing in 2D to get UV coordinates?
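
Before making that precise, here is a quick numpy experiment (reusing the pyramid connectivity from the sketch above purely as a graph; the coordinates are random placeholders). It shows what unconstrained repeated averaging does to 2D coordinates, and is worth keeping in mind for question 2 below:

    import numpy as np

    # Same assumed pyramid connectivity as before.
    edges = [(0, 1), (0, 2), (0, 3), (0, 4),
             (1, 2), (2, 3), (3, 4), (4, 1),
             (1, 3)]
    n = 5
    M = np.zeros((n, n))
    for i, j in edges:
        M[i, j] = M[j, i] = 1.0
    A = M / M.sum(axis=1)[:, None]  # neighborhood averaging

    rng = np.random.default_rng(0)
    b = rng.random((n, 2))          # random initial 2D coordinates

    for _ in range(200):
        b = A @ b
    print(b)  # every vertex ends up at (nearly) the same point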

  1. Consider a disk-like mesh with a single boundary loop (like the bunny head depicted above), and let b be a matrix encoding UV coordinates at each vertex. What are the dimensions of b?

  2. Suppose we initialize b with random values, and repeatedly apply the averaging operator A to b. What will happen? Will we get nice UV coordinates?

  3. The Tutte embedding theorem essentially says that if (i) the boundary vertices make a convex polygon and (ii) each interior vertex is at the average of its neighbors, then we are guaranteed to get UV coordinates where no triangle overlaps any other. First of all, how might we compute locations for our boundary vertices so that they define a convex polygon? Any convex polygon is fine. (Note: a polygon is convex if a line segment between any two points inside the polygon never leaves the polygon.)

  4. Now that we have nice locations for our boundary vertices, how can we modify our vertex averaging operator so that it leaves the boundary vertices fixed, moving only the interior vertices? There are a couple of ways we could do this (one option is sketched in the code after this list).

  5. Suppose we now repeatedly apply this modified averaging operator (with fixed boundary vertices) to random initial UV coordinates b. What do you think will happen?

  6. Finally, what's a different way---still using matrices---that we could find UV coordinates where each interior vertex is equal to the average of its neighbors? What are some pros and cons relative to the averaging approach?
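
To close, here is a hedged numpy sketch touching questions 3-6 on a tiny made-up disk mesh: a triangle fan with one interior vertex (index 0) and six boundary vertices. The connectivity is an assumption for illustration, not code from any particular mesh library:

    import numpy as np

    # Assumed disk-like mesh: vertex 0 is interior; vertices 1-6
    # form the single boundary loop of a triangle fan.
    n = 7
    boundary = [1, 2, 3, 4, 5, 6]
    edges = [(0, k) for k in boundary] + \
            [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1)]

    M = np.zeros((n, n))
    for i, j in edges:
        M[i, j] = M[j, i] = 1.0
    A = M / M.sum(axis=1)[:, None]     # neighborhood averaging

    # Question 3: pin the boundary loop to a convex polygon; equally
    # spaced points on a circle always form one.
    uv = np.zeros((n, 2))
    for k, v in enumerate(boundary):
        t = 2.0 * np.pi * k / len(boundary)
        uv[v] = [np.cos(t), np.sin(t)]

    # Question 4: one option is to overwrite the boundary rows of A
    # with identity rows, so boundary vertices simply copy themselves.
    A_mod = A.copy()
    A_mod[boundary, :] = 0.0
    A_mod[boundary, boundary] = 1.0

    # Question 5: iterate the modified operator from the pinned state.
    b = uv.copy()
    for _ in range(200):
        b = A_mod @ b
    print(b[0])  # interior vertex sits at the average of its neighbors

    # Question 6: alternatively, solve one linear system directly:
    # interior rows enforce (I - A) uv = 0, boundary rows fix uv.
    K = np.eye(n) - A
    K[boundary, :] = 0.0
    K[boundary, boundary] = 1.0
    rhs = np.zeros((n, 2))
    rhs[boundary] = uv[boundary]
    print(np.linalg.solve(K, rhs)[0])  # same answer, no iteration

In this toy example the matrices are dense and tiny; on a real mesh one would store them sparsely and hand the final solve to a sparse linear solver.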