What's Going On

In this case, I would start with i=-1, j=-1. Then we would be interpolating between the colors of a different set of texels. If f_{0,0} is the lower left-hand corner of the image, we have to think about what to do if we run off the boundary of the texture. In this case, let's assume we just copy the nearest color when we are outside the boundary. Then f_{-1,-1}, f_{-1,0}, f_{0,-1}, and f_{0,0} are all the same color, so it wouldn't actually matter; however, just to finish the example, we would set s = 0.1 - (-1 + 1/2) = 0.6 and t = 0.6 and carry through the interpolation: (1-t)((1-s)f_{-1,-1} + s f_{0,-1}) + t((1-s)f_{-1,0} + s f_{0,0})
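
Here's a minimal Python sketch of that lookup with clamp-to-edge handling (the function names are mine, and it assumes texel centers sit at half-integer coordinates, as above):

```python
import numpy as np

def sample_bilinear(tex, u, v):
    """Bilinear texture lookup with clamp-to-edge addressing.

    tex: (H, W, 3) array; (u, v) in continuous texel coordinates,
    where texel (i, j) has its center at (i + 0.5, j + 0.5).
    """
    h, w = tex.shape[:2]
    # Texel whose center is at or below (u, v): i = floor(u - 1/2).
    i = int(np.floor(u - 0.5))
    j = int(np.floor(v - 0.5))
    # Fractional offsets from that texel center, e.g., s = 0.6 for u = 0.1.
    s = u - (i + 0.5)
    t = v - (j + 0.5)
    # Clamp-to-edge: out-of-range indices just copy the nearest texel.
    def f(ii, jj):
        return tex[np.clip(jj, 0, h - 1), np.clip(ii, 0, w - 1)]
    return ((1 - t) * ((1 - s) * f(i, j)     + s * f(i + 1, j)) +
            t       * ((1 - s) * f(i, j + 1) + s * f(i + 1, j + 1)))
```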


weichehs commented on slide_018 of Textures, Depth, and Transparency ()

So would it be the case that if (u, v) = (0.1, 0.1), then when we calculate i, j according to the formula we get i < 0 and j < 0, so it seems we should assign s=1 and t=1 and (u, v) would get the color f_{1,1}, right?


We had a question in class about why turning on supersampling (e.g., 8x, which should be taking 8 times the number of samples) does not tend to cause a corresponding decrease in frame rate. The short answer is that there is some cleverness going on under the hood that cuts down on the number of samples that have to be taken in total to get each individual frame on the screen. The following is one reference for how some of this is done, and it includes a small set of timing numbers: https://www.pcgamer.com/pc-graphics-options-explained/2/

From the posting, we also see that MLAA (Morphological Anti-Aliasing) is a postprocessing step on the entire image, so it will not have the same scaling behavior, because it does not require taking more samples. Instead, it looks for edges in the image and, based on the edge geometry, applies an appropriate blur using image-processing techniques to make them look better.
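
This is not real MLAA (the actual algorithm classifies edge shapes -- L, Z, and U patterns -- and computes coverage-based blend weights), but here is a toy Python sketch of that general flavor of post-process AA. Note that it only reads the finished image and takes no extra samples; the names and the threshold are made up for illustration:

```python
import numpy as np

def toy_edge_blur(img, threshold=0.1):
    """Crude post-process AA sketch: blur only where the image has
    strong edges. img is an (H, W, 3) float array in [0, 1]."""
    luma = img @ np.array([0.299, 0.587, 0.114])   # per-pixel luminance
    out = img.copy()
    h, w = luma.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # A strong discontinuity with a neighbor suggests a jaggy edge.
            if (abs(luma[y, x] - luma[y, x - 1]) > threshold or
                    abs(luma[y, x] - luma[y - 1, x]) > threshold):
                # Blend with the 3x3 neighborhood to soften the stair-step.
                out[y, x] = img[y - 1:y + 2, x - 1:x + 2].mean(axis=(0, 1))
    return out
```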


Ok, so what does closed under composition even mean here --- in this case, it means that we should end up only with colors that are a mix of the colors we started with. Specifically, if we take a bunch of "bright reds" and perform the "over" operation on them, we should only ever end up with more "bright reds." The non-premultiplied alpha fails in this case, because it takes a group of things (well, 2 things) with RGB value (1,0,0) and returns a new thing with RGB value (0.75,0,0), whereas the premultiplied alpha returns another (1,0,0).

I'm not sure how useful this observation is in the general case, because if you start with a set of different color values (e.g., (1,0,0) and (0,1,0)) you can get different mixes of those colors depending on the relative alpha values, and this is true for both premultiplied and non-premultiplied alpha (although the exact mixes of red and green will differ for the two methods). So, to my mind, I just treat this example as an interesting failure case that shows yet another reason why premultiplied alpha is probably a good idea.
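
To make the failure concrete, here is a small Python sketch of both versions of the "over" operator (the standard definitions, though the function names are mine), compositing two 50%-alpha bright reds:

```python
def over_nonpremultiplied(ca, aa, cb, ab):
    """B over A with straight (non-premultiplied) colors and alphas."""
    ao = ab + (1 - ab) * aa
    co = tuple(ab * b + (1 - ab) * aa * a for a, b in zip(ca, cb))
    return co, ao   # note: co is never divided back by ao, so it darkens

def over_premultiplied(ca_p, aa, cb_p, ab):
    """B over A with premultiplied colors: C' = C_B + (1 - alpha_B) * C_A."""
    co = tuple(b + (1 - ab) * a for a, b in zip(ca_p, cb_p))
    ao = ab + (1 - ab) * aa
    return co, ao

red, alpha = (1.0, 0.0, 0.0), 0.5

c, a = over_nonpremultiplied(red, alpha, red, alpha)
print(c, a)                          # (0.75, 0.0, 0.0) 0.75 -- a darker red

red_p = tuple(alpha * ch for ch in red)               # premultiply: (0.5, 0, 0)
c, a = over_premultiplied(red_p, alpha, red_p, alpha)
print(tuple(ch / a for ch in c), a)  # (1.0, 0.0, 0.0) 0.75 -- still bright red
```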


besieger commented on slide_012 of Textures, Depth, and Transparency ()

Examples: tiles, grass, trees (far away)...


I changed this slide (and the following one) to include more details (and both variations for aligning the camera).


nsp commented on slide_045 of Course Introduction ()

All modern displays are raster displays. However, that hasn't stopped the interest in vector-based representations, due to their many advantages in situations where you want to at least have a representation of your image that doesn't have built-in aliasing (think fonts).

Research in vector graphics continues, and there has always been an undercurrent of discussion of whether we may want vector displays for some applications. Check out this recent paper for some perspectives: https://people.csail.mit.edu/tzumao/diffvg/


nsp commented on slide_009 of Math Review Part II ()

We probably do not want to use a cross product operation to try to get rotation by an arbitrary theta. That becomes a chicken-and-egg problem, because the result of a cross product is always orthogonal to both of its inputs, so we would have to find a vector orthogonal to the one we want before we could even start.

One way to answer this question (not the only way!) would be to use the basis vectors we already have.

So, for example, cos(theta)u + sin(theta)(N × u), where u and N × u are orthogonal basis vectors in the plane.
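
As a quick sanity check, here's that formula in NumPy (the function name is mine; it assumes u lies in the plane and that u and n are unit length):

```python
import numpy as np

def rotate_in_plane(u, n, theta):
    """Rotate u by theta within the plane whose unit normal is n.

    u and (n x u) form an orthonormal basis for the plane, so the
    rotated vector is cos(theta) * u + sin(theta) * (n x u).
    """
    return np.cos(theta) * u + np.sin(theta) * np.cross(n, u)

# Example: rotate the x-axis by 90 degrees within the z=0 plane.
u = np.array([1.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
print(rotate_in_plane(u, n, np.pi / 2))   # ~[0, 1, 0]
```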


nsp commented on slide_033 of Math Review Part II ()

Yes, that makes sense.


nsp commented on slide_059 of Drawing a Triangle ()

At pretty much any sampling rate it is a good idea to do this kind of stratified sampling, where you subdivide a pixel into equal regions and sample randomly within each region. The initial subdivision helps guard against bad random number pulls, which could lump samples in a single part of the pixel by chance, and the "jittering" within each subregion helps guard against aliasing that may occur at this finer level (e.g., imagine a high-frequency checkerboard where sampling at the center of each region pulls out only the whites and skips the blacks).
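
A minimal Python sketch of that jittered sampling (the names are mine):

```python
import random

def jittered_samples(n):
    """Stratified 'jittered' sampling: split the pixel into an n x n
    grid and place one uniformly random sample inside each cell.

    Returns n*n (x, y) offsets in [0, 1)^2 relative to the pixel corner.
    """
    cell = 1.0 / n
    return [((i + random.random()) * cell, (j + random.random()) * cell)
            for i in range(n) for j in range(n)]

print(jittered_samples(2))   # 4 samples, one per quadrant of the pixel
```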


One of the goals of this process was to turn the perspective projection into a matrix operation that could be done as part of a long sequence of matrix ops that transform the geometry of your scene -- i.e., move objects into position, put them in the camera's frame of reference, do the projection. If we can create one single matrix to do all of that, then we form the matrix once and apply it (possibly in parallel) to a large number of triangles. The GPU is highly optimized to do all of that.

Given our goal of making it a matrix operation in the first place, we need to copy z into the homogeneous coordinate because there is no way to do division through matrix multiplication. The division comes from the "trick" of dividing everything out by the homogeneous coordinate at the end of all the matrix operations.
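
Here is a tiny NumPy sketch of that idea, using a deliberately stripped-down projection matrix (a real one also remaps z for the depth buffer; this keeps only the copy-z-into-w part discussed here):

```python
import numpy as np

# Simplest perspective matrix: the last row copies z into the
# homogeneous coordinate w; the matrix itself never divides.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

p = np.array([2.0, 4.0, 4.0, 1.0])   # point (2, 4, 4) in camera space
clip = P @ p                         # pure matrix op: (2, 4, 4, 4)
ndc = clip / clip[3]                 # the divide happens once, at the end
print(ndc[:2])                       # (x/z, y/z) = (0.5, 1.0)
```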

Does that answer your question?


besieger commented on slide_029 of Drawing a Triangle ()

I think different colors represent different triangles.


One property of rotations is that they must be performed about the origin to remain linear transformations. If the rotation is performed about some other point x, the offset by x breaks both additivity [taking just the translation part f(u) = u + x, f(u+v) = u+v+x while f(u)+f(v) = u+v+2x] and homogeneity [f(au) = au+x while af(u) = au+ax]. Hence, the rotation won't be performed correctly if translation does not occur first; see the sketch below.
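
Here is a small NumPy sketch of the standard fix -- translate the point to the origin, rotate there, translate back -- using 3x3 homogeneous matrices for 2D (the function name is mine):

```python
import numpy as np

def rotate_about(theta, x):
    """2D rotation about point x, built as T(x) @ R(theta) @ T(-x):
    translate x to the origin, rotate there, translate back."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0],
                  [s,  c, 0],
                  [0,  0, 1]])
    T = np.array([[1, 0, x[0]],
                  [0, 1, x[1]],
                  [0, 0, 1]])
    T_inv = np.array([[1, 0, -x[0]],
                      [0, 1, -x[1]],
                      [0, 0, 1]])
    return T @ R @ T_inv

# Rotating (2, 1) by 180 degrees about (1, 1) should give (0, 1).
M = rotate_about(np.pi, (1.0, 1.0))
print(M @ np.array([2.0, 1.0, 1.0]))   # ~[0, 1, 1]
```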


compgraphicsluvr123 commented on slide_047 of Coordinate Spaces and Transformations ()

Could I get some further explanation on the third bullet point? I'm not sure why we need a matrix to copy the z coordinate into the homogeneous coordinate. Why can't we be satisfied with representing the 2D coordinate (e.g. (u,v) or (x/z,y/z)) as its homogeneous coordinate: (x/z, y/z)?

Thanks!


weichehs commented on slide_059 of Drawing a Triangle ()

Does it seem that if the sample rate is low, it is not a good strategy to place samples at random positions within the sub-pixels? And if the sample rate is high, would it give a better result than just placing samples at the centers of the sub-pixels?


besieger commented on slide_056 of Coordinate Spaces and Transformations ()

If we rotate without translating first, the rotation will be done about the origin, which produces a wrong result.


besieger commented on slide_045 of Course Introduction ()

I have read about other types of displays that do not use pixels (https://en.wikipedia.org/wiki/Vector_monitor). I imagine the computer graphics pipeline would be different on these devices, but they are no longer used anyway.


duck commented on slide_029 of Drawing a Triangle ()

This is a little confusing for me -- do the light grey and dark grey represent how much of the pixel is covered? If so, why is the very top-right triangle only light grey around the left edge? Shouldn't it be dark grey since it's a left edge?


thuspake commented on slide_033 of Math Review Part II ()

Just want to double-check my understanding of everything. Is it true that argmax(F) = g, i.e., that to maximize F we'd pass in the input g? I think this is true because nothing can be more aligned with g than itself, but I want to double-check.

edit: I want to take that back. I think the argmax wouldn't be defined, because we could always pass in cg for some $$c \in \mathbb{N}$$ to get a larger output. That'd be an input perfectly aligned with g but scaled, and that'd scale the output, i.e., $$\langle cg, g \rangle = c\langle g, g \rangle$$.


thuspake commented on slide_009 of Math Review Part II ()

Just to check my understanding. Would the solution to the second question be that n is not a normal, and is instead at angle theta to u? I'm not sure, though, whether the resulting vector would still be in the plane. I think so.