What's Going On
keenan commented on slide_056 of Drawing a Triangle ()

@pkukreja You may not be able to draw more pixels than what you have on the screen, but you can certainly store more pixel values in memory. For instance, you could first draw everything into a "virtual screen" (or "supersampling buffer") that's 2x as wide and 2x as tall; then to get the final pixel values, you would average each little 2x2 block (as depicted on the slide).
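
For concreteness, here's a minimal sketch of that last averaging step (my own toy code, assuming a 2x supersampled RGB buffer stored row-major; none of this reflects how real hardware lays things out):

```cpp
#include <vector>
#include <cstddef>

struct Color { float r, g, b; };

// Downsample a 2x-supersampled buffer (2W x 2H samples) to the final W x H
// image by averaging each 2x2 block of samples (a simple box filter).
std::vector<Color> resolve2x(const std::vector<Color>& super,
                             std::size_t W, std::size_t H) {
    std::vector<Color> image(W * H);
    for (std::size_t y = 0; y < H; y++) {
        for (std::size_t x = 0; x < W; x++) {
            Color sum{0.f, 0.f, 0.f};
            for (std::size_t dy = 0; dy < 2; dy++) {
                for (std::size_t dx = 0; dx < 2; dx++) {
                    const Color& s = super[(2*y + dy) * (2*W) + (2*x + dx)];
                    sum.r += s.r; sum.g += s.g; sum.b += s.b;
                }
            }
            image[y*W + x] = Color{ sum.r / 4.f, sum.g / 4.f, sum.b / 4.f };
        }
    }
    return image;
}
```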

In real hardware, things get a lot more complicated! One observation, for instance, is that super-sampling color values are often not as important as super-sampling coverage values. So there may be special tricks that are used to get a better approximation of coverage, while still evaluating the color only once. This trick is especially useful if the color value depends on some complicated calculations (e.g., what's known as a "fragment shader" on the GPU).


pkukreja commented on slide_056 of Drawing a Triangle ()

I don't understand how we could cut down the granularity of sampling. A pixel is the smallest possible renderable unit on the screen, but if we could render something smaller, we would already be working at that resolution! Why would we ever show suboptimal results?

Or were we populating 4 pixels at a time with the same color when there was no supersampling?


pkukreja commented on slide_065 of Drawing a Triangle ()

This $E_i$ basically represents a cross product of the vectors $P - P_1$ and $P - P_0$, for any arbitrary point $P$ that we want to categorize as lying inside, outside, or on the edge.
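
For example, here's a quick sanity check of that in code (a rough sketch; the point values and names are mine, not from the slides): the 2D cross product of $P - P_1$ and $P - P_0$ is zero when $P$ lies on the edge, and has opposite signs on the two sides of it.

```cpp
#include <cstdio>

struct Vec2 { float x, y; };

// 2D "cross product": the z-component of the 3D cross product.
float cross2(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

// Edge function for the edge through P0 and P1, evaluated at P.
float edge(Vec2 P, Vec2 P0, Vec2 P1) {
    Vec2 a = { P.x - P1.x, P.y - P1.y };  // P - P1
    Vec2 b = { P.x - P0.x, P.y - P0.y };  // P - P0
    return cross2(a, b);
}

int main() {
    Vec2 P0 = {0.f, 0.f}, P1 = {1.f, 0.f};
    std::printf("%f %f %f\n",
        edge({0.5f,  0.5f}, P0, P1),   // one side of the edge
        edge({0.5f, -0.5f}, P0, P1),   // the other side (opposite sign)
        edge({0.5f,  0.0f}, P0, P1));  // on the edge (prints 0)
}
```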


keenan commented on slide_021 of OpenGL Tutorial ()

@rasterize Yes, exactly. "Last in first out."
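
To make the LIFO behavior concrete, here's a rough sketch in legacy fixed-function OpenGL (not the exact code from the slide): each call post-multiplies the current matrix, so the transformation issued last in the code is the first one applied to each vertex.

```cpp
#include <GL/gl.h>

// Sketch only: assumes we're inside a draw routine with a valid GL context.
void drawWithStackedTransforms() {
    glLoadIdentity();
    glRotatef(45.0f, 0.0f, 0.0f, 1.0f);  // issued first: C = C * R
    glTranslatef(1.0f, 0.0f, 0.0f);      // issued last:  C = C * T
    // A vertex v drawn now is transformed by C * v = R * (T * v):
    // the translation (last in) acts on v first, the rotation after.
}
```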


rasterize commented on slide_021 of OpenGL Tutorial ()

By 'Transformations are stacked (LIFO)', does it mean that in the red case, translate is performed first and then rotate?


keenan commented on slide_009 of OpenGL Tutorial ()

@CacheInTheTrash Yes, though when this gets implemented on a graphics card, it's hard for the GPU to throw an exception back to the CPU. I would guess the OpenGL spec says that the behavior is unspecified, meaning that different vendors can implement it in different ways, with no guarantee about what happens. In practice I suspect the vertices just get ignored (i.e., the GL driver tosses them out).


CacheInTheTrash commented on slide_009 of OpenGL Tutorial ()

I think a reasonable thing to do would be to throw an exception.


rasterize commented on slide_009 of OpenGL Tutorial ()

What really happens when the number of vertices is not 3n?


joel commented on slide_046 of Drawing a Triangle ()

@cma We oftentimes like to think of edges in images as borders between areas of different intensities/colors, which our visual system perceives as lines. We also think of high frequencies as corresponding to high rates of change (just as a quickly oscillating wave changes value rapidly), and low frequencies as the opposite. Going by these definitions, a drastic change in color (from #ffffff to #000000) will form a much stronger and better-defined edge than a minor change in color (from #ffffff to #fffffe, which is barely noticeable). The drastic change in intensity/color values also corresponds to our definition of a high frequency, while a minor change in intensity/color corresponds to a low frequency and a less "strong" edge. In short: yes, the color difference on the two sides of an edge makes a difference, especially since intensity/color differences are what we often consider edges to be made of.
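
A quick way to see this numerically (a toy example of my own, not from the slide): take one row of grayscale values and look at the finite differences between neighboring pixels. The #ffffff-to-#000000 edge produces a huge difference (rapid change, i.e. high frequency), while the #ffffff-to-#fffffe edge produces a tiny one.

```cpp
#include <cstdio>
#include <cstdlib>

int main() {
    // One row of 8-bit grayscale values containing two "edges":
    // a strong one (255 -> 0) and a barely visible one (255 -> 254).
    int row[] = { 255, 255, 255, 0, 0, 0, 255, 255, 254, 254 };
    int n = sizeof(row) / sizeof(row[0]);

    // Finite differences approximate the derivative along the row;
    // a large |difference| means rapid change, i.e. high-frequency content.
    for (int i = 1; i < n; i++)
        std::printf("|difference| at pixel %d: %d\n", i, std::abs(row[i] - row[i-1]));
}
```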


cma commented on slide_046 of Drawing a Triangle ()

Does the color difference on the two sides of the edge make a difference in the frequency measurement? e.g. will an edge where the two colors on either side are #ffffff and #fffffe have a different frequency than an edge where the two colors on either side are #ffffff and #000000?


keenan commented on slide_015 of Vector Calculus ()

@yingxiul Here's one explanation, though I don't find it particularly clear! Would love to see someone write up a (much) simpler version of this explanation, with nice pictures. ;-)


keenan commented on slide_010 of Vector Calculus ()

@pkukreja Yep!


keenan commented on slide_029 of Vector Calculus ()

@aabhagwa You're correct; perhaps this should have simply said, "functions of matrices."


a4anna commented on slide_029 of Vector Calculus ()

The x and y on the left side are column vectors (matrices having 1 column). Therefore, the expressions on the left are all actually matrix operations.


pkukreja commented on slide_010 of Vector Calculus ()

So that $\langle u, v \rangle$ is equal to $\langle v, u \rangle$.


yingxiul commented on slide_015 of Vector Calculus ()

I understand that when we apply Lagrange's identity to the expression, we get 0. However, in terms of geometry, the expression adds 3 vectors, one starting from each vertex, each with the same direction as the altitude passing through that vertex. Why is the sum of these 3 vectors the zero vector? We don't know the angles between them, and I'm not sure about the relation between their lengths...


aabhagwa commented on slide_029 of Vector Calculus ()

Why are these called matrix-valued expressions? It seems like these are just scalar expressions that depend on a matrix in some way.


keenan commented on slide_001 of Vector Calculus ()

@THINK These are terrific questions. Yes, keeping normals consistent can be a pain, and can cause nasty rendering artifacts (and simulation artifacts, and geometry processing artifacts, ...) if not done right. One way to do it is to make sure that all the faces in a polygon mesh have vertices in a consistent "winding order," i.e., clockwise or counterclockwise. But this is just kicking the can down the road: how do you now ensure this ordering is consistent across the whole mesh?

Fortunately, if you think about it for a bit, you realize it's not too hard to fix either problem. Basically you can start with one face and do a breadth-first (or depth-first or whatever-first) traversal of the mesh, at each step picking an orientation of the new normal that is consistent with the predecessor in the traversal. The question is: what calculation can you do to check whether or not it's consistent?

Rather than give the answer... think about it! ;-) (Maybe someone else in class can post an answer here too.)
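
For anyone who wants to try it, here's the rough shape of that traversal, with the actual consistency check left as the exercise. (The mesh interface and names below are made up purely for illustration.)

```cpp
#include <vector>
#include <queue>

// Hypothetical mesh interface, just to show the shape of the traversal.
struct Mesh {
    int faceCount() const;
    std::vector<int> neighbors(int face) const;  // faces sharing an edge with `face`
    void flipOrientation(int face);              // reverse the face's winding order
};

// The exercise from the comment above: given two adjacent faces, decide
// whether their winding orders (and hence normals) are consistent.
bool consistent(const Mesh& m, int face, int neighbor);

// Breadth-first traversal that makes each newly visited face agree
// with the face it was reached from. (Assumes a single connected mesh.)
void orientConsistently(Mesh& m) {
    std::vector<bool> visited(m.faceCount(), false);
    std::queue<int> frontier;
    frontier.push(0);   // start from an arbitrary face
    visited[0] = true;
    while (!frontier.empty()) {
        int f = frontier.front(); frontier.pop();
        for (int g : m.neighbors(f)) {
            if (visited[g]) continue;
            if (!consistent(m, f, g)) m.flipOrientation(g);
            visited[g] = true;
            frontier.push(g);
        }
    }
}
```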


keenan commented on slide_023 of Linear Algebra ()

@aluk Right, that's a good way to think about it. The next question is: how do you encode a color value? Unfortunately this one is not so easy, since there are a huge number of possible color spaces (RGB, CMYK, XYZ, ...), each suited to a different purpose (display, print, compression, ...). We'll spend a fair bit of time talking about representations of color later in the course.
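
To make that picture concrete, here's a tiny sketch (my own toy example, with a made-up signature): an image as a function from (x,y) coordinates to a color value. Which encoding to use for the color value is exactly the question left open above.

```cpp
// A toy "image as a function": continuous coordinates (x, y) in [0,1]^2
// map to a color value, here encoded as plain RGB floats.
struct Color { float r, g, b; };

Color image(float x, float y) {
    return Color{ x, x, x };  // a horizontal black-to-white gradient
}
```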


keenan commented on slide_014 of Linear Algebra ()

@grantwu Yep, and that's a particularly relevant perspective when thinking of functions as elements of a vector space (who cares about the arguments?). It's always a trade-off between being explicit-but-verbose and implicit-but-concise.


keenan commented on slide_007 of Linear Algebra ()

@Tee-Dawg. Nice. Yeah, bugs have been part of computer graphics for a long time. ;-)


keenan commented on slide_054 of Linear Algebra ()

@vik Yeah, that's the right idea. Big dense systems are definitely hard to solve, but even very large sparse systems also take a considerable fraction of the time in many modern graphics algorithms. For instance, in a basic fluid solver it's perhaps the most expensive part (though in general things like tracking the liquid surface may also cost a fair bit).


keenan commented on slide_044 of Linear Algebra ()

Oh yeah? Why? :-)


fengyupeng commented on slide_037 of Linear Algebra ()

Thank you so much for your reply! Looks like there's a lot to explore. I'll let you know if something further comes up.


Shuze commented on slide_044 of Linear Algebra ()

yes, it is.


vik commented on slide_054 of Linear Algebra ()

Correct me if I'm wrong, but I think the slide doesn't mean to imply that only sparse systems are hard to solve, but rather that they are the most common when trying to model graphics. So it's not really the sparseness that causes the bottleneck; the thing that causes the bottleneck just happens to usually be sparse. Although I'm not sure about the computational difference between sparse and dense systems, so I might be interpreting the slide incorrectly.


Tee-Dawg commented on slide_007 of Linear Algebra ()

Really liked the story about Cartesian coordinates and the fly! I read up a bit more on the history of coordinate systems and thought it was interesting that after Descartes, Newton came up with 10 different kinds of coordinate systems, including the polar coordinate system!


-________- commented on slide_005 of Linear Algebra ()

I find vectors in graphics really cool. For instance, Photoshop is raster (pixel) based, so when you zoom in, the image pixelates. On the other hand, Illustrator is vector-based, so objects can be resized without changing the resolution. Really excited to learn how graphics programs work.


grantwu commented on slide_014 of Linear Algebra ()

I can't say I'm a big fan of the f(x) notation for functions. I think it's probably more consistent to just call them by their single-letter names, i.e., "f" or "g". Writing them this way makes it easier to treat them as just regular mathematical objects, without introducing spurious (x)'s everywhere that can be a bit confusing. It can get especially confusing when you have, say, x+y as the input to a function that's being notated as f(x), and then you need to mentally rename the parameter of f before doing the substitution... this almost got me a few times on the homework.


mhthomps commented on slide_030 of Linear Algebra ()

That would make sense. Every pixel in the negative would be the opposite of the corresponding pixel in the original photo, including properties such as brightness, color, etc. It follows that no matter how we define the vector for an image, its negative would be exactly opposite and thus produce a vector exactly opposite the original.


tsanmigu commented on slide_054 of Linear Algebra ()

Why is it that a "sparse" system of linear equations would cause a computational bottleneck? Wouldn't a "dense" system (i.e., many variables appearing in a large number of equations) also be an issue? In other words, what is it about the sparseness that causes the computational bottleneck?


asmodak commented on slide_030 of Linear Algebra ()

In that sense, professor, will the vectors of an image and its negative be at an angle of 180 degrees?


aluk commented on slide_023 of Linear Algebra ()

During the lecture, we constantly talked about how an image can simply be described as a function. Are there multiple ways of doing this, or will it simply be the color value as a function of the (x,y) coordinate values?


THINK commented on slide_001 of Vector Calculus ()

It looks like our homework touches on this, but it has us manually choose the direction of the normals. One option would be to only try to ensure that the normals are consistent (i.e., all facing the right direction or all facing the wrong direction) and let the user flip them if you get it wrong, but even keeping the normals consistent seems challenging. I was thinking you could choose the normal that is closest to parallel with the adjacent normals, but it is not hard to construct simple geometry that would break this.


snimmala commented on slide_031 of Linear Algebra ()

According to the definition of well-aligned, doesn't it mean that the inner product of a vector with itself must indicate maximum alignment? How does the inner product quantify that?


keenan commented on slide_037 of Linear Algebra ()

Sure, there are lots of ways to take a derivative of an image. In terms of color, one way to think about it is to separate out into several color channels (e.g., red-green-blue (RGB) or cyan-magenta-yellow-black (CMYK)), viewing each as a black-and-white image. Then you still have the question of which derivative to consider: do you take the gradient? Or some other combination of directional derivatives? Or what? Lots of possibilities; Wikipedia has a decent introduction to edge detection. In fact, working in standard RGB or CMYK color spaces may not be the wisest idea, since human visual perception works in a very different way. We will spend plenty of time talking about color perception and color spaces later on in the course.
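
To make the "treat each channel as its own grayscale image" idea concrete, here's a minimal sketch (my own, not the method from the slides): take a horizontal finite difference of each channel independently, which yields a derivative image that is itself "in color."

```cpp
#include <vector>
#include <cstddef>

struct RGB { float r, g, b; };

// Horizontal finite difference of each color channel independently.
// The result is a three-channel image: a "color" derivative.
std::vector<RGB> ddx(const std::vector<RGB>& img, std::size_t W, std::size_t H) {
    std::vector<RGB> out(W * H, RGB{0.f, 0.f, 0.f});
    for (std::size_t y = 0; y < H; y++) {
        for (std::size_t x = 0; x + 1 < W; x++) {
            const RGB& a = img[y*W + x];
            const RGB& b = img[y*W + x + 1];
            out[y*W + x] = RGB{ b.r - a.r, b.g - a.g, b.b - a.b };
        }
    }
    return out;
}
```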


keenan commented on slide_047 of Linear Algebra ()

I agree with @PlanteurJMTLG! (Though find their handle pretty hard to type! :-))


keenan commented on slide_014 of Linear Algebra ()

Yep, there will be plenty of visuals as we continue... this is computer graphics, after all! :-)


keenan commented on slide_053 of Linear Algebra ()

@sickgraph It depends on how you decide to generalize the Fourier transform. For instance, one common approach is to use eigenfunctions of the (real) Laplace-Beltrami operator as your analogues of Fourier modes. In this case there is not a particularly easy way to talk about phase (and yet this approach is perhaps the most common for 3D geometry processing applications). Complex operators may provide other possibilities.


keenan commented on slide_030 of Linear Algebra ()

…Or perhaps one might say that they're very similar, and differ only in a superficial way! (By a sign). Just depends on what you're doing.


sickgraph commented on slide_053 of Linear Algebra ()

I'm probably getting a little ahead here. On calculating the Fourier transform of an image, we get an amplitude map and a phase map. The phase map seems to be (more?) important for visual perception. I wonder if this would also be the case for 3D?


cvaz commented on slide_014 of Linear Algebra ()

Having visuals for the examples helped a lot. Continuing this throughout the class would be greatly appreciated.


PlanteurJMTLG commented on slide_047 of Linear Algebra ()

@Cake It is equivalent. If for instance $e_{n-1}$ and $e_n$ are not linearly independent, $span(e_1, \dots, e_{n-1}, e_n) = span(e_1, \dots, e_{n-1})$, therefore $\dim(span(e_1, \dots, e_{n-1}, e_n)) \leq n-1 < \dim(\mathbb{R}^n)$.


connorzl commented on slide_030 of Linear Algebra ()

When you take the inner product between two vectors that are 180 degrees apart, the result has a negative sign and large magnitude to indicate that the vectors line up quite poorly - they have completely opposite directions!
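
Concretely (toy numbers of my own): if $u = (1, 2)$, then $\langle u, -u \rangle = 1\cdot(-1) + 2\cdot(-2) = -5 = -\|u\|^2$, which is the most negative value the inner product of $u$ with any vector of the same length can take.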


fengyupeng commented on slide_037 of Linear Algebra ()

The idea of taking the derivative of an image is fascinating to me. You mentioned in your previous comment that the image is converted to black and white before taking the derivative. Is it possible to take the derivative of a color image directly? Would the end result also be in color? And by taking the derivative, as I understand it, we are measuring how abruptly shades change in the image. How would an edge between two colors with very different hue or saturation, but similar brightness, show up in the resulting derivative picture?