What's Going On
theyComeAndGo commented on slide_057 of Perspective Projection and Texture Mapping ()

When the texture is smaller than the image, which I think would be the most common case, won't L be smaller than 1?


What should we do if the red point is on the side (or corner) of the mipmap? In that case we cannot find 4 sample points around it.


theyComeAndGo commented on slide_057 of Perspective Projection and Texture Mapping ()

What would happen if d is negative?


@ChrisZzh @jzhanson We can indeed find the barycentric coordinates from the point-in-triangle equation terms. Check out this slide:

Correction: The second bullet in the above should say "Divide by distance of b from line ca (height)".

Notice now that the first bullet (the distance from x to line {ca, ab, bc}) is precisely the value given by the term $E_{a/b/c}(x)$ from the half-plane test.

This is why the half plane test is useful here!
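
To make this concrete, here is a minimal sketch (my own code, not the course's) of recovering barycentric coordinates from the same signed-area edge functions used in the half-plane test:

```python
# A minimal sketch (2D points as (x, y) tuples): the same signed-area
# "edge function" used for the point-in-triangle test also gives barycentric
# coordinates once you normalize by the full triangle's area.

def edge(p, q, x):
    # 2D cross product of (q - p) and (x - p): the half-plane test value,
    # equal to the signed twice-area of triangle (p, q, x).
    return (q[0] - p[0]) * (x[1] - p[1]) - (q[1] - p[1]) * (x[0] - p[0])

def barycentric(a, b, c, x):
    area = edge(a, b, c)          # signed twice-area of the whole triangle
    alpha = edge(b, c, x) / area  # weight of a: sub-triangle (b, c, x) over the whole
    beta  = edge(c, a, x) / area  # weight of b
    gamma = edge(a, b, x) / area  # weight of c
    return alpha, beta, gamma     # sums to 1; all non-negative iff x is inside

# e.g. barycentric((0, 0), (1, 0), (0, 1), (0.25, 0.25)) -> (0.5, 0.25, 0.25)
```

So the per-edge values you already compute during rasterization are, up to a common normalization, exactly the barycentric coordinates.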


I don't really understand the sentence "Importantly, these same three values fall out of the half-plane tests used for triangle rasterization!" or why that is the case. Can someone give an explanation? Thanks!


What's the difference between doing displacement mapping vs having the 3D object model be bumpy in the first place? Would it make certain computations faster or result in smaller file sizes? Even then, I think there must be a threshold at which it makes more sense to just have more detailed models, since otherwise we could just have all objects be cubes and use displacement mapping to get any shape we want, which doesn't seem right.


I wasn't very clear on why we get barycentric coordinates out of the half-plane tests that we use for checking whether a point lies within a triangle. After thinking about it a bit, the best explanation I could come up with is that the point-in-triangle equations from this slide give us the distance (or a quantity proportional to the distance) between x and one side of the triangle, which we can use to compute the area of the sub-triangle and therefore the proportion of the whole triangle's area that the sub-triangle takes up. Is that right?


theyComeAndGo commented on slide_028 of Perspective Projection and Texture Mapping ()

Can't we interpolate by $\phi_i(x) = \dfrac{d_i}{\sum_{j = 1}^{3} d_j}$?


Cool! Thanks! @silentQ


When it comes to rendering a surface, there is more than just color to consider. An important factor is the way light shines on it, which is affected by texture, albedo, and ambient occlusion, just to name a few (plus, if the object is translucent, that's a whole other story). So each point on the object needs to know first what color to be, then what angle it is facing (based on the shape's geometry AND the normal/displacement map), then what light is shining on it from which directions, and how much of that light it needs to reflect. There's actually a lot more to it than that, and I definitely don't know all of it.

When I first started working with shaders in Unity, I found this guide (https://docs.unity3d.com/Manual/StandardShaderMaterialParameterNormalMap.html) very helpful for understanding it. It's Unity-specific, but it still illustrates the general concept well.
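
To make the "color, orientation, light, reflection" idea concrete, here is a toy diffuse-shading sketch (my own simplification, nothing like a full Unity shader; the function and parameter names are just for illustration):

```python
# Toy sketch: per-point color depends on the albedo, the (possibly
# normal-mapped) surface normal, and the light direction/color.

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def shade(albedo, normal, light_dir, light_color):
    # Lambertian diffuse: reflected light scales with the cosine of the angle
    # between the surface normal and the direction toward the light.
    n = normalize(normal)       # normal already perturbed by the normal map
    l = normalize(light_dir)
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(a * c * ndotl for a, c in zip(albedo, light_color))

# e.g. a red surface tilted 45 degrees away from a white overhead light:
# shade((1, 0, 0), (0, 1, 1), (0, 0, 1), (1, 1, 1)) -> roughly (0.707, 0, 0)
```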


Why is the texture of objects in an image treated differently from just the RGBA value of the pixel at that point in the image? Why does it have its own formulae for calculation, etc.?


Yeah you can apply it to 3D. This is actually one way to do the quiz.


So to properly rotate something, we would first have to translate the points to the origin, perform the rotation, and then translate them back out?


During class, I was confused by the $\bar{q}xq$ calculation because I read the x as a multiplication sign rather than as the vector for the point being rotated. This is just a note in case someone else has the same problem.


Sleepyhead08 commented on slide_030 of 3D Rotations and Complex Representations ()

This feels like just a change of basis... I'm curious as to how this simple concept is applied to all shapes.


Does a quaternion rotation rotate its target in place (as though the axis were placed at or through the object)? Or does it treat the axis vector as passing through the origin and rotate about that? The resulting orientation should be the same, but the position would differ.


Are there other solutions besides SLERP interpolation? I wonder if quadratic ones would perform better for interpolation.


Is this the same as a singularity in robotics? If so, then no matter how we represent the rotation we can always reach this state, right? Or maybe the question is: why does this state only appear in one particular representation?


So, to clarify, the quaternion product uv would then have scalar part $-u \cdot v$ and vector part $u \times v$?


It's interesting how in an n-dimensional space we seem to have transformations characterized by a single dimension (translation and scaling along a single axis), two dimensions (shearing, defined by the axis being sheared and the one linearly combined with it), an arbitrary number of dimensions (reflection through any lower-dimensional object), and n-2 dimensions (rotation around such an object). It's weird to think about 4D rotation about a 2D axis (what does that look like?), but it seems to hold based on how we think about rotation in 2D and 3D.

Are there any other common transformations categorized by, for example, 3 dimensions, or n-1 dimensions?


zbp commented on slide_037 of 3D Transformations ()

One thing that I think is powerful about this idea is that it allows us to abuse relative perspective: in a 3D scene, we can perform transformations that effectively move the camera, or we can perform modified transformations that move the scene instead but still produce the same image.


Can we apply similar logic in 3D? Say, for example, we assume some set of arbitrary rotations w.r.t. any of the axes and then use the transformed coordinates of the basis vectors e1, e2, e3 to find the combined rotation matrix. This would give us the combined rotation matrix that one would otherwise find by multiplying 2-3 matrices together.
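
A quick sanity check of this idea, as a sketch (the particular composite rotation is arbitrary, not from the slides): the images of e1, e2, e3 under the combined rotation are exactly its columns, so stacking them reconstructs the matrix.

```python
import numpy as np

# Sketch: apply a composite rotation to the basis vectors e1, e2, e3;
# the resulting images are the columns of the combined rotation matrix.

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

R = rot_y(0.3) @ rot_x(0.7)            # some composite rotation
cols = [R @ e for e in np.eye(3)]      # images of e1, e2, e3
assert np.allclose(np.column_stack(cols), R)
```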


Quaternions have other advantages as well, apart from avoiding gimbal lock and requiring less computation time.

They also give better keyframe interpolation: keyframes in 3D animation mostly consist of 3D rotations, and interpolating between these keyframes is very important. Slerp quaternion interpolation produces good results when interpolating between two keyframes. Slerp selects the shortest arc among all possible paths in rotation space (the sphere of unit quaternions) to rotate one orientation into another, which is what we need most of the time, especially in character animation. The video linked below shows the difference between the two interpolations: the box on the left uses Euler XYZ and the one on the right uses quaternion slerp for rotation interpolation, both with the same two keyframes.

https://www.youtube.com/watch?v=QxIdIZ0eKCE
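
For concreteness, here is a minimal slerp sketch (my own code, not from the lecture) implementing the shortest-arc idea described above; quaternions are (w, x, y, z) tuples and assumed to be unit length:

```python
import math

def slerp(q0, q1, t):
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                      # q and -q represent the same rotation;
        q1 = tuple(-c for c in q1)     # flip one so we take the shorter arc
        dot = -dot
    dot = min(1.0, dot)
    theta = math.acos(dot)             # angle between the two quaternions
    if theta < 1e-6:                   # nearly identical: plain lerp is fine
        return tuple((1 - t) * a + t * b for a, b in zip(q0, q1))
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

# Halfway between the identity and a 90-degree rotation about z is a
# 45-degree rotation about z:
# slerp((1, 0, 0, 0), (math.cos(math.pi/4), 0, 0, math.sin(math.pi/4)), 0.5)
# -> approximately (0.924, 0, 0, 0.383), i.e. (cos 22.5 deg, 0, 0, sin 22.5 deg)
```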


I think this is a product of several other matrices. See slide 11.


The axis of rotation should always pass through the origin. Otherwise the rotation wouldn't be a linear transformation anymore (e.g. f(0) != 0). To rotate around an arbitrary axis, I believe you'd need to sandwich the rotation with translations on both ends.
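
A small sketch of that "sandwich", using homogeneous 4x4 matrices and, for illustration, a rotation about the z-direction through an arbitrary point p (the helper names are mine, not from the slides):

```python
import numpy as np

def translate(t):
    M = np.eye(4)
    M[:3, 3] = t
    return M

def rotate_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    M = np.eye(4)
    M[:2, :2] = [[c, -s], [s, c]]
    return M

def rotate_z_about_point(theta, p):
    # f(x) = T(p) R T(-p) x : note f(p) = p but f(0) != 0 in general,
    # so this composite is affine rather than linear.
    return translate(p) @ rotate_z(theta) @ translate(-np.asarray(p))

# Rotating the origin by 180 degrees about the z-axis through (1, 0, 0)
# lands at (2, 0, 0):
# rotate_z_about_point(np.pi, (1, 0, 0)) @ np.array([0, 0, 0, 1])
# -> approximately [2, 0, 0, 1]
```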


anonymous commented on slide_063 of Drawing a Triangle ()

@jkalapos Starting from $E_i$, it's just rearranging terms and giving the coefficients another name...


@BellaJ @Asterix Does the axis always pass through the origin?


When using the axis-angle representation, does the axis always pass through the origin? Can it be an arbitrary vector/direction?


@BellaJ I think you are thinking about it the right way. But if the axis of rotation does not pass through the body, then all of the body's points will move in a circular motion about that axis.


I think robotics is another good application of quaternions, because robots also have to work in 3D and need good state estimation. This is probably accomplished with quaternions, since I can definitely see how using Euler angles could have bad consequences in some situations, as we talked about in class.


anonymous_panda commented on slide_009 of 3D Rotations and Complex Representations ()

Another example is in video games: once you are pointed 90 degrees toward the sky, moving your mouse left and right only makes the character spin around.


Well, for one, they're much easier to understand and wrap your head around :). Actual answer - In the real world, they're directly measurable via sensors like the gyroscope. As far as I know, there's no sensor that can directly tell you the axis you're rotating around. Wikipedia has a section on other applications if you're interested: https://en.wikipedia.org/wiki/Euler_angles#Applications


How was this matrix derived?


If rotations are the main use for quaternions, what are their other applications, in and out of graphics?


These slides (https://www.essentialmath.com/GDC2012/GDC2012_JMV_Rotations.pdf) contain a nice example of why interpolating Euler angles is undesirable: say you want to go from (0, 90, 0) to (90, -45, 90). What you would want the midpoint to be is (90, 22.5, 90), but interpolating directly gives you a midpoint of (45, 22.5, 45), causing the shape to swing "out" to one side rather than following the path you would expect.

This is visualized nicely in this video: https://www.youtube.com/watch?v=QxIdIZ0eKCE


@cou This seems helpful. Each of the three Euler angles corresponds to a rotation about one of the axes i, j, k (playing the role of u). https://en.wikipedia.org/wiki/Conversion_between_quaternions_and_Euler_angles
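
For reference, a sketch of that conversion in one common convention (the intrinsic Z-Y-X yaw-pitch-roll form given on that Wikipedia page); other Euler-angle conventions give a different mapping:

```python
import math

# Euler angles (Z-Y-X yaw/pitch/roll, in radians) to a unit quaternion (w, x, y, z).
def euler_to_quaternion(yaw, pitch, roll):
    cy, sy = math.cos(yaw / 2),   math.sin(yaw / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cr, sr = math.cos(roll / 2),  math.sin(roll / 2)
    w = cr * cp * cy + sr * sp * sy
    x = sr * cp * cy - cr * sp * sy
    y = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    return (w, x, y, z)

# e.g. a 90-degree yaw: euler_to_quaternion(math.pi / 2, 0, 0)
# -> approximately (0.707, 0, 0, 0.707), i.e. cos(45 deg) + sin(45 deg) k
```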


intelligentDungBeetle commented on slide_031 of 3D Rotations and Complex Representations ()

I guess the Mandelbrot set on the left is made from 2D complex numbers, resulting in a 2D fractal. Is it possible to generate something similar with quaternions? Is that what the image on the right is?


I'm not certain I concretely see how quaternions map 3D rotations. I see on a high level that a unit 4D quaternion gets mapped to a sphere in 3D, and we can describe them with axis and angle like polar coordinates -- but what's u exactly in this instance? Would it be possible to work out an example?
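
Not the instructor, but here is a small worked example of what u is: u is the unit rotation axis, and $q = \cos(\theta/2) + \sin(\theta/2)(u_x i + u_y j + u_z k)$. Below is a sketch using the $q x \bar{q}$ conjugation order under the Hamilton convention ($ij = k$); the slides write the conjugation as $\bar{q} x q$, and the order depends on the multiplication convention, but the idea is the same.

```python
import math

# Hamilton product of quaternions a = (aw, ax, ay, az), b = (bw, bx, by, bz).
def qmul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(point, axis, theta):
    ux, uy, uz = axis                                  # assumed unit length
    q = (math.cos(theta / 2), math.sin(theta / 2) * ux,
         math.sin(theta / 2) * uy, math.sin(theta / 2) * uz)
    qbar = (q[0], -q[1], -q[2], -q[3])                 # conjugate of q
    x = (0.0, *point)                                  # point as a pure imaginary quaternion
    return qmul(qmul(q, x), qbar)[1:]                  # drop the (zero) real part

# Rotating (1, 0, 0) by 90 degrees about u = (0, 0, 1) gives (0, 1, 0):
# rotate((1, 0, 0), (0, 0, 1), math.pi / 2) -> approximately (0.0, 1.0, 0.0)
```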


theyComeAndGo commented on slide_025 of 3D Rotations and Complex Representations ()

@yongchi1 $u \times v$ can be interpreted as a linear combination of $i, j, k$, and $-u \cdot v$ makes up the real part.
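
A tiny sketch of that statement (assuming the Hamilton convention $ij = k$), just to make it concrete:

```python
# For purely imaginary quaternions u = (0, u1, u2, u3) and v = (0, v1, v2, v3),
# the product uv has real part -u . v and imaginary part u x v.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def pure_quaternion_product(u, v):
    return (-dot(u, v), *cross(u, v))   # (real part, i, j, k)

# e.g. u = i = (1, 0, 0), v = j = (0, 1, 0):
# pure_quaternion_product(u, v) -> (0, 0, 0, 1), i.e. ij = k as expected
```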


cou commented on slide_069 of Drawing a Triangle ()

Does skip sample testing here mean that the edges of the block would be tested first, and if the edges do not include the triangle, then we would not test the inside of the block?


Why would interpolating Euler angles yield strange results? How do quaternions help?


We discussed that the order in which we rotate about each axis matters, so how is the rotation done if we input the three angles at the same time in the panel on the left of the screenshot?


If Euler angles have so many problems, why do people still use them?


theyComeAndGo commented on slide_026 of 3D Rotations and Complex Representations ()

Is $\overline{q}xq$ equivalent to a change of basis transformation?


One thing I found myself consistently wanting to think about was also having a point to rotate around (in 2D) or an axis (in 3D). Clearly a rotation about the origin for some geometry will be different from a rotation of equivalent magnitude around a different point. But in the context of rotations in graphics, will we generally be thinking of it as a rotation of the geometry about the origin? (Which I guess is equivalent to rotating the coordinate system itself.)