What's Going On
msfernan commented on slide_033 of Depth and Transparency ()


keenan commented on slide_001 of Radiometry ()

Here's an awesome interactive webpage that illustrates many of the concepts discussed in this lecture.

VegitableChicken commented on slide_029 of Optimization ()

Are we able to use a combination of IK and FK?

VegitableChicken commented on slide_023 of Dynamics and Time Integration ()

Very interesting

Lockbrains commented on slide_050 of Introduction to Animation ()

I like the architecture of Zaha Hadid :) She's so brilliant.

Lockbrains commented on slide_004 of Dynamics and Time Integration ()

Though surprisingly simple, we still need numerical integration LOL

Ace commented on slide_025 of Optimization ()

Isn't this the basis for neural networks?

Ace commented on slide_037 of Dynamics and Time Integration ()

What do the equations look like to describe this?

Lockbrains commented on slide_005 of Monte Carlo Rendering ()

Modern games and graphics hardware (the newest RTX 20-series) have only recently started supporting ray tracing, so is this actually a very new topic and field still being developed? Ray tracing was presented as very advanced technology when those companies introduced their new products.

Lockbrains commented on slide_055 of Variance Reduction ()

So does this diagram mean that lighter regions carry more "importance", so we should take more samples over those areas?

Lockbrains commented on slide_037 of Dynamics and Time Integration ()

Symplectic looks kind of complex to me, and I agree with the difference mentioned by outousan.
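
For what it's worth, symplectic Euler is only a one-line change from explicit Euler: update the velocity first, then use the *new* velocity to update the position. A minimal sketch for a unit-mass spring (my own toy example with x'' = -x, not from the slides):

```python
def explicit_euler(x, v, dt):
    # Explicit Euler: both updates use the old state.
    return x + dt * v, v + dt * (-x)

def symplectic_euler(x, v, dt):
    # Symplectic Euler: update velocity first, then use the NEW velocity.
    v = v + dt * (-x)
    return x + dt * v, v

def energy_after(step, n=10000, dt=0.01):
    # Integrate x'' = -x from (x, v) = (1, 0) and report the final energy.
    x, v = 1.0, 0.0
    for _ in range(n):
        x, v = step(x, v, dt)
    return 0.5 * (x * x + v * v)
```

Starting energy is 0.5; explicit Euler's energy grows a little every step (the motion spirals outward), while symplectic Euler's stays bounded near 0.5, which is the difference outousan was pointing at.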

jasonx commented on slide_027 of The Rendering Equation ()

Blender uses "Principled BRDF" https://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf

wenere commented on slide_027 of Optimization ()

A comment from last semester pretty much answers the question of where the Hessian inverse comes from: http://15462.courses.cs.cmu.edu/fall2019/lecture/optimization/slide_027

siqiwan2 commented on slide_036 of Dynamics and Time Integration ()

Do the stability conclusions here only hold for an ODE like this one? What about other generic ODEs?

wenere commented on slide_042 of Introduction to Animation ()

I'm not so sure why the natural spline does not have locality here. Maybe someone can give an example?

wenere commented on slide_035 of Variance Reduction ()

Is it somehow like Russian Roulette?

Lavender commented on slide_026 of Optimization ()

Thanks! The video is very helpful

msfernan commented on slide_041 of Color ()

"The CMYK model works by partially or entirely masking colors on a lighter, usually white, background. The ink reduces the light that would otherwise be reflected. Such a model is called subtractive because inks "subtract" the colors red, green and blue from white light. White light minus red leaves cyan, white light minus green leaves magenta, and white light minus blue leaves yellow"- from Wikipedia

I don't understand why it's multiplicative, though.
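
One way to see why it's multiplicative: each ink layer *transmits* a fraction of each of R, G, B, and stacking layers multiplies those fractions together. A toy sketch (my own example, assuming ideal inks on a white background):

```python
def reflect(background, *inks):
    # Each ink transmits a fraction of (R, G, B); stacked layers multiply.
    r, g, b = background
    for tr, tg, tb in inks:
        r, g, b = r * tr, g * tg, b * tb
    return (r, g, b)

WHITE   = (1.0, 1.0, 1.0)
CYAN    = (0.0, 1.0, 1.0)   # absorbs red, transmits green and blue
MAGENTA = (1.0, 0.0, 1.0)   # absorbs green
YELLOW  = (1.0, 1.0, 0.0)   # absorbs blue

print(reflect(WHITE, CYAN))           # white minus red -> (0, 1, 1), i.e. cyan
print(reflect(WHITE, CYAN, MAGENTA))  # only blue survives -> (0, 0, 1)
```

So "subtractive" describes what each ink removes, but the math of layering is per-channel multiplication of transmittances.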

Lavender commented on slide_030 of Monte Carlo Rendering ()

We implemented this in A3, and uniform sampling has a worse visual appearance than cosine-weighted sampling.
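
For reference, one common way to draw cosine-weighted directions is Malley's method: sample the unit disk uniformly, then project up to the hemisphere. A minimal sketch (my own, not the A3 starter code):

```python
import math, random

def cosine_weighted_sample():
    # Malley's method: pick a uniform point on the unit disk...
    r = math.sqrt(random.random())
    phi = 2.0 * math.pi * random.random()
    x, y = r * math.cos(phi), r * math.sin(phi)
    # ...and project it up to the hemisphere; the resulting direction
    # has pdf cos(theta) / pi, matching the cosine term in the estimator.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)
```

Because samples already follow the cosine factor, fewer of them are "wasted" near the horizon where the contribution is small, which is why it looks less noisy than uniform sampling at the same sample count.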

Lavender commented on slide_061 of Variance Reduction ()

This idea makes sense because most furniture has diffuse surfaces

Lavender commented on slide_029 of Introduction to Animation ()

This idea of interpolation avoids repetitive work done by artists :)

Lavender commented on slide_004 of Dynamics and Time Integration ()

Surprisingly simple animation equation

WJM commented on slide_016 of Digital Geometry Processing ()

But there is so much that could be considered "good." Some meshes could be faster to render, or faster to edit. Some could be easier for a user to interface with and edit, while others could be more compressible and take less file storage. Some meshes could be manifold so that they can be 3D printed, while others could be non-manifold, perhaps to look better in a render.

WJM commented on slide_007 of Meshes and Manifolds ()

Apparently some hexagonal photo-sensors exist. One I found is called Hawksbill. It would be interesting to see if this eventually takes off some day.

WJM commented on slide_027 of Introduction to Geometry ()

I would say yes, .dae is explicit. And the HalfEdgeMesh is also explicit. I would even argue the listed examples are also data structures; HalfEdgeMesh is just a bit more specific as to the implementation.

WJM commented on slide_056 of Depth and Transparency ()

I wonder how different the integrated GPU pipelines are from the discrete cards. I know the discrete cards can utilize a lot more power, heat capacity, and die size. But are integrated GPUs just scaled-down implementations, or completely different designs that meet the constraints (and use the advantages) of being on the same die as the CPU?

Looking this up, I found that the texture mapping unit (TMU) was at one point a discrete processor, but is now typically implemented as a stage in the GPU pipeline. I think it boils down to the need to do a very large matrix multiply. And I imagine there are a lot of hardware shortcuts this unit can take, since pure accuracy is not a requirement for texture mapping; speed is typically a higher priority.

WJM commented on slide_030 of 3D Rotations ()

I'd be interested to see if fractals could be a way of compression (by defining the image as the fundamental math property and building from there).

siqiwan2 commented on slide_050 of The Rendering Equation ()

Never mind. I might have misunderstood the concepts. I treated the solid angle more like a "unit area" on a sphere than a set of "directions". In that case, I was confused about which sphere this "unit area" refers to.

siqiwan2 commented on slide_050 of The Rendering Equation ()

Shouldn't this dwi be the solid angle of the hemisphere around the light source? But according to the equation, it is the solid angle at the photon's hit point.

SnackMixer commented on slide_027 of Optimization ()

The Hessian captures the second-order derivatives used in optimization, and the Hessian inverse appears in the second-order (Newton) update step.
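
Concretely, the Hessian inverse shows up in the Newton step x ← x − H⁻¹∇f(x). A tiny sketch on a 2-D quadratic (my own example, not from the slides), where one step lands exactly on the minimum:

```python
def newton_step(x, grad, hess_inv):
    # Newton's method: x_new = x - H^{-1} grad f(x).
    gx, gy = grad(x)
    (a, b), (c, d) = hess_inv
    return (x[0] - (a * gx + b * gy), x[1] - (c * gx + d * gy))

# f(x, y) = x^2 + 2 y^2: gradient (2x, 4y), Hessian diag(2, 4).
grad = lambda p: (2.0 * p[0], 4.0 * p[1])
hess_inv = ((0.5, 0.0), (0.0, 0.25))

print(newton_step((3.0, -1.0), grad, hess_inv))  # one step reaches (0, 0)
```

For a quadratic, the Hessian inverse rescales the gradient so that one step solves the problem exactly; for general functions it just gives a much better step direction than plain gradient descent.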

@wenere I think this is a holdover from CRT TVs, which would 'draw' the screen from the top left. I think computer graphics held onto this convention.

WJM commented on slide_054 of Intro to Sampling ()

@tiffany2 I found this paper that goes into parallelizing super-sampling, specifically using the method JCDenton suggested of only performing it on edge pixels. https://ieeexplore.ieee.org/document/651518

WJM commented on slide_015 of Vector Calculus (P)Review ()

Very useful slide, and explains this well

WJM commented on slide_058 of Linear Algebra (P)Review ()

I found that a lot of the matrix operations created more readable and reusable code. But once you reduce the math, I don't think there is a difference in performance. One example: doing transformations with matrices was easier with a simple matrix multiply.
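
As a small illustration of the readability point, composing a rotation and a translation into one 2-D homogeneous matrix (my own toy example) means every later point only needs one matrix-vector product:

```python
import math

def mat_mul(A, B):
    # 3x3 matrix product for 2-D homogeneous transforms.
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def translation(tx, ty):
    return [[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]]

def apply(M, x, y):
    # Apply a homogeneous transform to the point (x, y).
    return (M[0][0] * x + M[0][1] * y + M[0][2],
            M[1][0] * x + M[1][1] * y + M[1][2])

# "Translate by (1, 0), then rotate 90 degrees" collapses into one matrix.
M = mat_mul(rotation(math.pi / 2), translation(1.0, 0.0))
```

The composed matrix reads like the sentence describing it, which is the reusability win; after constant folding, the arithmetic is the same as the hand-expanded version.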

WJM commented on slide_045 of Introduction ()

To add to justaddwater's comment: I also wonder if current graphics implementations do this today. Are there programs that consider the monitor output's design and change their algorithms accordingly?

siqiwan2 commented on slide_014 of Color ()

The previous slides say color is frequency, so why is this spectrum measured by wavelength? If the human eye decides color by frequency, then the same wavelength could be observed as a different color in different media, say, water and vacuum.

Fjorge commented on slide_051 of Color ()

If you look at the background, there are artifacts there too; it's just that the artifacts are most noticeable with abrupt changes in color.

Fjorge commented on slide_037 of Spatial Data Structures ()

If we have a massive scene to render, would it be possible to use a combination of data structures to represent the meshes? For example, the scene might have a lot of non-uniformly distributed primitives, but include a mountain in a separate region. Would representing just the mountain with a uniform grid and everything else with a BVH be beneficial? Or would it be too costly to figure out which regions are and aren't uniformly distributed in order to create the grid? Or is this just a dumb idea to begin with haha

JCDenton commented on slide_027 of Dynamics and Time Integration ()

I tried keliu's approach with the quiz, and found it effective.

JCDenton commented on slide_019 of Optimization ()

Yeah, I've used these for SVM optimization.

JavaSwing commented on slide_050 of Introduction to Animation ()

It seems like the computation would get more expensive as we increase the "dimension" of our points. What do we do for very complex systems?

JavaSwing commented on slide_058 of Variance Reduction ()

Is it guaranteed that you will only need two things per column? What if we had one column that was much less than the mean and the rest were only one unit above it?

JavaSwing commented on slide_027 of Monte Carlo Rendering ()

If this were a point light rather than an area light, would importance sampling lead to zero noise even at one light sample, since there would be no randomness?

Ace commented on slide_034 of Introduction to Animation ()

Oh wait, that's a B-spline, haha

Ace commented on slide_034 of Introduction to Animation ()

Can you add an error term to the second condition, so that the interpolation equals the key point up to some error E?

msfernan commented on slide_029 of Optimization ()


More intuition about forward and inverse kinematics.

Forward kinematics (angles to position).
What you are given: the length of each link and the angle of each joint.
What you can find: the position of any point (i.e., its (x, y, z) coordinates).

Inverse kinematics (position to angles).
What you are given: the length of each link and the position of some point on the robot.
What you can find: the angles of each joint needed to reach that position.
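
The forward direction is easy to write down directly: joint angles accumulate along the chain, and each link extends it. A 2-D planar-arm sketch (my own toy example, not from the slides):

```python
import math

def forward_kinematics(lengths, angles):
    # End-effector (x, y) of a planar arm; each joint angle is relative
    # to the previous link, so the angles accumulate along the chain.
    x = y = theta = 0.0
    for length, a in zip(lengths, angles):
        theta += a
        x += length * math.cos(theta)
        y += length * math.sin(theta)
    return (x, y)

# Two unit links, both joints at 90 degrees: up, then left.
print(forward_kinematics([1.0, 1.0], [math.pi / 2, math.pi / 2]))  # ~(-1, 1)
```

Inverse kinematics then has to invert this map, which generally has no closed form for long chains; that's why the slides treat it as an optimization problem over the joint angles.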