Are we able to use a combination of IK and FK?
I like the architecture of Zaha Hadid :) She's so brilliant.
Though surprisingly simple, we still need numerical integration LOL
Isn't this the basis for neural networks?
What do the equations look like to describe this?
Modern games and graphics cards (the newest RTX 20 series) have only recently started supporting ray tracing, so is this actually a very new topic and field still being developed? Ray tracing seemed like very advanced technology to me when those companies introduced their new products.
So this diagram means that brighter areas carry more "importance," so we should take more samples over those areas?
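To make that concrete, here's a rough sketch (my own toy Python, not from the slides) of sampling indices in proportion to brightness via an inverse-CDF lookup, so brighter entries get picked more often:

```python
import random

# Toy importance sampler: pick indices with probability proportional
# to their brightness values, using a cumulative distribution table.
def sample_by_brightness(brightness, n_samples, rng=random.random):
    total = sum(brightness)
    # Build the cumulative distribution once.
    cdf = []
    acc = 0.0
    for b in brightness:
        acc += b / total
        cdf.append(acc)
    samples = []
    for _ in range(n_samples):
        u = rng()
        # Find the first index whose CDF value exceeds u (linear scan for clarity).
        for i, c in enumerate(cdf):
            if u < c:
                samples.append(i)
                break
    return samples
```

A real renderer would do this in 2D over pixel luminance (e.g., for environment maps), but the idea is the same.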
Symplectic looks kind of complex to me, and I agree with the difference mentioned by outousan.
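To see why people bother with symplectic integrators, here's a small sketch (my own example, on a unit spring x'' = -x): explicit Euler gains energy every step, while symplectic Euler just updates velocity first and keeps the energy bounded.

```python
# Compare explicit (forward) Euler with symplectic Euler on x'' = -x.
def explicit_euler(x, v, h, steps):
    for _ in range(steps):
        # Both updates use the OLD state.
        x, v = x + h * v, v - h * x
    return x, v

def symplectic_euler(x, v, h, steps):
    for _ in range(steps):
        v = v - h * x   # update velocity first...
        x = x + h * v   # ...then position with the NEW velocity
    return x, v

def energy(x, v):
    return 0.5 * (x * x + v * v)
```

With x=1, v=0, h=0.1, explicit Euler's energy grows by a factor of (1 + h^2) every step, while symplectic Euler's stays near the true value, which is the difference that matters for animation stability.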
Blender uses "Principled BRDF" https://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf
A comment from last semester pretty much answers the question of where the Hessian inverse comes from:
Do the stability conclusions here only hold for an ODE like this one? What about other generic ODEs?
I'm not so sure why the natural spline doesn't have locality here. Maybe someone can give an example?
Is it somehow like Russian Roulette?
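For anyone comparing: Russian roulette kills a path with probability (1 - p) and reweights survivors by 1/p, so the estimator stays unbiased. A tiny sketch (my own toy example):

```python
import random

# Russian roulette termination: survivors are boosted by 1/p so the
# expected value is unchanged.
def roulette(value, p, u):
    """Unbiased contribution given survival probability p and u in [0, 1)."""
    if u < p:
        return value / p   # survivor: reweight to compensate
    return 0.0             # terminated: contributes nothing
```

Averaging over many random draws recovers the original value in expectation, which is the whole trick.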
Thanks! The video is very helpful
"The CMYK model works by partially or entirely masking colors on a lighter, usually white, background. The ink reduces the light that would otherwise be reflected. Such a model is called subtractive because inks "subtract" the colors red, green and blue from white light. White light minus red leaves cyan, white light minus green leaves magenta, and white light minus blue leaves yellow"- from Wikipedia
I don't understand why it's multiplicative.
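One way to see it: each ink layer transmits some fraction of each RGB channel, and stacking layers multiplies those fractions channel by channel. A quick sketch (the transmittance values below are idealized, not real ink data):

```python
# Subtractive ("ink") mixing is multiplicative: each layer scales the
# light that passes through it, channel by channel.
def apply_ink(light_rgb, transmittance_rgb):
    return tuple(l * t for l, t in zip(light_rgb, transmittance_rgb))

white   = (1.0, 1.0, 1.0)
cyan    = (0.0, 1.0, 1.0)  # fully absorbs red
magenta = (1.0, 0.0, 1.0)  # fully absorbs green
```

Cyan over white leaves cyan; cyan plus magenta leaves blue, since each ink removes its channel from whatever light remains.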
We implemented this in A3, and uniform sampling has a worse visual appearance than cosine-weighted sampling.
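For reference, here's a sketch of cosine-weighted hemisphere sampling via Malley's method (sample a unit disk uniformly, then project up onto the hemisphere); this may differ from how A3 structures it, but the math is standard. The resulting pdf is cos(theta)/pi, which cancels the cosine term in the estimator and cuts variance versus uniform sampling.

```python
import math, random

# Malley's method: uniform disk sample projected onto the hemisphere
# gives a direction with pdf cos(theta) / pi.
def cosine_sample_hemisphere(u1, u2):
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    x = r * math.cos(phi)
    y = r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u1))  # z = cos(theta)
    return (x, y, z)
```

A sanity check: under this pdf the expected value of cos(theta) is 2/3, versus 1/2 for uniform hemisphere sampling.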
This idea makes sense because most furniture is mostly diffuse.
This idea of interpolation avoids repetitive work done by artists :)
Surprisingly simple animation equation
But there is so much that could be considered "good." Some meshes could be faster to render, or faster to edit. Some could be easier for a user to interface with and edit, while others are more compressible and take less file storage. Some meshes could be manifold so that they can be 3D printed, while others might give that up, perhaps to look better in a render.
Apparently some hexagonal photo-sensors exist. One I found is called Hawksbill. It would be interesting to see if this eventually takes off some day.
I would say yes, .dae is explicit. And the HalfEdgeMesh is also explicit. I would even argue the listed examples are also data structures; HalfEdgeMesh is just a bit more specific as to the implementation.
I wonder how different the integrated GPU pipelines are from the discrete cards. I know the discrete cards can utilize a lot more power, heat capacity, and die size. But are integrated GPUs just scaled-down implementations, or completely different, to meet the constraints (and use the advantages) of being on the same die as the CPU?
Looking this up, I found that the texture mapping unit (TMU) was at one point a discrete processor, but now it's typically implemented as a stage in the GPU pipeline. I think it just boils down to the need to do a very large matrix multiply. And I imagine there are a lot of hardware shortcuts this unit can take, since pure accuracy is not a requirement for texture mapping; speed is typically a higher priority.
I'd be interested to see if fractals could be a means of compression (by defining the image through its underlying mathematical structure and building from there).
Never mind. I might have misunderstood the concepts. I treated a solid angle more like a "unit area" on a sphere than as a set of "directions." In that case, I was confused about which sphere this "unit area" refers to.
Shouldn't this dωi be the solid angle of the hemisphere around the light source? But according to the equation, it's the solid angle at the photon's hit point.
The Hessian holds the second-order derivatives used during optimization, and the Hessian inverse appears in the second-order (Newton-style) update step.
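A tiny sketch of where the inverse shows up (my own example): Newton's method rescales the gradient by the inverse Hessian, x_new = x - H^{-1} grad. For a simple quadratic f(x, y) = 0.5*(a*x^2 + b*y^2), one Newton step lands exactly at the minimum, while a plain gradient step does not.

```python
# Newton step vs. gradient step on f(x, y) = 0.5 * (a*x^2 + b*y^2).
def newton_step(x, y, a, b):
    gx, gy = a * x, b * y          # gradient
    # Hessian is diag(a, b), so its inverse is diag(1/a, 1/b).
    return x - gx / a, y - gy / b

def gd_step(x, y, a, b, lr):
    return x - lr * a * x, y - lr * b * y
```

For non-quadratic objectives Newton's step is only exact locally, but that rescaling is exactly why the Hessian inverse appears in the update.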
@wenere I think this is a holdover from CRT TVs, which would 'draw' the screen from the top left. Computer graphics held onto this convention.
@tiffany2 I found this paper that goes into parallelizing super-sampling, specifically using the method JCDenton suggested of only performing on all edge pixels. https://ieeexplore.ieee.org/document/651518
Very useful slide, and explains this well
I found that a lot of the matrix operations created more readable and reusable code. But once you reduce the math, I think there is no difference in performance. One example: composing transformations was easier as a simple matrix multiply.
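To illustrate the readability point with a toy example of my own: a rotation followed by a translation collapses into one 3x3 homogeneous matrix, so applying "both" is a single matrix-vector multiply.

```python
import math

# Minimal 2D homogeneous transforms, composed by matrix multiply.
def matmul3(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(M, p):
    x, y = p
    v = (x, y, 1.0)  # homogeneous coordinates
    return (sum(M[0][k] * v[k] for k in range(3)),
            sum(M[1][k] * v[k] for k in range(3)))

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def translation(tx, ty):
    return [[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]]
```

Once `M = translation(...) @ rotation(...)` is baked, every point pays for one multiply no matter how many transforms were composed.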
To add to justaddwater's comment: I also wonder whether current graphics implementations do this today. Are there programs that consider the monitor's output design and change their algorithms accordingly?
The previous slides say color is frequency, so why is this spectrum measured by wavelength? If the human eye decides color by frequency, the same wavelength might be observed as different colors in different media, say, water versus vacuum.
If you look at the background, there are artifacts there too; it's just that the artifacts are most noticeable where there are abrupt changes in color.
If we have a massive scene to render, would it be possible to use a combination of data structures to represent the meshes? For example, the scene might have a lot of non-uniformly distributed primitives, but include a mountain in a separate region. Would representing just the mountain with a uniform grid and everything else with a BVH be beneficial? Or would it be too costly to figure out which regions are and aren't uniformly distributed to create the grid? Or is this just a dumb idea to begin with haha
I tried keliu's approach with the quiz, and found it effective.
Yeah, I've used these for SVM optimization.
It seems like the computation would get more expensive as we increase the "dimension" of our points. What do we do for very complex systems?
Is it guaranteed that you will only need two things per column? What if we had 1 column which was much less than the mean and the rest were only 1 unit above it?
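This sounds like the alias method, and if so, yes, two entries per column is guaranteed: the construction always pairs one "small" column (below the mean) with one "large" column (above it), so a large column that tops up many small ones just gets revisited and re-split each round. A sketch of the construction (my own Python, following Vose's variant as I understand it):

```python
# Alias-table construction: each column i keeps prob[i] of itself and
# fills the remainder from alias[i], so sampling needs one uniform pick
# of a column plus one biased coin flip.
def build_alias_table(probs):
    n = len(probs)
    scaled = [p * n for p in probs]
    prob = [0.0] * n
    alias = [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s = small.pop()
        l = large.pop()
        prob[s] = scaled[s]           # column s keeps this much of itself...
        alias[s] = l                  # ...and the rest comes from column l
        scaled[l] -= (1.0 - scaled[s])
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:           # leftovers are exactly full columns
        prob[i] = 1.0
    return prob, alias
```

So in your example, the one small column takes a chunk of one large column, that large column shrinks, and whichever list it falls into determines its next pairing; no column ever ends up holding more than its own outcome plus one alias.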
If this were a point light rather than an area light, would importance sampling lead to zero noise even at 1 light sample, since there would be no randomness?
Oh wait, that's a B-spline, haha
Can you add an error term to the second condition, so that you can have the interpolation equal the key point, up to some error, E?
More intuition about forward and inverse kinematics.

Forward kinematics (angles to position):
- Given: the length of each link and the angle of each joint
- Can find: the position of any point (i.e., its (x, y, z) coordinates)

Inverse kinematics (position to angles):
- Given: the length of each link and the position of some point on the robot
- Can find: the angles of each joint needed to obtain that position
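The forward direction is easy to write down directly. Here's a sketch for a planar arm (my own toy example, 2D instead of 3D): accumulate the joint angles along the chain and sum up the link vectors.

```python
import math

# Forward kinematics for a planar arm: given link lengths and joint
# angles, accumulate angles down the chain to get the end-effector position.
def forward_kinematics_2d(lengths, angles):
    x = y = theta = 0.0
    for L, a in zip(lengths, angles):
        theta += a            # each joint angle is relative to the previous link
        x += L * math.cos(theta)
        y += L * math.sin(theta)
    return x, y
```

The inverse direction has no such closed-form walk in general, which is why IK usually ends up as an optimization problem.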
This will help for the intuition behind gradient descent. The step size will influence the oscillation. https://www.youtube.com/watch?v=rIVLE3condE
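You can see the oscillation in a couple of lines (my own toy example on f(x) = x^2, gradient 2x): with step size below 1 the iterate decays; above 1 it overshoots the minimum every step and the sign flips with growing magnitude.

```python
# Gradient descent on f(x) = x^2: each step multiplies x by (1 - 2*lr),
# so lr < 1 converges and lr > 1 oscillates divergently.
def gradient_descent(x0, lr, steps):
    x = x0
    history = [x]
    for _ in range(steps):
        x = x - lr * 2.0 * x
        history.append(x)
    return history
```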
I'm also confused about the Hessian inverse. Some additional explanation would be good.
If we were to constrain the joint angles, how would that be implemented in a standard gradient descent procedure?
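One common answer (not necessarily what the lecture intends) is projected gradient descent: take an unconstrained gradient step on the end-effector error, then clamp each joint angle back into its allowed range. A sketch for a 2-link planar arm with a numerical gradient; the arm model and constants here are my own illustration:

```python
import math

# Planar-arm forward kinematics used by the IK solver below.
def end_effector(lengths, angles):
    x = y = theta = 0.0
    for L, a in zip(lengths, angles):
        theta += a
        x += L * math.cos(theta)
        y += L * math.sin(theta)
    return x, y

# Projected gradient descent: gradient step on squared end-effector
# error, then project each angle onto its [lo, hi] limits.
def ik_projected_gd(lengths, target, limits, lr=0.05, steps=2000):
    angles = [0.5 * (lo + hi) for lo, hi in limits]  # start mid-range
    eps = 1e-5

    def error(a):
        x, y = end_effector(lengths, a)
        return (x - target[0]) ** 2 + (y - target[1]) ** 2

    for _ in range(steps):
        base = error(angles)
        # Forward-difference approximation of the gradient.
        grad = []
        for i in range(len(angles)):
            bumped = list(angles)
            bumped[i] += eps
            grad.append((error(bumped) - base) / eps)
        # Step, then clamp ("project") back into the joint limits.
        for i, (lo, hi) in enumerate(limits):
            angles[i] = min(hi, max(lo, angles[i] - lr * grad[i]))
    return angles
```

The projection is what enforces the constraint; fancier options (Lagrange multipliers, penalty terms) exist, but clamping after each step is the simplest to bolt onto a standard gradient-descent loop.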