Slide 51 of 55

I guess that would be why it's so hard to do real-time rendering of character interactions in games. Basically, we couldn't "touch" other characters in the game.


A bit unrelated: in the real world, when constructing the girl character, do we have to start from the mesh? Or do we start from the cube man and add details on top of it?


I couldn't connect how primitives are animated with what we learned in previous slides. How do we determine which parts of the object are animated by one function while other parts use another?
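One common answer (a hedged sketch, not the lecture's specific method, with hypothetical names) is that each rig part gets its own animation channel, i.e. its own function of time, and the kinematic hierarchy decides which channel drives which part. Here two joints of a toy 2D arm each sample their own piecewise-linear keyframe curve, and forward kinematics composes them down the chain:

```python
import math

def lerp_keys(keys, t):
    # keys: sorted list of (time, value); piecewise-linear sampling.
    if t <= keys[0][0]:
        return keys[0][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t <= t1:
            u = (t - t0) / (t1 - t0)
            return v0 * (1 - u) + v1 * u
    return keys[-1][1]

def arm_tip(t):
    # Each joint is driven by its own keyframe channel.
    shoulder = lerp_keys([(0.0, 0.0), (1.0, math.pi / 2)], t)
    elbow = lerp_keys([(0.0, 0.0), (1.0, -math.pi / 4)], t)
    # Forward kinematics: rotations compose down the chain,
    # so the elbow angle is measured relative to the upper arm.
    x = math.cos(shoulder) + math.cos(shoulder + elbow)
    y = math.sin(shoulder) + math.sin(shoulder + elbow)
    return (x, y)
```

So "which part uses which function" is just a matter of which channel is attached to which joint in the scene graph; the per-joint functions stay independent.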


When animating a character rig as shown above, I know the position of, e.g., a hand is sometimes relative to the body, but other times it could be keyframed relative to an outside object, like staying stationary on a tabletop while the rest of the body moves. How would interpolating across that switch in reference frame work on the computational side?
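One way this "parent switch" is often handled (a minimal sketch with hypothetical names, not a specific tool's implementation): evaluate the hand's position in each parent space, convert both into a common world space, and blend the world-space results over a short transition window so there is no pop when the parent changes.

```python
def to_world(parent_pos, local_pos):
    # Toy "transform": parent space is just a 2D translation here; a real
    # rig would multiply full 4x4 matrices down the kinematic chain.
    return (parent_pos[0] + local_pos[0], parent_pos[1] + local_pos[1])

def hand_world_pos(t, body_pos, switch_t=1.0, blend=0.2):
    hand_in_body = (0.5, 0.8)    # keyframed relative to the body
    hand_on_table = (2.0, 1.0)   # keyframed in world (table) space
    world_a = to_world(body_pos, hand_in_body)
    world_b = hand_on_table
    # Blend weight ramps 0 -> 1 across [switch_t, switch_t + blend].
    w = min(max((t - switch_t) / blend, 0.0), 1.0)
    return (world_a[0] * (1 - w) + world_b[0] * w,
            world_a[1] * (1 - w) + world_b[1] * w)
```

Before `switch_t` the hand follows the body; after the blend window it is fully pinned to the table, regardless of how the body moves.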


How much of the shape do we want to keep constant so that it doesn't fluctuate unintentionally?


In order to animate the movement of hair in a kinematic chain, do animators have to animate it as an object with transformations (like the other objects), or do they use some sort of physics-based simulation to capture the movement?
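The physics-based route is common for secondary motion like hair. A hedged sketch of the idea (not any particular studio's pipeline): treat a strand as a chain of point masses, integrate with Verlet, then project each segment back to its rest length so the chain doesn't stretch; the root particle is pinned to the (keyframed) head.

```python
def simulate_strand(root, n=4, seg=0.25, steps=60, dt=1.0 / 60.0):
    gravity = (0.0, -9.8)
    # Start the strand hanging straight down from the root.
    pos = [(root[0], root[1] - i * seg) for i in range(n)]
    prev = [p for p in pos]
    for _ in range(steps):
        # Verlet step: x' = 2x - x_prev + a*dt^2 (root stays pinned).
        for i in range(1, n):
            x, y = pos[i]
            px, py = prev[i]
            prev[i] = pos[i]
            pos[i] = (2 * x - px + gravity[0] * dt * dt,
                      2 * y - py + gravity[1] * dt * dt)
        # Constraint pass: restore each segment to its rest length.
        for i in range(1, n):
            ax, ay = pos[i - 1]
            bx, by = pos[i]
            dx, dy = bx - ax, by - ay
            d = (dx * dx + dy * dy) ** 0.5 or 1e-9
            s = (d - seg) / d
            if i == 1:
                pos[i] = (bx - dx * s, by - dy * s)  # root is pinned
            else:
                pos[i - 1] = (ax + 0.5 * dx * s, ay + 0.5 * dy * s)
                pos[i] = (bx - 0.5 * dx * s, by - 0.5 * dy * s)
    return pos
```

In practice the two approaches are mixed: the skeleton is keyframed, while hair and cloth are simulated on top, with the simulation's root constrained to follow the animated head.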


How do we generalize this for multiple character interaction? Do we just isolate each character and interpolate those separately?


In industry, do companies that do a lot of animation (Disney, for example) have generic base models for humans/animals, so that when creating a new character, a few simple modifications would be sufficient?


When animating something big like a movie, do the animators do this individually for each character, or do common animation sequences like walking have a sort of template to work off of?