Slide 42 of 46
tarangs

We implemented a version of an automatic differentiation framework in 11-785 (11-485) in the first few assignments. The goal was to mimic the autograd framework from PyTorch. It took a lot of time to make sense of the idea and also to implement it correctly. (Totally agree with the cons mentioned in the lecture — too many operator overloads!) But once it was working properly, it was surprising and satisfying to see how convenient the backward pass (gradient/derivative computation) became thanks to autograd!
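For anyone curious what "too many operator overloads" looks like in practice, here is a minimal sketch of reverse-mode autograd in the PyTorch style. This is a hypothetical toy class (`Value` and all its names are my own, not the actual assignment code): each overloaded operator records its parents and local derivatives, and `backward()` walks the graph in reverse topological order applying the chain rule.

```python
class Value:
    """Toy scalar that records the ops applied to it so gradients can flow back.
    A sketch only -- a real framework overloads many more operators."""

    def __init__(self, data, parents=(), local_grads=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents          # Values this one was computed from
        self._local_grads = local_grads  # d(self)/d(parent) for each parent

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        # d(a*b)/da = b, d(a*b)/db = a
        return Value(self.data * other.data, (self, other),
                     (other.data, self.data))

    def backward(self):
        # Topologically order the recorded graph, then apply the chain rule
        # from this node back to the leaves.
        order, seen = [], set()

        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)

        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            for p, g in zip(v._parents, v._local_grads):
                p.grad += g * v.grad


x = Value(3.0)
y = Value(4.0)
z = x * y + x          # z = 15; dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
```

Even with just `+` and `*`, the boilerplate pattern is visible: every new op means another overload plus its local-derivative rule. But once that's done, the caller just writes forward math and gets gradients for free.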