dvernet

I'm confused as to how the Dirac delta can have a nonzero integral if it's nonzero at only a single point of $\mathbb{R}$. Consider the following: $$ \int_{-\infty}^{\infty}\delta(x)\,dx $$ $$ = \int_{-\infty}^{0}0\,dx + \int_{0}^{0}\delta(x)\,dx + \int_{0}^{\infty}0\,dx $$ $$ = 0 + 0 + 0 $$ $$ = 0 $$ I understand that we've defined it to have an integral of 1, but how can we justify that?

kmcrane

The short answer is: because the Dirac delta is not a function; it is a distribution, and the statement about "integrating to one" really only makes sense in terms of measure-theoretic (Lebesgue-style) integration against the Dirac measure, which is quite different from the usual "area under a curve," i.e., Riemann integration. The notation we used on this slide is merely suggestive of how a Dirac delta behaves, and is a fairly standard abuse of notation. A formal discussion of distributions is (far) beyond the scope of this course, but if you're really interested you might take a look at 21-621 or 21-720.
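
To make that last point slightly more concrete (just a sketch; the formal theory belongs in those courses): a distribution is defined not by its pointwise values but by how it acts on nice "test" functions $\varphi$. The Dirac delta is the distribution defined by $$ \langle \delta, \varphi \rangle = \varphi(0), $$ and the familiar integral notation is shorthand for this pairing: $$ \int_{-\infty}^{\infty} \delta(x)\varphi(x)\,dx := \langle \delta, \varphi \rangle = \varphi(0). $$ Taking a test function $\varphi$ that equals 1 near the origin recovers the statement $\int_{-\infty}^{\infty} \delta(x)\,dx = 1$. The point is that the left-hand side is pure notation: there is no pointwise-defined integrand, so the term-by-term splitting above never gets off the ground.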

At an intuitive level, it's probably not too harmful to think of a Dirac delta as something that is zero almost everywhere but shoots up to infinity at the origin. A better way, perhaps, is to think of it as a machine: you feed it a function, and it spits back the value of that function at zero.
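
If it helps, you can see both behaviors numerically by standing in for $\delta$ with a family of ever-narrower Gaussians (a "nascent delta"). Here is a small NumPy sketch along those lines; the grid, the width values, and the test function $f = \cos$ (so $f(0) = 1$) are all arbitrary illustrative choices, not anything from the lecture:

```python
import numpy as np

def nascent_delta(x, eps):
    """Unit-mass Gaussian of width eps; approximates the Dirac delta as eps -> 0."""
    return np.exp(-x**2 / (2.0 * eps**2)) / (eps * np.sqrt(2.0 * np.pi))

# A fine grid on [-10, 10] and an arbitrary test function (illustrative choices).
x, dx = np.linspace(-10.0, 10.0, 2_000_001, retstep=True)
f = np.cos  # f(0) = 1

for eps in (1.0, 0.1, 0.01):
    g = nascent_delta(x, eps)
    mass = np.sum(g) * dx            # Riemann sum for the integral of g: stays ~1
    sifted = np.sum(f(x) * g) * dx   # Riemann sum for the integral of f*g: -> f(0)
    print(f"eps={eps}: integral of g ~ {mass:.5f}, integral of f*g ~ {sifted:.5f}")
```

As eps shrinks, the first column stays pinned at 1 (each Gaussian has unit mass no matter how narrow), while the second column converges to $f(0)$: exactly the "feed it a function, get back its value at zero" behavior. The pointwise limit of the Gaussians themselves is not a function, which is why the delta has to live in the larger world of distributions.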