If Monte Carlo integration error can be made O(n^{-1/2}), then why isn't it used over the trapezoid rule in the 1D or 2D cases? (at least, in most math classes)
Arthas007
@small_potato_ I guess randomization is hard for humans to carry out by hand. Also, in most math classes we won't handle such a large N (on the order of 10^6 to 10^9).
rlpo
@small_potato_ Also, I believe that if we do uniform sampling above a certain rate (the Nyquist rate), the error is also bounded well enough. But empirically, especially in imaging experiments such as SPAD imaging, Monte Carlo integration usually yields better results.
evannw
How many random samples are generally taken? Is there a heuristic we use?
idontknow
Is the random nature of Monte Carlo integration the reason why rendering with a low sample count can create a noisy image?
@small_potato__ There's a minor typo on this slide: the error in a Monte Carlo estimator is $O(n^{-1/2})$, i.e., it goes like $1/\sqrt{n}$, not $\sqrt{n}$. This makes sense: the more samples you take, the less error you should have.
In low dimensions, you can generally do a lot better with fixed quadrature points, as given by the trapezoid rule. The power of Monte Carlo is that this error rate is independent of dimension. So as you go up in dimension, trapezoid rule (say) will become asymptotically worse, whereas Monte Carlo will remain the same.
You may not think high dimensional problems really matter (we live in 3D, after all!), but "dimension" really has to do with the number of parameters used to define the quantity of interest. So for instance, in path tracing you may have many, many parameters that are used to specify a path with multiple bounces. So, you're really integrating radiance over a super high-dimensional space. That's one (of many) reasons Monte Carlo is so powerful for rendering.
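The dimension argument is easy to check numerically. Below is a minimal Python sketch (the integrand f(x) = ∏ xᵢ² and the sample budgets are illustrative choices, not from the thread): with a fixed budget of n evaluations, a tensor-product trapezoid rule only gets n^{1/d} points per axis, so its advantage shrinks as d grows, while plain Monte Carlo error stays near n^{-1/2} regardless of d.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Product of squares along the last axis; exact integral over [0,1]^d is (1/3)^d
    return np.prod(x ** 2, axis=-1)

def trapz1(vals, dx):
    # Composite trapezoid rule along the last axis
    return dx * (vals.sum(axis=-1) - 0.5 * (vals[..., 0] + vals[..., -1]))

def trapezoid_nd(d, pts):
    # Tensor-product trapezoid rule on [0,1]^d; cost grows as pts**d
    axes = [np.linspace(0.0, 1.0, pts)] * d
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)
    vals = f(grid)
    for _ in range(d):
        vals = trapz1(vals, 1.0 / (pts - 1))
    return float(vals)

def monte_carlo(d, n):
    # Plain Monte Carlo with n uniform samples in [0,1]^d; error ~ n**-0.5
    x = rng.random((n, d))
    return float(f(x).mean())

def exact(d):
    return (1.0 / 3.0) ** d

for d in (1, 4):
    n = 4096
    pts = round(n ** (1.0 / d))  # give both methods the same total budget
    err_trap = abs(trapezoid_nd(d, pts) - exact(d))
    err_mc = abs(monte_carlo(d, n) - exact(d))
    # Trapezoid wins decisively in 1D; the gap closes as d grows
    print(f"d={d}: trapezoid err={err_trap:.2e}, Monte Carlo err={err_mc:.2e}")
```

Running this shows the trapezoid error in 1D is orders of magnitude below Monte Carlo's, while at d = 4 the two are already comparable, matching the asymptotic argument above.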
@idontknow, yeah, exactly! You often need a pretty large number of samples to render a higher-quality image. Alternatively, many recent graphics cards implement denoising algorithms, which first do Monte Carlo rendering and then denoise the resulting image. An NVIDIA publication on this: https://research.nvidia.com/publication/interactive-reconstruction-monte-carlo-image-sequences-using-recurrent-denoising. Their implementation of this in production software is shown here in OptiX: https://developer.nvidia.com/optix-denoiser
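To see why low sample counts look noisy: each pixel is an independent Monte Carlo estimate, and the pixel-to-pixel spread of that estimate is the grain you see. A toy Python sketch (the shade() integrand here is a made-up stand-in for the rendering equation, not a real renderer):

```python
import numpy as np

rng = np.random.default_rng(1)

def shade(u):
    # Hypothetical per-pixel integrand, standing in for incoming radiance
    # over sampled directions; chosen only so the estimator has variance.
    return np.sin(2 * np.pi * u) ** 2 + 0.5

def render_pixel(n_samples):
    # One pixel's Monte Carlo estimate: the mean of n_samples random evaluations
    return shade(rng.random(n_samples)).mean()

for n in (4, 64, 1024):
    # Spread of the estimate across many "pixels" = visible noise;
    # it shrinks roughly like 1/sqrt(n)
    estimates = np.array([render_pixel(n) for _ in range(2000)])
    print(f"{n:5d} samples/pixel -> per-pixel std {estimates.std():.3f}")
```

Going from 4 to 1024 samples per pixel (a 256x budget) only cuts the noise by about 16x, which is exactly the n^{-1/2} rate from the slides and why denoisers are attractive.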