At what point do we start getting diminishing returns, like in our discussion of anti-aliasing, where taking many more samples only changes the color by smaller and smaller amounts?
jefftan
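Regarding where diminishing returns kick in: for a plain Monte Carlo estimator the standard error falls off as 1/sqrt(N), so quadrupling the sample count only halves the error. A minimal Python sketch below (a toy 1D integral, not the course renderer) shows the rate empirically:

```python
import random

def f(x):
    # Simple integrand with a known answer: the integral of x^2 over [0, 1] is 1/3
    return x * x

TRUE_VALUE = 1.0 / 3.0

def mc_estimate(n):
    # Plain Monte Carlo estimate of the integral using n uniform samples
    return sum(f(random.random()) for _ in range(n)) / n

random.seed(0)
for n in [100, 400, 1600, 6400, 25600]:
    # Average the absolute error over a few trials to smooth out the randomness
    err = sum(abs(mc_estimate(n) - TRUE_VALUE) for _ in range(20)) / 20
    print(f"N = {n:6d}   mean |error| ~ {err:.5f}")

# Each 4x increase in N roughly halves the error (the 1/sqrt(N) rate), which is
# exactly the diminishing-returns behavior: another digit of accuracy costs 100x the work.
```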
How do we mathematically express the idea of "getting the right image" for various ray sampling strategies?
ant123
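One standard way to make "getting the right image" precise (the usual Monte Carlo formulation, independent of any particular sampling strategy) is to ask that the estimator be unbiased and that its variance vanish as the sample count grows:

```latex
% Pixel value as an integral, I = \int f(x)\,dx, estimated from N samples X_i
% drawn with probability density p:
F_N \;=\; \frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)}
% "Right image" in expectation (the estimator is unbiased):
\mathbb{E}[F_N] \;=\; \int f(x)\,dx \;=\; I
% and the error shrinks as samples are added:
\mathrm{Var}[F_N] \;=\; \frac{\sigma^2}{N}
\quad\Longrightarrow\quad
\text{standard error} \;=\; \frac{\sigma}{\sqrt{N}} \;\to\; 0
```

Different unbiased sampling strategies change only the per-sample variance sigma^2, not the expected value, so they all converge to the same image, just at different speeds.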
How is it possible that we don't have any aliasing issues whatsoever with a large number of samples? Is it just that the difference gets smaller with more samples (to the point where we can't distinguish between the real image and the reconstructed version)?
dab
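One way to see why random sampling trades aliasing for noise (a toy 1D sketch, not the lecture's example): regularly spaced samples of a high-frequency signal can all land on the same phase and give a consistently wrong answer, while random samples are wrong by a different amount each time, which shows up as noise that averages out:

```python
import random, math

def signal(x):
    # High-frequency signal whose true average over [0, 1] is 0.5
    return 0.5 + 0.5 * math.cos(2.0 * math.pi * 64.0 * x)

N = 64  # same sample budget for both strategies

# Regular grid: every sample lands on a peak of the cosine -> aliased estimate of 1.0
regular = sum(signal(i / N) for i in range(N)) / N

# Random samples: unbiased but noisy estimate near 0.5
random.seed(1)
stochastic = sum(signal(random.random()) for _ in range(N)) / N

print("true average      = 0.5")
print(f"regular sampling  = {regular:.3f}   (structured error: aliasing)")
print(f"random sampling   = {stochastic:.3f}   (unstructured error: noise)")
```

So the error never fully disappears at finite N; it just becomes unstructured noise whose magnitude shrinks like 1/sqrt(N), which is why high sample counts look alias-free.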
Tracing each random sample out seems like it could take a long time, and the Law of Large Numbers only tells us that it will eventually converge. What makes this technique good in practice when we would have to perform so much extra work? Is it only because we don't want to, or can't, evaluate hard integrals?
ml2
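Part of why the extra work is worth it is that these integrals are not just hard but extremely high-dimensional (each bounce adds dimensions), and Monte Carlo is essentially the only general method whose convergence rate does not degrade with dimension. In the standard textbook form:

```latex
% Rendering equation (reflected radiance at point p in direction \omega_o):
L_o(p, \omega_o) \;=\; L_e(p, \omega_o)
  \;+\; \int_{\mathcal{H}^2} f_r(p, \omega_i, \omega_o)\, L_i(p, \omega_i)\, \cos\theta_i \; d\omega_i
% L_i on the right is itself an L_o from some other surface point, so expanding the
% recursion turns the image into an integral over paths of every length, i.e. a very
% high-dimensional domain. A tensor-product quadrature rule needs on the order of n^d
% evaluations in d dimensions, while the Monte Carlo error stays O(1/\sqrt{N}) in any dimension.
```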
What is the threshold for a good sample of the different types of light paths?
anon
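There is no fixed threshold; what matters is how well the sampling density matches the paths that actually carry light. The standard importance-sampling result (general Monte Carlo, not specific to any one renderer):

```latex
% Importance-sampled estimator and its variance:
F_N \;=\; \frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)},
\qquad
\mathrm{Var}[F_N] \;=\; \frac{1}{N}\left( \int \frac{f(x)^2}{p(x)}\,dx \;-\; I^2 \right)
% The variance is small when p is roughly proportional to f, i.e. when the sampler
% concentrates on the kinds of light paths that actually contribute; p \propto f
% would give zero variance.
```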
How is it determined when a 'right' image has been reached? Is there a threshold for an acceptable range of error?
Coyote
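There is no single "right image reached" moment, but one common heuristic is adaptive sampling: track a running mean and variance per pixel and stop once the estimated standard error drops below a tolerance. A sketch of the idea (the sample_radiance stub is hypothetical; a real renderer would trace a path there):

```python
import random, math

def sample_radiance():
    # Stand-in for tracing one random path through a pixel (hypothetical;
    # a real renderer would return that path's radiance contribution here)
    return random.random() ** 2

def render_pixel(tolerance=0.005, min_samples=16, max_samples=100_000):
    # Welford's online mean/variance; stop when the standard error of the
    # mean drops below the tolerance (a common adaptive-sampling heuristic).
    mean, m2, n = 0.0, 0.0, 0
    while n < max_samples:
        x = sample_radiance()
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
        if n >= min_samples:
            std_error = math.sqrt(m2 / (n - 1)) / math.sqrt(n)
            if std_error < tolerance:
                break
    return mean, n

random.seed(2)
value, samples_used = render_pixel()
print(f"pixel value ~ {value:.4f} after {samples_used} samples")
```

The tolerance plays the role of the "acceptable range of error" in the question; it is a quality/time trade-off rather than a universal constant.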
How many samples do we realistically need to take before our image is close to correct? I feel like there's a certain point where this would just be inefficient, since running the algorithm a million-plus times must take a while.
embl
I think this is similar to previous questions -- how do we determine a 'large' enough number?