Are there ever cases in graphics where the trapezoidal method is preferred over Monte Carlo?

dshernan

^ I'd imagine the slow execution for higher than 1 (or maybe 2) dimensions means that the trapezoid method is rarely preferred over Monte Carlo

dshernan

Do we generally assume that "bad" functions don't ever really occur in practice because we're using this to render objects and most of what we see in the real world are generous/practical shapes?

Joshua

I think the sampling points impact the Monte Carlo results a lot. What would be a good strategy for choosing good sample points as initialization?

Midoriya

If it only gives the correct value on average, should we run the algorithm multiple times to get a nice result? It doesn't seem very efficient if we have to repeat it.

norgate

How do we determine how many samples to use?

daria

How do we determine which points to sample?

Starboy

But since the samples chosen are random, shouldn't the error estimate be an expectation value? If I chose n really bad samples, like all n samples close to each other, I suppose the error could be very large?

Benjamin
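That intuition is right: the error of a Monte Carlo estimate is itself a random quantity, and what's usually quoted is its standard error, which the samples themselves can estimate. A minimal sketch in plain Python (uniform sampling on an interval; the test function and sample count are arbitrary choices for illustration):

```python
import random
import math

def mc_estimate(f, a, b, n, seed=0):
    """Uniform Monte Carlo estimate of the integral of f over [a, b],
    plus an estimate of its own standard error."""
    rng = random.Random(seed)
    samples = [f(rng.uniform(a, b)) * (b - a) for _ in range(n)]
    mean = sum(samples) / n
    # Sample variance of the per-sample estimates; the standard error of
    # the mean shrinks like 1/sqrt(n), so a run of "bad" samples is
    # possible but becomes increasingly unlikely as n grows.
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    stderr = math.sqrt(var / n)
    return mean, stderr

# True value of the integral of x^2 on [0, 1] is 1/3.
est, err = mc_estimate(lambda x: x * x, 0.0, 1.0, 10_000)
```

With a fixed seed this is deterministic, but over random seeds `est` lands within a few multiples of `err` of the true value with high probability, which is exactly the "correct on average, with quantifiable spread" behavior being asked about.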

Are there different methods of integration that also use random sampling? Are there any algorithms that work well for integration in higher dimensions which do not use random sampling?

goose_r_s

Is there an intelligent way to randomly place samples? Or is a uniform distribution commonly used in practice?

Coyote
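One common answer to "intelligent random placement" is stratified (jittered) sampling: divide the domain into equal cells and draw one random sample per cell, so samples can't clump. A 1D sketch, with an arbitrary test function chosen for illustration:

```python
import random

def jittered_1d(f, a, b, n, seed=0):
    """Stratified ("jittered") Monte Carlo on [a, b]: one uniform random
    sample inside each of n equal-width strata. Still unbiased, but
    clumping is impossible, so variance drops versus pure uniform sampling."""
    rng = random.Random(seed)
    h = (b - a) / n  # stratum width
    return h * sum(f(a + (i + rng.random()) * h) for i in range(n))

# True value of the integral of x^2 on [0, 1] is 1/3.
est = jittered_1d(lambda x: x * x, 0.0, 1.0, 1000)
```

So uniform sampling is the baseline, but renderers routinely use stratified, low-discrepancy, or otherwise structured randomness on top of it.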

Given that this is only correct on average, is there something that determines how many times the algorithm should be run to achieve an acceptable result? I'm sure the error goes down each time you combine previous results with new results, but how much does it go down each time?

manchas

How do we determine the weights/ranges to bias each sample by? It seems like you could choose these in multiple ways, which would affect the result.

dab
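The freedom in choosing weights is real, but it doesn't change the answer on average: this is importance sampling, where each sample drawn from a density p is weighted by 1/p. Any valid p gives an unbiased estimate; a p shaped like the integrand gives low variance. A small sketch (the densities and test integrand are illustrative choices):

```python
import random
import math

def importance_estimate(f, sample_p, pdf_p, n, seed=0):
    """Monte Carlo with samples drawn from density p: average f(x)/p(x).
    Any p that is nonzero wherever f is nonzero is unbiased; a p roughly
    proportional to f gives low variance."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sample_p(rng)
        total += f(x) / pdf_p(x)
    return total / n

# Integral of 2x on [0, 1] is exactly 1, estimated two ways:
uniform = importance_estimate(lambda x: 2 * x,
                              lambda rng: rng.uniform(0.0, 1.0),
                              lambda x: 1.0, 2000)
# p(x) = 2x matches f exactly (sampled via sqrt of a uniform), so every
# sample contributes f/p = 1: the estimate has zero variance.
matched = importance_estimate(lambda x: 2 * x,
                              lambda rng: math.sqrt(rng.random()),
                              lambda x: 2 * x, 2000)
```

Both converge to the same value; the choice of weighting affects only how fast and how noisily they get there.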

Are there functions that are hard to evaluate at a point but easy to integrate? If so, that would make Monte Carlo inefficient.

Concurrensee

To what extent will Monte Carlo integration introduce unwanted artifacts? Will this greatly impair integration quality?

TejasFX

I still don't understand why we would use Monte Carlo integration over something like the trapezoidal rule or sampling evenly across some interval. Does it have to do with the structure of the functions we try to integrate? I can imagine Monte Carlo (as it is random) could be bad in certain situations, because unlucky sampling could lead to a bad picture.

minhsual
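The usual answer to the trapezoid-vs-Monte-Carlo question is cost scaling with dimension: a tensor-product trapezoid grid needs a number of evaluations that grows exponentially in the dimension, while Monte Carlo's error rate is n^(-1/2) in any dimension. A rough sketch of that cost structure (the 6D integrand is an arbitrary example):

```python
import random
from itertools import product

def mc_nd(f, d, n, seed=0):
    """Monte Carlo over the unit d-cube; RMS error shrinks like n**-0.5
    regardless of the dimension d."""
    rng = random.Random(seed)
    return sum(f([rng.random() for _ in range(d)]) for _ in range(n)) / n

def trapezoid_nd(f, d, k):
    """Tensor-product trapezoid rule with k+1 nodes per axis: costs
    (k+1)**d evaluations, so its k**-2 error is n**(-2/d) in the total
    sample count n. That rate loses to n**-0.5 once d exceeds 4."""
    total = 0.0
    for idx in product(range(k + 1), repeat=d):
        w = 1.0
        for i in idx:
            w *= 0.5 if i in (0, k) else 1.0  # endpoint nodes get half weight
        total += w * f([i / k for i in idx])
    return total / k ** d

# Integral of sum(x) over [0,1]^6 is 3.0.  Even a coarse 3-nodes-per-axis
# trapezoid grid already costs 3**6 = 729 evaluations, and each extra
# dimension multiplies that cost; Monte Carlo can use any budget at the
# same n**-0.5 rate.
f = lambda x: sum(x)
t = trapezoid_nd(f, 6, 2)  # 729 evaluations
m = mc_nd(f, 6, 729)       # same evaluation budget
```

Rendering integrals (over paths, lens apertures, area lights, time) are effectively high- or infinite-dimensional, which is why Monte Carlo dominates there despite its noise.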

If we are dealing with the average effect of sampled values, does that mean we will have to take more samples than with the trapezoid rule?
