silentQ

I can't help but think that at this point, machine learning could actually be useful in making smooth, clean images. If not to help choose which rays to trace, you could use it to polish an otherwise grainy rendered image.

I have used Pix2Pix for things like this in the past with great success. On a fairly small dataset of "grainy" and "smooth" images (which could well be photos), it can learn to smooth pictures almost perfectly. (It can also do more complicated things, like "unblur" a censored face or increase the resolution of a low-quality image; I am a big Pix2Pix/CycleGAN fan).
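
Roughly, the paired setup described here could look like the sketch below. It is a minimal stand-in, not the actual Pix2Pix GAN: a tiny residual CNN trained with an L1 loss, and the random tensors are placeholders for a real dataset of grainy/clean image pairs.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Small residual CNN standing in for a pix2pix-style generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict a residual and add it back, so the network only has to
        # learn the noise rather than reproduce the whole image.
        return x + self.net(x)

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Placeholder batch standing in for a real DataLoader of (grainy, clean) pairs.
grainy = torch.rand(4, 3, 128, 128)   # low-quality inputs
clean = torch.rand(4, 3, 128, 128)    # clean targets

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(grainy), clean)
    loss.backward()
    opt.step()
```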

Anyway, is this sort of thing (maybe a different network) done in practice? If not, why not? It seems like it might be easier than some of these more complicated rendering strategies.

jzhanson

@silentQ That makes a lot of sense. I'm also curious whether it's done in practice, or whether it could be something to explore in the future.

I could imagine that the availability of training data for different types of scenes might influence the effectiveness of smoothing pictures; perhaps a very wide and varied dataset would be required to train a reliable machine learning smoother. Of course, this brings to mind the idea of using some of these renderers to generate both a grainy and a smooth picture of any given scene to build a dataset, then training a model (or fine-tuning an existing one) on that specific scene.
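
A rough sketch of that data-generation idea is below: render each scene twice, once at a low sample count for the "grainy" input and once at a high sample count for the "smooth" target. Here `render` and `scenes` are hypothetical stand-ins for whatever path tracer and scene list you happen to have.

```python
import numpy as np

def make_pairs(scenes, render, low_spp=4, high_spp=1024):
    """Build (grainy, clean) training pairs by rendering at two sample counts."""
    pairs = []
    for scene in scenes:
        grainy = render(scene, samples_per_pixel=low_spp)    # fast, noisy
        clean = render(scene, samples_per_pixel=high_spp)    # slow, converged
        pairs.append((np.asarray(grainy), np.asarray(clean)))
    return pairs
```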

Then, the pipeline for a "renderer" might look something like this: for a given scene in a game or a movie, train a smoothing model as above; then, in "real time", render a grainy image and use the machine learning model to smooth it over. I wonder whether running the machine learning model would cost more or less computationally than a more complex renderer...
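
That "render grainy, then denoise" loop might be sketched as below, with `render`, `scene`, and the trained `model` being the hypothetical pieces from the earlier sketches; the timing print is just one way to start comparing the denoising cost against spending those milliseconds on more samples instead.

```python
import time
import torch

def render_frame(scene, render, model, low_spp=4):
    """Render a noisy frame cheaply, then clean it up with the trained model."""
    grainy = torch.as_tensor(render(scene, samples_per_pixel=low_spp),
                             dtype=torch.float32)
    x = grainy.permute(2, 0, 1).unsqueeze(0)        # HWC image -> NCHW batch
    with torch.no_grad():
        start = time.perf_counter()
        denoised = model(x)
        print(f"denoise pass took {time.perf_counter() - start:.3f}s")
    return denoised.squeeze(0).permute(1, 2, 0)     # back to HWC
```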