Wow, this is one of the first real applications of Markov chains I've seen since I took PnC 3 semesters ago. This is pretty cool. One thing I'm not completely clear on, though: when I learned about Markov chains, I was taught to think of them as states and transition probabilities between those states. So are the "states" here the locations of the different samples? And how do we get the probabilities for those? Also, given that there are so many samples, I'm assuming this isn't implemented with a big matrix of all the state transition probabilities. I could be wrong.
I think the state here is defined by the function f; it could be, e.g., the darkness of the pixel at position x (like the example on the next slide).
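To the matrix question: a minimal sketch of a generic Metropolis-style sampler (hypothetical names and target function, not the actual implementation from the slides). The "state" is just the current sample position x, and the transition probabilities are never stored anywhere; they're defined implicitly by a random proposal step plus an accept/reject test against f, so no state-transition matrix is ever built.

```python
import math
import random

def f(x):
    # Hypothetical target: "darkness" at position x. Any nonnegative
    # function works; samples end up distributed proportionally to f.
    return math.exp(-(x - 0.5) ** 2 / 0.02)

def metropolis(f, n_samples, x0=0.5, step=0.1, seed=0):
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        # Propose a symmetric random move. This proposal plus the
        # acceptance test below implicitly defines the transition
        # probabilities between states -- no matrix needed.
        x_new = x + rng.uniform(-step, step)
        # Accept the move with probability min(1, f(x_new) / f(x)).
        if f(x_new) >= f(x) or rng.random() < f(x_new) / f(x):
            x = x_new
        samples.append(x)
    return samples

samples = metropolis(f, 20000)
# The sample density concentrates where f is large (near x = 0.5 here).
mean = sum(samples) / len(samples)
```

The key point for the scalability question: the chain only ever needs the current state and the ability to evaluate f, so the (conceptually enormous) transition matrix never has to be materialized.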