What is the difference between <u,v> and <texel_u,texel_v>? Is it that the former has ranges [0,1],[0,1] and the latter has ranges [0,W],[0,H]? Or am I not understanding this?

kayvonf

@jmrichar. I'm sorry, I initially misinterpreted your comment. You are exactly right. On this slide (u,v) is in the [0,1]^2 range, and then (texel_u, texel_v) is in the [0,W] x [0,H] range. I rewrote the slide to use texel_u, texel_v (rather than tu, tv) to make it jump out more.
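The conversion described above could be sketched as follows. This is a hypothetical helper, not the slide's actual code; the half-texel offset is a common convention (texel centers at integer coordinates) that I'm assuming here, and conventions do vary.

```python
def to_texel_coords(u, v, W, H):
    """Convert a normalized texture coordinate (u, v) in [0,1]^2 into a
    texel-space coordinate for a W x H texture. The 0.5 offset places
    texel centers at integer coordinates (an assumed convention)."""
    texel_u = u * W - 0.5
    texel_v = v * H - 0.5
    return texel_u, texel_v
```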

ShiaLaBeouf

I'd like to clarify my understanding of tri-linear filtering.

Here's my understanding of tri-linear filtering:

1) We have several mip-map levels, with various amounts of blurring/filtering in each level.

2) For every screen sample point (x, y) in our image, we need to calculate the maximum distance (called L) between the texture coordinate (texel_u, texel_v) corresponding to (x, y), and the texture coordinates of the 4 other screen sample points in the image that are closest to (x, y). So we calculate these 5 texture coordinates, and get the distance L as required.

3) Once we have L, we can calculate d as log2 of L. The floor and ceiling of d represent the mip-map levels we want to be looking into.

4) We perform bilinear filtering at the normalized texture coordinate (u,v) corresponding to (x,y), using both the mip-maps we determined to use in step 3. Each mip-map gives us a result.

5) We take the 2 results of the bilinear filtering obtained in step 4, and combine them, using the value of d as the measure of how to interpolate these 2 results. We then color the screen sample (x, y) based on these results.

Is my understanding correct?
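Steps 3-5 above could be sketched roughly as below. This is a simplified illustration with hypothetical function names, using single-channel textures stored as 2D lists (mip_levels[0] is full resolution, each later level half the size) and ignoring perspective correction and boundary conventions.

```python
import math

def bilinear_sample(tex, u, v):
    """Bilinear filtering of a single-channel texture at normalized (u, v)."""
    H, W = len(tex), len(tex[0])
    x, y = u * W - 0.5, v * H - 0.5        # texel-space position
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - x0, y - y0                # fractional weights

    def texel(i, j):
        # Clamp indices at the texture border (clamp-to-edge behavior).
        return tex[min(max(j, 0), H - 1)][min(max(i, 0), W - 1)]

    top = (1 - fx) * texel(x0, y0)     + fx * texel(x0 + 1, y0)
    bot = (1 - fx) * texel(x0, y0 + 1) + fx * texel(x0 + 1, y0 + 1)
    return (1 - fy) * top + fy * bot

def trilinear_sample(mip_levels, u, v, L):
    """Trilinear filtering: bilinear lookups in two adjacent mip levels,
    blended by the fractional part of d = log2(L)."""
    d = max(0.0, math.log2(max(L, 1e-8)))          # clamp so d >= 0
    lo = min(int(math.floor(d)), len(mip_levels) - 1)
    hi = min(lo + 1, len(mip_levels) - 1)
    t = d - math.floor(d)                           # blend weight
    c0 = bilinear_sample(mip_levels[lo], u, v)
    c1 = bilinear_sample(mip_levels[hi], u, v)
    return (1 - t) * c0 + t * c1
```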

aznshodan

what is evaluation of attribute equation?
And what is the difference between <u,v> and <texel_u, texel_v> other than their difference in range? All the slides I've seen seem to be using <u,v> with range [0,1].

kayvonf

@aznshodan. Your understanding about <texel_u, texel_v> is correct. I use <u,v> throughout the lecture, but in this slide I wanted to point out that to ultimately access the actual pixels, at some point the normalized coordinate needs to be converted into an image coordinate.

w.r.t "What is evaluation of the attribute equation?" That is the computation of the texture coordinate at the current screen sample point. See slide 25.
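For a triangle, that computation amounts to interpolating the vertices' texture coordinates at the sample point. A hypothetical sketch using barycentric weights (perspective correction omitted for simplicity; the function name is mine, not from the slides):

```python
def interpolate_uv(bary, uv0, uv1, uv2):
    """Evaluate the texture-coordinate attribute at a screen sample:
    a weighted combination of the triangle's vertex (u, v) values,
    where bary = (a, b, c) are barycentric weights summing to 1."""
    a, b, c = bary
    u = a * uv0[0] + b * uv1[0] + c * uv2[0]
    v = a * uv0[1] + b * uv1[1] + c * uv2[1]
    return u, v
```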

kayvonf

@ShiaLaBeouf. Your explanation is correct, with one minor correction in step 2.

As shown on slide 49, and as you mention, to determine the miplevel we need to access, we need to compute the distance in texture space between screen adjacent samples. To perform this computation we consider the location of three (not five) screen sample points in texture space:

(x, y)
(x+1, y)
(x, y+1)

By comparing (x,y) and (x+1, y) texture coordinates, we get: du/dx and dv/dx.

By comparing (x,y) and (x, y+1) texture coordinates, we get: du/dy and dv/dy.
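Putting those derivatives together, the mip level d could be computed as sketched below. Taking L as the larger of the two texture-space step lengths is one common choice (this mirrors the slide's approach, but the exact formula and function name here are mine):

```python
import math

def mip_level(du_dx, dv_dx, du_dy, dv_dy):
    """Compute d = log2(L), where L is the larger texture-space distance
    moved when stepping one pixel in x or in y. The derivatives come
    from differencing the texel coordinates of (x, y) against
    (x+1, y) and (x, y+1)."""
    Lx = math.hypot(du_dx, dv_dx)   # texture-space step for +1 in x
    Ly = math.hypot(du_dy, dv_dy)   # texture-space step for +1 in y
    L = max(Lx, Ly)
    return math.log2(max(L, 1.0))   # clamp so d >= 0 (no negative levels)
```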
