Question: How do we model objects and light sources that are not "in" the scene? For example, the fourth wall in this scene is probably far away. Is it pragmatic to have the ray tracer consider objects that are very far away from the camera? How about objects that are infinitely far away, like the clouds, the sky, or the sun?
In this particular image there most likely is no fourth wall, as you can tell from the reflection in the sphere on the left. In versions of the Cornell Box that do include a fourth wall (like the one found here), the wall appears in reflections without occluding the scene from the camera, because its normal faces away from the camera and only triangles whose normals face toward the incoming ray are treated as valid intersections (back-face culling).
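The back-face test above can be sketched in a few lines. This is a minimal illustration, not code from any particular renderer; the function names are made up for the example:

```python
def dot(a, b):
    """Dot product of two 3-tuples."""
    return sum(x * y for x, y in zip(a, b))

def is_front_facing(ray_direction, triangle_normal):
    # A triangle is a valid hit only when its normal points back toward
    # the ray origin, i.e. against the ray direction. A wall whose
    # normal faces away from the camera is skipped by the camera's
    # primary rays but can still be hit by reflected rays arriving
    # from the other side.
    return dot(ray_direction, triangle_normal) < 0.0

# Camera ray travelling in +z; a wall whose normal points -z (toward
# the camera) is a valid hit, while one facing +z (away) is culled.
print(is_front_facing((0, 0, 1), (0, 0, -1)))  # True
print(is_front_facing((0, 0, 1), (0, 0, 1)))   # False
```

A reflected ray bouncing off the sphere travels back toward the fourth wall, so for that ray the wall's normal does face the incoming direction and the wall shows up in the reflection.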
In rasterization pipelines the world is clipped by a view frustum. Similarly, in a ray tracer there may be objects so far away that their lighting contribution is negligible under attenuation, or whose contribution is fully occluded by the scene's geometry. A pragmatic ray tracer spends its time on the geometry that contributes most to the final image; geometry found to be negligible should be culled, or not added to the scene in the first place.
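As a rough sketch of that culling idea, assuming a simple point light with inverse-square falloff (the threshold value here is an arbitrary illustration, not a standard):

```python
def light_contribution(intensity, distance):
    # Point-light falloff: irradiance drops with the inverse square
    # of distance from the light.
    return intensity / (distance * distance)

def is_negligible(intensity, distance, epsilon=1e-4):
    # If a light (or emissive object) contributes less than epsilon
    # at this distance, a pragmatic renderer might skip it entirely.
    return light_contribution(intensity, distance) < epsilon

print(is_negligible(1.0, 1000.0))  # True: 1e-6 is far below threshold
print(is_negligible(1.0, 10.0))    # False: 0.01 still matters
```

Real renderers make this kind of decision with more care (importance sampling, light trees), but the principle is the same: don't spend rays on contributions that cannot change the image.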
Clouds, skies, and suns could be defined as geometry far from the camera, but many implementations use a spherical texture mapping (an environment map) instead: rays that hit no scene geometry and fly off to infinity are looked up in the texture based on the direction they are travelling.
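The direction-to-texture lookup is commonly done with an equirectangular mapping. A minimal sketch, assuming a normalized direction with y up (axis conventions vary between renderers):

```python
import math

def direction_to_equirect_uv(d):
    # d is a normalized direction (x, y, z), y up.
    # Azimuth (longitude) around the y axis maps to u in [0, 1);
    # elevation (latitude) maps to v in [0, 1], v = 0 at the zenith.
    u = 0.5 + math.atan2(d[2], d[0]) / (2.0 * math.pi)
    v = 0.5 - math.asin(d[1]) / math.pi
    return u, v

# Looking along +x lands in the middle of the texture;
# looking straight up lands on the top row.
print(direction_to_equirect_uv((1.0, 0.0, 0.0)))  # (0.5, 0.5)
print(direction_to_equirect_uv((0.0, 1.0, 0.0)))  # (0.5, 0.0)
```

A miss shader (or the "ray escaped" branch of the trace loop) would compute these UVs and sample the sky texture there, so the sun and clouds cost nothing in intersection tests.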