Is it possible to do the upsampling at render time instead of on the mesh? Like project the points into 2D and then apply the smoothing. You would need to interpolate the normals to make the lighting work.
@HelloWorld You can certainly do adaptive subdivision at render time, though performing subdivision operations in screen space does not (to me) seem to have an obvious advantage: the amount of computation needed to do it in world space is essentially identical, and points averaged in 3D will almost surely be more accurate than points averaged in projected coordinates. That said, there has been a lot of work on adaptive subdivision, where the level of tessellation is adapted to the viewer, including implementations on the GPU. See the first video on Pixar's OpenSubdiv page, or the large body of work on real-time subdivision and spline rendering; for instance, there are some good references in this paper.
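To see why averaging in screen space loses accuracy, note that perspective projection is not affine: the divide by depth means the projection of a 3D midpoint is generally not the midpoint of the projected points. Here is a minimal sketch using a hypothetical pinhole `project` helper (not from any library mentioned above):

```python
import numpy as np

def project(p, f=1.0):
    # Hypothetical pinhole projection for illustration:
    # scale x, y by focal length and divide by depth z.
    return f * p[:2] / p[2]

# Two points at different depths.
a = np.array([0.0, 0.0, 1.0])
b = np.array([2.0, 0.0, 4.0])

# Average in world space, then project.
world_avg = project((a + b) / 2.0)   # midpoint [1, 0, 2.5] -> [0.4, 0]

# Project first, then average in screen space.
screen_avg = (project(a) + project(b)) / 2.0   # ([0,0] + [0.5,0]) / 2 -> [0.25, 0]

print(world_avg)   # [0.4 0. ]
print(screen_avg)  # [0.25 0.  ]
```

The two results disagree whenever the endpoints sit at different depths, which is exactly the situation subdivision smoothing is meant to handle.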