I remember seeing this type of artifact a lot when I was younger (e.g. digital camera video and early YouTube streaming), but I don't recall noticing it much in recent years. Is this just a reflection of improvements in processing power, memory capacity, and bandwidth (allowing less aggressive downsampling in the first place, or better filtering), or are there new techniques for more accurately scaling down and restoring video color data?
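To make concrete what I mean by "better filtering", here's a toy NumPy sketch (my own illustration with a made-up 8x8 chroma plane, not any codec's actual pipeline) of 4:2:0-style chroma downsampling followed by two reconstructions: nearest-neighbor repetition, which gives the blocky color fringes I remember, and bilinear interpolation of the same subsampled data, which spreads the transition smoothly.

```python
import numpy as np

def subsample_chroma(chroma, factor=2):
    """4:2:0-style downsampling: average each factor x factor block
    (assumes dimensions are divisible by factor)."""
    h, w = chroma.shape
    return (chroma.reshape(h // factor, factor, w // factor, factor)
                  .mean(axis=(1, 3)))

def upsample_nearest(chroma, factor=2):
    """Cheap reconstruction: repeat each chroma sample over its block."""
    return np.repeat(np.repeat(chroma, factor, axis=0), factor, axis=1)

def upsample_bilinear(chroma, factor=2):
    """Smoother reconstruction: linearly interpolate between chroma samples."""
    h, w = chroma.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = chroma[np.ix_(y0, x0)] * (1 - wx) + chroma[np.ix_(y0, x1)] * wx
    bot = chroma[np.ix_(y1, x0)] * (1 - wx) + chroma[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# A sharp color edge that doesn't line up with the 2x2 chroma blocks:
cb = np.zeros((8, 8))
cb[:, 3:] = 1.0
small = subsample_chroma(cb)

print(upsample_nearest(small)[0])            # blocky 2-pixel color band at the edge
print(upsample_bilinear(small)[0].round(2))  # smooth ramp, no hard block boundary
```

The data stored is identical in both cases; only the reconstruction filter differs, which is roughly what I'm asking about: is the improvement mostly on the "downsample less" side, or on the "reconstruct smarter" side?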