Why is the texture of objects in an image treated differently from just the RGBA values of the pixels at that point in the image? Why does it have its own formulae for calculation, etc.?
When it comes to rendering a surface, there is more than just color to consider. An important factor is how light falls on it, which is affected by texture, albedo, and ambient occlusion, just to name a few (and if the object is translucent, that's a whole other story). So for each point on the object you need to know: first, what base color it is; then, what angle the surface faces (based on the shape's geometry AND the normal/displacement map); then, what light is hitting it from which directions, and how much of that light it should reflect. There's actually a lot more than that, and I definitely don't know all of it.
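To make the "color, then angle, then light" part concrete, here's a minimal sketch of Lambertian diffuse shading in plain Python. It's a simplification (real shaders run on the GPU and add specular terms, shadows, etc.), and the function names here are just illustrative, but it shows why a stored RGBA value alone isn't enough: the final color depends on the surface normal and the light direction too.

```python
def normalize(v):
    # Scale a 3D vector to unit length.
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def lambert_shade(albedo, normal, light_dir, light_color=(1.0, 1.0, 1.0)):
    """Diffuse shading: albedo * light_color * max(dot(N, L), 0).

    albedo    -- the surface's base color (what a texture map supplies)
    normal    -- which way the surface faces (possibly perturbed by a normal map)
    light_dir -- direction from the surface toward the light
    """
    n = normalize(normal)
    l = normalize(light_dir)
    # Lambert's cosine law: brightness falls off with the angle
    # between the normal and the light; clamp at 0 for back-facing light.
    n_dot_l = max(sum(a * b for a, b in zip(n, l)), 0.0)
    return tuple(a * c * n_dot_l for a, c in zip(albedo, light_color))

# A red surface lit head-on vs. from the side:
print(lambert_shade((1.0, 0.0, 0.0), (0, 0, 1), (0, 0, 1)))  # (1.0, 0.0, 0.0) fully lit
print(lambert_shade((1.0, 0.0, 0.0), (0, 0, 1), (1, 0, 0)))  # (0.0, 0.0, 0.0) light is edge-on
```

Same texture color in both calls, two completely different pixel values, which is exactly why texture/lighting gets its own math instead of being treated as a flat RGBA lookup.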
When I first started working with shaders in Unity, I found this guide (https://docs.unity3d.com/Manual/StandardShaderMaterialParameterNormalMap.html) very helpful for understanding. It's Unity-specific, but it still illustrates the general concept well.
Cool! Thanks! @silentQ