Pixels: Biological video compression

After reading my "stochastic" discussion, another reader responded, "Nonsense. You're comparing CGI with live footage!"

My response:

On the contrary, "computer generated" images (CGI) versus life is the entire topic of the debate. In an earlier post above, you said a sound card was better than the live audio your ears can hear, so what's wrong with comparing CG and live? For audio, if what you say is all there is to the story, computer-generated audio should win.

Interestingly, audiophiles who are experts in sound quality still disagree with one another on this--many contend that sampling and quantization at any current level seem to remove something that remains present in analog recordings made with cruder technologies.

Here's something few people have heard, since the studies are quite recent: the moving video image our mind creates of the world around us is not what is transmitted to our brain from our retinae. It turns out our nerves carry 10 to 12 separate and relatively basic movies, which our mind reinterprets into what we consider "real". One set of ganglion cells, for example, may detect a movie depicting only high-contrast edges. Another set carries a movie of only broad colors--similar to the "hue" on an old color TV--separate from, and at much lower resolution than, the crisper contrast movie. Think of it as a sort of biological video compression technology, shuttling megabits of video from our eyes to our minds by using 11 simple channels that interact, according to the mind's rules, to recreate complexity.
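To make the analogy concrete, here is a minimal Python sketch of my own (not drawn from the vision studies, and not a model of real retinal physiology) that splits a single frame into two much simpler channels: a full-resolution map of high-contrast edges and a coarse, low-resolution map of broad color, roughly in the spirit of the "edge movie" and "hue movie" described above. The point is simply that a few simple channels can carry far less data than the raw frame while keeping the cues a decoder--or a mind--could use to rebuild a richer picture.

```python
import numpy as np

def split_channels(frame: np.ndarray) -> dict:
    """Split an RGB frame (H x W x 3, floats in [0, 1]) into two simple channels:
      - 'edges': full-resolution map of high-contrast edges (luminance gradients)
      - 'color': heavily downsampled map of broad color, like old-TV chroma
    Illustrative sketch of the compression idea only.
    """
    # Luminance: weighted sum of R, G, B (standard Rec. 601 weights).
    luma = frame @ np.array([0.299, 0.587, 0.114])

    # "Edge movie": magnitude of the local luminance gradient.
    gy, gx = np.gradient(luma)
    edges = np.hypot(gx, gy)

    # "Color movie": average color over coarse 8x8 blocks (low resolution).
    h, w, _ = frame.shape
    bh, bw = h // 8, w // 8
    color = frame[: bh * 8, : bw * 8].reshape(bh, 8, bw, 8, 3).mean(axis=(1, 3))

    return {"edges": edges, "color": color}

if __name__ == "__main__":
    # A random 480x640 "frame" stands in for one moment of the visual field.
    frame = np.random.rand(480, 640, 3)
    channels = split_channels(frame)
    full = frame.size
    sent = channels["edges"].size + channels["color"].size
    print(f"values in original frame: {full}")
    print(f"values in the two simple channels: {sent} (~{sent / full:.0%} of the original)")
```

Run on a 480x640 frame, the two channels together hold roughly a third of the values in the original--the same trick video codecs play when they keep sharp luminance detail and throw away most of the color resolution.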

Since we've only begun learning very recently how we really see, is it so surprising that we haven't yet learned what, exactly, a digital sensor must capture for a picture to evoke reality?

** Originally posted at DPReview.com at 11:14 PM, Monday, October 17, 2005 (GMT-5)