In two-factor authentication, customers must confirm their identities not only through something they know, like a PIN or password, but also with something they physically have, like a hardware token with numeric access codes that change every minute.
Lloyds TSB in the UK is currently piloting a system in which customers must enter a six-digit code generated by the token, in addition to their username and password, in order to log in to the site. For certain transactions, such as bill payments, they will have to enter another code.

How much money is being lost to "phishing"? Apparently, last year UK banks lost only £12m to Internet-banking-related fraud, compared to £500m in credit card fraud. Back in 2003, the annual amount lost to online banking fraud in Europe paled in comparison to the combined budgets banks devoted to prevention.
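For the curious, here's a minimal sketch of how such a token could generate its rotating codes. This is the generic HOTP-style truncation from RFC 4226, not necessarily what Lloyds' vendor actually implements; the secret and the one-minute interval here are my own assumptions for illustration:

```python
import hmac, hashlib, struct, time

def token_code(secret: bytes, interval: int = 60, digits: int = 6) -> str:
    """Derive a short numeric code from a shared secret and the current time."""
    counter = int(time.time()) // interval           # changes once per interval
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(token_code(b"shared-secret"))   # a 6-digit code that changes every minute
```

Because both the bank and the token derive the code from the same secret and clock, the server can verify it without the code ever being reusable by a phisher for long.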
> Cartier Bresson would have given up his nasty little film camera
> and stuck to drawing
Following my comment with this remark gives the impression I think film cameras are nasty. On the contrary, my post (which suggested reasons digital sensors may not yet capture reality effectively) never implied Bresson should have given up film. Nor am I suggesting digital sensors are not effective.
My theme here is that the interaction of reality, photons, and the biology of our brains, cannot be represented by megapixels alone. Something else is needed, something elusive, and hopefully this is what top end manufacturers are working on now.
My earlier, lengthier posts discuss why analog representations (such as film) may seem to do a better job of capturing reality than comparably sized digital sensors. E.g., skim my earlier post on the stochastic placement of film grain versus grids of pixels. And don't forget, film makers are enjoying advances in technology too, for example Fuji's "nano placement" of film grain:
Something to note when thinking about film vs. digital in the movie world: a digital sensor records the same pixel values frame after frame from an unchanging scene, while movie film with chaotic grain captures an always slightly different rendering of it. Run those chaotically different renderings through playback, and our brains compile them into a complexity far beyond what any individual frame can represent. Unfortunately, this idea doesn't help the print photographer, but it is why Hollywood can get away with cheap 35mm color film projected onto a 100-foot screen.
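That frame-averaging claim can actually be demonstrated with a toy experiment. Below, a deliberately crude "sensor" that can only record black or white loses an intermediate tone entirely, while random film-style grain, averaged over many frames by the viewer, recovers it. The numbers (a 30% grey, 500 frames, a one-bit quantizer) are mine, chosen only to make the effect obvious:

```python
import random

TRUE_TONE = 0.3   # a scene luminance that falls between two representable levels
LEVELS = 2        # crude 1-bit "sensor": it can only record 0.0 or 1.0

def quantize(x):
    return round(x * (LEVELS - 1)) / (LEVELS - 1)

# Digital: every frame records the identical value.
digital_frames = [quantize(TRUE_TONE) for _ in range(500)]

# Film-like: random grain jitters each frame before quantization.
random.seed(1)
grainy_frames = [quantize(min(1, max(0, TRUE_TONE + random.uniform(-0.5, 0.5))))
                 for _ in range(500)]

print(sum(digital_frames) / 500)   # 0.0 -- the tone is simply lost
print(sum(grainy_frames) / 500)    # ~0.3 -- averaging the chaos recovers it
```

Signal-processing people call this dithering; film grain does it for free, and projection at 24 frames per second does the averaging in our heads.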
** Originally posted at DPReview.com at 2:59 PM, Tuesday, October 18, 2005 (GMT-5)
On the contrary, "computer generated" images ("CGI") versus life is the entire topic of debate. In an earlier post above, you said a sound card was better than the live audio your ears can hear, so what's wrong with the comparison of CG versus live? For audio, if what you say is all there is to the story, computer-generated audio should win.

Interestingly, audiophiles who are experts in excellence of sound still disagree with one another on this--many contend that sampling and quantization at any current level seem to remove something that remains present in analog recordings made with cruder technologies.
Here's something few people have heard, since the studies are quite recent: the moving video image our mind creates of the world around us is not what is transmitted to our brain from our retinae. It turns out our optic nerves carry 10 to 12 separate and relatively basic movies, which our mind reinterprets into what we consider to be "real". One set of ganglion cells, for example, may detect a movie depicting only high-contrast edges. Another set carries a movie of only broad colors--similar to the "hue" on an old color TV--separate from, and much lower resolution than, the crisper contrast movie. Think of it as a sort of biological video compression technology, shuttling megabits of video from our eyes to our minds over 11 simple channels that interact, according to the mind's rules, to recreate complexity.
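As a loose engineering analogy (my own, not a model of actual retinal wiring), here's the same trick in miniature: send a scene as a full-resolution brightness channel plus a much coarser color channel, recombine at the far end, and you've moved noticeably less data than full-resolution color would need. Video engineers know this as chroma subsampling:

```python
# Toy two-channel "retina": full-resolution luminance + coarse hue.
scene = [(i / 15, 0.2 if i < 8 else 0.8) for i in range(16)]  # (luma, hue) pairs

luma_channel = [l for l, _ in scene]                  # full resolution: 16 samples
hue_channel = [scene[i][1] for i in range(0, 16, 4)]  # coarse: every 4th sample

# Reconstruction: crisp luminance, each coarse hue sample spread over 4 pixels.
rebuilt = [(luma_channel[i], hue_channel[i // 4]) for i in range(16)]

print(len(luma_channel) + len(hue_channel), "samples sent vs", 2 * 16, "for full detail")
```

The recombined image looks nearly right to us precisely because our own vision weights sharp luminance far more heavily than color resolution.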
Since we've only begun learning very recently how we really see, is it so surprising that we haven't yet learned what, exactly, a digital sensor must capture for a picture to evoke reality?
** Originally posted at DPReview.com at 11:14 PM, Monday, October 17, 2005 (GMT-5)
Have a look at the reviews for various image sharpening tools, and how they struggle to deal with the problems that arise when an "edge" (such as the line between a watch's black hands and the watch's white face) falls in the center of a pixel.
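The root of that struggle is easy to state: a pixel integrates all the light falling on its area, so an edge crossing it produces a grey that exists nowhere in the scene. A one-line sketch (the geometry is idealized, of course):

```python
def pixel_value(edge_pos: float) -> float:
    """Average brightness a pixel records when a black/white edge crosses it.
    edge_pos is where the black region ends within the pixel
    (0 = pixel all white, 1 = pixel all black)."""
    return 1.0 - edge_pos   # the white fraction of the pixel's area

# An edge landing dead center produces a mid-grey neither side of the edge contains:
print(pixel_value(0.5))   # 0.5 -- the smeared value sharpening tools fight with
```

A sharpening tool then has to guess whether that 0.5 is a real grey tone or a smeared edge, and it can't always tell.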
Most of us can certainly see the difference between RGB(255,255,255) and RGB(255,255,254); spread across a smooth gradient, those single-step differences cause "banding". Look at The Luminous Landscape's excellent review of Kodak's monochrome professional camera (the 760m, if I recall). 12-bit color is better than 8-bit, but it's not enough. Neither is 16-bit color.
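Banding is simple to reproduce numerically. Quantize a smooth tonal ramp (think of a narrow slice of sky) at different bit depths and count how many distinct output values survive -- at 8 bits, a thousand subtly different tones collapse into a handful of bands. The ramp's range here is my own arbitrary choice:

```python
def quantize(x: float, bits: int) -> float:
    levels = 2 ** bits - 1
    return round(x * levels) / levels

# 1000 slightly different tones spanning a narrow highlight range (0.90 to 0.92).
ramp = [0.90 + 0.02 * i / 999 for i in range(1000)]

for bits in (8, 12, 16):
    distinct = len({quantize(x, bits) for x in ramp})
    print(f"{bits}-bit: {distinct} distinct levels across the gradient")
```

More bits push the bands below the threshold of vision, but the steps never actually disappear -- which is the point of the paragraph above.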
There's a difference between "precision" and equations. An equation can define the number "pi" exactly, but no amount of decimal precision can represent it accurately.
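To make that concrete: truncate pi to any finite number of decimals and an error always remains. More digits shrink it, but it never reaches zero -- the equation defines the number; the decimals only approximate it:

```python
import math

# Truncating pi to d decimal places always leaves a positive remainder.
for digits in (2, 6, 10):
    approx = math.floor(math.pi * 10 ** digits) / 10 ** digits
    print(digits, "digits -> error", math.pi - approx)
```

(Even `math.pi` itself is a 64-bit binary approximation, of course -- which only strengthens the point.)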
JPEGs look blocky because their cosine formulae cannot capture the rich subtleties of the real world. Fractal image formats are somewhat better, because the equations they use to compress data can represent more of the chaotic detail inherent in reality.
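You can watch a cosine basis fail on a sharp edge in one dimension. Below, an 8-sample black-to-white edge is transformed with a DCT (the same family of transform JPEG uses per 8x8 block, though without JPEG's exact normalization or quantization tables), the high-frequency half of the coefficients is thrown away, and the edge comes back smeared:

```python
import math

N = 8
edge = [0.0] * 4 + [1.0] * 4      # a sharp black-to-white edge across 8 samples

def dct(x):   # DCT-II (unnormalized -- fine for a sketch)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N)) for k in range(N)]

def idct(c):  # matching inverse (DCT-III)
    return [(c[0] / 2 + sum(c[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                            for k in range(1, N))) * 2 / N for n in range(N)]

coeffs = dct(edge)
coeffs_cut = coeffs[:4] + [0.0] * 4   # discard the high-frequency half, JPEG-style

print([round(v, 2) for v in idct(coeffs_cut)])   # the crisp edge returns smeared
```

Keep all eight coefficients and the edge reconstructs perfectly; drop the ones compression targets first, and the chaos of a real boundary is exactly what's lost.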
DPReview uses photographs of a watch face to illustrate advances in photographic sensors. Look closely, and the loss of information due to "precision" is always apparent. Find a real watch face, and study the edge of one of the hands. Examine with a magnifying glass the shadow that slender hand casts on the face. No matter how closely you look, you will not see pixels, or banding, or differences in shades of color. Put the same scene under ever increasing powers of magnification, and you will unveil ever richer amounts of detail.
This is not to say that computers will never capture and render reality. Far from it. Ray Kurzweil writes, in his book "The Singularity is Near", that computers may become smarter than humans, and in fact may completely model and emulate the human mind, in the next 30 years.
But virtual reality won't be achieved through precision. It will be reached by understanding and interpreting the "rules" of how reality appears to our mind through our senses, and modelling that ever more closely.
You can recognize a friend from a single line describing her profile drawn by a sketch artist. MIT has demonstrated we can recognize age, gender, mood, and even individuals, if we are shown only 13 points on that individual's body in motion as they walk (dots at the shoulders, elbows, hands, hips, knees, and feet, and one more for the head).
The endeavour to capture and convey reality with an imperfect abstraction sounds to me like "art". I hope this art is Nikon's goal, not some quixotic quest for megapixel precision.
** Originally posted at DPReview.com at 7:37 PM, Sunday, October 16, 2005 (GMT-5)
Megapixels and lines per inch don't tell the whole story, or the right story. The world's geometry is chaotic, and digital representations need to capture or recreate some of that chaos to look real.
For an example outside of photography, compare any state-of-the-art video game (say, "Far Cry") running on the best video card* (say, Nvidia or ATI) and a 19" LCD against an S-VHS tape of an old TV food commercial (say, a mid-eighties salad dressing ad showing glistening water drops on lettuce and tomatoes) played on a sub-20" TV. Which one looks more lifelike?
The analog TV resolution is comparable to 320x240 (VHS, 0.08 megapixels) or 720x480 (DVD, 0.35 megapixels) but looks far better than a game with 1600x1200 (2 megapixels). The difference is the fuzzy (organic?) edges and analog color curves. Kind of like good bokeh. :-)
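Those megapixel figures check out; a back-of-the-envelope verification (the labels are mine):

```python
# Pixel counts behind the comparison above.
for name, w, h in [("VHS-class", 320, 240),
                   ("DVD", 720, 480),
                   ("game at 1600x1200", 1600, 1200)]:
    print(f"{name}: {w}x{h} = {w * h / 1e6:.2f} megapixels")
# VHS-class: 320x240 = 0.08 megapixels
# DVD: 720x480 = 0.35 megapixels
# game at 1600x1200 = 1.92 megapixels
```

So the game is pushing roughly 25 times the pixels of the tape -- and still loses on lifelikeness, which is the whole point.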
Anyway, this is why I like the Nikon D2X vs the Canon 1DS MII. Nikon's smaller photosites put their noise in the greys (luminance), instead of the noise Canon has in the color channels. Canon certainly has less noise, mathematically, but to my eye Nikon's luminance noise, scattered across smaller photosites, looks more like film grain than chroma noise does.
Another example of this is in color printing technology for printing presses. The term "lines per inch" comes from halftone screening, which under a loupe looks like a newsprint photo seen with the naked eye. I worked in the printing industry in the late '80s, when "stochastic" printing got a lot of attention. The idea was that instead of halftone screens with dots following a grid, you'd use a random placement of dots to better approximate real-world colors. It worked, and looked fantastic, but was very hard to control. Here's an article about that: http://www.kpgraphics.com/white_papers/archive/stochastic.html
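The difference between the two screening ideas fits in a few lines. Both rows below lay down the same 30% ink coverage, but the halftone version repeats a rigid pattern (the regularity your eye picks up as screen structure and moiré) while the stochastic version scatters its dots. The row width and seed are arbitrary choices of mine:

```python
import random

TONE = 0.3    # target grey: 30% ink coverage
W = 30        # dots per row

# Halftone-style screen: ink follows a fixed, repeating grid pattern.
halftone_row = [1 if i % 10 < 3 else 0 for i in range(W)]

# Stochastic screen: same average coverage, but dots land irregularly.
random.seed(7)
stochastic_row = [1 if random.random() < TONE else 0 for _ in range(W)]

print(sum(halftone_row) / W)   # 0.3 exactly -- and the pattern repeats every 10 dots
print(sum(stochastic_row) / W) # close to 0.3, with no repeating structure
```

The hard part in practice, as the article explains, was controlling dot gain when the dots no longer sat on a predictable grid.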
The interaction between camera and lens isn't just about math. Looking at the 4MP of Nikon's D2Hs against the 6-8MP from other vendors, and at Nikon's 105mm portrait lens that lets you adjust the appearance of out-of-focus areas, I think Nikon "gets it". I think they're actually trying to look less digital, not to win a megapixel race. And if they offer new glass for the D2X or future D sensors, I'll be first in line, as long as it's about capturing an analog world more aesthetically, not about an oscilloscope metric of resolution.
* Footnote: All video cards for the past decade have offered "anti-aliasing", which manufacturers say eliminates the jaggies, a.k.a. the digital look. But look closer: the AA operates only on edges within solid shapes (smoothing the curves approximated by polygons on a car or torso), not between the shapes and the background (compare the edge of the car to the background--it's still jaggy). Next-generation cards are supposed to be able to AA between foreground shapes and the background; supposedly the XBOX 360 can do that, for example. This may finally close the gap between 1950s-technology TV and 2005's video games. It's not the megapixels, it's what you do with them to make the result look more lifelike.
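What edge anti-aliasing actually computes is coverage: how much of each pixel the silhouette covers, instead of an all-or-nothing point sample. Here's a sketch using 4x4 supersampling to estimate coverage along a diagonal silhouette edge -- a generic illustration of the idea, not how any particular GPU implements it:

```python
def coverage(px, py, slope=0.5):
    """Estimate the fraction of pixel (px, py) lying below the silhouette line
    y = slope * x, by supersampling a 4x4 grid of points inside the pixel."""
    hits = sum(1 for i in range(4) for j in range(4)
               if py + (j + 0.5) / 4 < slope * (px + (i + 0.5) / 4))
    return hits / 16

row = [coverage(x, 1) for x in range(8)]   # one pixel row crossing the silhouette
print([round(v, 2) for v in row])
# [0.0, 0.0, 0.25, 0.75, 1.0, 1.0, 1.0, 1.0] -- graded values, not a hard 0/1 jump
```

Point sampling would give that row only 0s and 1s -- the staircase. The intermediate 0.25 and 0.75 are what blend the shape into its background, and doing this on silhouette edges (not just interior polygon seams) is exactly what the older cards skipped.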
** Originally posted at DPReview.com at 11:11 PM, Friday, October 14, 2005 (GMT-5)