Usability week ending October 3rd

Tuesday, 28th September, 4:08 PM
Understand how colors interact with each other to establish a hierarchy of information in your #typography: http://j.mp/dANmUD #ux #design

Monday, 27th September, 2:01 PM
Do you know what your customers actually remember about your site? Try Clue's 5 second test: http://j.mp/9j4MUQ #ux #tdd #webdesign

Sunday, 26th September, 12:08 PM
To make backend infrastructure software usable, focus on the linguistic elements of the design: http://j.mp/dlkj2x

via twitter.com/terretta

I'm Google's monkey boy, and so are you

Yahoo's recent paper examining the possibility of using web search data to predict consumer behavior has sparked interest.  It's covered everywhere, and the Yahoo Search Blog has posted a follow-up, "What Can Search Predict?", that shares the research team's thoughts on it:

It’s easy to see why search data has the potential to predict: Consumers looking to buy a new camera may search the web to compare models, movie-goers may search for the opening date of a new film or to find movie theaters showing it, and travelers planning a vacation may search for places of interest, airline tickets, or hotel rooms. By aggregating the volume of search queries related to retail, movies, or travel, we might be able to predict collective behavior around economics, culture, or politics.

Their conclusion:

Ultimately, search can be useful in predicting real-world events, not because it is better than other traditional data, but because it is fast, convenient, and offers insight into a wide range of topics.

This article startled me into checking its publication date.  The claim seems so obvious for the applications outlined (camera purchases, vacation bookings, movie grosses) that it's hard to believe nobody in the search business was doing this long ago.

So I, um, Googled.

I found an essay on "trendspotting" suggesting the concept came into vogue in the late '90s, and I found an article citing analysis of AltaVista query logs in 1998 (and other query studies since). But neither mentioned using search engine data to predict consumer trends.

Perhaps because the idea was so hot at the time, folks I worked with in e-commerce in the mid-to-late '90s assumed this is what search engines were doing with their data: mining it and selling the intelligence.  Our reasoning was: "Well, if the grocery stores can predict what to stock on the shelves and what coupons to mail, surely the search engines can predict macroeconomic and consumer trends, and that data must be incredibly valuable.  That must be where they're getting their real revenue..."

But maybe they weren't.  A month ago, Y Combinator's Paul Graham, in "What Happened to Yahoo", suggested Yahoo was making too much money from ads to bother improving commerce-related results with his "Revenue Loop" technology.

Perhaps that lack of revenue innovation held true of all the search engines?  At least the idea was out there before this Yahoo research coverage.

This USA Today article from January 2003 opens with the line "Can online surfing habits accurately predict real-world economic behavior?"  It goes on to say economists scoffed, but it covers overall online behavior, not just searches.

In August 2009, Google discussed whether search trends could be predicted, and felt they could be, a year ahead.  That work seems focused on the queries themselves, rather than on their real-world impact.

Earlier, in April 2009, Google talked about predicting the present with Google Trends.  The idea was to forecast the "current" level of activity, considering "turning points" in economic time series.  Google then asks "a million monkeys" to go to their keyboards to work out whether this is useful.

Hmm.  Maybe Google is just a ruse to collect data for a hedge fund management AI, and we're the million monkeys in front of a million computers giving them the data and supplying the cash with every click.

It has been said that if you put a million monkeys in front of a million computers, you would eventually produce an accurate economic forecast. Let's see how well that theory works ...

Hope Google isn't too mad at Yahoo for tipping their hand.

UPDATE: Corrected "real world analysis of google’s webp versus jpg"

UPDATE: The author, Jacob Miller, has added the following information:

The tool used for encoding the images was gimp with the default settings. I updated the blog post with this information, as well as the resolution that each image was at (as some people thought that the smaller preview size in the blog was what I was testing at).

As for your test, were you using the full resolution image or did you shrink it down to the blog preview size like joelypolly (which was a communication issue on my part, as I didn't list the resolutions in the stats)? Here's the web directory that has all the full resolution images: http://jjcm.org:8081/webp , if you can get better results, let me know and I'll update the blog with your jpgs.

In particular, he's saying he compressed the 1920x1200 image at that full resolution and got this result, at 45,592 bytes. He then resized it down to the size we see on the blog, which exhibits the terrible banding issues.

So, I went back to Photoshop. I kept the full 1920x1200, set Photoshop's quality slider to zero, and saved this JPEG at 44,962 bytes (less than his).  Again, this is the full 1920x1200 resolution, but since Posterous resizes this image and doesn't let you view the full resolution, you can grab it from imgur.

This image definitely exhibits a moiré pattern of blockiness.  To me, it doesn't look like his banded image, but it certainly looks worse than the 580x363 images at 45KB.  The blog, though, showed a horribly banded image. Is the full-resolution result really that bad?

I reopened this 1920x1200 full-resolution 45KB compressed JPEG, resized it to the resolution of his blog images, then saved it as a 24-bit PNG, with this result:

For comparison, the original blog's representation of compressing 1920x1200 to 45KB JPEG in GIMP:

It seems to me Miller may have demonstrated that GIMP's JPEG compression is less than optimal for high-resolution images compressed to very small file sizes, but I am not at all familiar with GIMP and don't know whether it was the software or user error that introduced the dramatic banding.

Even with this 1920x1200 resolution example added, I still feel the blog's images as presented are not representative of typical JPEG compression tools and usage.

Corrected "real world analysis of google’s webp versus jpg"

UPDATE: The author has commented elsewhere that his 45KB images weren't 580x363, but 1920x1200.  I've added full resolution 1920x1200 examples.

The "Real world analysis of google’s webp versus jpg" over at English Hard is getting a lot of attention today, comparing Google's new WebP image compression to JPEG compression using a few "real world" images.

While the post presents the visual results of image compression quality in PNG (with accompanying numeric error level analysis) for fairness, the visual presentation of the JPEG images is seriously flawed.  Setting aside that a numeric noise ratio is known to be a poor metric for human perception of image quality, what you see in this post is even worse.

This is what the post shows for a 44.6KB "webp" image:

This is what the post shows for the 44.5KB "jpg" image:

I have never seen banding like this produced by a JPEG compressor, no matter how bad the compressor.  Could this really be right?

I used a 1920x1200 PNG image source, opened it in Photoshop CS5, chose "Save for Web and Devices", resized to 580 x 363, and set a quality of 75.  The resulting file is 43.6 KB:
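For anyone who wants to reproduce the test without Photoshop, here's a minimal sketch using the Pillow library. This is my own reconstruction, not the original workflow: the source filename is a placeholder, and Pillow's quality scale only roughly corresponds to Photoshop's.

    # Resize a 1920x1200 PNG source to 580x363 and save as JPEG.
    # Filenames are hypothetical; quality scales differ between tools,
    # so 75 here is only roughly comparable to Photoshop's 75.
    from PIL import Image

    img = Image.open("source_1920x1200.png")         # hypothetical source file
    img = img.resize((580, 363), Image.LANCZOS)      # match the blog's preview size
    img.save("test.jpg", format="JPEG", quality=75)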

This doesn't show the dramatic concentric rings of banding.  It certainly looks nothing like the alleged JPEG shown in the blog.  To check whether the conversion to PNG could have damaged the JPEG as shown in the blog, I opened this JPEG in the Mac's native Preview app and saved it as PNG:

I'm not the only one who noticed this issue.  Several others called out the problem.  The author hasn't yet fixed his post, saying essentially "you're not looking at the big files I'm looking at".

Yesterday, a review by x264's lead developer, "H.264 and VP8 for still image coding: WebP?", concluded, "You’d have to be nuts to try to replace JPEG with this blurry mess as-is."  I don't know if the webp images in this blog post show a blurry mess, but who can say whether their representation is any fairer than the badly broken JPEG results?

It seems to me that when presenting a visual comparison of image compression, the visual itself really should be representative.

How to find iPad or iPhone device ID (UDID) when stuck halfway through the iOS 4.2 beta 2 update

Yesterday's update says it should only be installed on a device used for development testing.  Okay, that's what it always says. No problem...  

But if you hadn't specifically registered the device ID in your dev account, then after the wipe and install, iTunes blocks activation, saying the device isn't known.  Problem...

Unfortunately, the usual method of getting the device ID by option-clicking on the device info screen in iTunes won't work, because iTunes is stuck refusing to activate and access the device.

Go to ~/Library/Application Support/MobileSync/Backup, where your iOS device backups are in folders with 40-character-long names.  These 40-hex-character names are your device IDs (UDIDs).  Copy the folder name that's the backup of this device, and paste it as a new device ID in your Apple Developer device ID control panel.
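If you'd rather script it, here's a minimal sketch in Python (my own, not from the original post) that lists every backup folder name matching the 40-hex-character UDID pattern:

    # List the 40-hex-character backup folder names, which are device UDIDs.
    import os, re

    backup_dir = os.path.expanduser(
        "~/Library/Application Support/MobileSync/Backup")
    for name in os.listdir(backup_dir):
        if re.fullmatch(r"[0-9a-fA-F]{40}", name):
            print(name)  # each matching folder name is a device UDID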

Unplug and replug your device; iTunes will activate it, and you'll be offered the chance to restore your backup as usual.

If your iPhone 4 HDR photos are dull or washed out, you're doing it wrong

HDR stands for "high dynamic range".  Camera sensors can only capture a limited number of greys between shades near black and shades near white.  The difference between the darkest shade and lightest shade the camera can capture is the camera's "dynamic range".  

The more "dynamic range" the camera's sensor can capture, the more you'll be able to see details in the shadows and highlights of your image.

Your eye can see a very high dynamic range.  You can see into dark shadows inside a room while also seeing details in sunlit grass outside.  A typical camera cannot see both at once.  It can expose for the bright outdoors, or the dark indoors, but not both.  So, to capture a "high dynamic range" image with dark shadows and bright highlights, you can take several photos, exposing one for the shadows, one for mid-tones, and one for highlights, then assemble them into a wide or "high dynamic range" original.  

Unfortunately, you can't see that full range on today's monitors or save that full range in a JPEG image.  This is why some cameras offer a "raw" image format with as much as 64 times more levels of brightness: so they can record much more of the original scene for you to work with later.

Web images are typically shared in the JPEG format, which has a limited number of steps between black and white.   There is no such thing as an "HDR" JPEG image.  There are only "tone mapped" JPEGs, where a high dynamic range scene or raw image had its colors and shades selectively mapped onto the limited colors and brightness range of a JPEG.

As a simplified comparison, a raw image can store 16 thousand levels of brightness per color, but a JPEG can only store 256 levels of brightness per color.
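The arithmetic behind that comparison, as a quick sketch (assuming a 14-bit raw sensor, which is where the "16 thousand" and the "64 times" figures come from):

    # A 14-bit raw channel has 2**14 levels; an 8-bit JPEG channel has 2**8.
    raw_levels = 2 ** 14              # 16384, the "16 thousand" in the text
    jpeg_levels = 2 ** 8              # 256
    print(raw_levels // jpeg_levels)  # 64 raw levels collapse into each JPEG level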

If you use a high end camera, it can capture a wider range of shades from black to white than a JPEG can show.  Photo tools like Adobe Camera Raw try to convert the full range of shades in the raw image shot by the camera into a range that can be shown in a JPEG.  The wide range of shades is compressed into a single limited range of shades.  To keep the picture from looking flat and all grey, this compression is done using "tone mapping", where bright areas get contrast added back in among nearby whites, and dark areas get contrast added back in among nearby blacks.

The same principle applies to an HDR.  Between the dark, medium, and light exposures, there is too much brightness range for a single image, so the shades are compressed.

HDR software like Photomatix or Photoshop helps you choose the tones you want to preserve, and how you want the contrast applied, to get an image that includes some lights and darks at the same time, while throwing away other colors and shades, to make the most of the limited dynamic range in a JPEG.
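To make the idea concrete, here's a toy tone-mapping sketch in Python with numpy.  It is my own illustration, not what Photomatix, Photoshop, or the iPhone actually does: it compresses an arbitrary luminance range into 0..255, then adds contrast back with a simple S-curve.

    import numpy as np

    def tone_map(hdr, gamma=2.2):
        # hdr: float array of scene luminance, arbitrary positive range
        x = hdr / hdr.max()      # compress everything into 0..1
        x = x ** (1.0 / gamma)   # lift the shadows (gamma compression)
        # gentle S-curve: steeper in the mid-tones, restoring contrast
        x = 0.5 + np.tanh(4 * (x - 0.5)) / (2 * np.tanh(2))
        return np.clip(x * 255, 0, 255).astype(np.uint8)

A real tone mapper works locally, restoring contrast differently in shadows and highlights as described above; this global curve just shows the compress-then-recontrast principle.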

This brings us to the iPhone 4.  In a very high contrast scene, some parts of the scene will be over-exposed or under-exposed, because the range between dark and light is too great.

I'll illustrate this with a picture taken to show a dynamic range problem and how to solve it.

In this picture, the patio stones to the upper left of the pool are in bright sun and have become "washed out".  The same is true of the fourth pillar or post.  You can see wood tones on the third pillar, but the fourth one is washed out.  Those brightnesses were too bright, so they simply got mapped onto white.

I've included this picture's "histogram" in the lower right corner.  A histogram is a graph of how much of the image is at each brightness, from dark shades at the left of the graph to light shades at the right.  In this histogram you can see there are a lot of dark blue-tinted shades (all the stones in the shadows), and most of the image (reds, greens, blues) is below middle brightness.  You can also see, at the far right of the histogram, the spikes in bright colors, with the highest spikes in the green (all the washed out bright green leaves).
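For the curious, that's all a histogram computes.  A minimal sketch with numpy (my own, assuming an H x W x 3 array of 8-bit RGB pixels):

    import numpy as np

    def channel_histograms(img):
        # count how many pixels fall into each of the 256 brightness
        # levels, separately per channel: index 0 is darkest (left of
        # the graph), index 255 is lightest (right of the graph)
        return {c: np.bincount(img[..., i].ravel(), minlength=256)
                for i, c in enumerate(("red", "green", "blue"))}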

iPhone 4 saves a regular exposure, then saves the composite image it makes from its dark and light exposures, which people are calling "HDR".  This is the "HDR" photo the iPhone 4 generated from the same scene:

This image looks flat and grey.  This is why some people are saying iPhone HDR images are dull, or washed out.

If you look at the histogram, you can see there are still some very dark blues (the stones and nearest post), but most colors have moved towards the middle.  In this case, the shadows of red, green, and blue are now nearer the middle tones.  We perceive middle tones as grey or dull.  If you look at the right side of the histogram, you can see there is no longer a giant spike of bright white; the spike has moved a little towards the mid-tones as well.  The colors and brightnesses have been compressed towards the center, to record a few more levels of shadow and a few more levels of highlight within the same limited 256 levels of brightness per color in the JPEG image format.

Studying the picture, you can see the areas that were washed out now have detail, such as the patio stones to the left of the bench, and the wood grain of the fourth post.

If you stop here, then yes, the iPhone 4 HDR photos are flat and dull.  But you should not stop here.  Your image has more of the original scene recorded in it, with more details in the shadows, and more details in the highlights.  

Unfortunately, to fit in the shadows and highlights, much of the color range was moved toward the flat, dull middle of our perception.  So, your job as a photographer is to "develop" the picture and decide which brightnesses and details are important to you, by adding back contrast.  This iPhone HDR original has shadow detail you can push darker, and highlight detail you can push lighter.  By adding back some contrast, you end up with a picture like this:

If you look very closely at the histogram, you can see some dark colors moved left into the shadows and some highlights pushed right into white, while keeping enough shadow and highlight detail to be able to see the stone paving, the grass and patio stones left of the bench, the green leaves of the pear tree, and the color of the fourth post in the sunlight.  The overall image looks natural again, and is definitely better than the original non-HDR photo, even though they're both the same exposure or brightness.

iPhone 4 "HDR" mode preserves more of the shadow and highlight detail at the expense of a necessarily duller image, as a wider range than a JPEG can record is stored into the JPEG.  You can then adjust exposure, brightness, and contrast of that JPEG to "expose" the dynamic range that's most appealing to you.  You cannot fix the original exposure, because the highlights and shadows were cut off and lost.  

You can adjust the iPhone 4 HDR image's contrast or tone mapping, and you absolutely should.  Free software like iPhoto and Picasa can adjust brightness and contrast, as of course can Aperture or Lightroom.    

Note:  If you want the popular "HDR look" of wild contrast and halo effects, grab the Pro HDR app for iPhone and go nuts.  Images that look like that are not actually "HDR" images; they are tone-mapped images.  You can use the Pro HDR app to generate wild tone mapping, or to take sane exposures of complicated situations and blend or tone-map them in camera to your liking.  The app works well as long as nothing in your image moves between shots.

Usability week ending September 26th

Friday, 24th September, 11:05 AM
6 things Pay-TV operators can learn from #Blockbuster fall, starting with loving customers: http://j.mp/aWJKJc #vod #netflix #usability #ux

Thursday, 23rd September, 5:21 PM
Hosting is hard -- "Facebook Down, Like Buttons Vanish, Internet Implodes", for hours yesterday and today: http://j.mp/by7pTi #downtime

Wednesday, 22nd September, 4:42 PM
The key to good writing is focus. Writer for iPad (by iA) leverages concentration, orientation, and typography to help: http://j.mp/d990fx

Tuesday, 21st September, 6:05 PM
Principles of effective communication--assume you'll be misunderstood, share deeper meaning, keep it simple: http://j.mp/aDbXFX

Monday, 20th September, 5:29 PM
Help users achieve Techno-Literacy by using the min amount of #technology that will max their options: http://j.mp/9xqanY #ux #usability

via twitter.com/terretta

Usability week ending September 19th

Friday, 17th September, 8:19 PM
What do serious people mean when they say “user experience design” instead of just “web design”?: http://j.mp/bxJV6c #ux #design

Wednesday, 15th September, 10:12 PM
Recommendations for #usability in product development practice, based on a PhD research project: http://j.mp/aPITfH #ia #ux

Tuesday, 14th September, 9:42 AM
Designing for #children requires distinct #usability approaches, including targeting content narrowly for different ages: http://j.mp/drYiLH

Monday, 13th September, 9:27 AM
More than any other adjective, reviewers condemn apps they don’t like as “useless”: http://j.mp/bn5Mk1 #usability #apps #ux

via twitter.com/terretta

Usability week ending September 12th

Friday, 10th September, 9:11 AM
Use an inline or drop down login box so users don’t have to wait for a new page to load: http://j.mp/9pm9aP #ia #ux

Friday, 10th September, 9:07 AM
What teens really think about the usability of #Swype texting, expressed in a 4 panel "rage comic": http://j.mp/d51fVv #usability #android

Thursday, 9th September, 6:59 PM
Apple shares App Store Review Guidelines, apparently cares about #UX. "Join us in trying to surprise and delight users": http://j.mp/ck7XaZ

Wednesday, 8th September, 12:15 PM
Typing in mobile web apps is considered frustrating, yet in US, 4.1 billion SMS are sent a day. Maybe it's the usability: http://j.mp/cNhoLH

Monday, 6th September, 3:41 PM
E.W. Dijkstra on teaching the "radical novelties" of computer science http://post.ly/vzZc

Monday, 6th September, 12:57 PM
The real problem is companies have a roving eye--always more interested in the customers they don’t have: http://j.mp/b4uvI9 #ux #service

via twitter.com/terretta