This article is from a series called 10x10, written by Method's diverse and talented leaders who are shaping the future of products, services, and entire industries. 10x10 is a series of thought pieces that highlight new approaches and ways of thinking about varying industry challenges, needs, and trends. You can read more 10x10 articles here. You can read the original "Art of Noise" here.
Noise plays an integral role in the patterns that drive human perception. Make sure your target audience research doesn’t remove it.
If you were forced to rely on only two target audiences to guide all your future design work, I’d strongly recommend using astronauts and toddlers. Fortunately, the connection between them goes beyond the design of their underwear to the nature of perception and expertise: what we treat as valid data, and what we choose to ignore as “noise” — the extraneous details, the out-of-category input, the anecdotal tidbits. As it turns out, noise is far more valuable to generating useful design insights than you might think.
The effect of stripping out too much noise.
Apollo 14 Lunar Path: 1. Landing Site, 2. Turnaround point
First, the astronauts. One little-known quirk of the Apollo moon landings was the difficulty the astronauts had judging distances on the Moon. The most dramatic example occurred in 1971 during Apollo 14, when Alan Shepard and Edgar Mitchell were tasked with examining the 1,000-foot-wide Cone Crater after landing their spacecraft less than a mile away. But after a long, exhausting uphill walk in their awkward space suits, they simply couldn’t identify the rim of the crater. Finally, perplexed and frustrated, and with the oxygen levels in their suits declining, they were forced to turn back. Forty years later, high-resolution images from new lunar satellites showed they had indeed come close; the trail of their footprints, still perfectly preserved in the soil, stops less than 100 feet from the rim of the crater. A huge, 1,000-foot-wide crater, and they couldn’t tell they were practically right on top of it. Why?
It should have been easy for them, right? These guys were trained as Navy test pilots; landing jets on aircraft carriers requires some expertise in distance judgment. They also had detailed plans and maps for their mission and had the support of an entire team of engineers on Earth. But their expertise was actually part of the core problem. The data their minds were trying to process was too good. All of the “noise” essential to creating the patterns their minds needed to accurately process the data was missing. And patterns are the key to human perception, especially for experts.
Consider everything that was missing up there. First, there’s no air on the Moon, so there’s no atmospheric haze either. Eyes that grew up on Earth expect more distant objects to appear lighter in color and have softer edges than closer things. Yet everything on the Moon looks tack-sharp, regardless of distance. Second, the lack of trees, telephone poles and other familiar objects left no reference points for comparison. Third, since the Moon is much smaller than the Earth, the horizon is closer, thus ruining another reliable benchmark. Finally, the odd combination of harsh, brilliant sunshine with a pitch-black sky created cognitive dissonance, causing the brain to doubt the validity of everything it saw.
Ironically, that kind of truthful, distortion-free data is usually exactly what experience designers want as input for their decision-making, no matter what they’re trying to do. We tend to believe that complex systems are the tidy, linear sum of the individual variables that create them. But despite the pristine environment of the Moon, the Apollo astronauts were repeatedly baffled by simple distance and size perceptions, even after each crew returned from the Moon and warned the next one about the problem.
Learning about pattern-recognition from toddlers.
Meanwhile, the toddlers I mentioned earlier provide a corresponding example of the power of patterns in perception. When my first child was about 4, we came across a wonderful series of picture books called Look-Alikes, created by the late Joan Steiner.
Each book has a collection of staged photographs of miniature everyday scenes like railway stations, city skylines, and amusement parks created entirely from common, found objects. Without any special adornment, a drink thermos masquerades as a locomotive, scissors become a Ferris wheel, and even a hand grenade makes for a very convincing pot-belly stove. The entire game is to un-see the familiarity of the scene and identify all the common objects ludicrously pretending to be something other than what they are. There’s no trick photography involved, but you can look at each picture for hours and not “see” everything that’s right there in front of you. You know it’s a trick, but you keep falling for it over and over.
The really amazing part is that the toddler, a true novice with only a few years’ experience in seeing, completely understands the scenes she’s looking at, even though every individual piece of “data” she’s looking at is a deliberate lie. Yet the pattern of data that creates the scene is “perfect.” We already know what those scenes are supposed to look like before we even see the book’s version of them, so we unconsciously project that pattern onto what we’re looking at, even to the point of constantly rejecting the contrary data our eyes are showing us. There is in fact no amusement park in the photograph I called an amusement park. But I see it anyway.
In data processing parlance, the signal-to-noise ratio of the moonscape was perfect (actually, infinitely high), while that of the Look-Alikes pages was zero (the whole joke is that there was no signal there in the first place). Yet a toddler can read the noisy scene perfectly, while seasoned test pilots were baffled by the noiseless one. How can this be?
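To make the parlance concrete: signal-to-noise ratio is simply the power of the meaningful signal divided by the power of the noise, usually expressed in decibels. The sketch below is purely illustrative; the function name and test signals are invented for this example, not drawn from the article.

```python
import numpy as np

# Illustrative SNR computation. The function and signals are invented
# for this example.

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    return 10 * np.log10(p_signal / p_noise)

t = np.linspace(0, 1, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t)                            # the "data" we care about
noise = 0.1 * np.random.default_rng(0).standard_normal(1000)  # extraneous detail

print(snr_db(signal, noise))  # about 17 dB for this mix
```

As the noise power approaches zero, the ratio diverges toward infinity: the moonscape's "perfect" data. As the signal power approaches zero, it collapses toward negative infinity: the Look-Alikes page, which is all noise and no literal signal.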
The lesson is that patterns drive perception more than the integrity of the data that creates the patterns. We perceive our way through life; we don’t think our way through it. Thinking is what we do after we realize that our perception has failed us somehow. But because pattern recognition is so powerfully efficient, it’s our default state. The thinking part? Not so much.
This just might be why online grocery shopping has yet to really take off. The average large US supermarket offers about 50,000 SKUs, yet a weekly grocery shopper can easily get a complete trip done in about 30 minutes. We certainly don’t feel like we’re making 50,000 yes/no decisions on that trip (roughly 28 per second), but in effect we do. Put that same huge selection online, and all of those decisions become conscious. Even though grocery shopping is a repetitive, list-based task, the in-store noise of all the products that aren’t on your list gives you essential cues for finding the ones that are, and for reminding you of the ones that weren’t on your list but that you still need. That’s even before you get to the detail level, where all the other sensory cues tell you which bunch of bananas is just right for you. So despite all the extra effort and hassle involved in going to the store in person, it still works better because of, not in spite of, the patterns of extraneous noise you have to process to get the job done.
To account for the role of noise within the essential skill of pattern recognition, we need to remind ourselves how complex seemingly simple tasks really are. Visually reading a scene, whether it's a moonscape, a children's book illustration, a grocery store, or a re-designed website, is an inherently complex task. Whenever people are faced with complexity (i.e. all day, every day), they use pattern recognition to instantly identify, decipher, and understand what's going on, instead of examining each component individually. The catch is that all of the valuable consumer thought processes we want to address — understanding, passion, persuasion, the decision to act — are complex.
However, the research we use to help us design for these situations usually tries to dismantle this complexity. It also assumes a user who is actually paying attention, undistracted, in a clean and quiet environment (such as a market research facility), and cares deeply about the topic. Then we “clean” the data we collect, in an attempt to remove the noise. And getting rid of noise destroys the patterns that enable people to navigate those complex functions. So we wind up relying on an approach that does a poor job of modeling the system we’re trying to influence.
How to overcome complexity by embracing it.
Noise can add meaning to an experience.
The challenge is to overcome the seemingly paradoxical notion that paying attention to factors completely outside our topic of interest actually improves our understanding of that topic. Doing so requires acknowledging that our target audience may not care as much about our topic as we do, even if that topic represents our entire livelihood. It requires a broader definition of the boundaries of what that topic is, and including the often chaotic context that surrounds it in the real world. It also requires a more than casual comfort level with ambiguity; truly understanding complex systems involves recognizing how unpredictable, and often counter-intuitive, they really are.
This is why ethnographic research is so popular with all kinds of designers. The rich context ethnographies offer is full of useful noise: the improvising users do to actually make a product work, the ancillary details that surround it, and the unexpected motivations a consumer might bring to its use. These are all easier to access via a qualitative, on-location approach than via a set of quantitative crosstabs or sitting behind a mirror watching a focus group. It’s also a powerful human-to-human interface, in which the designer uses their innate pattern-recognition capability to analyze patterns in user behavior.
What often gets overlooked is the role noise can and should play in quantitative research. Most designers avoid quantitative research because of the clinically dry nature of the charts it produces, and the often false sense of authority that statistically projectable quantitative data can wield. However, only quantitative research can reveal the kind of perceptual patterns that are invisible to qualitative methods, and the results needn’t be dry at all. The solution is to introduce the right kind of noise to quantitative research — to deliberately drop in the telephone poles, trees, and haze that allow those higher-level perceptual patterns to be seen and interpreted.
Accuracy isn’t always the most important outcome; reality is.
Fortunately, there’s already a model for this. When analog music is digitally recorded, some of the higher highs and lower lows are lost in the conversion. Through a process called dithering, audio engineers add low-level randomized noise to the signal during the conversion. Strangely enough, even though the added noise has nothing to do with the original music, it actually improves the perceived quality of the digital audio file. The noise fills in the gaps left by the analog-to-digital conversion, essentially tricking your ear into hearing a more natural-sounding experience. The dithered audio isn’t actually more accurate; it just sounds better, which is more important than accuracy. Returning to our opening examples, the moonscape was in dire need of dithering, while the Look-Alikes scenes were already heavily dithered. And the real world in general is heavily dithered.
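The mechanics of dithering are simple enough to sketch in a few lines. The snippet below is an illustrative toy, not any real audio engineer's toolchain; the function names and parameters are invented for the example. It quantizes a signal to a coarse 8-bit grid with and without triangular (TPDF) dither, and shows that a signal quieter than one quantization step is erased entirely by plain rounding but survives, on average, when noise is added first.

```python
import numpy as np

# Toy sketch of dithering before quantization.
# Function names and parameters are invented for this example.

rng = np.random.default_rng(0)

def quantize(signal, bits):
    """Round each sample to the nearest level of a coarse `bits`-bit grid."""
    levels = 2 ** (bits - 1)
    return np.round(signal * levels) / levels

def dither_quantize(signal, bits, rng):
    """Add triangular (TPDF) noise of about one quantization step, then round."""
    lsb = 1.0 / 2 ** (bits - 1)  # size of one quantization step
    noise = rng.triangular(-lsb, 0.0, lsb, size=signal.shape)
    return quantize(signal + noise, bits)

# A constant signal sitting at 0.3 of one 8-bit quantization step.
x = np.full(100_000, 0.3 / 128)

plain = quantize(x, 8)                 # rounds to zero every time
dithered = dither_quantize(x, 8, rng)

print(plain.mean())     # 0.0 -- the sub-step detail is simply gone
print(dithered.mean())  # ~0.3/128 -- the average preserves it
```

The added noise is random and has nothing to do with the signal, yet the dithered version carries information the "cleaner" version has destroyed — which is exactly the trade described above.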
So, for quantitative research aimed at guiding the design process, the trick is to value meaning above accuracy. Meaning can be gleaned from the noise you add to the quantitative research process by including metrics outside the direct realm of your topic area. It means considering what else is adjacent to that topic area, acknowledging the importance of respondent indifference as well as preference, and asking what potentially irrational motivations lie behind the respondent’s approach to the topic, or to the research itself.
At Method, we’ve developed a technique for observing these perceptual patterns in quantitative data by using perceptions of brands far afield of the category we’re designing for. Essentially, it’s a dithering technique for brand perceptions. This technique often displays an uncanny knack for generating those hiding-in-plain-sight, aha moments that drive really useful insights. There are doubtless many other approaches you can employ, once you make the leap that acknowledges the usefulness of noise in your analysis.
But no matter what format of research you use in your design development process (including no formal research at all), there are some guidelines you can follow to allow the right amount of useful noise to seep into your field of view, so that your final product does not wind up being missed on the moonscape of the marketplace:
A little humility works wonders
Recognizing that you’re not the center of your target audience’s universe allows you to understand how you fit in. Be sure to take honest stock of just where your target audience places your topic area on their list of priorities.
Step back far enough to allow patterns to emerge
No matter what metrics you’re using, consider looking several levels above them, or next to them, to identify patterns that are impossible to see when you’re too close to the subject.
Gauge the level of expertise of your target audience
How familiar is your target audience with your subject? Are they experts or novices, and how are you defining that? Generally, the higher the level of expertise, the higher the dependence on pattern recognition. Novices carefully and slowly compare details; experts read patterns quickly and act decisively.
Check the data dumpster before emptying
No matter where your data comes from, think about what has been omitted. Was that distracting noise that was tossed, or crucial context?