What’s in a name?
Ubiquitous computing, ubicomp, context-aware computing: whatever you call it, the basic idea is the same. Somehow we have to use the physical world to help “tame” the complexity that is rapidly overwhelming us all. It turns out that using our “ambient” surroundings, the particular “context” we are within, to help us understand and take control of the information age is probably the only way we’ll be able to cope. In a sense, it’s about shifting your perspective from thinking about information as living inside computers to imagining people immersed within a sea of information. This isn’t just a geek fantasy; it’s pretty much the only way we humans are going to be able to handle the increasing complexity of the modern world.
How so, why now?
Humans don’t change very fast (we’re not growing any bigger brains than our grandparents had, and we’re not evolving ever-smaller fingers to cope with ever-smaller keyboards). Many people believe that, although they can’t understand this newfangled world, their kids will be fine: “My kids live in the digital world; they know how to multitask.” Unfortunately, our kids may not be much better off. A recent study by the Institute of Psychiatry at the University of London found that participants who tried to multitask while managing their email saw a 10-point drop in their effective IQ (luckily it was only temporary). Sadly, a cottage industry (The Geek Squad, etc.) has emerged to help us deal with all the complexity in our world. We’re not convinced even these wizards will be able to keep up with the coming flood. Ultimately, we had better start designing experiences that fit the way humans think, the way people have evolved over a very long time to understand the world.
The amount of structured information in the world is doubling at an ever-increasing rate. A century ago you’d need to either know someone who had a personal library (probably someone rich), be fortunate enough to live in a town where a new kind of institution called a “public library” was being built, or find a well-stocked university. It took humankind a long time to accumulate all the knowledge of the world, and you had a lifetime to become versed in the intricacies, and the limits, of a given topic. That is no longer even remotely true. The advent of the Internet has given rise to exponential growth in information (thanks to the network effect).
Power and complexity are becoming dirt cheap. A century ago the most complex and powerful devices in an average town might have been the printing press or the local telephone switchboard. Powerful things were expensive, and because of their cost they were rare. Only a few people had to be versed in the intricacies of complexity; they could become apprentices and learn to deal with the complexity of a given device over a lifetime. The advent of the transistor and the microchip changed all that. According to the Semiconductor Industry Association, in 2003 alone human beings produced more transistors than grains of rice, and more cheaply. Today, for a few dollars, you can put a “webserver on a chip” into your product and communicate with it from the other side of the planet.

The reduction in the cost of complex and powerful things has consequences. Complex and powerful features are now ubiquitous (there’s that word again), which forces all of us to master everything around us every day, whether we want to or not. You have to be a master of printing, of telecommunications, of navigation, of everything. But frankly, we’re not smart enough to handle the current or coming deluge of so-called “smart stuff.” In fact, the problem is just going to get worse, fast. In the early 1990s Dr. Peter Lucas predicted that there would be over a trillion connected devices by 2010, and industry analysts are now coming around to that point of view. That blinking VCR of yore, with its plaintive cry of “12:00” that nobody seemed able to program away, could have been considered an early warning sign that something new was happening in the world.
Drowning in a sea of stuff.
Whether we like it or not, we’ve got a sea of smart stuff covering the world and waves of information sweeping across the surface. The problem is that all that complexity is being pumped full force into our faces as if a dam has broken. Most technologists frankly aren’t concerned about creating wading pools, building surf breaks, or giving us life vests (let alone letting us use our muscles and brains to learn how to swim). They just care about making more stuff.
So how will we live in this world? How will we make all this information, all this complexity, comprehensible to us slowly evolving humans? For brevity’s sake, we’ll only talk about the most basic pervasive computing concept that we use to help our clients tame complexity. In future articles we’ll explore other techniques. Suffice it to say that our Pervasive Computing practice focuses on discovering these secret features within the human experience and then bending computing technology to “fit” human perceptions (rather than forcing people to bend their minds around overly complicated technology).
Where it is, is what it is.
People can only hold a limited number of things in short-term memory, which means they aren’t going to remember all the details of how to work all those high-tech gadgets, applications, and environments that everyone is trying to sell them. Instead they’ll end up returning them (according to the Consumer Electronics Association, more than half of the electronics purchased for consumer use are returned with nothing wrong with them, simply because the users couldn’t work them).
If only we had the owner’s manual for the human experience, we’d be all set; we could look up the index and see if there were some secret method for helping people remember more than a few things at a time.
Let’s go back to that note about humans taking a really long time to evolve new features. It turns out that, although we evolve slowly, we’ve also had a very long time to get to where we are today.

Along the way we’ve accumulated a collection of secret “hacks” for remembering things. We could start to take advantage of these secret features within the human mind if we’d just start paying attention to users and forget the technology for a moment.
“Where it is, is what it is” is our saying for one of those hacks. It’s a memory hack that can be employed to make complex and powerful things easy to remember and use. It turns out that humans can remember the locations of thousands of things in their physical world. Not only that, but where something is located acts as a reference for what it is in our heads. In our research for Digital Equipment Corporation, we discovered that office workers who had been given paper reports weeks earlier would instantly glance at the right pile of papers (some people pile and some people file) the moment you asked them about the document. If you listen carefully you can hear it happening all around you.
“Remember that idea we talked about on the whiteboard over there?”
Now, what if you could use this same sort of mind hack, but with digital information instead of paper reports and whiteboard drawings? Now you’re talking about one of the core tenets of pervasive computing.
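To make the idea concrete, here is a minimal sketch of what “where it is, is what it is” might look like in software: digital items indexed by a stable spatial location (say, regions of a wall-sized display), so retrieval works the way spatial memory does, by place rather than by filename or keyword. All of the names here (`SpatialShelf`, `place`, `near`) are hypothetical illustrations, not part of any MAYA product or API.

```python
# Toy illustration of spatially indexed information: items are filed by
# a (col, row) grid location and recalled by place, not by name.

class SpatialShelf:
    """Maps 2D grid locations (e.g. regions of a wall display) to items."""

    def __init__(self):
        self._items = {}  # (col, row) -> item

    def place(self, location, item):
        """File an item at a particular spot, like dropping it on a pile."""
        self._items[location] = item

    def at(self, location):
        """Retrieve whatever lives at a remembered place (None if empty)."""
        return self._items.get(location)

    def near(self, location, radius=1):
        """Fuzzy recall: everything within `radius` grid cells of a spot,
        mimicking a glance toward 'that pile by the door'."""
        cx, cy = location
        return [item for (x, y), item in self._items.items()
                if abs(x - cx) <= radius and abs(y - cy) <= radius]

shelf = SpatialShelf()
shelf.place((0, 0), "Q3 sales report")
shelf.place((0, 1), "whiteboard sketch")
shelf.place((5, 5), "vacation photos")

print(shelf.at((0, 0)))    # prints: Q3 sales report
print(shelf.near((0, 0)))  # the report plus its spatial neighbors
```

The point of the sketch is that location does double duty: it is both the retrieval key and part of the item’s meaning, which is exactly the mental shortcut the pile-glancing office workers were exploiting.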
Look for future entries in this primer series, where we will explore other mind hacks and the human-centered design processes that MAYA employs not only to tame the complexity of the modern world, but also to turn the promise of pervasive computing into a serious differentiator for your business rather than a laboratory stunt.