Perspectives

Does “Mobile” Have a Future?

By Eric Fensterstock



The first mobile computing device was the finger. Lightweight and convenient, a set of fingers was a must-have for the early human. The ease with which these devices allowed us to do simple math caused us to overlook the lack of RAM. Fingers could even be used for signaling, and the attached arms were helpful when the audience was beyond finger range.

The next advance was the abacus, which allowed a variety of mathematical operations—even cube roots!—and a larger range of numbers. Computers were becoming more powerful, but bigger.

Computers increased even further in both size and capability when they came to be made of soylent green. Er, I mean humans. For over three hundred years, “computer” was a job description. As men fought in World War II, many of the computers that helped win the war were women skilled at math.

Computers Settle Down

After the war, computers got much larger—the size of a room, and not a small room. Following the age-old pattern, the jump in size was far surpassed by the leap in functionality. The age of immobile computing took hold.

Massive machines were now what people pictured when they thought of computers. This image was so strong that a computer that only took up a few large cabinets could be called a “minicomputer.” Sure, there were pocket slide rules, even eventually desktop calculators, but the exciting stuff happened on a sort of factory floor for intelligence.

Computer size was at the top of the roller coaster, and next came the great rush down the other side. Vacuum tubes were replaced with transistors, then integrated circuits, and eventually we got as far as systems on a chip. Moore’s law pulled more strongly than gravity.

Regaining Mobility

Computers were starting to move. Mainframes were locked in organizations’ computer rooms, but minicomputers could be assigned to specific departments, creeping closer to users. When a computer was finally able to serve one person, to sit on that user’s desk, it was such an extreme change in so short a time that the device was called a microcomputer.

While the term “micro” looks like an exaggeration to us today, the desktop computer was close to a tipping point in size. In 1975, the IBM 5100 was a 55 lb. “portable” computer. In 1981, the Osborne-1 weighed in at a relatively svelte 23.5 lbs. The tiny monitor sat inside the unit, between the floppy drives, and the keyboard was attached. After securing the keyboard, you could grab the handle and bring it to any desk you wanted.

Not long after, Epson released the HX-20. It was only 3.5 lbs and about the size of an A4 sheet of paper. If you could live with a screen that measured 20 characters by 4 rows, you could have a computer that could easily go anywhere.

From this point on, the Osborne-1’s descendants got smaller, eventually evolving into laptops, while the HX-20’s handheld descendants got smarter, evolving into the Palm Pilot and then the smartphone.

Eventually, we reached a point of convergence where it is hard to say what anything is. Apple sells an 11” laptop and Samsung sells a 12” tablet. The Nexus 6 phone is about the same size as a 6” Kindle Fire tablet.

Do We Need “Mobile”?

After being a buzzword for a while, “mobile” has been hugely hyped since the release of the iPhone. But is the term even useful anymore? What is mobile? Is a tablet that never leaves your home mobile because it uses a “mobile” operating system? What if your large-screen TV runs a version of the same OS?

Does a device need to leave your home to be mobile? Does that mean that an interactive system in a two-ton car is mobile? Or how about the computers on spacecraft?

Computers are now cheap enough to be everywhere: in thermostats, microwaves, and $35 media streamers. As prices fall further, the watch you get at the drugstore is likely to become more like a smartwatch, just as a mainstream smartwatch is likely to get much more affordable. Everything gets smarter, everything gets cheaper, and everything becomes a computer. Even clothing, thanks to Google’s Project Jacquard.

Paradoxically, enormous data centers make tiny devices possible. If data connection speeds are fast enough, you don’t need to carry much computing power on your person. The intelligence can live in hundreds of millions of square feet of data centers all over the world. You can hold a phone near your mouth and talk to it, while the interpretation of your speech is carried out in a city far away. The phone responds as though it gets you, but it’s really just the old Cyrano de Bergerac game.
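
To make that thin-device, heavy-cloud pattern concrete, here is a minimal sketch in TypeScript. The endpoint URL and the shape of the response are hypothetical placeholders, not a real service; the point is only that the device ships raw audio upstream and gets text back from a data center far away.

// Sketch of the thin-client speech pattern described above.
// The URL and response shape are assumptions for illustration only.
async function transcribeRemotely(audio: Blob): Promise<string> {
  // The device records and uploads audio; the heavy lifting
  // (speech recognition) happens in a distant data center.
  const response = await fetch("https://speech.example.com/v1/transcribe", {
    method: "POST",
    headers: { "Content-Type": "audio/wav" },
    body: audio,
  });
  if (!response.ok) {
    throw new Error(`Transcription failed: ${response.status}`);
  }
  // Assume the hypothetical service returns JSON like { text: "..." }.
  const result = (await response.json()) as { text: string };
  return result.text;
}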

As everything becomes smart and connected and as we no longer need to know where computation happens or where data is stored, the end is near for the term “mobile.” As with “color TVs,” “hi-fi sound systems,” and “solid-state electronics,” we are just going to take “mobile devices” for granted.

Services are going to follow us around, like Samantha in the movie Her. The service will naturally go to whatever device you happen to have with you. Not only does Netflix run on just about everything, but Marriott hotels will now let you use your Netflix account on their televisions.

Devices will increasingly work together to create an experience that doesn’t “live” in any single place. Even today, a jogger can use Bluetooth headphones that measure blood pressure. These can be paired with running and music apps on a phone and a user interface on a smartwatch. The running app lives on the phone, connects to GPS satellite signals, and saves the run data to the cloud, while the music app streams from a data center.

Designers of services and software are going to think less in terms of the two categories, mobile and desktop. Instead, they will create flexible systems that adapt to different devices. When the screen is big, the user interface is big and perhaps shows more than one set of information. When the screen is small, we get a simplified interface and a more focused view. When the interface is very, very small, such as a watch face, we only get “glances,” not a complex app. If the device provides the user’s location or blood pressure or other sensor data, this information will be used. If those sensors aren’t present, the software will ask questions or provide limited functionality. And devices will be able to specialize, because lower costs will let us own more of them.
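
As a rough sketch of that “one flexible system, many devices” idea, the TypeScript below picks a layout from screen size and degrades gracefully when sensors are missing. The breakpoints, type names, and capability checks are illustrative assumptions, not a standard or any particular framework’s API.

// Illustrative sketch: adapt one service to many devices.
type LayoutMode = "glance" | "focused" | "full";

interface DeviceContext {
  screenWidthPx: number;
  hasLocationSensor: boolean;
  hasHeartRateSensor: boolean;
}

function chooseLayout(ctx: DeviceContext): LayoutMode {
  if (ctx.screenWidthPx < 300) return "glance";  // watch face: a glance, not an app
  if (ctx.screenWidthPx < 800) return "focused"; // phone: simplified, single view
  return "full";                                 // tablet/desktop: multiple panes
}

function describeRun(ctx: DeviceContext): string {
  // Use sensors when present; otherwise fall back to asking the user
  // or offering limited functionality.
  const location = ctx.hasLocationSensor
    ? "route mapped from GPS"
    : "route entered manually";
  const effort = ctx.hasHeartRateSensor
    ? "effort from heart-rate data"
    : "effort estimated from pace only";
  return `${chooseLayout(ctx)} layout, ${location}, ${effort}`;
}

// The same service on a watch and on a desktop:
console.log(describeRun({ screenWidthPx: 272, hasLocationSensor: false, hasHeartRateSensor: true }));
console.log(describeRun({ screenWidthPx: 1440, hasLocationSensor: true, hasHeartRateSensor: false }));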

Before long, everything will be mobile and nobody will care. As wearables become more and more natural, computer interaction will feel more and more like body movement. Back to the days when the finger was the ultimate computer.