Wearables, also referred to as body-borne computers, are small electronic or sensor devices that are worn on the physical body—either on the bare skin or on top of clothing. They do not include computing devices that are implanted within the body (a medical domain that I expect will grow over the next decade); rather they are attached to the body.
Wearable computing is not a new concept—it has been around for more than half a century, mostly used for military purposes, assisting soldiers and aviators on the battlefield. Over time, as technology advanced and computer components increased in power while shrinking in size, the range of wearable technology applications grew, expanding into the consumer market. From healthcare, fitness, and wellness, which have already started blooming, to gaming, entertainment, music, fashion, transportation, education, finance, and enterprise, wearable technology is creating a massive new mobile market, with the power to transform the way people behave and interact with devices, the environment, and one another.
Today, wearable devices span the gamut from smart rings, bracelets, and necklaces, to smart glasses and watches, to smart gloves, socks, and t-shirts. Moreover, wearables don’t stop at humans. There are already smart collars for dogs, cats, and even cows, monitoring their activity, health, and behavior 24/7, while keeping owners connected to their four-legged friends at all times.
The wearable device market is still in its infancy, but it’s growing fast. According to IMS Research, the number of units shipped is expected to grow from 14 million in 2011 to 171 million by 2016. ABI Research forecasts much stronger penetration, with 485 million annual device shipments by 2015. BI Intelligence estimates annual wearables shipments crossing the 100 million milestone in 2014 and reaching 300 million units by 2018.
In terms of revenue, Transparency Market Research anticipates that the global wearables market, which stood at $750 million in 2012, will reach $5.8 billion in 2018 (a compound annual growth rate of 40.8 percent). According to IMS Research, the wearables market will already exceed $6 billion by 2016.
Regardless of whether the $6 billion line will be crossed in 2016 or 2018, all these market estimations point to the same conclusion: wearables are the next big wave in technology.
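Forecasts like these are easy to sanity-check: a compound annual growth rate is just the geometric growth rate between two endpoint values. A quick sketch in Python, using the $750 million (2012) and $5.8 billion (2018) endpoints quoted above (the small gap from the quoted 40.8 percent comes down to rounding of the endpoint figures):

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two endpoint values."""
    return (end_value / start_value) ** (1 / years) - 1

# $750 million in 2012 growing to $5.8 billion in 2018 spans six years
rate = cagr(0.75, 5.8, 6)
print(f"{rate:.1%}")  # roughly 41%, in line with the quoted 40.8 percent
```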
Looking at the wearables market today—in its infancy—we can see four dominant segments emerging:
Wristbands or clip-on trackers that collect activity data such as steps taken, calories burned, stairs climbed, distance traveled, and hours slept. Figure 4-1 presents a few examples in this device-rich market segment, which includes Fitbit Flex, Jawbone UP, Misfit Shine, and Nike+ FuelBand.
A variety of devices (e.g., bracelets, clip-ons, bands, and patches) that monitor physiological status, including heart rate, respiration rate, ECG, temperature, emotional stress, dehydration, glucose level, and even posture. Lumo Back, Zephyr’s BioHarness, and Nuubo’s nECG are a few examples of this promising domain (see Figure 4-2).
Smartwatches are typically wristwatches that operate in collaboration with a smartphone (through a Bluetooth connection). They offer an alternative display for some of the smartphone’s existing features in a more accessible fashion, as well as dedicated functionalities and apps. The Pebble watch and Samsung’s Galaxy Gear are two prominent examples of this category. Motorola’s Moto 360 smartwatch, launched in mid-2014, is another popular player in this category (see Figure 4-3).
The fourth market segment consists of smartglasses: eyewear devices offering a computer display projected onto the lens. Smartglasses allow for hands-free interaction for actions such as checking notifications, taking a photo, running a search, and more. In addition, smartglasses can offer an augmented reality (AR) experience while interacting with the real world. Examples include Google Glass, the Vuzix M100, and GlassUp AR (see Figure 4-4).
A distinct group within the smartglasses category is that of virtual gaming devices. Two dominant examples are the virtual reality (VR) headset Oculus Rift (acquired by Facebook in March 2014) and CastAR (see Figure 4-5). These devices offer a new type of video game experience, expanding the interaction possibilities beyond (the limited) keyboard, mouse, and handheld gamepad interfaces. The VR/AR capabilities make it possible to create an immersive, lifelike sensation and scale the game experience far beyond what a flat display can provide. The player becomes an integral part of the virtual game environment and actually needs to look up to see the top of hundred-foot jungle trees, for example, or use his hands to move objects in space to pick up a sword.
These four segments, while reflecting a market that is still evolving, serve as a foundation of inspiration for wearables user experience (UX).
Before we dive into the details of the UX (and human) factors to be considered when designing for wearables, it’s important that we step back and look at these devices as part of the broader ecosystem to which they belong. Remember that wearables are joining an increasingly device-rich world. In this world, people already own multiple connected devices—computers, smartphones, tablets, TVs, and more—which they use together in various combinations to access and consume data. This existing ecosystem of devices and the relationships between them have an immense impact on the way wearables are used and the role they play in people’s lives.
In my book Designing Multi-Device Experiences (O’Reilly), I describe how since 2007 an ecosystem of connected devices has gradually formed, beginning with the proliferation of smartphones, tablets, and the plethora of apps that truly brought these devices to life. In this ecosystem, multiple connected devices interact with one another and wirelessly share data. These interactions are shaped by the different ways in which individuals use the content and services that flow between devices, across different contexts.
As a result, whenever new devices join the ecosystem (such as wearables), they change that grid of connections by introducing new ways to connect devices, content, and people to one another. I call this phenomenon the Interaction Effect. People’s behaviors and usage patterns with their existing device(s) change depending on the availability of other (often newer) devices. This change can manifest in using the devices more, using the devices less, or using them differently, for example in conjunction with one another.
Tablets are a good example of these dynamic changes: their increasing use led to a gradual decline in the purchase and use of older media and devices (printed magazines and books, desktop computers, laptop computers, and dedicated e-readers). Simultaneously, tablets introduced a new usage pattern in conjunction with the TV, serving as a second-screen device providing an enhanced viewing experience.
In a similar manner, when you think about wearables, it’s important to consider their role as part of the existing—and future—ecosystem of devices and the ways they could impact interactions with all these devices. For example, owners of the Pebble watch get their smartphone notifications (text messages, phone calls, tweets) directly on their watch, which is accessible at a glance. In addition, with the new Mercedes-Benz Digital DriveStyle app, the Pebble watch complements the automotive experience. When outside their car, Mercedes owners can see vital information about it, such as fuel level, door-lock status, and vehicle location. When driving, the watch provides a vibratory alert for real-time hazards such as accidents, road construction, or stalled vehicles.
These examples reinforce the premise that as wearables spread into the mainstream, they will change habits and usage patterns with existing devices, as well as the relationships between devices. Certain functionalities commonly associated with smartphones (or other devices) today might be replaced, complemented, or augmented by wearables working alongside them, introducing the wearer to new capabilities and connections.
Here are two fundamental questions that you should constantly keep in mind:
Why would people prefer the wearable over devices they already own? Remember that people usually have alternative devices. In most cases, they have already formed habits with respect to using those devices. If you want them to change their behavior, the wearable experience needs to clearly win over consumers and make them want to abandon their existing device. The crucial elements are simplicity, benefit, and time. As Russell Holly accurately articulates: “Plain and simple, there are exactly zero times when it is acceptable for a task on your watch to take longer than the time needed to pull your smartphone out and complete the task.” And this doesn’t apply just to smartwatches.
Which new ecosystem connections can you create between devices to enhance wearables’ benefits for people? Which can better integrate these devices as part of the overall experience en route to their goal? For example, can the wearable device complement other devices (such as a smartphone)?
These questions will accompany us through the remainder of this chapter as we dive into the detailed set of wearables UX factors.
When designing for wearables, there are several aspects to take into account in order to ensure an effective, well-considered, scalable user experience, both in terms of the product’s industrial design and the interface design. These factors involve the actor (the person wearing the device), his or her surroundings, the device itself, the context of use, feature sets, interaction models, and any relationships with other devices.
Table 4-1 lists the main UX factors and their corresponding design options that you need to address when designing for wearables. As you look through them, keep in mind the following:
The different factors are intertwined and impact one another to different degrees. For example, wearable visibility is closely connected to design decisions about the display and interaction model: a wearable attached to a body part that is invisible to others (such as the ankle or back), and thus not immediately accessible to the wearer either, doesn’t need a dedicated display, and definitely not one with which the wearer has full interaction. In terms of the interaction model, tactile feedback is critical for the ongoing communication between the wearer and the device (more on that next).
A single wearable experience can integrate multiple design options associated with one factor. For example, a wearable device can incorporate both tracker and messenger roles, which enhance each other and/or address different contexts of use (more on that next).
Next, we will take a deep dive into these factors and discuss what each one means, the design options involved, and the affordances to consider, accompanied throughout by product examples. Together, these factors provide you with a comprehensive UX framework for designing wearables, one that accounts for all the core experience components.
The way a wearable device is designed to be worn—specifically, if it’s an accessory visible to others—has a critical impact on the balance between function and fashion in the design process. While aesthetics play a role in the desirability of almost any product, when it comes to apparel that decorates the body, attractiveness moves up on the priority list. It has long been demonstrated that the articles people wear are a form of self-expression, a way for individuals to show the world their identity, uniqueness, and personality. As such, for wearables to move beyond the early, tech-savvy adopters into the open arms of the mass market, designers of those wearables must consider fashion. In other words, they need to consider how the wearable looks, how it looks on people, and how it makes them feel when they’re wearing it. The latter also includes investigating how to personalize the wearable and make people feel unique when wearing it, even if many others wear the same (or similar) wearables.
The importance of fashion and beauty in wearable design is beginning to sink in. The consumer wearables industry is driving the convergence of tech and fashion, with an increasing number of technology companies collaborating with fashion and jewelry designers to build their products. At CES 2014, Intel Corporation announced that it was teaming up with cutting-edge fashion design house and retailer Opening Ceremony to make a bracelet that will be sold at Barneys. Similarly, Fitbit announced a collaboration with Tory Burch to design fashionable necklaces and bracelets for its activity trackers. CSR, the chip manufacturer, has already launched a slick-looking Bluetooth pendant (see Figure 4-6), which was developed in collaboration with jeweler Cellini. The device has a single customizable light for receiving notifications, and can also be configured to release perfume throughout the day. Figure 4-7 shows another example, the Netatmo June bracelet, designed in collaboration with Louis Vuitton and Camille Toupet, which measures sun exposure throughout the day. In Figure 4-7, note how the advertisement follows the spirit of the jewelry industry.
From the very early stages of the product inception, you should focus not only on the user interface design and the feature set (software side), but also put an emphasis on the wearable design itself (hardware side). This means that you’ll need to manage two core design efforts side by side: user interface (UI) design and industrial design (ID). Each effort requires dedicated attention, resources, and expertise, but at the same time they are tightly integrated and dependent on each other, functionally and visually. ID decisions in areas such as form factor, size, shape, display, ergonomics, texture, and colors directly impact the universe of possibilities (and constraints) of the UI, from layout, interaction, and flows, to art and visual language. This is especially prominent if the wearable offers an on-device display, which we discuss later in the chapter.
At the end of the day, from a consumer’s standpoint, the UI and the ID together create the overall experience. Thus, the two design groups need to work closely together, in ongoing conversation and collaboration, to create a holistic user experience that feels seamless.
When defining your Minimum Viable Product (MVP), as well as in the ongoing process of setting the product roadmap and milestones, fashion attributes should be an integral part of the prioritization process. This means that if you’re building a smartwatch, for example, you might prefer to hold off on some functional features in favor of launching with a larger variety of wristband designs that appeal to a wider range of target audiences. Pebble demonstrated this approach with its release of Pebble Steel, a higher-end version of its popular smartwatch. Although we’re not privy to complete information about its product considerations, it’s probably safe to assume that Pebble has a long list of features in the pipeline. Still, the company chose for its second major product milestone (a year after its initial launch) to keep the same feature set as the original Pebble and offer instead a new fashionable stainless-steel body design, which you can see alongside the black-matte finish in Figure 4-8.
When characterizing your target audience(s), it’s not enough to consider just the “traditional” attributes in the digital sphere, such as demographics, skills, behavior patterns, usage habits, and so on. You also need to understand your users’ attitudes and preferences in terms of fashion, accessories, and jewelry. In that respect, gender alone already plays an important role. Looking at the selection of wearables today, most are still characterized by a very masculine design: dark colors, hard corners and edges, and a heavier, sturdier look. This is a look and feel that appeals mostly to men. If you wish to attract a female audience, you’ll need to adopt a different design approach that corresponds with their fashion preferences. Better yet, go beyond the gender stereotypes and learn your specific audience’s preferences.
The wearable collects data about the wearer’s activity or physiological condition. This data can be used to monitor the user’s state as well as to encourage her to improve her fitness, movement, and other health factors. Jawbone UP is an example of such a wearable.
The wearable device, often being more readily accessible to the user than his smartphone, displays selected alerts and events from that device, such as a phone call, incoming message, or meeting reminder. The user can then decide whether to pick up the phone and act upon it or respond later.
Note that most wearables acting as “Messenger” today rely on a Bluetooth connection with the smartphone for their operation. Through this connection, they essentially mirror selected event notifications, so users are alerted more quickly. The Pebble watch functions as such a device. Another is Moto 360, which displays a variety of timely alerts and notifications to users, as shown in Figure 4-11.
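Conceptually, the Messenger role on the phone side is a filter-and-forward loop: each incoming notification is matched against the categories the user opted in to, and matching ones are pushed over the paired Bluetooth link. The sketch below is purely illustrative; the function names and notification fields are assumptions, not any vendor's actual API:

```python
# Illustrative sketch of notification mirroring; all names are hypothetical.
MIRRORED_TYPES = {"call", "sms", "calendar"}  # categories the user opted in to

def should_mirror(notification: dict) -> bool:
    """Forward only the notification categories the user selected."""
    return notification.get("type") in MIRRORED_TYPES

def mirror(notifications, send_to_watch):
    """Push matching notifications over the (already paired) Bluetooth link."""
    for n in notifications:
        if should_mirror(n):
            send_to_watch({"title": n["title"], "type": n["type"]})

sent = []
mirror(
    [{"type": "sms", "title": "Lunch?"}, {"type": "email", "title": "Weekly digest"}],
    sent.append,
)
print(sent)  # only the SMS is mirrored; the email is filtered out
```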
The wearable facilitates certain communication, media, or other activities that are already available on the smartphone (or other devices) by offering a simpler, more convenient experience. For example, capturing a video using Google Glass is much easier than with a smartphone. Instead of the user having to turn on the screen, unlock the device, launch the camera app, change to video mode, and then hold the device in front of the body to take the video, Google Glass allows the user to immediately trigger the camera (via a voice command or button click), right in the moment, to capture the video seamlessly while remaining an integral part of that experience (rather than holding a device that places a “wall” between her and the scene).
The wearable device augments the real world with information that is overlaid on the environment and its objects, potentially with the capability to interact and digitally manipulate that information. The film Minority Report is a popular culture reference for this type of AR technology. In the iconic scene shown in Figure 4-12, agent John Anderton (Tom Cruise) is manipulating—in a perfect orchestral fashion—a heads-up AR display interface at the Justice Department headquarters.
AR-equipped wearables open up a wide array of use cases that these devices can enhance and optimize (from gaming, navigation, shopping, and traveling, to communication, education, medicine, search, and more). It is clearly a promising direction toward which the industry is advancing, one which could offer a more natural, unmediated, immersive interaction with the world. I expect that AR technologies along with their implementation in consumer products will significantly mature and grow over the next 5 to 10 years, leading to a tremendous change in the way we perform daily tasks, engage with content and media, and interact with the environment (and each other).
At the same time, it’s worth noting that currently very few devices are actually able to deliver such experiences (especially when referring to the interaction layer, on top of the information overlay), let alone for broad daily consumer use cases. Recall that at the beginning of this chapter I mentioned Oculus Rift and CastAR as two smartglass devices that integrate AR/VR technology focused on gaming. Another example is Meta’s Pro AR eyewear, which is one of the pioneering devices to introduce an interactive holographic UI. As of this writing, this wearable is on the cusp of becoming available, with an anticipated price tag of $3,650 (still far beyond mass-market reach).
If you look carefully at these different roles, you’ll notice that they greatly impact other UX factors as part of the design, further stressing the interconnectedness between all these factors. For example, Facilitator and Enhancer both require the wearable to have a display—physical or projected—so that users can view the information and potentially interact with it, as well (we talk more about the types of wearable displays in the next section). This also means that the device needs to be located in an area that is easily reachable for the user (within touch, sight, and, preferably, hearing). These requirements essentially restrict you to the upper front part of the body (from the hips up). The head, arms, hands, and neck, as well as pockets are usually the most convenient locations.
A Tracker usually requires that it be placed on a specific body location and/or have direct contact with a certain body part in order to reliably record the desired data. This narrows down the location options yet still leaves room for creativity. For instance, if the device needs to be in touch with the chest, you could design it as a chest band, a necklace, or even as a smart t-shirt. The preferred route depends on the type of data you want to collect, the technology used, the target audience, and the use cases in focus.
As the wearable market and technology continue to develop, we will see the list of wearable roles enriched, both in terms of the functional and interaction possibilities within each role and additional new roles (probably more tailored to specific domains/market segments and needs).
In any case, remember that these roles are not necessarily mutually exclusive. Some wearables do choose to focus on a specific role only; consider, for example, MEMI (see Figure 4-14), an iPhone-compatible smartbracelet that serves as a messenger wearable. The bracelet uses light vibrations to notify the user of important phone calls, text messages, and calendar alerts.
Others, however, integrate multiple roles within the same device. Samsung’s Galaxy Gear smartwatch is an example of a wearable that serves as a tracker, messenger, and facilitator, all in one device. It has a pedometer that tracks step data; it is linked with the smartphone and displays notifications on the watch screen; and it facilitates actions such as calling, scheduling, and taking a voice memo by speaking to the Gear device (which is immediately accessible).
Deciding which route to take with a wearable (single role or multifunctional) goes back to the design fundamentals: your users, use cases, and the broader ecosystem of devices. As with any UX design, there’s a trade-off between simplicity and functionality; the more features you add to the product, the more complex it is to use. Therefore, make certain that you focus on your users and try to determine what they really need by looking across the entire experience map (with all its touch points) before adding more features.
Also, wearables are very small devices; thus, they are very limited in terms of display and interaction (which we discuss in the upcoming sections). Additionally, you have other devices in the ecosystem that people are using (smartphones, tablets, and so on), and these can either work in conjunction with the wearable or take care of some of the functionality altogether, thereby relieving the wearable of the burden.
Remember the discussion about the need to consider gender in fashion preferences and the current masculine-dominated wearable industry?
The MEMI smartbracelet shown in Figure 4-14 is focused on changing exactly this status quo. It is designed and branded as “Wearable Technology Made by Women for Women.” On their Kickstarter page, the founders explain, “Our friends don’t want to wear big, black, bulky tech devices. In fact, they don’t wear ‘devices,’ they wear jewelry. So, we set out to create a bracelet that is both stylish and functional” (http://kck.st/1phgyxX). The MEMI design is definitely a refreshing change in the wearables landscape, and the project has already exceeded its funding goal of $100,000.
The concept of display on-device is crucial to wearable design, both in terms of the physical design of the device and the experience it engenders.
The three core questions you should ask yourself to determine the right display treatment for your wearable are these:
Should the wearable inform the wearer of something, and how often?
What level of interaction is needed with the wearable (none, view information, browse information, add/edit information, other)?
Does the interaction need to be visual?
With these questions in mind, let’s review the range of options and their implications on the experience design.
Having no display means more industrial design flexibility in terms of the wearable’s size (specifically, it can be much smaller), shape, thickness, and overall structure. It’s also cheaper and technologically simpler to build. On the other hand, no display also means no visual interface, and thus less user interaction with the device. It doesn’t necessarily mean no interaction at all (the wearable might still have physical buttons, a touch surface, sound, vibration, or voice interaction), but having no active visual communication with the wearable limits the scope and level of user interaction with it.
Keep in mind, though, that having no display doesn’t necessarily mean the wearable has no display channel at all. This is where the power of the ecosystem comes into play: the wearable device can send the data it collects to another connected device the user owns, such as a smartphone or tablet, and the interaction takes place on that device, which offers a comfortable display.
The most common usage for wearables without display is data trackers, which measure physical and/or activity data. These wearables are often hidden—worn under clothes, or attached to them seamlessly. Here are a few examples:
A tiny movement and activity tracker designed to be clipped onto or concealed within clothing, such as a jacket sleeve (as depicted in Figure 4-15) or a sports wristband. It tracks and captures precise body movement and sends the data to a complementary iOS app for tracking and review. Furthermore, this wearable can provide haptic feedback through its vibration motors, and thus can be set to trigger motion-based notifications in the smartphone app.
A smart posture sensor that fits around the waist and vibrates to alert the user if he slouches. It works in coordination with a smartphone app, which provides visual feedback regarding the user’s posture, and tracks progress over time, as demonstrated in Figure 4-16.
In addition, given the growing role of wearables in the healthcare industry, for cases in which the wearer is not the one interacting with the data (for example, pets, babies, the elderly, or ill people), having no display on the device would probably be the preferred route. It affords greater flexibility in the wearable design. Consequently, it can be more easily customized for their specific purpose, and the data collected can be sent to a caregiver’s device. See Figure 4-17 for three examples of such wearables.
Similar to the no-display wearables, but with a little more visual feedback, are the minimal-display wearables. These devices incorporate a small LED or OLED display, which shows, on-device, selected information that is critical to the experience. This display is not interactive: it’s one-directional, presenting information for the user to view; the user cannot actively interact with it or enter any input.
Activity and health trackers currently dominate this wearables group as well, offering the wearer visual feedback on her progress. This feedback can take several forms:
A set of lights that provides a rough indication of the user’s daily activity progress. Figure 4-18 shows Fitbit Flex and Misfit Shine, two examples of this kind of minimal display. The lights illuminate automatically when the user reaches her daily goal (all lighting up festively, usually accompanied by vibration feedback). Additionally, the user can manually check her progress status by clicking a button or performing a gesture (for instance, a double-tap on the wearable surface), which turns on the corresponding number of lights based on her progress.
Similarly, you can also use a single light (turning on/off, or changing colors) to reflect different key experience states. One good example is CSR’s Bluetooth smart necklace, mentioned earlier in the chapter. It uses a smart LED that illuminates when a smartphone notification arrives. In fact, the user can customize the light to display different colors for different kinds of notifications. Figure 4-19 shows another example, the Mio LINK, a heart rate monitor that offers just a single status LED.
Mio LINK comes with a complementary smartphone app, Mio GO, which offers extended data as well as a second-screen companion during indoor workouts by re-creating landscapes and trails via video footage on the tablet screen.
The LED-based wearable UI keeps the display minimal and clean. Such a display is limited in terms of data, but it facilitates designing more fashionable, elegant-looking wearables. In other words, in the fashion-function balance, more weight is put on the design. Additional data and functionality are provided in the companion apps, similar to the no-display wearables.
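Underneath, light-based feedback like this reduces to a simple mapping from goal progress to a count of lit LEDs. A minimal sketch, assuming a five-LED band in the spirit of Fitbit Flex (the mapping is illustrative, not any product's actual firmware logic):

```python
def leds_lit(steps: int, goal: int, total_leds: int = 5) -> int:
    """Map progress toward a daily goal to a count of lit LEDs."""
    if goal <= 0:
        return 0
    progress = min(steps / goal, 1.0)  # cap at 100% once the goal is reached
    return int(progress * total_leds)

print(leds_lit(4000, 10000))   # 2 of 5 lights at 40% of the goal
print(leds_lit(12000, 10000))  # all 5 lights once the goal is exceeded
```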
An OLED display can present selected numbers to provide more concrete data about the user’s activity, such as calories burned, number of steps taken, distance walked, and so on. The activity trackers LG Lifeband and Withings Pulse are two products that use this type of display (see Figure 4-20). To keep the display as small as possible, the metrics rotate through the same display area, either automatically every few seconds or when the user performs a gesture or clicks an on-device button to switch between the numbers displayed.
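The rotating-metric pattern is essentially a circular cursor over a fixed list of readouts, advanced either by a timer tick or by a button click or gesture. A minimal sketch with illustrative metric names and values:

```python
class MetricRotator:
    """Cycle a single small display area through several metrics."""

    def __init__(self, metrics):
        self.metrics = metrics  # ordered list of (label, value) pairs
        self.index = 0

    def current(self):
        return self.metrics[self.index]

    def advance(self):
        """Called on a timer tick or an on-device button click."""
        self.index = (self.index + 1) % len(self.metrics)  # wrap around
        return self.current()

display = MetricRotator([("steps", 8412), ("kcal", 310), ("km", 6.2)])
print(display.current())  # ('steps', 8412)
print(display.advance())  # ('kcal', 310)
```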
Minimal display wearables can also combine both types of visual feedback, lights and numbers, as in the case of the Nike+ FuelBand, which is depicted in Figure 4-23.
With this combined approach, the user gets more comprehensive data about her status, accomplishments, and goal completion, which could contribute to her ongoing engagement with the device. However, it comes with a visual cost: the display becomes busier and less slick. Some of it has to do with the specific visual design applied (font size, colors, and so on). However, it is mainly tied to the amount of information that is displayed on the device.
This aesthetics-functionality trade-off emphasizes a fundamental question that wearables raise: how much information do people need to see on a wearable device versus viewing it on other ecosystem devices (a smartphone, for instance) to stay engaged with the experience?
This question doesn’t have a simple answer—especially given the novelty of this industry and that it hasn’t penetrated the mass market just yet. Also, as with most UX issues, the behavior depends on multiple factors, such as the type of wearable, the use case, the specific user group, the context, use patterns, and more. We still need a lot more user data and research to understand this aspect.
That said, when I look at the case of the combined displays shown previously, I lean toward a single indicator (preferably a simple visual cue) for overall progress. It not only establishes a cleaner interface, but also offers a much simpler flow for users to grasp, follow, and act upon. The famous premise “less is more” becomes practically sacred when it comes to wearables. Given their small size, interaction limitations, and interruption-based use pattern (more on this in a moment), keeping the UI clean, clear, and glanceable is key to their usability.
Looking ahead, I think a projected display can help minimal-display wearables address the fashion-function trade-off. Instead of the device itself having a physical screen component, it could project the information, triggered by a gesture or button click, onto an adjacent surface, whether a body part (like the back of the hand), a physical object, or even thin air. This way, a rough progress indication (using lights, for example) can be immediately accessible on-device, while concrete numbers are projected on an external surface. This still permits quick access to this data directly from the wearable while keeping the device design more aesthetic, clean, and elegant.
The third category of wearables in the Display On-Device factor is one that offers a full display. This allows for rich interaction with the device, which usually comes with a much broader feature set. Smartwatches and smartglasses are most common in this category, with an important distinction between them:
Smartwatches have an actual physical screen on-device with which users interact.
Smartglasses use a projected display that emulates a screen, but there is no actual physical one on-device.
Still, these devices share some key UX design challenges, mainly the small display size and mix of interaction patterns.
Both smartwatches and smartglasses today offer a very small display area to present and interact with information and actions. For example, Samsung’s Galaxy Gear screen size is 1.6” with 320 x 320 pixel resolution; Sony’s SmartWatch offers a 1.6” screen, too, but with a resolution of 220 x 176 pixels. Google Glass offers a display resolution of 640 x 360 pixels, which is the equivalent of a 25” screen viewed from eight feet away.
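To put these figures in perspective, pixel density follows directly from resolution and diagonal size. A quick sketch, using the device specs just cited:

```python
import math

def pixels_per_inch(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density (PPI) computed from resolution and diagonal screen size."""
    diagonal_px = math.hypot(width_px, height_px)
    return diagonal_px / diagonal_in

# Samsung Galaxy Gear: 1.6", 320 x 320 pixels -> roughly 283 PPI
gear_ppi = pixels_per_inch(320, 320, 1.6)

# Sony SmartWatch: 1.6", 220 x 176 pixels -> roughly 176 PPI
sony_ppi = pixels_per_inch(220, 176, 1.6)
```

Densities in this range are comparable to modern smartphones, but the absolute display area is tiny, which is why the pixel budget, not sharpness, is the binding constraint.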
From a UX perspective, this means the design needs to be very sharp—clear contrast, stripped down to the core essence of information—and to rely on large visual elements that are easy to scan and process at a glance. In that respect, surprisingly enough, the design principles for full-display wearables have a lot more in common with designing for TV than for devices such as smartphones or tablets. Although TV screens are significantly larger, the information is consumed and interacted with from a distance, and thus requires a simplified design, as demonstrated in Figure 4-24.
If the wearable display is not limited to information consumption but also allows touch interaction (as with smartwatches), the screen layout needs to accommodate the size of a human finger. Contrary to smartphones, for which interaction is often done using the thumb, when it comes to smartwatches, the index finger is the one most people use. The thumb is often needed to use physical keys on-device, or to help stabilize the device while using the index finger to press keys, as illustrated in Figure 4-25.
Looking ahead, comparing the two display types used by smartwatches and smartglasses today (physical screen and projected display, respectively), the latter seems to have better scaling prospects. In fact, there are already several prototypes for smartglasses that offer a much bigger display area. Still, when considering the display size, it’s important to keep in mind—especially with smartglasses—that bigger displays mean masking a bigger part of the visual field, as the displays are overlaid. We’ll discuss this more in the section Separate versus integrated visual field display.
There are multiple input methods that you can use (and are often needed, partly due to the limitations of the display size just described) to establish a comprehensive interaction model on these devices. These include voice, touch, and physical keys.
These channels are all applicable ways to interact with these wearables. In fact, in most cases of full interactive displays—especially as the feature set and available apps expand (along with their respective use cases and contexts of use)—no one method can cover the entire interaction spectrum required. Finding a way to take advantage of the strengths of each method while establishing a clear structure around the interaction model throughout the system is a challenging task, as discussed in the section “Interaction Model.”
Smartwatches encompass a physical display that can easily support direct manipulation on-screen by using touch (similar to the familiar interaction model on smartphones and tablets). The majority of smartglasses, however, cannot offer a parallel experience at this point. Users cannot simply reach out and interact with the projected display; they need to rely on indirect manipulation using separate input methods, such as an external touchpad, physical keys, or voice commands. This makes the interaction model somewhat more challenging for users, because they need to make the cognitive leap between what they see (which seems within reach) and the actual input methods they can use to interact with the display. It requires building and forming habits around a set of logical connections between the display and the available means of manipulating it. In our current digital world, where a vast portion of daily interaction with devices is direct (touch-based smartphones, tablets, media players, kiosks, and so on), getting users to learn a new interaction model that relies on indirect manipulation introduces a certain stumbling block. As a result, investing design resources in the onboarding experience, as well as ongoing in-product education, is very important to help people ramp up quickly.
Interacting with smartwatches—which are worn on the wrist and are therefore out of the main visual field—is essentially a separate, independent experience. The user’s attention needs to actively turn away from the direction he is looking to focus instead on the smartwatch screen (which receives the full attention for the interaction duration). When using smartglasses, however, the display is integrated into the main field of vision. Wherever the person’s attention is and in whichever direction he is looking, the display is right there, overlaid on top of the visual field; it cannot be separated from it. The user’s attention inevitably spans both—an additional UX challenge when designing for these devices. Furthermore, at this point in time (and probably for the next few years), most smartglasses don’t provide a seamless AR integration with the environment; rather, they project a small screen-like display within the field of vision. As a result, this virtual screen covers a portion of the background information. Finding the sweet spot where the overlaid display is integrated effectively in the visual field, visible enough to the user when needed but not in his way, requires careful handling (and continuous testing) in terms of display location, size, shape, colors, opacity, and so on. Currently, different smartglass providers take different approaches along these dimensions. Additional testing with larger populations is required to determine the optimal settings for such wearables.
With smartwatches, users can control—at the least—the angle of the screen and how close they are to it; therefore, they can increase legibility and facilitate usage.
With the displays for smartglasses, until they can be digitally manipulated (for example, the ability to zoom in/out), users are constrained to a fixed screen (in terms of size, angle, and location). This further reinforces the need, discussed earlier, to keep the design very simple, clean, and focused on the essence. As you add more visual elements to the display, you will face a more pressing need to use smaller font sizes, add colors/shades, decrease spacing, and so on to accommodate all the elements and establish visual prioritization between them. This in turn increases the cognitive load on users, cluttering the display and harming legibility.
For the most part, users’ interaction with wearables is based on microinteractions. Microinteractions are defined as contained product moments that revolve around a single use case—they have one main task. Every time a user answers a call, changes a setting, replies to a message, posts a comment, or checks the weather, she engages with a microinteraction.
Microinteractions can take three forms of operation:
Manual: Initiated by the user; for example, double-tapping the Fitbit bracelet to get information on goal completion status.
Semi-automatic: The user is alerted by the system to take an action. For example, consider Lumo Back vibrating when the wearer is slouching, signaling him to straighten up. These alerts can be triggered by manual user configuration or by contextual signals (for instance, location, proximity, the presence of other people/devices, body physiological metrics, and more).
Automatic: Performed by the system, as when the Nike+ FuelBand synchronizes activity data automatically to Nike+.
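These three forms of operation can be sketched as a simple model. The names and structure below are purely illustrative, not any vendor’s API:

```python
from enum import Enum
from typing import Callable

class Trigger(Enum):
    MANUAL = "user-initiated"        # e.g., double-tap to check goal status
    SEMI_AUTOMATIC = "system-alert"  # e.g., a slouching alert prompting action
    AUTOMATIC = "system-performed"   # e.g., background activity-data sync

class Microinteraction:
    """One contained product moment revolving around a single main task."""
    def __init__(self, name: str, trigger: Trigger, task: Callable[[], str]):
        self.name = name
        self.trigger = trigger
        self.task = task

    def run(self) -> str:
        return self.task()

# Automatic: performed by the system without user involvement.
sync = Microinteraction("sync_activity", Trigger.AUTOMATIC,
                        lambda: "activity data uploaded")
```

Tagging each microinteraction with its trigger type makes it easier to audit how much of the experience is interruption-driven versus user-driven.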
When it comes to wearables, all three forms of operation come into play, though to different degrees based on the wearable’s role and the context of use. Trackers, for example, rely heavily on system automation to synchronize the data collected. In addition, many of them also incorporate semi-automatic operation by displaying notifications to users (for example, achieving the daily goal or running out of battery). Messengers work almost solely in semi-automatic mode, focusing on alerting the user whenever an event takes place on the smartphone (for example, an incoming call or message), leaving it to the user to decide whether to take action. Facilitators and enhancers, which support richer interactions (and usually offer a richer feature set), incorporate all three.
Still, the largest share of user interaction is generated semi-automatically, as a result of interruptions triggered by the wearable device or on behalf of the smartphone. This semi-automatic dominance shouldn’t come as a surprise, though. First, wearables are meant to be unobtrusive, mostly “sitting there” (hopefully looking pretty) and keeping out of the way when not needed. Second, most wearables rely on just delivering information to the users, with minimal input, if any. Third, given the wearable constraints in terms of display size (and often interaction, too), the engagement pattern with them is mostly quick, focused microinteractions for a specific purpose, on a need basis.
From a UX perspective, this use pattern further emphasizes the importance of “less is more”:
The repeated short interactions, along with the limited attention span allocated to them, require that a special emphasis be placed on simple glanceable UI and fast response times.
Learning is a function of the time spent on a task and the time needed to complete it. Short, scattered interactions, like those that take place with wearables, make it harder to promote learning compared to longer, more continuous engagements (as often happen on the desktop, for example). This means that you need to keep the core UX building blocks—mainly, navigation structure, information architecture, and interaction model—as simple and consistent as possible. Deep navigation hierarchies or diversified interaction patterns across the product will make it harder to use and to form habits.
The wearable experience needs to be focused on what the device does absolutely best, while always considering the broader ecosystem. It’s important to crystalize the wearable role in this bigger constellation of devices, and define the relationship between the different devices. Part of designing for wearables is also understanding what the wearable device should handle versus the parts that other devices should take on, together providing an optimized experience for the user.
When going into the detailed interaction design for wearables, you need to consider three relevant human senses: sight, hearing, and touch.
These senses can be communicated with through four main interaction channels (multimodal interaction): visual, audio, tactile, and physical keys.
These interaction channels serve two main information flows: output (user feedback) and input (data entry). Let’s explore each one through the lens of the main interaction channels.
As discussed throughout this chapter, most wearable devices today focus on the output information flow—providing feedback to the user (based on data these devices collect seamlessly).
Given that wearable devices are by definition worn on the body and thus come in direct (or close) contact with the skin, tactile feedback becomes a primary interaction channel.
In some cases, tactile feedback is even more important than visual. Why?
Let’s analyze this by going back to the wearable display types, which handle the visual feedback:
Wearables that have no display cannot rely on visual feedback at all. Consequently, tactile feedback becomes the main communication channel.
Audio (whether as a sound effect or voice output) is also an option that might be an effective feedback channel in certain contexts. However, it carries several caveats:
It can be heard publicly, and thus takes away the advantage of discreetness many wearables provide.
To keep audio feedback to a minimum (so that it stays private), the device needs to be located in close proximity to the ears. This puts a significant constraint on device placement.
Incorporating sound requires adding a way (setting/button/gesture) to turn it on and off, for cases in which it might disturb the user or her surroundings (work meetings, cinema, going to sleep, and so on).
Furthermore, between the tactile and audio channels, the tactile one (vibration) is often much more salient than sound, which raises the question of whether an additional feedback channel is even needed. Be aware that both vibration and sound are interruptive—they immediately grab the user’s attention and disrupt what she’s doing. For the sake of simplicity and delight, if you can have a single feedback channel, one that is subtle yet effective, you should prefer it and avoid any added disruptions (especially because with some wearables, the frequency of alerts can become quite high).
Wearables that have minimal display should definitely utilize the visual feedback channel. However, the visual channel should be accompanied by tactile feedback, which is more immediate and noticeable. This is especially important when the feedback is the result of a system-triggered alert and is not initiated by the user. Remember that in many cases, the wearable is peripheral to or completely out of the user’s visual field; therefore, the feedback could easily be missed without actively drawing the user’s attention to it.
Wearables that have full interactive displays have less clear-cut guidelines with regard to the feedback channels and require closer analysis on a per-case basis. In smartglasses, for example, the visual display is obviously the primary feedback channel, as it’s integrated into the user’s field of vision. In this case, there is no real need to add yet another feedback channel in the form of vibration. Also, the head area—and especially around the eyes—is not the best area for tactile feedback. In addition, because smartglasses are placed close to the ears, the audio channel becomes an effective channel, as well. You can use this channel for sound effects indicating certain events as well as for voice output (reading the information presented on the screen, for example).
With smartwatches, however, both visual and tactile feedback are important for information output. As with minimal-display wearables, the device is generally away from the visual field, and although a visual effect on a full-screen display draws more attention than a minimal display does, it can still easily be missed.
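The feedback guidelines above can be distilled into a rough decision rule. This sketch is a heuristic summary, not a formal specification; the display labels and channel names are assumptions for illustration:

```python
def feedback_channels(display: str, near_ears: bool) -> list:
    """Heuristic output-channel choice per display type and device placement.
    display is one of "none", "minimal", or "full" (illustrative labels)."""
    if display == "none":
        # No visual channel available; tactile becomes the main one.
        channels = ["tactile"]
        if near_ears:
            channels.append("audio")   # optional, and must be easy to silence
    elif display == "minimal":
        # Visual cues plus tactile, so system-triggered alerts aren't missed.
        channels = ["visual", "tactile"]
    elif near_ears:
        # Full display worn near the ears (e.g., smartglasses): visual is
        # primary and audio is effective; vibration adds little here.
        channels = ["visual", "audio"]
    else:
        # Full display away from the visual field (e.g., smartwatches).
        channels = ["visual", "tactile"]
    return channels
```

A rule like this is only a starting point; per-case analysis and testing, as noted above, should override it.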
When it comes to a full interactive display, which offers not only information display but also input entry by the user, the interaction model becomes more complex—especially as the feature set and interaction spectrum expands. Still, there are several UX guidelines to keep in mind:
Voice is a great input channel for users to express their intent; it’s flexible, based on natural language, and easy to do. However, there are certain considerations to take into account:
If you allow voice input in one part of the interface, users will naturally try to continue using it across the system. This means that you should strive to support voice interaction across the board.
In case full voice support is not possible, and depending on the importance of voice interaction in your experience, you can still provide partial voice support (rather than none at all). In that case, identify first the user flows that would benefit most from speech input. Then, try to have voice support integrated throughout these flows, from start to end rather than only in a few screens within a user flow. Also, it’s highly recommended to provide a clear UI indication as to where speech input is supported, to establish predictability and confidence among users.
Preferably, the voice interaction should be available for use at any time (that is, the device is in constant listening mode), without requiring the user to first turn on the screen or unlock the device.
Remember, voice interaction cannot fit all use contexts. Due to its “loud” nature, publicly available to the surroundings, it has to be backed up with a way to operate the UI silently (for example, using the visual display).
Although voice recognition technology has improved immensely, it’s still experiencing significant challenges in languages other than English or when used by people with heavy foreign accents. This means that users who are not native English speakers (or non-English speakers) might face problems using that input method. This is yet another reason to provide an alternative interaction channel.
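One way to sketch the “partial voice support with a silent fallback” guideline is as a simple routing function. The flow names here are hypothetical:

```python
# Hypothetical set of user flows where speech input is supported end to end.
VOICE_ENABLED_FLOWS = {"send_message", "get_directions", "set_reminder"}

def choose_input_channel(flow: str, context_allows_voice: bool) -> str:
    """Route a user flow to voice input when supported, with a silent fallback.

    context_allows_voice covers cases where speaking aloud is inappropriate
    (meetings, cinema) or recognition is unreliable (noise, heavy accents).
    """
    if context_allows_voice and flow in VOICE_ENABLED_FLOWS:
        return "voice"
    # Unsupported flows or "quiet" contexts fall back to the visual/touch UI.
    return "touch"
```

The point of keeping the supported-flows set explicit is that the UI can then surface a clear indication of where speech input works, as recommended above.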
Popular actions that users are expected to use often would benefit greatly from quick access. You can provide this access in a number of ways:
Dedicated gesture (similarly to the “shake” gesture on Moto X, which immediately launches the camera app)
Dedicated physical key (such as the back button on the Pebble watch)
Shortcut(s) in the home screen
Direct voice command
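A quick-access scheme like this amounts to a mapping from triggers to actions. The channel labels and action names below are illustrative placeholders, not a real API:

```python
from typing import Optional

# Illustrative registry mapping quick-access triggers to popular actions.
QUICK_ACCESS = {
    ("gesture", "shake"): "launch_camera",        # cf. the Moto X shake gesture
    ("key", "back"): "go_back",                   # cf. the Pebble back button
    ("home_shortcut", "weather"): "open_weather",
    ("voice", "take a note"): "open_notes",
}

def dispatch(channel: str, signal: str) -> Optional[str]:
    """Resolve a quick-access trigger to its action, if one is registered."""
    return QUICK_ACCESS.get((channel, signal))
```

Keeping all quick-access bindings in one registry makes it easy to check for conflicts and to mirror the same actions in the visual interface.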
You can use physical keys methodologically in two main ways:
Main navigation: Keys are used for ongoing operation of the device, such as navigating up/down in menus and lists, selecting items, going back, and skipping to home. This type of physical-key usage is much more suited to a smartwatch than to smartglasses.
Peripheral/dedicated actions: Keys are used for very specific actions, such as launching specific apps, power on/off, and mute. This model can fit both smartwatches and smartglasses.
When using physical keys for dedicated actions, you may need to offer access to some of the actions via the visual interface as well, in order to prevent friction. An example could be when a physical key provides quick access to an app on the device, or to contacting a specific person. In such a case, you should also include that app (or person) as part of the dedicated UI area (apps section or contacts list), so users can easily get to them when their interaction is focused on the visual interface.
Note that given the small size of these wearable devices (and until projected displays mature), trying to incorporate physical or virtual keys for typing in text is not recommended. This action will be extremely cumbersome, time consuming, and error prone. You should probably reconsider offering text-heavy features on those devices in the first place, and if there’s a pressing need for text entry, using voice would be a better approach.
If a touchscreen is used for main navigation (as in smartwatches), be sure you accommodate the “fat finger” syndrome, allowing for a big enough touch area and keeping a simple and clean display. If you use a touchpad that requires indirect manipulation (as in smartglasses, for example):
Try to stay loyal to the familiar swiping patterns and directions people are already accustomed to from using other devices like smartphones, tablets, and laptops.
Be sure to align the visual interface transitions and animations as much as possible with the swiping movements made on the touchpad, to reinforce the logical connections between them.
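The “fat finger” constraint can be made concrete with a bit of arithmetic, converting a comfortable fingertip size into pixels for a given screen density. The 9 mm figure below is a commonly cited rule of thumb, not a hard standard:

```python
import math

def min_touch_target_px(ppi: float, min_size_mm: float = 9.0) -> int:
    """Minimum touch-target side in pixels for a given pixel density (PPI).
    9 mm is a commonly cited comfortable fingertip target; treat it as an
    assumption to validate with your own testing."""
    inches = min_size_mm / 25.4   # millimeters to inches
    return math.ceil(inches * ppi)

# At roughly 283 PPI (a 1.6", 320 x 320 screen), a 9 mm target needs about
# 101 px, so only about three such targets fit across a 320 px row.
gear_target = min_touch_target_px(283)
```

Running this kind of calculation early shows just how few touch targets a smartwatch row can realistically hold, which argues again for stripped-down layouts.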
Sound can be a useful feedback channel to provide reassurance to users during their interaction with the device interface (for example, when navigating through screens, making a selection, confirming an action, and so on).
This channel is beneficial when the wearable is placed close to the ears so that the sound can be kept subtle and nonintrusive.
When the wearable is far from the ears—thus sound feedback becomes less effective—you can replace it with tactile feedback (such as subtle vibration).
If your product is open for third-party integrations (external apps that can be developed for the wearable), provide a clear specification of the UI framework, rules, and principles so that developers can more easily follow that UI and help establish a consistent experience.
As discussed throughout this chapter, wearables are part of a broader ecosystem of connected devices. As such, when designing for wearables, you have to think about them in the context of the bigger constellation, along the user journey. Wearables are not small smartphones, nor can they replace them. They complement them in different ways, along a variety of contexts and functionalities:
In some cases, wearables provide a superior experience compared to smartphones due to the sensory information they can track, being more readily accessible, or allowing hands-free operation. Getting directions on smartglasses or tracking body movement using a smart shirt are a few examples. In other cases, however, the smartphone is still more convenient to use due to its bigger screen size and embedded keyboard (for example, when sending a text message or making a call).
It’s not always about one device’s superiority over the other but rather their joint operation providing an overall better experience to the user. One set of important use cases here is using the wearable as a personal identifier. Given the highly personal nature of wearables, and their physical attachment to the body, these devices can be used as “identity devices,” allowing the wearer to seamlessly authenticate to and activate other devices. One good example is Walt Disney’s MagicBand, described earlier in the chapter. Another example is Moto X’s trusted-device feature. A user can set a connected Bluetooth device (such as a Pebble watch) as a “trusted device” for the phone. From that point on, when the phone is connected to that trusted device, the lock screen is bypassed. Pressing the power button takes the user directly to the home screen, as shown in Figure 4-28. Given the number of times a day people go through their lock screen having to enter their password/PIN to access the phone’s content, this feature significantly streamlines the experience.
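The trusted-device flow described above boils down to a simple membership check. This sketch uses placeholder device names and labels, not any platform’s actual API:

```python
def unlock_flow(connected, trusted) -> str:
    """Sketch of the trusted-device pattern: a paired wearable acts as an
    identity token that lets the phone bypass its lock screen.
    connected/trusted are sets of device identifiers (placeholders)."""
    if set(connected) & set(trusted):
        return "home_screen"   # lock screen bypassed
    return "lock_screen"       # PIN/password still required
```

The real value shows up in aggregate: multiplied across the dozens of daily unlocks, removing the PIN step is a significant streamlining of the experience.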
The wearable can sometimes start the user flow, which then continues onto the smartphone. A good example is the Pebble watch or MEMI bracelet, which alerts the user about important events immediately, even if the phone is in her bag or silent. The user can then act upon these alerts using the smartphone (e.g., answering a call or replying to a message). Another example is passing on content from one device to the other (e.g., starting a hangout on smartglasses during the taxi ride to work and then continuing it on the desktop when getting to the office).
Wearables can also control and/or be controlled by the smartphone (or other devices). Going back to the wearables as personal identifiers, they can serve as “keys” to other devices, like unlocking the car, opening the home doors, or controlling the alarm system. The Nymi bracelet, which uses your unique cardiac rhythm to verify your identity, is an example of such a wearable device (see Figure 4-29).
In addition, fully interactive display wearables can also take on the remote-control capabilities that smartphones offer today, perhaps controlling the thermostat or TV. Their greater accessibility can be beneficial in such use cases.
The challenge—and opportunity—is in identifying these different cases, and designing the wearable user experience accordingly, not replicating the smartphone (or any other device), but really understanding a wearable’s role in the broader ecosystem, and as part of people’s daily lives, as they go through their tasks.
Wearable computing has expanded into the consumer market and is growing fast. Four market segments gained early adoption: sports and fitness trackers, health and medical sensors, smartwatches, and smartglasses.
Wearables are joining an already device-rich world in which people own multiple connected devices and use them together. This existing ecosystem of devices has an immense impact on the way wearables are used, and the role they play in people’s lives.
When designing for wearables, there are four main UX and human factors to consider: visibility, role, display on-device, and interaction model. These factors are closely tied to each other and impact one another.
Visibility: The way a wearable device is designed to be worn—specifically, if it’s visible to others—demands special attention to the balance between function and fashion. Aesthetics play a critical role in advancing to the next level of mass adoption.
Role: In the current wearables landscape, wearables take one or more of the following roles: tracker, messenger, facilitator, and enhancer. As the industry matures, we will see more roles added, especially as AR (and the ability to project displays) advances.
Display on-device: Display on-device is crucial to wearable design, ranging between no display on-device, minimal OLED-based display, and fully interactive display.
Each option carries different implications on the experience design, and should be determined based on the following questions:
Should the wearable inform the wearer of something, and how often?
What level of interaction is needed with the wearable? Does it need to be visual?
Can the smartphone (or other device) be used as an extension of the wearable, together providing an engaging experience for the user?
Interaction model: For the most part, users’ interactions with wearables are based on microinteractions, which can be manual, semi-automatic, or fully automatic. In addition, the interaction model involves two dimensions:
Multimodal interaction via four main channels: visual, audio, tactile, and physical keys. These channels serve two information flows: output and input.
Multi-device interaction: Wearables are part of a broader ecosystem of connected devices, and thus they need to be considered in the bigger constellation, along a variety of contexts and functionalities.
“Less is more” is a key guideline when designing for wearables. Focusing on the essence and delivering it through a simple, glanceable UI is critical to the experience design.
 The first wearable computer was invented in 1961 to predict winning roulette numbers.
 Second screen refers to the use of a computing device (usually a smartphone or a tablet) to provide an enhanced viewing experience for content on another device (commonly a TV). This enhanced experience often includes complementary information about the TV program currently being viewed as well as interactive and social features, such as voting, sharing moments, answering questions, and more.
 With special emphasis on the Internet of Things, predicted to connect around 40 billion devices and things by 2020 (source: http://onforb.es/1CJ2ZiS). These “things” can be any physical objects, from home appliances and medical devices, to roads and bridges, to toasters, coffee machines, and milk cartons—even microchipped pets and people.
 In certain cases, the person wearing the device doesn’t necessarily control it or even interact with it (for example, a parent controlling a wearable on a child, or a caregiver tracking wearable data on an elderly patient). In this case, there are actually two actors—the one wearing the device and the person controlling it.
 As technology advances, I expect we will see more independent wearables, which don’t require connection to another device for their activity. Along with that, alerts (and other actions) could go beyond mirroring notifications, and get triggered based on various contextual signals (like getting to a certain location, being in proximity to certain people or devices, and others).
 One-thumb operation in smartphones is feasible and convenient as long as the device size affords enough room for stable grip and free thumb movement using one hand. With larger smartphones (2.5-inch width and larger), it’s becoming harder to hold and operate them steadily with one hand only. In these cases, the typical grip is one hand holding the device (usually the nondominant hand) and the index finger of the other hand interacting with the touch interface.
 A potential future direction for enabling expanded display in physical screens is a newly discovered material that allows electronics to stretch. You can read more about this at http://mashable.com/2012/06/28/stretchable-electronics/.
 One example is Meta’s Pro glasses, which offer a display area 15 times larger than Google Glass.
 MetaPro AR Smartglass is the first to introduce a holographic interface, allowing spatial control using the fingers. This is an important step in advancing us towards making the Minority Report interaction model a reality.
 This behavior pattern might change when fully AR-equipped smartglasses/lenses become widespread. In the future, when people can digitally manipulate the display, the interaction might become a longer one, focusing the user’s entire attention on completing a more complex task.
 It would be helpful to establish an intelligent context-aware mechanism for automatically changing the sound state. For example, consider turning the sound off when the device detects the user is in the cinema based on data signals such as location, time, motion, calendar, ticket reservations, and so on. But even then, you would probably still need to provide a manual option as well to give users control when the autosound mechanism doesn’t capture the context.
 In case the display includes expanded information (e.g., concrete numbers rather than just light-based feedback), a physical button might be needed to allow easy paging between the different metrics. Alternatively, if the wearable is based on a touchpad, this action could also be implemented as a gesture (a tap, double-tap, or long press). The latter, though, is less discoverable.
 For more information about types of relationships between multiple devices (for instance, Consistent, Complementary, and Continuous), see my book Designing Multi-Device Experiences (O’Reilly, http://bit.ly/design_multidevice_exp).