User Interfaces – One size doesn’t fit all

This is the first in what I hope will be a series of blog posts about user interfaces: where we are, where we’re going, and where we’re likely not going.

Yesterday, as I was pondering this blog post, I thought about how far we’ve come with user interfaces. Today, PC users often point to the iPad as not being “ready for business”, yet the same thing happened when the PC poked its way into the world of typewriters and mainframes/minicomputers, and surely happened when the typewriter itself first came on the scene in the 1800s.

What we call “the office” today has morphed time and time again over the last 150 or so years, as new technology arrived and changed how we work and how we approach business problems (and computing at home).

Through almost the entirety of the 20th century, office devices were driven by a keyboard, eventually growing an appendage (the mouse), replacing paper with a digital display (the CRT), and replacing the single-document interface of the typewriter with overlapping windows, first on the Mac, then in Windows (setting aside the Xerox Star and others that never succeeded in the market).

Over the last 30 or so years, several companies have repeatedly attempted to make digital ink (a stylus plus text recognition) or voice into first-class input mechanisms. I’ll likely discuss in a future piece why neither has really gone anywhere, and why I doubt they will be adopted – or at least not to the level those companies might have hoped for.

As we look at cell phones, and then at smartphones before the iPhone arrived (again, setting aside the Newton – a device I owned, yet would still call a failure), the simple keyboard/display/pointer metaphor was pushed to the limit. Windows CE-based Pocket PCs and RIM BlackBerry devices both continued the trend – the former with a stylus-based pointer, the latter with a trackball. One key device that did something out of the ordinary was the Palm organizer, with its own unique method of text input. I contend that Palm devices saw a huge adoption curve because of what they could do – but they stalled and never reached mainstream use because they required so much mental reprogramming to take advantage of. I know that’s why I never bought one – it reminded me too much of my Newton, which I had high hopes for, but whose text recognition was so horrible that you could never hope to annotate in any significant volume and use the result later.

With almost 30 years of common mouse-driven computing behind us, we arrive in 2012. Recently, I’ve been watching what Microsoft has been doing with Windows Phone 7, then Xbox, and soon Windows 8 – the great push for the “Metro” design language across every platform they offer. I’m not yet certain I’m a fan of either Metro or “Metro everywhere”. But that’s the topic of my next post, so I’ll stay on track.

The iPhone’s arrival 5 years ago changed the way that many of us interact with technology. Instead of multiple windows vying for attention, and a pointing device and keyboard being required to complete tasks, our finger(s) became the implement, and the software the mechanism, for getting things done on devices – and a single-application interface (harkening back to the typewriter, or pen and paper, in some senses) became the approach Apple enforced. This was likely done for many reasons: to simplify the interface and focus a user on a single task at a time, and to enforce a mechanism where power could be conserved by shutting down all non-critical tasks, thereby making the most of the limited ARM processor and battery capacity available.

This stands in stark contrast to even today’s Mac, where overlapping windows are still very much the norm, despite Lion delivering a framework for full-screen applications driven primarily through gestures – but not on-screen gestures. No, the Mac does not support primary gesturing directly on the screen (nor do I hope it ever does); it solely supports trackpad or Magic Mouse-based secondary gestures. The terms primary gesturing and secondary gesturing are not common, but they are terms I’ve adopted for this discussion: primary gesturing means the user gestures on the surface displaying the content itself, while secondary gesturing means input arrives from a separate surface such as a trackpad or pseudo-trackpad, because the screen sits at a distance from the user – as is the case with the Mac and Apple TV, and, as I will contend in my next post, with Windows 8.
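
To make the distinction concrete, here’s a minimal sketch in Swift (the types here are mine and purely illustrative – not any Apple API). A primary gesture arrives already in the content’s screen coordinates; a secondary gesture arrives as deltas from a separate surface, and must be anchored to the screen by an on-screen proxy such as a cursor or a selection highlight.

```swift
// Illustrative types only; not an Apple API.
struct ScreenPoint { var x: Double; var y: Double }

// A primary gesture happens on the surface displaying the content,
// so each touch already carries its own screen location.
struct PrimaryGesture {
    let touches: [ScreenPoint]   // e.g. two fingers pinching on an iPad
}

// A secondary gesture happens on a separate surface (trackpad,
// Magic Mouse, iOS remote app), so it only yields deltas.
struct SecondaryGesture {
    let deltaX: Double
    let deltaY: Double
}

// Something on screen -- a cursor, or a selection highlight --
// must anchor the indirect input to a location.
struct Cursor {
    var position: ScreenPoint

    mutating func apply(_ gesture: SecondaryGesture) {
        position.x += gesture.deltaX
        position.y += gesture.deltaY
    }
}

// Usage: an indirect swipe moves the proxy; a direct touch needs none.
var cursor = Cursor(position: ScreenPoint(x: 0, y: 0))
cursor.apply(SecondaryGesture(deltaX: 12, deltaY: -4))
```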

Consider the table below, where I’ve collected some of the user interface constraints across the four principal Apple user interface paradigms. Specifically, note the default orientation of each device, the primary method(s) of input, the typical distance from the user, and the use of gestures.

| Apple device | Mac | iPhone | iPad | Apple TV |
| --- | --- | --- | --- | --- |
| Default orientation | Landscape | Portrait | Portrait | Landscape |
| Alternative orientation | Not often | App only | App and OS | No |
| Displays | 1 or more | 1 | 1 | 1 |
| Typical distance from user | 1–2′ | 6–12″ | 10–15″ | 6–12′ |
| Multi-window layout | Yes | No | No | No |
| Full-screen apps | Available | Only | Only | Only |
| Text input device | Physical keyboard | Finger | Finger | Remote/iOS device |
| Text input type | Direct | On-screen keyboard | On-screen keyboard | On-screen keyboard |
| Pointing device | Mouse/trackpad | Finger | Finger | Remote/iOS device |
| Pointer type | Cursor/select | Touch | Touch | Cursor/select |
| Gesture input | Trackpad or Magic Mouse | Direct | Direct | None |
| Gestures | OS and app | Apps | OS and app | None |
| Max digits/gesture | 4 | 2 | 5 | 0 |
| Max display size (diagonal) | 27″ | 3.5″ | 9.7″ | Depends on HDTV |
| App launch | Direct, Dock, off-screen launcher | Primary shell | Primary shell | Primary shell |
| Return to shell | Gesture/keyboard | Home button | Gestures/Home button | Back button |
| App model | Open/App Store | App Store only | App Store only | Apple proprietary |
| App Store | Yes | Yes | Yes | No |
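
For those who prefer code to tables, the same constraints can be written down as data. This is a minimal sketch in Swift; the types and values simply restate the table above and are my own, not anything from an Apple SDK.

```swift
// The table above, restated as data. These names and values are mine,
// not anything from an Apple SDK.
enum Platform { case mac, iPhone, iPad, appleTV }
enum Orientation { case landscape, portrait }
enum GestureSurface { case direct, indirect, unsupported }

struct InterfaceTraits {
    let defaultOrientation: Orientation
    let gestureSurface: GestureSurface
    let maxDigitsPerGesture: Int
    let multiWindow: Bool
}

func traits(for platform: Platform) -> InterfaceTraits {
    switch platform {
    case .mac:     return InterfaceTraits(defaultOrientation: .landscape,
                                          gestureSurface: .indirect,
                                          maxDigitsPerGesture: 4,
                                          multiWindow: true)
    case .iPhone:  return InterfaceTraits(defaultOrientation: .portrait,
                                          gestureSurface: .direct,
                                          maxDigitsPerGesture: 2,
                                          multiWindow: false)
    case .iPad:    return InterfaceTraits(defaultOrientation: .portrait,
                                          gestureSurface: .direct,
                                          maxDigitsPerGesture: 5,
                                          multiWindow: false)
    case .appleTV: return InterfaceTraits(defaultOrientation: .landscape,
                                          gestureSurface: .unsupported,
                                          maxDigitsPerGesture: 0,
                                          multiWindow: false)
    }
}
```

With the constraints in data form, the point of the table stands out: the iPad is the only platform that combines direct gesturing with a five-digit surface.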

We’ve seen Apple slowly, gently expand gestures from the original two-finger scrolling gesture on Mac OS X and early pinch gestures on iOS to become much, much more. But the utility of these gestures is somewhat hampered when the gesture is a secondary gesture, as it always is on a Mac, or as it would be on an Apple TV if that device supported gestures. It’s also hampered by simple real estate: there isn’t enough room on an Apple Magic Trackpad for five fingers, on a Magic Mouse for four, or on an iPhone for even three. This is why we see the largest breadth of gestures on the iPad.

But just as important to note is how the Apple TV much more closely mimics the Mac. Devices with secondary gestures must use cursors to represent the location of input on the screen. On the Mac, this is an actual cursor. On the Apple TV, it is a visual highlight around the currently selected element. The Apple TV supports either its own (mediocre) remote, which is simply an up/down, backwards/forwards selector, or the Remote app for iOS, which turns your iOS device into a secondary gesture appliance – but there’s a problem here: the iOS Remote app is always in portrait, while the Apple TV itself is always in landscape. Unless one of them changes, there will always be an awkward mismatch, and secondary gestures beyond backwards/forwards will always be a struggle.

The iPad, in many senses, is Apple’s most accommodating platform. It easily switches both apps and the entire OS shell between landscape and portrait modes, supports many intrinsic gestures for direct manipulation of the shell (task switching, app switching, five-finger home screen access), and most significantly, due to its larger gesturing surface, supports more fingers for simultaneous input than the Mac (4 fingers), iPhone (2 fingers) or Apple TV (no fingers, or 2 fingers, depending on which remote you’re using). The iPad is also realistically the smallest screen that a typical human can place both hands across simultaneously in landscape mode to enable typing (it’s a challenge – and it’s why I still use my Apple Bluetooth keyboard while typing on the iPad).
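
As a concrete illustration of that orientation flexibility: in UIKit, a view controller declares which orientations it supports, so an iPad app can rotate freely while a focused iPhone app locks itself to portrait. A hedged sketch – the exact rotation API has varied across iOS versions, so treat this as the shape of the idea rather than gospel:

```swift
import UIKit

// Sketch: an iPad app rotating freely (app and shell both rotate),
// while a focused iPhone app stays portrait-only.
class AdaptiveViewController: UIViewController {
    override var supportedInterfaceOrientations: UIInterfaceOrientationMask {
        if UIDevice.current.userInterfaceIdiom == .pad {
            return .all       // the iPad accommodates every orientation
        }
        return .portrait      // a single-task iPhone app may lock portrait
    }
}
```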

I’ve seen many people say that the Apple TV should use Siri or other voice commands, or Kinect-like “body gesturing”. Both of those have foibles that I’ll talk about in the future. Given the simple tasks the Apple TV will incorporate – even when it does have third-party apps, which I expect it to at some point – the current simple remote, or improvements to the iOS Remote app, will likely suffice for a long time. Take this together with the continued evolution of AirPlay mirroring, which treats the Apple TV as a dumb terminal and further negates the need for an over-engineered remote, voice commands, or gesturing, and it all suggests that Apple won’t go nuts and build entirely new user interface methods just for the Apple TV.

Lion truly began the move to incorporate some of iOS’ best user interface elements into OS X itself. A breadth of gestures, full-screen application capabilities, and a new (improved?) app launcher gently began assimilating the two user interfaces.

I recently had a journalist ask me whether Apple was going far enough with Mountain Lion – whether it was moving fast enough at combining the user interfaces of iOS and OS X. I couldn’t disagree more. iOS on the iPhone and iPad is designed for a completely different user experience than the Mac. If you just jam them together, you completely trash the Mac experience just to say you did it (you can probably guess which way my Windows 8 post is likely to go at this point).

The Mac supports multi-windowing because it always has, because it’s easy, and because there’s an insane amount of screen real estate on offer. Mac apps have generally tended to be pretty cleanly designed, but not in the way iOS apps are. iOS apps on the iPhone/iPod touch were designed to provide the information you need at a glance, and to expose more information as you need it. But an iPhone app must be (by design) more focused than an iPad app. iPad apps that are simply up-sized iPhone apps are horrible; they don’t make the most of the platform, and don’t help the user get anything more done than if they just had an iPhone. Conversely, Mac apps can’t just be shimmed down to the iPad. You must take the iPad into account when designing the application: the fact that real typing can occur, that a user can use it as their primary device in many scenarios, that a large amount of screen real estate is available, and that up to 10 fingers (and no mouse) could be involved at once. Finally, when or if we see apps on the Apple TV, they’ll need different design considerations altogether. Unlike the Mac, iPhone, and iPad, the Apple TV is more often than not a multi-user device – two or more users at a time. It also has no direct gesturing, nor is it likely to. Instead, apps running natively on the Apple TV will tend to be about direct content consumption, with forward/backward or up/down gestures being the most logical navigation methods.
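
To put that design rule in concrete terms: a universal app shouldn’t scale one layout to fit every screen; it should branch on the device idiom and present a genuinely different design for each. A sketch in Swift – configurePadLayout() and configurePhoneLayout() are hypothetical placeholders for your own per-platform designs, not real API:

```swift
import UIKit

// Sketch: choose a genuinely different layout per device, rather than
// up-sizing the iPhone design to fill an iPad screen.
final class RootViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        switch UIDevice.current.userInterfaceIdiom {
        case .pad:
            configurePadLayout()    // more real estate, multi-finger gestures
        default:
            configurePhoneLayout()  // glanceable, single focused task
        }
    }

    // Hypothetical placeholders for per-platform designs.
    private func configurePadLayout() { /* iPad-specific design */ }
    private func configurePhoneLayout() { /* iPhone-specific design */ }
}
```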

The important thing here is that, as I noted recently, each platform presents its own opportunities, and as a result apps on each must be uniquely designed to make the most of the platform they are running on, without compromising the platform or the value proposition that your app was designed to deliver.

Apple may be pooling engineering resources across these four platforms, and it may be taking elements from one and putting them into another. But I believe Apple is doing what it does best (yes, even with the recent much-maligned Apple TV launcher redesign): slowly, considerately taking these changes into account, not just declaring one user interface paradigm the winner and trying to make it work across all four platforms. Usually when you do that – when you try to make one size fit all – you wind up with one or two platforms wearing a user interface that doesn’t fit.
