In my last blog post, I discussed the different user interface approaches that Apple is currently taking across all of its platforms. Four platforms, four slightly different answers.
There is, I believe, a rational explanation for each of them – and most importantly, a rational reason for all four, at this point at least, not to have a completely identical experience.
In a recent meeting at work, we discussed Metro and WinRT as they related to an article that a peer was working on. The problem comes up because “Metro” is now used as an adjective throughout Microsoft to describe many things – not all of which are equivalent. That is, Windows Phone 7 “features” Metro, as does the Xbox now and Windows 8 soon, and numerous other apps (including Visual Studio 11?!) have been described as featuring Metro. Meanwhile, “Metro-style” has more precisely been used to describe the new style of applications within Windows 8.
A fundamental problem here is overuse (and potentially abuse) of the term “Metro”. Metro isn’t a thing. It’s not an API. It is, in many ways, a state of being. It is, like Kanban – the manufacturing approach pioneered by Toyota – a design philosophy for completing a task. Though Metro may have its own unique framework on each platform it runs upon, the core thrust of Metro is always consistent: a clean layout, and a focus on typography and on how (and when) information is displayed to the end user.
In the past, Microsoft was criticized for not optimizing the Tablet PC for pen input. The truth is, Microsoft did redesign components of Windows for pen – but it completely failed to build a software platform that developers could make the most of, and it never clearly delineated the value for consumers to buy the devices or pushed developers to build pen-enabled apps. I’ll talk about all of this in a future post. But the important thing to understand, as we went over in my last post, is how designing for touch, multi-touch, and mouse/trackpad is done at Apple – and I think this is critical. The interface for tablets and the interface for desktops, though ever so slowly converging, are still completely different.
My chief criticism about the new desktop and Metro apps as they are currently implemented on Windows 8 is that neither provides enough mouse affordance. Simply put, they were designed for touch/multitouch – not for mouse/trackpad. I’ve said this several times on Twitter, and I’ve usually had more people agree with me than disagree with me, but I’ve had a few naysayers. Let me explain what I mean.
Consider Windows 1.0 through Windows 7. As the UX evolved, it became incredibly obvious (especially given almost 30 years of mouse-driven GUIs) what elements on the screen were mouse targets. These affordances innately look like things you can push, scroll, click, or grab, or that, like menus, we had learned to accommodate en masse (see the areas in the image below that I’ve highlighted in a lovely shade of teal).
I can’t find the quote at the moment, but I seem to recall a Windows 8 video or soundbite where one of the Microsoft execs giving a demo said something to the effect that Windows 8 featured a user interface that you simply “want to touch”. On a tablet, and on Windows Phone (7, or 8 when we get there), this is fine and obvious – because intrinsically everyone with one of those devices has touch support built-in. Almost all of the glowing reviews I’ve seen for Windows 8 appeared to be from reviewers who were using tablets – but for most of my peers who have, as I have, tried it on a desktop without touch (since that is what most of us have), it didn’t have that same great feeling those glowing reviewers shared.
Simply put, things become fuzzier when using Windows 8 on a desktop system without touch. Likely you’ve seen it, and perhaps it’s not a fair thing to discuss in some people’s eyes, but the video Chris Pirillo took of his dad trying to use Windows 8 with a mouse, I believe, drives my point home painfully well. The removal of the Start “orb” is a great example of a mouse affordance that has been eliminated. As a result, when touch is not available, it is not apparent how to even go “back” to the Start Page (Microsoft has removed the one old affordance that led to the entirely new launch experience). While flicking about with your finger on a touch device will eventually land you back at the Start Page, using a mouse requires far more exploration, because the key user interface element that users expect in order to launch apps has been removed.
Similarly, when on the Start Page, the user interface does encourage you to touch it – which, again, is fine if you can. But on the typical desktop – any machine upgraded from Windows 7 – the likelihood of touch support being present is incredibly low. As a result of the Metro design of the Start Page, it is apparent that not all of your apps are visible on the screen, but it is not apparent how to scroll over to see them – until you notice the scroll bar at the bottom of the screen. I personally believe that when on the Start Page, mouse movement left or right towards the edges of the screen should induce scrolling in that direction – but it does not, at least not on any machine I’ve tested. The Charms bar is incredibly hard to coax out of hiding with a mouse, since the mouse targets are so small. I believe that simply bumping the right side of the screen with the mouse should present the Charms bar.
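The edge-bump behavior I’m describing amounts to simple hit-testing on the mouse position. Here’s a minimal sketch in Python of what I mean – the 4-pixel “bump zone”, function names, and return values are all my own invention for illustration, not anything in Windows 8:

```python
# Hypothetical sketch of the edge-bump behavior described above:
# bumping the left/right screen edges scrolls the Start Page,
# and bumping the right edge should reveal the Charms bar.
# The 4-pixel "bump zone" is an assumption, not a Windows 8 value.

EDGE_ZONE = 4  # pixels from the screen edge that count as a "bump"

def start_page_action(mouse_x, screen_width):
    """Decide what the Start Page should do for a given mouse x position."""
    if mouse_x < EDGE_ZONE:
        return "scroll-left"
    if mouse_x >= screen_width - EDGE_ZONE:
        return "scroll-right"
    return "none"

def charms_should_reveal(mouse_x, screen_width):
    """Reveal the Charms bar when the mouse bumps the right screen edge."""
    return mouse_x >= screen_width - EDGE_ZONE
```

The point isn’t the code – it’s that this kind of large, forgiving mouse target (a whole screen edge) is exactly the sort of affordance the current small hit targets lack.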
Where we see Apple taking the already minimalist phone experience and upsizing it to work on the tablet, and gently introducing new metaphors and gestures to Mac OS X, Microsoft has left Phone alone (for now, though it may well become a sub-category of Windows 8) but has completely redesigned Windows, and its core user interface elements, to be touch-first and full-screen. In effect, they have redesigned almost the entire OS to suit tablets, and then foisted that same model – almost unmodified – onto the desktop.
Personally, while I’d like it to work, the experience just doesn’t work smoothly for me. Users should not have to learn significant keyboard shortcuts to use Windows 8 on a “touchless” desktop. They won’t. Asking a user to memorize a litany of keyboard shortcuts is not that different from asking a user new to Windows 3.1 to use the GUI with only the keyboard, and not a mouse. Doable, but an exercise in frustration that doesn’t end in user joy.
Moreover, I believe (and I’m at odds with many here, too) that primary touch for a desktop system doesn’t make a ton of sense. If we’re talking occasional use for some games, or for photo manipulation, perhaps. But to have a knowledge worker’s arm reaching out to a screen all day for most user interactions? It makes my arm muscles hurt just thinking about it.
As I told a peer, the goal of any input device should be making you all but forget that you are using it. This was the case for most of us with the mouse, but is not in Windows 8 for mouse-bound users.
It’s not that dissimilar to the Metro design elements as the Xbox currently uses them (for navigation). When using the controller to navigate the Xbox, the experience is similar to the Apple TV’s. When using the Kinect, you can also use the same back/forward navigation, with a “palm up, and hold” motion (which can take a bit of patience). While this may have some viability in the living room, it doesn’t work well at the moment when a user is seated. So the premise of using one’s Kinect to navigate to Hulu Plus or Netflix is foiled by the need to, realistically, stand up and use the Kinect just to navigate menus.

While this may change in the future, I contend that “large gestures”, as I call them – where your arms perform movements that your fingers or a mouse would traditionally do – aren’t perfect, but are useful. However, you will still sometimes wind up using a secondary input (like the existing Xbox controller) or the Windows Phone 7 Xbox Companion for games and apps that aren’t voice or Kinect enabled. Voice is indeed a potential option for some command and control tasks – we’ll come back to voice in that future post I promised, not now.

I think in many ways Metro does work on the Xbox, but I’m not as certain that Kinect is the vehicle to drive it across the board – you still have to think about it, give it consideration as you are performing tasks. The gestures can feel “heavy” to perform, since you have to get the UI ready to accept the gesture, then perform the gesture, and wait for it to be accepted. Kinect isn’t yet ready to be a primary input device – but given time and enhancements (Kinect 1.5 for Windows is on its way soon!) it could be. Time will tell.
Personally, I don’t think Metro on Windows 8 touch-based devices – tablets – is a bad idea at all. If your head isn’t wrapped around the way iOS works in the way mine is, I think it could work even better than iOS. On tablets, and even scaled down to Windows Phone 7/8, the design approach works quite well in a sort of “well, duh!” manner. But on the desktop, or on mouse/trackpad-only devices? I’m not so certain that Metro as it exists is positioned for success.
That said, if Windows 8 were my idea, what would I do differently?
- Detect the presence or non-presence of touch support at setup time.
- If touch is present, use the existing experience that focuses more on the Metro/WinRT realm, and less on the desktop.
- If touch is not present, use an approach which melds old and new – the legacy Start Menu, but Metro apps running on the desktop in a manner not that different from the way a full-screen HTML Application (HTA) would have run on earlier versions of Windows.
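The branch I’m proposing is trivial once setup knows whether touch is present. On Windows, a digitizer can be queried via `GetSystemMetrics(SM_DIGITIZER)`; the sketch below abstracts that away and just models the decision itself. The function name and the mode values are my own shorthand, not anything Microsoft ships:

```python
# Sketch of the setup-time decision proposed above. How touch is
# detected is platform-specific (on Windows it could come from
# GetSystemMetrics(SM_DIGITIZER)); here it's simply a boolean input.

def choose_shell(touch_present):
    """Pick a default shell experience based on touch support."""
    if touch_present:
        # Tablet: the existing Metro/WinRT-first experience.
        return {"start": "start-page", "metro_apps": "fullscreen"}
    # Desktop: legacy Start Menu, with Metro apps hosted on the desktop.
    return {"start": "start-menu", "metro_apps": "desktop-hosted"}
```

A real implementation would presumably also let the user override the default, since touch hardware can be attached after setup.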
What would this look like? This is a Windows 8 app today:
Below is what it would look like on Windows 8, if you bumped the upper edge of the screen in the same way the optional Auto-hide of the Taskbar works in Windows today:
The per-app Charms bar should work as I mentioned earlier, auto-revealing if you bump the right edge while in the app. The windows could be full-size, or snapped (if they support it), but do not need to be resizable – as they aren’t in Metro today.
The Start Button should be restored – at least on systems without touch – and the Taskbar should auto-hide, revealing itself in the same way the menu bar does in the example above, if a user bumps the bottom edge of the screen. When using a mouse, the Start Menu, I believe, should be derived from the Windows 7 Start Menu, not the completely clean-slate model of the Start Page, and it should incorporate both Metro and Win32 apps.
I believe that in this “non-touch” mode, all Apps (Metro or Win32) should show in:
- The Taskbar
- The traditional Alt+tab list
- The Flip 3D mode, if supported by the system
When Metro apps are not in the foreground today on Windows 8, they are suspended or killed. This would not have to be any different in the model I have suggested.
If they’re in the foreground fullscreen, or foreground snapped, they’re active. If they’re not, they’re suspended. You can minimize them, and they’re suspended. You can close them (and perform the same action that the App close gesture does in Windows 8).
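The lifecycle rules above amount to a small state machine. Here’s a sketch of it – the class and state names are my shorthand for illustration, not the actual WinRT lifecycle API:

```python
# Tiny state machine for the app lifecycle described above:
# foreground (fullscreen or snapped) -> active; minimized or
# sent to the background -> suspended; closed -> terminated.

ACTIVE, SUSPENDED, TERMINATED = "active", "suspended", "terminated"

class MetroApp:
    def __init__(self):
        # Assumption for the sketch: an app starts suspended until shown.
        self.state = SUSPENDED

    def bring_to_foreground(self):
        """Fullscreen or snapped in the foreground: the app runs."""
        self.state = ACTIVE

    def minimize(self):
        """Minimized in my proposed desktop mode: suspended, as today."""
        self.state = SUSPENDED

    def send_to_background(self):
        """Another app takes the foreground: suspended, as today."""
        self.state = SUSPENDED

    def close(self):
        """Same effect as the app-close gesture in Windows 8 today."""
        self.state = TERMINATED
```

The point is that hosting Metro apps in desktop-style windows wouldn’t require new lifecycle semantics at all – minimize and background both map onto the suspension Windows 8 already performs.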
Part of this approach that I’ve suggested may sound familiar. You’ll recall I didn’t necessarily agree with that journalist that Apple had done the wrong thing with Lion, as “odd” as it seemed, by introducing full-screen apps. No – I actually counter that if Microsoft took the approach I’ve outlined above, and made the most of the desktop (again, even omitting window resizing outside of Snap), it would:
- Help spur Metro adoption.
- Encourage support for Windows 8 in the enterprise (where I fear it could stall without this change).
- Help strengthen, not harm, the Windows desktop, while simultaneously using it as a halo to pull Metro and Windows 8 tablets into a strong position.
If Microsoft were to make this change to Windows 8 and not de-emphasize Metro, but instead mesh together the best of the Metro design approach with what the Windows desktop has done best for 25 years, I think it’s a (forgive the pun) win-win. I’m concerned without this change that Windows 8 may be too bold of a change – that it may be asking too much of desktop users, too quickly.