06
Jul 13

The iWatch – boom or bust?

In my wife’s family, there is a term used to describe how many people can comfortably work in a kitchen at the same time. The measurement is described in “butts”, as in “this is a one-butt kitchen”, or the common, but not very helpful, “1.5-butt kitchen”. Most American kitchens aren’t more than two butts. But I digress.

I bring this up for the following reason. There is a certain level of utility that you can exploit in a kitchen as it exists, and no more. You cannot take the typical American kitchen and shove 4 grown adults in it and expect them to be productive simultaneously. You also cannot take a single oven, with two racks or not, and roast two turkeys – it just doesn’t work.

It’s my firm belief that this idea – the idea of a “canvas size” – applies to almost any work surface we come across, from a kitchen and the appliances in it, and beyond. But there is one place where I find it applies incredibly well – to modern digital devices.

The other day, I took out four of my Apple devices, set them side by side in increasing size order, and pondered a bit.

  • First was my old-school iPod Nano – the older square design without a click wheel that everyone loved the idea of making a watch out of.
  • Second was my iPhone 5.
  • Third, my iPad 2.
  • Finally, my 13″ Retina MacBook Pro.

It’s really fascinating when you stop to look at tactile surfaces sorted like this. While the MacBook Pro has a massively larger screen than the iPhone 5, the touch surface of its trackpad is only marginally larger than that of the iPhone. I’ve discussed touch and digits before, but the recent discussion of the “iWatch” has me pondering this yet again.

While many people are bullish on Google Glass (disregarding the high-end price that is sure to come down someday) or see the appeal of an Apple “iWatch”, I’m not so sure at this point. For some reason, the idea of a smart watch (aside from as a token peripheral), or an augmented reality headset like Glass doesn’t fly for me.

That generation of iPod Nano was a neat device, and worked alright – but not great – as a watch. Among the key problems that touchscreen Nano had when strapped down as a watch?

  1. It was huge – in the same ungainly manner as Microsoft’s SPOT watches, Suunto watches, or (the king of schlock) Swatch Pop watches.
  2. It had no WiFi or Bluetooth, so it couldn’t easily be synced with any other media collection.

Outside of use as a watch, and for as huge as it was, the UI was hamstrung in terms of touch. Navigation on this model was unintuitive and clumsy – one of the reasons I think Apple went back to a larger display on the current Nano.

I feel like many people who get excited about Google Glass or the “iWatch” are in love with the idea of wearables, without thinking about the state of technology and – more importantly – simple physical limitations. Let’s set aside Google Glass for a bit, and focus on the iWatch.

I mentioned how the Nano model used as a watch was big, for its size (stay with me). But simply because of screen real estate, it was limited to one-finger input. Navigating the UI of this model could get rather frustrating, so it’s handy that it doesn’t matter which finger you use. <rimshot/>

Because of the physical canvas size available for touch, each of the devices I mentioned above has different bounds on what kinds of gestures it can support (a code sketch follows the list):

  • iPod Nano – Single finger (generally index, while holding with other index/thumb)
  • iPhone 5 – Two fingers (generally index and thumb, while holding with other hand)
  • iPad 2 – Up to five fingers for gesturing, up to 8/10 for typing if your hands are small enough.
  • MacBook Pro – Up to five fingers for gesturing (though the 5-finger “pinch” gesture works with only 4 as well).
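To put that in code terms, here’s a rough Swift/UIKit sketch of how those touch-count ceilings surface as gesture-recognizer configuration on the iOS side – the handler names are mine, purely illustrative. The API is the same everywhere; the canvas decides which configurations are actually usable:

```swift
import UIKit

// Illustrative sketch: the same UIKit gesture API, bounded by how many
// fingers each canvas can realistically host at once.
final class CanvasViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // Nano-class canvas: a single-finger tap is about all that fits.
        let tap = UITapGestureRecognizer(target: self, action: #selector(didTap))
        tap.numberOfTouchesRequired = 1
        view.addGestureRecognizer(tap)

        // Phone-class canvas: two fingers make a pinch viable.
        let pinch = UIPinchGestureRecognizer(target: self, action: #selector(didPinch))
        view.addGestureRecognizer(pinch)

        // Tablet-class canvas: multi-finger gestures become practical.
        let swipe = UISwipeGestureRecognizer(target: self, action: #selector(didSwipe))
        swipe.numberOfTouchesRequired = 4   // iPad-style four-finger navigation
        swipe.direction = .left
        view.addGestureRecognizer(swipe)
    }

    @objc private func didTap(_ sender: UITapGestureRecognizer) { /* single-finger action */ }
    @objc private func didPinch(_ sender: UIPinchGestureRecognizer) { /* two-finger zoom */ }
    @objc private func didSwipe(_ sender: UISwipeGestureRecognizer) { /* multi-finger switch */ }
}
```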

I don’t have an iPad Mini, but for a long time I was cynical about the device for anything but e-reading, due to the fact that it can’t be used with two hands for typing. Apparently there are enough people just using it as an e-reader or typing with thumbs that they don’t mind the limitations.

So if we look at the size constraints of the Nano and ponder an “iWatch”, just what kind of I/O could it even offer? The tiny Nano wasn’t designed first as a watch – so the bezel was overly large, it featured a clip on the back, it needed a 30-pin connector and headphone jack… You could eliminate all of those with work – though the headphone jack would likely need to stay for now. But even with a slightly larger display, an “iWatch” would still be limited to the following types of input:

  1. A single finger (or a stylus – not likely from Apple).
  2. Voice (both through a direct microphone and through the phone, like Glass).

Though it could support other Bluetooth peripherals, I expect that they’ll pair to the iPhone or iPod Touch, rather than the watch itself – and the input would be monitoring, not keyboard/mouse/touchpad. The idea of watching someone try to type significant text on a smart watch screen with an Apple Bluetooth keyboard is rather amusing, frankly. Even more critically, I imagine that an “iWatch” would use Bluetooth Low Energy in order to not require charging every single day. It’d limit what it could connect to, but that’s pretty much a required tradeoff in my book.
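As a rough illustration of the pairing model I’m describing – hedged, since this is all speculation about an unannounced device – here’s what the phone side of a BLE link looks like with CoreBluetooth, in a minimal Swift sketch. The service UUID and class names are invented:

```swift
import CoreBluetooth

// Speculative sketch: the phone as BLE central, the imagined watch as a
// peripheral. The service UUID is invented for illustration.
final class WatchLink: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!
    private var watch: CBPeripheral?
    private let watchService = CBUUID(string: "F00D") // hypothetical

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        // Scanning is only valid once the radio reports .poweredOn.
        guard central.state == .poweredOn else { return }
        central.scanForPeripherals(withServices: [watchService], options: nil)
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        central.stopScan()
        watch = peripheral            // must hold a strong reference
        central.connect(peripheral, options: nil)
        // From here the phone pushes notifications and status; the watch
        // never needs a keyboard, mouse, or trackpad of its own.
    }
}
```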

In terms of output, it would again be limited to a screen about the same size as the old Nano, or smaller. AirPlay in or out isn’t likely.

My cynicism about the “iWatch” is based primarily around the limited utility I see for the device. In many ways, if Apple makes the device, I see it being largely limited to a status indicator for the iPhone/iPod Touch/iPad that it is “paired” with – likely serving to provide push notifications for mail/messaging/phone calls, or very simple I/O control for certain apps on the phone: taking Siri commands, play/pause/forward for Pandora or Spotify, tracking your calendar, tasks, or mapping directions, and so on. But as I’ve discussed before, and above, the “iWatch” would likely be a poor candidate for long-form text entry, whether typed or dictated. (Dictate a blog post or book through Siri? I’ll poke my eyes with a sharp stick instead, thanks.)

For some reason, some people are fascinated by the Dick Tracy approach of issuing commands to your watch (or your glasses, or your shoe phone). But the small screen of the “iWatch” means it will be good for very narrow input, and very limited output. I like Siri a lot, and use it for some very specific tasks. But it will be a while before it or any other voice command system is suitable for anything but short-form command-response tasks. Looking back at Glass, Google’s voice command in Glass may be nominally better, but again, it will likely be most useful as an augmented-reality heads-up display/recorder.

Perhaps the low interest I have in the “iWatch”, Pebble Watch, or Google Glass can be traced back to my post discussing live tiles a few weeks ago. While I think there is some value to be had with an interconnected watch – or smartphone command peripherals like this – I think people are so in love with the idea that they’re not necessarily seeing how constrained the utility will actually be. One finger. Voice command. Perhaps a couple of buttons – but not many. Possibly pulse and pedometer. It’s not a smartphone on your wrist; it’s a remote control (and a constrained remote display) for your phone. I believe it’ll be handy for some scenarios, but it certainly won’t replace smartphones themselves anytime soon, nor will it become a device used by the general populace – not unless it comes free in the box with each iPhone (it won’t).

I think we’re in the early dawn of how we interact with devices and the world around us. I’m not trying to be overly cynical – I think we’ll see massive innovation over time, and see computing become more ubiquitous and spread throughout a network of devices around and on us.

For now, I don’t believe that any “iWatch” will be a stellar success – at least in the short run – but it could become one as it evolves over time to provide interfaces we can’t fathom today.


12
Jun 13

Content, not the chrome. Apps, not the phone.

Ahead of WWDC 2013, many people were still expecting Apple to add live tiles, and possibly widgets to iOS 7. I didn’t expect either, and as a result wasn’t terribly disappointed to see them not included (that might be an understatement on my part).

At first glance, live tiles may seem like a no-brainer in any operating system. Tiles that provide you information from within an app… How could this go wrong?

Here are the problems I have with live tiles in Windows 8, and why I think they wouldn’t make sense on iOS (either):

  1. They’re overused.
  2. Often, they aren’t that useful.
  3. They are distracting.
  4. They’re hardly ever in view.

Let me explain each a bit.

They’re overused. Why do I say this? Because Microsoft has focused on live tiles in their messaging for app developers as if apps that don’t feature a live tile should be shamed. Not the case. I believe live tiles should only be used when there is something actionable to present to the user (ex: new mail) and that actionable item can succinctly be presented through the live tile (ex: the subject of the mail). Unfortunately, even just the built-in applications from Microsoft abuse the live tile concept. Too many feature live tiles, and too many of those live tiles are of very limited utility or are too repetitive. Having one or two live tiles is fine, especially if they’re useful – like Mail and Weather, and perhaps Calendar.

But if you add too many live tiles, Windows 8 stops looking like this:

Windows Start screen

And instead starts looking like this:

Times Square

What I’m saying is that there is a point where the utility of live tiles starts to become a problem, not a benefit, if you’re shoving too much dynamic information in the user’s face while providing very little value.

Often, they aren’t that useful. Much like a well-designed app, a live tile is only as useful as the content it is set to display. iOS has featured notification badges (the red overlay on Mail that constantly indicates you’re not at inbox zero) for many years. Many people bash the badges as being stupid or useless, but they serve as an action indicator where often, not much more is needed – and even more often, not much more can be done. A badge (or live tile) should instantly provide an indicator of status if that’s all it can do (ex: you have new mail), and a deeper summary if that is possible (ex: the iOS line-of-business app that tracks new tasks for your helpdesk shows 32 new tasks). In iOS, the icon for Calendar has, in effect, always been a live tile. The date you see on the icon is the actual date. Though of limited utility (given that there is already a clock at the top of the screen in the iOS shell, and the icon is tiny), the icon for the Clock app in iOS 7 is now a live tile in the same sense – it features the correct time, including a sweeping second hand.
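To make the badge half of this concrete, here’s a minimal Swift sketch – the counts are illustrative, per the helpdesk example above, and the entire contract is a single property:

```swift
import UIKit

// Minimal sketch of the badge-as-action-indicator model described above:
// set a count when something actionable exists, clear it when the user
// catches up. 0 removes the badge; anything else draws the red overlay.
func updateBadge(newTaskCount: Int) {
    UIApplication.shared.applicationIconBadgeNumber = newTaskCount
}

// e.g. updateBadge(newTaskCount: 32) for the helpdesk scenario above,
// and updateBadge(newTaskCount: 0) once the queue is cleared.
```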

But I don’t believe a live tile should always be live, and even when it is, if it isn’t actionable, it’s no better than After Dark. It ceases to have utility; it’s just there for entertainment value. Applications that have a concrete reason for offering a live tile absolutely should. If they don’t, they shouldn’t. Don’t provide one just because “you’re supposed to”.

They are distracting. As I noted above, if you’re looking at the Start screen to find a particular application, and you have many live tiles, it starts to become distracting – not helpful – that applications are trying to provide you more information than you actually need at that moment. The Start screen isn’t an app; it’s a shell. The primary reason for it to exist is to run applications. Rotating pictures of people, or of your own collection of photos (both of which repeat), are novel and cute for a bit, but rapidly become tiring to me.

It’s like going into Best Buy to look around, and getting inundated with salespeople. You know what you’re looking for, and otherwise it’s just a distraction.

They’re hardly ever in view. The Start screen is a shell. It’s not even like the Explorer shell or the gadgets in Vista, where it could be set to always be in view. If you’re not actively launching an app (or using multimon), the Start screen isn’t in view. So why the emphasis on adding interactivity (or infinite customizability) to a thing that’s basically just a launchpad?

This gets us full circle back to why I don’t think it’s a big deal that iOS doesn’t have live tiles, or even widgets. I’ve mentioned before that Microsoft employees seem to like using the expression “(just) a sea of icons” to describe the iOS app launcher. Well, yeah. That’s kind of the point. It’s a brutally simplified shell that gets you into the apps. The iPhone (or any iOS device) isn’t about the platform, and it isn’t about the shell. It’s about the apps. Mobile devices exist as portals into the functionality provided by applications – including those built into the device.

When using a mobile device, users don’t sit there staring longingly at the shell, waiting for it to do something. They’re in apps, responding to notifications from other apps through the shell, and jumping between apps using the sharing verbs available between them (monikers or direct APIs). On stage when first revealing Windows 8, Steven Sinofsky highlighted Windows 8’s focus (with a not-so-subtle jab at the browser of the same name) on “content, not the chrome”. To that I add, “It’s the apps, not the phone”. Yes, shells need to evolve and grow. But rarely should they be the center of attention – as that’s rarely where the user actually spends most of their time.


08
May 13

Tools to optimize working on the Mac

A few weeks ago I wrote about gestures on the Mac vs. Windows 8. By and large, I’ve shifted to using my Mac with most apps in full-screen, really making the most of the gestures included in OS X 10.8. It isn’t always easy, as certain apps (looking at you, Word 2011) don’t optimally use full-screen. Word has Focus mode (its own full-screen model) and now supports OS X’s full-screen mode – but not together. Meaning that if you shift to Focus mode, gestures don’t work as well as they could, since Word is on the desktop. More importantly, when working on a project, I often need two or more windows open at once. For this, full-screen doesn’t work, but something like Windows 7’s Snap is ideal.

I’ve found quite a few tools over the past few weeks that have made working on the Mac an enjoyable experience. Some of these (Pages, and Office for Mac 2011) I’ve owned for a while. But most are things I’ve purchased since I bought my 13″ Retina MBP. In alphabetical order, here’s the list:

  • BetterSnapTool (US$1.99) – Elegantly snaps windows to a quarter, half, or maximized screen on the desktop (or custom sizes/layouts), using the cursor, keyboard shortcuts, or by overloading OS X’s native window control buttons. This is an incredibly well done app, and I would have paid far more than US$1.99 for it. (BetterSnapTool does not interact with OS X’s full-screen model, unfortunately, but that’s a minor thing.)
  • ForkLift (US$19.99) – Okay, OS X’s Finder kind of stinks. It works fine for the limited needs of most users, and honestly it really seems that Apple is keen to largely kill off the Finder in due time. (Try to get to the root of a Mac’s HDD on Mountain Lion. Just try it.) Regardless, Finder doesn’t flex very far to meet the needs of power users. For this, I’ve turned to ForkLift, which provides a multi-pane file browser. Our workflow has me working with local files, an SMB server, and a hosted SharePoint 2007 server. Though I have found a few small glitches – especially with SharePoint – ForkLift lets me move files through our workflow with little special hoop jumping necessary for any given step.
  • FormatMatch (Free) – One of the most annoying things in Word is its insistence on asking you how you want to paste in text. There was a better way to configure this in earlier versions of Word, but in 2011, the so-called “smart cut and paste” is more annoying than smart. FormatMatch effectively strips out formatting when you copy, so text receives the destination formatting when you paste. A configurable shortcut enables you to turn it off when you actually do want formatting to stay applied when you paste. Not perfect, but it was free.
  • Jump Desktop (US$29.99) – In my opinion, the best tool to RDP to a Windows PC or VNC to a Mac (or other system). I’ve used the iOS client for years. Very full-featured client; supports Microsoft’s latest operating systems as well as features like Remote Desktop gateways and folder sharing. Because there is no Visio application for the Mac, and frankly no equivalent (I mean that in both the good and bad sense of it), I use “Physical Desktop Infrastructure” and RDP to my Samsung Slate in order to edit Visio documents, which I sync using SkyDrive. (Disclaimer: I won a free copy of Jump Desktop – but already owned it for iOS, so I would have surely bought it for OS X in time.)
  • Lock Me Now (Free) – Says what it does, does what it says. At Microsoft, you learn to lock your desktop or face the wrath of peers (who send email to management telling them how good you are about locking your desktop!). For this reason, I got in the habit of hitting Windows Key+L as I walked away from my computer, beginning with Windows XP, when the shortcut was first added. OS X has no such feature; locking your computer generally requires you to use the mouse, or find some shortcutting tool or script to lock the desktop. With an easily configured shortcut, this app can lock your desktop (I use the logical Cmd+L).
  • Office 2011 (US$219) – I’ll start by saying I’m not a fan of Outlook 2011. I use the mail, contacts, and calendaring features built into the Mac, and appreciate that they play better with Time Machine, which I use to back up all of my Macs. But as to the rest of the applications, there is no alternative for an organization that has a workflow that revolves around Microsoft Office format documents – there really isn’t. While Office 2011 has some thoughtful features that even Office 2013 and Office 2010 are lacking, at two years old, it’s starting to feel a bit dated, as it fails to take advantage of native OS X functionality (or to do so optimally, as I noted). I expect an update to Office for Mac in 2014, so we’ll see how far that goes to catch up to where OS X (well into 10.9 by then) takes us. I’m a bit concerned, but not surprised, that the new crop of business intelligence features (both those built into Excel 2013 today and those in preview for it) are Windows-only, and there only on the enterprise-licensed/Office 365 variants of the suite. I don’t expect that to change – but there again is another reason why Jump Desktop is worth so much to me.
  • Pages (US$19.99) – Yeah, go ahead, say it. I bought Pages for one reason (I own both the iOS and OS X versions of all iWork apps, FWIW, but primarily use Pages). That reason? The ability to easily write in Pages and export to ePub in a reliable way. I’ve also recently decided that the value I got out of Evernote (I rarely used the search functionality, but was paying for a note synchronization service with search) was surpassed by the better UI offered by Pages, which syncs between OS X and iOS devices. I can create groups of files that are visible to all devices through iCloud. It just works. If I had a PC I used regularly, or I needed search, it wouldn’t work, and Evernote would be the more logical choice. But that isn’t the case. A follower on Twitter asked why I don’t use OneNote instead – this is pretty easy to answer. OneNote is overpowered on Windows, underpowered on every Apple platform it is available on, and not available on the Mac. So it doesn’t fit my workflow at all.
  • Pomodoro (US$2.99) – Gimmicky user interface that really should be cleaned up and simplified, but it does what its name implies – it’s a Pomodoro timer that tracks work sessions and breaks.
  • Scribe (US$12.99) – I love this tool. Way overpriced for what it does, but I couldn’t find a tool that did what I wanted any better than this. I have found a few nits that cause it to crash, but overall, it’s the simplest, most pleasant outliner I’ve found. Great for brainstorming and organizing thoughts. You might be looking at this and my earlier mention of Visio and wondering why I don’t buy the Omni Group’s tools for outlining and mind mapping. Because I think they’re tragically overpriced and overrated for what they provide.
  • SkyDrive (Free) – I use it to sync a queue of in-progress Office documents between my Macs, my Windows 8 Samsung Slate, and my iOS devices. I can’t tell you how much I love having everything synchronized and being able to open docs in the Office Web Apps when I need to.
  • Streambox (US$4.99) – Exceptional Pandora client for OS X that runs in your Mac’s menu bar, and provides configurable shortcuts for interacting with the service.
  • VirtualBox (Free) – I was a fan of VMware for years. I used Workstation at Microsoft, Winternals, and CoreTrace extensively, and was a beta tester of VMware Fusion from the very beginning. But the product has gotten so expensive, and required almost annual upgrades that seemed to diminish in value to me over time. I no longer use virtualization as a key component of my workflow, but do need to fire up a virtual machine once in a while. So VirtualBox meets my needs perfectly. It’s not the prettiest virtualization solution for the Mac, but it is the cheapest, and it works fine for what I need.
  • Voila (US$29.99) – I feel like I’ve barely scratched the surface of this tool that does an amazing job with screenshots, screen captures, audio, and more. It’s already proven quite useful for a few personal and work projects, though. Need to spend more time with it, but really like what I’ve seen so far.

16
Apr 13

Windows 8 and OS X Mountain Lion – separated at birth?


Alright – shake out the giggles from the title, and let me show you why I said that.

Until recently, I had been using Windows 8 every day – but I’ve since switched to a Mac (running 10.8 Mountain Lion) as my primary computing device. The more I have used Mountain Lion – especially with apps in full-screen mode – the more certain things felt subtly similar to Windows 8.

I believe that Mountain Lion is yet another step in Apple’s gradual (some might say slow) rhythm to converge the iOS and OS X platforms, as iOS devices become more capable and OS X becomes more touch-friendly. Apple is doing it in a very cautious way – slowly building a visual and functional perimeter around Mac applications to make them behave much more like iOS applications. I have a thesis around that, which I’ll try to discuss in another post soon. But the main point is that Apple and Microsoft are both shooting for relatively common goals – immersive applications, available from an application marketplace that they control for their platforms, with an increasing emphasis on touch, or at least on gestures. I’m not going to say who cloned whom, as many of these are simply examples of multiple discovery, where Apple and Microsoft, now largely chasing common goals, implement similar features in order to achieve them. Let’s take a look at a few similarities.

Pervasive Cloud Storage

From the first time you sign on to Windows 8 or Mountain Lion, the similarities begin. Windows 8 tries the hard sell to get you to use a Microsoft Account for your identity – not linking it to a local account as you can do with an Active Directory account, but making your Microsoft Account a local account, and enabling you to synchronize settings (but currently not applications or the Start screen) between two or more computers.

Windows SkyDrive Sync

Apple, on the other hand, doesn’t make iCloud quite as in-your-face, and doesn’t use it to synchronize most settings (or Dock items – unlike its predecessor, MobileMe), but does embed it all over the operating system with several built-in features (such as Safari tab syncing across OS X and iOS), Photo Stream, Notes, and Reminders, with applications also able to hook in on their own for storage. Unlike SkyDrive, iCloud (like the file system on iOS) is opaque, and not user-navigable – only exposed through applications and operating system features that elect to hook into iCloud. Speaking of hooking into iCloud, some apps like TextEdit ask if you want to save new or existing documents locally or in iCloud (with a dialog that is, honestly, un-Apple-like).

iCloud Sync

Heads-up Application Launcher

Both Windows 8 and Mountain Lion provide a “heads-up” approach to launching applications. With Windows 8, this is the Start screen. With OS X, it is Launchpad, first introduced with OS X Lion in 2011. Windows 8’s Start screen (love it or hate it) is a full-screen (usually multi-screen, continuously scrolling) launcher. This launcher can feature notifications and additional information from the applications themselves. Applications can be grouped, and “tiles” can be resized, but not combined into collapsible folders, and they are somewhat fussy about placement. Windows does provide interactivity through the Start screen, in the form of Live tiles. See the Weather app below for an example of a Live tile, and Productivity as an example of a group. To my point about fussiness – note the Remote Desktop tile, and the two to its left. Remote Desktop cannot currently be placed underneath CalcTrek in that column – the Start screen always wants columns of a set width (one wide column or two double-width columns), not a single-width column.

Windows Start screen

Since OS X Lion (10.7, almost two years ago), Apple has included Launchpad, a feature that presents a (drum-roll, please) full-screen (usually multi-screen, individually paged, as in iOS) application launcher. Unlike the Start screen, Launchpad does not feature any sort of status for applications. They are a static “sea of icons”, as Microsoft likes to say about iOS. Launchpad application icons never have notification “badges”, say for reminders or new mail; instead, notifications are available for applications in the OS X Dock or in Notification Center, which is integrated into the shell. One or more application icons in Launchpad can be grouped together into a folder, which can be named – just as in iOS. Here is Launchpad:

Launchpad

Intriguingly, OS X Mountain Lion added a much-needed feature to Launchpad (one Windows 8 featured from the first day the public saw it): type-to-search across the list of applications. Here is Windows 8 app search, and here is the same feature in OS X.

Application Store

File under “obvious comparison point”. Beginning in early 2011 (on Snow Leopard, just ahead of Lion), the Mac App Store offered a limited selection of applications for free download or purchase. In Lion, these were effectively just Mac apps that were willing to forgo 30% of their sales revenue to be in the store (they didn’t have to live within tight constraints). In Mountain Lion, apps were forced to live within the confines of a sandbox, much like applications on iOS – where the damage one app can do to others, the operating system, or user data is limited. Windows Store applications (WinRT applications) by definition must live within a very strict sandbox – in many ways more strict than the rules required beginning with Mountain Lion.

The Windows Store follows the same design paradigms as other Windows 8 applications. In general, the design of the Windows Store and the App Store on OS X are remarkably similar. A significant difference is that Windows Store applications can be – at the developer’s discretion – provided as trials. No such feature is explicitly available in the App Store, though some developers do achieve a similar goal by providing a free simplified or limited version of the application that is unlocked through an in-app purchase.

Here is the Windows Store:
Windows Store

Here is the App Store on OS X (running windowed, though it can of course run full-screen too):
App Store on OS X

Immersive Applications

Windows Store applications, by definition, are immersive. The full-screen user interface is designed to remove window chrome and let the application itself shine through. Windows Store applications must be either full-screen, snapped, or backgrounded. The next release of Windows is expected to add more window modes for Windows Store applications, but will still not add (back) overlapping windows – in other words, it will still be more like Windows 1.0 than Windows 2.0.

Here is an example of a Windows Store application, the immersive mode of Internet Explorer – which is only capable of being run full-screen or snapped with another app, not in a standalone window:

Modern IE

Here is an example of a full-screen application on OS X Mountain Lion. Note that not all applications can run full-screen; however, all applications that can run full-screen can also run windowed. Here is an example of Pages running full-screen on Mountain Lion:

Here is Pages with that same document in a window. The full-screen models of both Mountain Lion and Windows 8 feature hidden menus. The Windows 8 App bar for Windows Store applications is hidden off-screen at the top or bottom of the application, and its implementation varies wildly between developers. The menus for full-screen applications in Mountain Lion are effectively the same Apple Menu-based menus that would normally appear when the application is not running full-screen. The main difference is that the Apple Menu in non-full-screen mode is detached – as Mac applications have always been. In full-screen mode, the menu behaves much more like a Windows application’s, stuck to the application running full-screen. The menu is hidden until the cursor hovers over an area a few pixels tall across the top of the screen. Similarly, the Dock is always hidden when applications are running full-screen, until the cursor hovers over a similar bar of space across the bottom of the screen.

What is kind of fascinating to consider here is that Internet Explorer 10 in Windows 8 is, in many ways, mirroring the functionality provided by a Lion/Mountain Lion full-screen application. It is one binary, with two modes – windowed Win32, and full-screen immersive – just as Pages does in the images shown and linked earlier.

Gesture-friendly

In “desktop mode”, both Windows 8 and OS X Mountain Lion focus more on gestures than their previous releases did. With a touch-screen or trackpad, Windows 8 is very usable (more usable, I believe, than it is with a mouse), once you have mastered the gestures included. Both have aspects of the shell, and many applications, that recognize now-common gestures such as pull to refresh, pinch to zoom, and rotation with two fingers.

Windows 8 provides a single gesture – a single-finger swipe in from the left – to switch applications one at a time; it can be expanded to show a selection of previously run applications (the desktop included). Though I find Windows 8’s app-switching gesture limited, it works, and could be expanded in the future to be more powerful. Here you can see Windows 8’s application switcher.

I have used gestures in iOS on the iPad since they first arrived in a preview form that required you to enable them through Xcode. The funny thing about these gestures is that, while they aren’t necessary to use the iPad, they are pretty easy to learn, and can make navigating around the OS much easier. When I started using my rMBP with its built-in trackpad and a Magic Trackpad at my desk, I quickly realized that knowing those gestures immediately translated to OS X. While you don’t need to know them there either, they make getting around much easier. Key gestures are common between iOS on the iPad and OS X (a code sketch follows the list):

  1. 5-finger pinch – iOS: “closes” the application and goes to the shell application launcher – OS X: goes to Launchpad.
  2. 4-finger swipe left or right – navigates up or down the application stack of iOS applications/OS X full-screen applications, the desktop, & Dashboard (which I disable, as I don’t find it useful).
  3. 4-finger swipe up (or double-press of the home button) – on iOS, shows you the list of recent applications from most recent to least (left to right). Swiping left moves you down the stack. Swiping right moves you up the stack (see 2, above). On OS X, this shows you Mission Control, which is effectively the same thing as on iOS, just with the desktop and full-screen applications included.
  4. 3- or 2-finger swipe to the left while on the desktop exposes OS X’s Notification Center.
  5. 2-finger swipe in many OS X applications is used to navigate backwards or forwards, including Safari and the App Store. Regrettably, two-fingered navigation back and forth is not available in the Finder (a weird oversight, but perhaps a sign of how much importance Apple places on the Finder).
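On the app side, opting into the two-finger gestures above takes almost no code. Here’s a rough Swift/AppKit sketch (the print statements stand in for real navigation); the three-plus-finger system gestures – Mission Control, Launchpad – are claimed by the shell before applications ever see them:

```swift
import AppKit

// Illustrative sketch: AppKit routes system-recognized trackpad gestures
// to NSResponder callbacks, so a view simply overrides the ones it wants.
final class GestureView: NSView {
    override func swipe(with event: NSEvent) {
        // Two-finger swipe, as in Safari: deltaX > 0 is "back", < 0 is "forward".
        if event.deltaX > 0 {
            print("navigate back")
        } else if event.deltaX < 0 {
            print("navigate forward")
        }
    }

    override func magnify(with event: NSEvent) {
        // Two-finger pinch; magnification is the incremental zoom delta.
        print("zoom by \(event.magnification)")
    }

    override func rotate(with event: NSEvent) {
        // Two-finger rotation, reported in degrees.
        print("rotate by \(event.rotation) degrees")
    }
}
```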

Here is OS X’s Mission Control feature, exposing two full-screen applications (iTunes and Pages) and three applications on the desktop (Reminders, Safari, and Mail):

Mission Control

The most fascinating thing here is that, while Windows 8 has been maligned for its forced duality of immersive-land and the legacy desktop, the Mac is actually doing the same thing – it just isn’t forcing applications to be full-screen (yet). Legacy applications run on the desktop, and new applications written to the latest APIs run full-screen and support gestures. Quick – was that sentence about Windows 8, or Mountain Lion? It applies equally to both!

I think it’s very interesting to take a step back and see how Apple has very gradually moved, over the last several releases of OS X, towards a more touch-centric and immersive model, where Microsoft took the plunge with both feet, focusing first on touch, while leaving the Win32 desktop in place – but seemingly as a second-class citizen behind WinRT and Windows Store applications.

The next several years will be quite interesting to watch, as I think Apple and Microsoft will wind up at a similar place – just taking very different steps, and very different timeframes, to get there.


14
Apr 13

The PadFone is not the future

I’ve been pondering the existence of devices like the Asus PadFone and PadFone 2 recently.

Not really convertible devices, not really hybrid devices, they’re an electronic centaur. Like an Amphicar or a Taylor Aerocar, the PadFone devices compromise their ability to be one good device by instead being two less-than-great devices.

I haven’t found a good description of devices like the PadFone – I refer to them as “form integrated”. One device is a dumb terminal that relies on the brain of the other.

While the approach is novel, the reality is that form-integrated devices are a bit nonsensical. Imagine a phone that integrates with a tablet, or a tablet that integrates into a larger display. To really work well, the devices must be acquired together, and if one breaks, it kills the other (lose your Fone from the PadFone, and you’ve got a PadBrick).

You also wind up with compromised devices: either the phone must be overpowered in order to drive the tablet (wasting battery), or a weak phone results in a gutless tablet when docked.

Rather than this “host/parasite” model of the form-integrated approach, I would personally much rather see a smart pairing of devices: pairing of my iPhone, iPad, and Mac, or pairing of a Windows Phone, Windows 8 tablet, and Windows 8 desktop.

What do I mean by smart pairing? I sit down at my desktop, and it sees my phone automatically over Bluetooth or the like. No docking, no need to even remove it from my pocket. Pair it once, and see all the content on it. Search for “Rob”, and see email that isn’t even on the desktop. Search for “Windows Blue”, and it opens documents that are on the iPhone.

The Documents directory on my desktop should be browsable from my phone, too (when on the same network or if I elect to link them over the Internet).
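None of this exists today, so treat the following as a purely speculative sketch: one plausible bootstrap for such pairing is ordinary Bonjour service discovery, with the desktop advertising a content-query service that the phone finds without docking. The service type and all names here are invented for illustration.

```swift
import Foundation

// Speculative sketch of the phone side of "smart pairing": browse the local
// network for a (hypothetical) content-query service the desktop advertises.
final class PairingBrowser: NSObject, NetServiceBrowserDelegate {
    private let browser = NetServiceBrowser()

    func start() {
        browser.delegate = self
        browser.searchForServices(ofType: "_contentquery._tcp.", inDomain: "local.")
    }

    func netServiceBrowser(_ browser: NetServiceBrowser,
                           didFind service: NetService,
                           moreComing: Bool) {
        // A real implementation would resolve the service and then issue
        // queries against it – e.g. "search for 'Rob'" across the paired
        // device's mail index – rather than just logging the discovery.
        print("Found paired device: \(service.name)")
    }
}
```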

Content, even if it is stored in application silos – as Windows Store applications and iOS/OS X applications store it – should be available from any device.

I think it would also be ideal if applications could keep context wherever I go. Apple’s iCloud implementation begins to do this. You can take a document in Pages across the Mac, iPad, and iPhone, and access the document wherever you are. Where Asus is creating a hardware-based pairing between devices, Apple is creating a software-based pairing, through iCloud. It is still early, and rough, but I personally like that approach better.

My belief is that people don’t want to dock devices and have one device be the brain of another. They don’t want to overpay for a pair of devices that aren’t particularly good at either role; instead, they will pay a premium for two great devices, especially if they integrate seamlessly and automatically.

Much as I believe the future of automotive electronics is in “smartphone software integrated” head units rather than overly complex integrated computing built into the car, the future of ubiquitous computing lies in a fabric of smart devices that work together, with the smartphone most likely being the key “brain” among them. Not with its CPU driving everything else, but instead with its storage being pervasively available wherever you are, without needing to be docked or plugged in.


21
Mar 13

What’s your definition of Minimum Viable Product?

At lunch the other day, a friend and I were discussing the buzzword bingo of “development methodologies” (everybody’s got one).

In particular, we homed in on Minimum Viable Product (MVP) as being an all-but-gibberish term, because it means something different to everyone.

How can you possibly define what an MVP is, when each one of us approaches MVP with predisposed biases of what is viable or not? One man’s MVP is another’s nightmare. Let me explain.

For Amazon, the original Kindle, with its flickering page turn, was an MVP. Amazon, famous for shipping… “cost-centric” products and services, was traditionally willing to leave some sharp edges in the product. For the Kindle, this meant flickering page turns were okay. It meant that Amazon Web Services (AWS) didn’t need a great portal, or useful management tools – until their hand was forced on all three by competitors. Amazon’s MVP includes all the features they believe it needs, whether or not they’re fully baked or usable, or whether the product still has metaphoric splinters coming off from where the saw blade of feature decisions cut it. This often works because Amazon’s core customer segment, like Walmart’s, tends to be value-driven, rather than user-experience-driven.

For Google, MVP means shipping minimal products that they either call “Beta”, or that behave like a beta, tuning them, and re-releasing them. In many ways, this model works, as long as customers are realistic about what features they actually use. For Google Apps, this means applications that behave largely like Microsoft Office, but include only a fraction of the functionality (enough to meet the needs of a broad category of users). Google traditionally pushed these products out early in order to attempt to evolve them over time. I believe that if any company of the three I mention here actually implements MVP as I believe it to be commonly understood, it is Google. Release, innovate, repeat. Google will sometimes put out products just to try them, and cull them later if the direction was wrong. If you’re careful about how often you do this, that’s fine. If you’re constantly tuning by turning off services that some segment of your customers depends on, it can cost you serious customer goodwill, as we recently saw with Google Reader (though I doubt in the long run that event will really harm Google). It has been interesting for me to watch Google build their own Nexus phones, where MVP obviously can’t work the same way. You can innovate hardware Release over Release (RoR), but you can’t ever improve a bad hardware compromise after the fact – just retouch the software inside. Google has learned this. I think Amazon learned it after the original Kindle, but even the Fire HD was marred a bit by hardware design choices, like a power button that was too easy to hit while reading. But Amazon is learning.

For Apple, I believe MVP means shipping products that make conscious choices about what features are even there. With the original iPhone, Apple was given grief because it wasn’t 3G (only years later to be berated because the 3GS, 4, and 4S continued to just be 3G). Apple doesn’t include NFC. They don’t have hardware or software to let you “bump” phones. They only recently added any sort of “wallet” functionality… The list goes on and on. Armchair pundits berate Apple because they are “late” (in the pundits’ eyes) with technology that others like Samsung have been trying to mainstream for 1-3 hardware/software cycles. Sometimes they are late. But sometimes they’re “on time”. When you look at something like 3G or 4G, it is critical that you get it working with all of the carriers you want to support, and all of their networks. If you don’t, users get ticked because the device doesn’t “just work”. During Windows XP, that was a core mantra of Jim Allchin’s – “It just works”. I have to believe that internally, Apple often follows this same mantra. So things like NFC or QR codes (now seemingly dying) – which, as much fun nerd porn as they are, aren’t consumer-usable or viable everywhere yet – aren’t in Apple’s hardware. To Apple, part of the M in MVP seems to be the hardware itself – only include the hardware that is absolutely necessary, nothing more – and unless the scenario can work ubiquitously, it gets shelved for a future derivation of the device. The software works similarly: Apple has been curtailing some software (Messages, for example) for legacy OS X versions, only enabling it on the new version. Including new hardware and software only once the scenarios are perfect, and only in new devices or software, rather than throwing it in early and improving on it later, can in many ways be seen as a forcing function to encourage movement to a new device (as Siri was with the 4S).

I’ve seen lots of geeks complain that Apple is stalling out. They look at Apple TV, where Apple doesn’t have voice, doesn’t have an app ecosystem, doesn’t have this or that… Many people complain that they’re too slow. I believe quite the opposite: that Apple, rather than falling for the “spaghetti on the wall” feature matrix we’ve seen Samsung fall for (just look at the Galaxy S4 and the features it touts), takes time – perhaps too much time, according to some people – to assess the direction of the market. Apple knows the whole board they are playing, where competitors don’t. To paraphrase Wayne Gretzky, they “skate to where the puck is going to be, not where it has been.” Most competitors seem more than happy to try and “out-feature” Apple with new devices, even when those features aren’t very usable or very functional in the real world. I think they’re losing sight of what their goal should be – building great experiences for their users – and instead believing their brass ring is “more features than Apple”. This results in a nerd porn arms race, adding features that aren’t ready for prime time, or aren’t usable by all but a small percentage of users.

Looking back at the Amazon example I gave early on, I want you to think about something. That flicker on page turn… Would Apple have ever shipped that? Would Google? Would you?

I think that developing an MVP of hardware or software (or, generally, both today) is quite complex, and requires the team making the decision to have a holistic view of what is most important to the entire team, to the customer, and to the long-term success of the product line and the company – features, quality, or date. What is viable to you? What’s the bare minimum? What would you rather leave on the cutting room floor? Finesse, finish, or features?

Given the choice, would you rather have a device with some rough edges but lots of value (it’s “cheap”, in many senses of the word)? A device that leads the market technically, but may not be completely finished either? A device that feels “old” to technophiles, but is usable by technophobes?

What does MVP mean to you?


06
Mar 13

Windows desktop apps through an iPad? You fell victim to one of the classic blunders!

I ran across a piece yesterday discussing one hospital’s lack of success with iPads and BYOD. My curiosity piqued, I examined the piece looking for where the project failed. Interestingly, but not surprisingly, it seemed that it fell apart not on the iPad, and not with their legacy application, but in the symphony (or more realistically, the cacophony) of the two together. I can’t be certain that the hospital’s solution is using Virtual Desktop Infrastructure (VDI) or Remote Desktop (RD, formerly Terminal Services) to run a legacy Windows “desktop” application remotely, but it sure sounds like it.

I’ve mentioned before how I believe that attempts to bring legacy applications – applications designed for large displays, a keyboard, and a mouse, running on Windows 7/Windows Server 2008 R2 and earlier – to the touch-centric world of Windows 8 and Windows RT are doomed to fail. iPads are no better. In fact, they’re worse. You have no option for a mouse on an iPad, and no vendor-provided keyboard solution (versus the Surface’s two keyboard options, which are, take them or leave them, keyboards – complete with trackpads). Add in the licensing and technical complexity of using VDI, and you have a recipe for disappointment.

If you don’t have the time or the funds to redesign your Windows application, but VDI or RD makes sense for you, use Windows clients, Surfaces, dumb terminals with keyboards and mice – even Chromebooks, as suggested by a follower on Twitter. All possibly valid options. But don’t use an iPad. Putting an iPad (or a keyboardless Surface, or another Windows or Android tablet) between your users and a legacy Windows desktop application is a sure-fire recipe for user frustration and disappointment. Either build secure, small-screen, touch-savvy native or Web applications designed for the tasks your users need to complete, ready to run on tablets and smartphones, or stick with legacy Windows applications – don’t try to duct-tape the two worlds together for the primary application environment you provide to your users, if all they have are touch tablets.


08
Feb 13

Task-Oriented Computing

Over the past six years, as the iPhone, then iPad, and similar devices have caused a ripple within the technology sector, the industry and pundits have struggled to define what these devices are.

From the beginning, they were always classified as “content consumption devices”. But this was a misnomer then, and it’s definitely wrong today. Whether we’re talking about Apple’s devices, Android phones or tablets, Blackberry’s new phones, or devices running Windows 8/RT and Windows Phone, calling them content consumption devices is just plain wrong.

A while ago, I wrote about hero apps and promiscuous apps. I didn’t say it then, but I’ll clarify it now. Promiscuous apps hit first not because they are standout applications for a device to run, but rather because they’re easy!

Friends who know me well know that I’m often comparing the auto industry of the early 1900s with today’s computing/technology fields. When you consider Henry Ford at the sunrise of the auto industry, the Quadricycle was his first attempt to build a car. This wasn’t the car he made his name with. But it’s the car that got him started. This car featured no safety equipment and no windscreen – it didn’t even have a steering wheel, instead opting for the still-common (at the time) tiller to control the vehicle.

Promiscuous applications show up on new platforms for the same reason that Henry’s Quadricycle didn’t feature rollover protection and side-impact beams. It’s easy to design the basics. It’s hard to a) think beyond what you’ve seen and b) build something complex without understanding the risks/benefits necessary to build it to begin with.

As a result, we see content portals like Netflix, Skype, Dropbox, and Amazon Kindle Reader show up first, because they have a clear and well-understood workflow that honestly isn’t that hard to bring to new platforms, so long as the platforms deliver certain fundamentals. Also, most mobile platforms are “close enough” that, with a little work, these promiscuous apps can get there quickly.

But when we look out farther in the future – in fact, when we look at Windows RT and criticize it for a lack of best-of-breed apps that exploit the platform less than 4 months after the platform first released, it’s also easy to see why those apps aren’t on Windows RT or in the Windows Store (yet), and why they take a while to arrive on any new platform to begin with.

Developing great new apps on any platform is a combination of having the skills to exploit the platform while also intimately understanding the workflow of your potential end users. Each of these takes time; together, they can be a very complicated undertaking. As we look at apps like Tweetie (Twitter for iPhone now) and Sparrow (acquired by Google), the unique ways that they stepped back and examined the workflow requirements of their users, and built clean, constrained feature sets to meet those requirements – and often innovative interface approaches to deliver them – are key things that made them successful.

The iPad being (wrongfully, I believe) categorized as a content consumption device has everything to do with those applications that first arrived on the device (the easy ones). It took time to build applications that both exploited the platform and met the requirements of their users in a way that would drive both application adoption and platform adoption. People looked at the iPad as a consumption device from the beginning because it is easy to do so. “Look, it’s a giant screen. All it’s good for is reading books and watching cat videos.” Horsefeathers. The iPad, like Windows RT, is a “clean slate”. Given built-in WiFi and optional 3G+ connectivity, tablets become a means to perform workflow tasks in ways we’d never have considered with a computer before. From point-of-service tasks to business workflow, anytime a human needs to be fed information and asked to provide a decision or input to a workflow, a tablet or a phone can be a suitable vehicle for performing that task. Rather than the monolithic Line of Business (LOB) apps we’ve become used to over the first 20 years of Windows’ life, we’re instead approaching a model where – although they take time to design and implement correctly – more finite, task-oriented applications are coming into vogue. Using what I refer to as “task-oriented computing”, where we focus less on the business requirements of the system and more on what users need to get done during their workday, this new class of applications can be readily integrated into existing back-office systems, but offers a much easier and more constrained user workflow, faster iteration, and easier deployment when improving it, versus the classic “fat client” LOB apps of yore.

The key in task-oriented computing, of course, is understanding the workflow of your users (or your potential users, if this is a new application – whether inside or outside of a business), and distilling that workflow into the correct discrete steps necessary to result in an app that flows efficiently for the end users, and runs on the devices they need it to. A key tenet here is, of course, “less is more”: when given the choice of throwing in a complex or cumbersome feature or workflow, jettison the feature until time and understanding enable it to be executed correctly. When we look at the world of ubiquitous computing before us, the role that task-oriented computing plays is quite clear. Rather than making users take hammers to drive in screws, smaller, task-oriented applications can enable them to process workflow that may have been cumbersome before, and enable workers to perform other, more critical tasks instead.

When talking about computing today in relation to the auto industry, I often bring up the electric starter. After the death of a friend in 1910, due to a crank starter kicking back and injuring him, Henry Leland pushed to get electric starters in place on his vehicles, and opened up motoring to a populace that may have shunned motorcars before then, due to the physical strength necessary to start them, and the potential for danger if something went wrong with the crank.

When we stand back and approach computing from the perspective of “what does the software need to do in order to accommodate the user” instead of “what does the user need to do in order to accommodate the software” as we have for the last 20 years, we can begin to remove much of the complexity that computing, still in its infancy, has shoved into the face of users.


28
Oct 12

iOS is showing its age

My iPhone and my iPad are almost always running the latest version of iOS. When the App Store icon lights up with app updates, I click it like a Pavlovian parlor trick. Sometimes to regret, but not always…

My wife on the other hand? Her iPhone is running iOS 5 – she’s terrified of the new Maps app. Her App Store icon read “48” last night when I went in to try and unwind the me.com/Mac.com/iCloud.com bedlam she has accidentally created for herself. 48. 48 app updates. My OCD makes my neck itch just thinking about that. And that’s not even thinking about the chaos of the accounts that cannot be merged, which I still have to try and repair.

The original vision of iOS was that of a thin client. Fat OS, but with Web-based apps that could have been patched relatively easily, when treated as a service. But when the App Store arrived, it broke all that. From that point on, every user became their own admin. As a result, iOS devices became the new Windows. Patched only by force, or when the IT-savvy relative freaks out about how out of date the OS or apps are. Conversely, because core apps like Maps are updated with the OS (or removed, as in the case of the YouTube app), some users – even technical ones – will elect to sit this update out, and not update. While innumerable people have updated to iOS 6, lots haven’t.

People don’t like to get their tires rotated. They don’t like to get their oil changed, or their teeth cleaned. Call it laziness… Call it a desire for ruthless efficiency… People rarely perform proactive maintenance. iOS should have an option, on by default, to update in the background. More importantly, in an ecosystem where too many app authors do the bare minimum in terms of security, apps should have that same option.

The original iPhone succeeded not because of apps. No, it succeeded because it was a better, more usable phone than almost anything else on the market. It just worked. It had voicemails we could see before listening, contacts we could easily edit on the phone, and a Web browser that was better than any mobile browser we’d ever seen before.

But the OS is showing its age. Little nuances like the somewhat functional search screen, Favorites in Contacts, and VIPs in Mail show that iOS is under structural pressure to deal with the volume of data it tries to display in a viable way. Notifications and the Settings app seem fragmented and are starting to become as disorganized as the Windows Control Panel (that’s bad!). Photo Stream sharing is a joke. It’s unusable. The edges are showing.

Of all the things I could wish for in the next version of iOS – if there were one guiding mantra I could tell Tim Cook I want in the next iOS… I would say, “Please give me less of more, and more of less.” The OS may need to be expanded where it can do more with the modern hardware of the phone, after the iPhone 5 and the 5th-generation iPad, but in so many more ways it needs to be cautiously, carefully reorganized – cleaned up, with the spirit that the original iPhone and iPhone OS used to establish their role – that of simplicity, a mantra of “It just works”. OS and application updates that self-apply for all consumers except those who opt out…

I’ve been a fan of the iPhone from the beginning. But I really think the platform is showing its age, and isn’t nearly as usable as it once was. All too often lately, I look at something in the OS and have to shake my head that it works that way. It’s time to clean up the house.


10
Jun 11

It’s the attach rate, stupid!

For over a year, I’ve struggled to quantify something that I’ve felt was a truism in the iPhone vs. Android battle. I still can’t fully quantify it with evidence, but I think the market is beginning to bear out what I’ve thought was the case.

For a long time, I’ve believed that the consumers who buy Android devices and the consumers who buy iOS devices (I’m talking Android phones vs. the iPhone, primarily) are fundamentally different types of consumers. That’s not to say there aren’t similarities, or that there’s no crossover – but I believe this to be the case.

Fred Wilson, a very successful VC, has stated that the future belongs to Android (or at least that there are going to be a lot of Android devices around). Similarly, a WSJ reporter wrote a piece (subscription required) stating that the falling price of the average Android device will force Apple’s hand and make them lower the price of the iPhone.

Let’s state these theses:

  1. A crapload of Android devices in the market is good. My question – for whom? Google? App authors? Telcos? Device manufacturers (OEMs)?
  2. Android’s falling price will cause Apple to have to match it. My question – why? I don’t believe that Android consumers are Apple consumers, and vice versa.

I contend that they are both wrong.

In 2007, Apple shocked the world by releasing a phone. A REALLY EXPENSIVE phone. But this phone did something important. Every phone before it had been a device seemingly designed by committee, to meet the business goals of a wireless telco. This one was designed for the consumer first. Apple’s first foray into cell phones (the ROKR E1, in 2005) used badge engineering and iTunes compatibility to try and make an impact. It wasn’t an Apple product at all. The impact was a dull thud.

Apple surely learned a lot in that exercise – first, I believe, that a device promising any Apple experience needed to be a full Apple experience. The backstory is now the stuff of legend, but in partner AT&T (not Apple’s first choice, but one lucky/wise enough to go along with Apple’s approach), Apple found an ally willing to cede control of the device experience in exchange for temporary exclusivity of the device. I contend that this act, right here – putting Apple design first over everything else – is what founded the basis of Apple’s success with the iPhone, and why both of the theses above are wrong.

Take a look at how the two cycles progress.

How Apple’s cycle progresses:

  1. Apple delivers what customers want (in 2007, a phone designed for consumers by Apple, not by and for the telco)
  2. Customers (with, generally, higher than average discretionary income) pay a premium for devices
  3. App vendors arrive
  4. Customers pay a premium for apps
  5. App vendors thrive (process repeats as Apple expands the capabilities of iOS annually or faster)

Step 1 for Apple was delivering the first iPhone. Remember, this phone had NO 3rd party apps at launch. It had the ability to pin web pages to the home screen, and these could be designed to be “application-ish”. No dev ecosystem or tools, no App Store, no sales revenue. Oh, and it also had a very premium price of $599, and was locked to AT&T’s network.

But it had a touch-driven user interface, accelerometers, a very usable web browser, powerful email client, a camera, iTunes media integration and an Apple fit and finish to the device and software that recalled what Mac fans were used to.

That’s where we were in 2007. People paid through the nose to get a phone that put some aspect of design in front of telco business requirements.

Those initial customers, and a crazy, excited jailbreaking community who pushed the bounds of the platform further and further (and still do), encouraged Apple to reconsider their “App story” and later open the App Store. Tons of vendors arrived. Lots probably failed, a few probably succeeded, and a handful exploded to incredible success. Remember, these are games and apps for $0.99–$9.99-ish. I don’t care who you are – if you paid the reduced $399 price for the first-gen iPhone, or slightly less for the 3G and 3GS when they came out, even $9.99 isn’t going to break the bank for an amazing game or app. This in an era when a console game sold for five times that – and $9.99 was the high end; few apps sell for more.

App vendors grew, took chances on new apps that pushed what the platform could do, and they made money for it. Apple took this success and refined each successive version of the iPhone, which has not dropped in price (for the current top-end phone) since the 3GS. Combine this with a reasonably stable dev ecosystem (strong dev tools, a relatively consistent set of devices to support) and you have a strong market – a happy place for ISVs trying to make money from paid apps for iOS.

iPhone customers pay a premium for their phones – and I contend that same ethos carries through to the iTunes store. Apple customers buy far more content and apps than Android customers do. So while it’s well and good that Fred sees a glut of Android devices, I don’t see that as a good thing – at least not for Android. And unlike the WSJ reporter, I don’t see it doing anything to the iPhone’s price, either. The iPhone’s content attach rate – the spend per consumer per phone – is what created, and continues to rapidly grow, the Apple app ecosystem, Apple’s marketshare, and Apple’s revenue figures.

Why does Apple give away iOS, and effectively give away OS X Lion? Because Apple doesn’t make money on operating systems anymore. They’re a content company. Usable hardware and a usable OS are a wormhole into the content portal (and the sale of their own apps) where Apple makes more and more of their revenue. In gaming consoles, where makers lose considerable money upfront on hardware, the key is attach rate. Apple has made mobile phones no different, except they aren’t hemorrhaging money to establish and grow their market share. Through increased spend, prototypical Apple consumers continue the virtuous cycle of market expansion. The question is whether Apple can sustain that as the market expands beyond prototypical/legacy “Apple” consumers to consumers who may be more thrifty.
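
To see why attach rate beats raw unit volume, here’s a back-of-the-envelope sketch. Every number in it is hypothetical, invented purely to show the shape of the argument – the only real figure is the 30% platform cut, Apple’s published App Store split:

```python
# Back-of-the-envelope attach-rate math. All figures are hypothetical
# except the 30% platform cut (Apple's published App Store split).

def platform_content_revenue(devices, attach_rate, avg_price, platform_cut=0.30):
    """Platform owner's cut of content sales across an installed base.

    devices     -- installed base of handsets
    attach_rate -- average paid apps/content items bought per device
    avg_price   -- average price per item, in dollars
    """
    return devices * attach_rate * avg_price * platform_cut

# A smaller base of buyers who actually pay for apps...
premium = platform_content_revenue(devices=50_000_000, attach_rate=40, avg_price=2.99)
# ...versus a bigger base that mostly downloads free, ad-driven apps.
budget = platform_content_revenue(devices=150_000_000, attach_rate=4, avg_price=0.99)

print(f"high-attach base: ${premium:,.0f}")  # ~$1.8 billion
print(f"low-attach base:  ${budget:,.0f}")   # ~$178 million
```

Three times the devices, roughly a tenth the content revenue. That’s the whole point: unit volume alone doesn’t feed an app ecosystem.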

Let’s take a look at Android now.

How Google’s cycle progresses:

  1. Google delivers what telcos want (a free operating system. Actually, Google pays the telco)
  2. Telcos lock down/fork devices, flood market (craplets on the desktop, custom shells, locked firmware, no updates)
  3. Bottom drops out of Android handset market (due to oversupply)
  4. Budget-sensitive/feature phone buyers buy Android (when your customers love you for your free email, that may tell you something about their willingness to part with actual money)
  5. App vendors fail to thrive on anything but ad-subsidized apps

Yeah, I know, a bunch of you are going to get ticked at me here. “What do you mean delivered what the telcos want? Android is open source, you eediot!”

Sure. Yeah. For you and your friends who enjoy mucking with source, that’s all well and good. Consumers don’t give a crap about open source. But carriers have fallen in love with Android because it’s cheap (they get their own OS, customizable all the way down to the source level) and because Google pays them a cut of the revenue it expects from the handsets’ Internet traffic. So OEMs got paid to put an open source OS on devices instead of blowing their own money to build one, as some had before – and many of them added custom shells and glued-down applications, tricks OEMs used to pull with Windows CE/Mobile years ago. Most importantly, many phones ended up hosing consumers: devices abandoned months after release, provided with few if any updates. Any enhancements Google adds to future versions – and more importantly, any security fixes – are unavailable to a huge category of consumers.

So now we’ve got telcos all trying to grab a sword and take a swipe at Apple: trying one device, failing, trying another, failing, trying another… flooding the market with devices that are effectively identical to consumers, and many of which are, frankly, cheap. Failing to compete with the iPhone at price parity, telcos begin to cut prices and take the devices downmarket. This does little to dissuade the typical iPhone buyer, who generally has more discretionary income to spend (or waste, depending on your POV). Now, as many, including Fred, point out, this leads to a shipload of Android devices hitting the market. Um. Yay?

If you have a ton of devices coming out, flooding the market, driving prices down, you begin to attract a different category of consumer. Beyond the open source devotee, I argue that Android now attracts consumers who historically might have only gone for a feature phone. But because they are generally more budget/value conscious, they buy few if any commercial apps. The few statistics we get out of Google on Android app sales reflect this hypothesis – far more free, ad-driven apps appear to be downloaded on Android than for-pay apps.

This of course doesn’t bother Google at first, because they’re getting ad traffic from the phone itself, and now from more and more ads in apps. But the telcos hose themselves by forcing prices down, and hurt themselves and their OEM partners as the bottom falls out of the Android market. Congratulations – you’ve just replaced feature phones with Android, and there are a lot of them. But Fred’s wrong: that’s not good for app vendors. Sales are good for them, not a skinny vig off of in-app ad revenue. It’s great for Google, though. Compare this to Apple, where ISVs are broadly making money, and Apple is making considerable revenue off of device sales, content sales, and app sales – and the constant sales of millions of iPhones would seem to indicate that the WSJ reporter is incorrect: consumers are still willing to pay a premium for a consumer-driven experience.

The vicious cycle continues here for Google – though we see Google trying to wrest control back from the telcos in the form of Android anti-fragmentation clauses in contracts to try and prevent further disintegration of the Android experience.

For Apple, the platform begat the ecosystem, which pulled in the developers and attracted more consumers into the ecosystem to buy apps. As I’ve said before, a platform is nothing without apps. If a device doesn’t have any more use cases than the phone or PC you’ve already got, why would you switch? Google may glut the market with devices, but without an ecosystem of consumers willing to pay for apps, few commercial ISVs will try, and fewer apps will pull in ever less non-ad-driven revenue. Great – so Google gets ad revenue, consumers get any app they want as long as it has an ad, and they get the user interface their OEM designed, not the one Google did.

Android is a Trojan horse that harms the telcos and OEMs more than they imagine. Making more and more devices. Selling them for less and less. Combine that with the fact that Android users consume more bandwidth (again, great for Google, bad for the telco), and it sounds like a recipe for harming yourself: consumers who want more and more from you for less and less, and competitors striking back with devices that look, act, and feel just like yours, or better. No wonder the telcos are trying to lock down Android. Sounds more and more like 2006 all over again.

The question is, as Microsoft tries to break into this market, what happens to them? More on that in a bit.