06 Jul 13

The iWatch – boom or bust?

In my wife’s family, there is a term used to describe how many people can comfortably work in a kitchen at the same time. The measurement is expressed in “butts”, as in “this is a one-butt kitchen”, or the common but not very helpful “1.5-butt kitchen”. Most American kitchens aren’t more than two butts. But I digress.

I bring this up for the following reason: there is a certain level of utility you can extract from a kitchen as it exists, and no more. You cannot shove four grown adults into the typical American kitchen and expect them to be productive simultaneously. You also cannot take a single oven, whether it has two racks or not, and roast two turkeys in it – it just doesn’t work.

It’s my firm belief that this idea – the idea of a “canvas size” – applies to almost any work surface we come across, from a kitchen and the appliances in it on outward. But there is one place I find it applies incredibly well: modern digital devices.

The other day, I took out four of my Apple devices, set them side by side in order of increasing size, and pondered a bit.

  • First was my old-school iPod Nano: the older square design without a click wheel that everyone loved the idea of turning into a watch.
  • Second was my iPhone 5.
  • Third, my iPad 2.
  • Finally, my 13″ Retina MacBook Pro.

It’s really fascinating when you stop to look at tactile surfaces sorted like this. While the MacBook Pro has a massively larger screen than the iPhone 5, the touch surface of its trackpad is only marginally larger than the iPhone’s. I’ve discussed touch and digits before, but the recent discussion of the “iWatch” has me pondering this yet again.

While many people are bullish on Google Glass (disregarding the high-end price, which is sure to come down someday) or see the appeal of an Apple “iWatch”, I’m not so sure at this point. For some reason, the idea of a smart watch (beyond use as a token peripheral) or an augmented reality headset like Glass doesn’t fly for me.

That generation of iPod Nano was a neat device, and it worked alright – but not great – as a watch. Among the key problems that iOS-like Nano had when strapped on as a watch:

  1. It was huge – in the same ungainly manner as Microsoft’s SPOT watches, Suunto watches, or (the king of schlock) Swatch Pop watches.
  2. It had no Wi-Fi or Bluetooth, so it couldn’t easily be synced with any other media collection.

Even outside of its use as a watch, and for as huge as it was, the UI was hamstrung in terms of touch. Navigation on this model was unintuitive and clumsy – one of the reasons I think Apple went back to a larger display on the current Nano.

I feel like many people who get excited about Google Glass or the “iWatch” are in love with the idea of wearables without thinking about the state of the technology and – more importantly – simple physical limitations. Let’s set Google Glass aside for a bit and focus on the iWatch.

I mentioned how the Nano, used as a watch, was big for its size (stay with me). But simply because of its screen real estate, it was limited to one-finger input. Navigating the UI of this model could get rather frustrating, so it’s handy that it doesn’t matter which finger you use. <rimshot/>

Because of the physical canvas size each makes available for touch, the devices I mentioned above have different bounds on the kinds of gestures they can support (a rough sketch of how this maps to code follows the list):

  • iPod Nano – Single finger (generally index, while holding with other index/thumb)
  • iPhone 5 – Two fingers (generally index and thumb, while holding with other hand)
  • iPad 2 – Up to five fingers for gesturing, up to 8/10 for typing if your hands are small enough.
  • MacBook Pro – Up to five fingers for gesturing (though the 5-finger “pinch” gesture works with only 4 as well).
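
To illustrate, here’s a minimal sketch using UIKit’s gesture recognizers (written in today’s Swift for clarity). The function name and the maxSimultaneousFingers parameter are hypothetical, not any real Apple API – the point is simply that multi-finger gestures only exist on canvases that can physically fit the fingers.

```swift
import UIKit

// Hypothetical helper: wire up gestures appropriate to the touch canvas.
// A watch-sized screen realistically caps out at the single-finger tap;
// larger canvases can demand more simultaneous touches.
func configureGestures(for view: UIView, maxSimultaneousFingers: Int) {
    // Single-finger tap: the lowest common denominator on any touch surface.
    let tap = UITapGestureRecognizer(target: nil, action: nil)
    tap.numberOfTouchesRequired = 1
    view.addGestureRecognizer(tap)

    // Two-finger pinch only makes sense once thumb and index both fit.
    if maxSimultaneousFingers >= 2 {
        view.addGestureRecognizer(UIPinchGestureRecognizer(target: nil, action: nil))
    }

    // Four- or five-finger pans are iPad/trackpad territory.
    if maxSimultaneousFingers >= 4 {
        let pan = UIPanGestureRecognizer(target: nil, action: nil)
        pan.minimumNumberOfTouches = 4
        view.addGestureRecognizer(pan)
    }
}
```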

I don’t have an iPad mini, but for a long time I was cynical about the device for anything but e-reading, because it can’t be used with two hands for typing. Apparently there are enough people just using it as an e-reader or typing with their thumbs that they don’t mind the limitation.

So if we look at the size constraints of the Nano and ponder an “iWatch”, just what kind of I/O could it even offer? That tiny Nano wasn’t designed as a watch first – so the bezel was overly large, it featured a clip on the back, and it needed a 30-pin connector and a headphone jack… You could eliminate all of those with some work – though the headphone jack would likely need to stay for now. But even with a slightly larger display, an “iWatch” would still be limited to the following types of input:

  1. A single finger (or a stylus – not likely from Apple).
  2. Voice (both through a direct microphone and through the phone, like Glass).

Though it could support other Bluetooth peripherals, I expect they would pair to the iPhone or iPod Touch rather than to the watch itself – and the input the watch provides would be monitoring data, not keyboard/mouse/touchpad. The idea of watching someone try to type significant text onto a smart watch screen with an Apple Bluetooth keyboard is rather amusing, frankly. Even more critically, I imagine an “iWatch” would use Bluetooth Low Energy so that it wouldn’t require charging every single day. That would limit what it could connect to, but it’s pretty much a required tradeoff in my book.
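
As a concrete illustration of that monitoring-style pairing, here’s a minimal sketch – in today’s Swift, using Apple’s CoreBluetooth framework – of how a phone could subscribe to periodic readings from a low-energy wearable. The WearableMonitor class name is hypothetical; the 180D/2A37 UUIDs are the standard GATT Heart Rate service and measurement characteristic, used purely for illustration.

```swift
import CoreBluetooth

// Hypothetical sketch: the phone acts as a BLE central, monitoring a wearable
// that advertises the standard GATT Heart Rate service.
final class WearableMonitor: NSObject, CBCentralManagerDelegate, CBPeripheralDelegate {
    private var central: CBCentralManager!
    private var watch: CBPeripheral?
    private let heartRateService = CBUUID(string: "180D")       // GATT Heart Rate service
    private let heartRateMeasurement = CBUUID(string: "2A37")   // heart-rate measurement characteristic

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    // Wait for Bluetooth to power on, then scan only for the service we care about.
    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        if central.state == .poweredOn {
            central.scanForPeripherals(withServices: [heartRateService], options: nil)
        }
    }

    func centralManager(_ central: CBCentralManager, didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any], rssi RSSI: NSNumber) {
        watch = peripheral                      // keep a strong reference
        peripheral.delegate = self
        central.stopScan()
        central.connect(peripheral, options: nil)
    }

    func centralManager(_ central: CBCentralManager, didConnect peripheral: CBPeripheral) {
        peripheral.discoverServices([heartRateService])
    }

    func peripheral(_ peripheral: CBPeripheral, didDiscoverServices error: Error?) {
        for service in peripheral.services ?? [] where service.uuid == heartRateService {
            peripheral.discoverCharacteristics([heartRateMeasurement], for: service)
        }
    }

    func peripheral(_ peripheral: CBPeripheral, didDiscoverCharacteristicsFor service: CBService,
                    error: Error?) {
        for characteristic in service.characteristics ?? [] where characteristic.uuid == heartRateMeasurement {
            peripheral.setNotifyValue(true, for: characteristic)   // subscribe to periodic readings
        }
    }

    func peripheral(_ peripheral: CBPeripheral, didUpdateValueFor characteristic: CBCharacteristic,
                    error: Error?) {
        // Per the GATT spec, byte 0 is flags; byte 1 is the heart rate in the 8-bit format.
        if let data = characteristic.value, data.count >= 2 {
            print("Heart rate: \(data[1]) bpm")
        }
    }
}
```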

In terms of output, it would again be limited to a screen about the same size as the old Nano, or smaller. AirPlay in or out isn’t likely.

My cynicism about the “iWatch” is based primarily on the limited utility I see for the device. If Apple makes it, I see it being largely limited to a status indicator for the iPhone/iPod Touch/iPad it is “paired” with – likely serving up push notifications for mail/messaging/phone calls, or offering very simple I/O control for certain apps on the phone: taking Siri commands, play/pause/forward for Pandora or Spotify, tracking your calendar or tasks, mapping directions, and so on. But as I’ve discussed before, and above, the “iWatch” would likely be a poor candidate for long-form text entry, whether typed or dictated. (Dictate a blog post or book through Siri? I’ll poke my eyes with a sharp stick instead, thanks.)

For some reason, some people are fascinated by the Dick Tracy approach of issuing commands to your watch (or your glasses, or your shoe phone). But the small screen of the “iWatch” means it will be good for very narrow input and very limited output. I like Siri a lot, and I use it for some very specific tasks; but it will be a while before it or any other voice-command system is suitable for anything but short-form command-and-response tasks. Looking back at Glass, Google’s voice command may be nominally better, but again, Glass will likely be most useful as an augmented reality heads-up display/recorder.

Perhaps the low interest I have in the “iWatch”, the Pebble watch, or Google Glass can be traced back to my post discussing live tiles a few weeks ago. While I think there is some value to be had in an interconnected watch – or in smartphone command peripherals like these – I think people are so in love with the idea that they’re not necessarily seeing how constrained the utility will actually be. One finger. Voice commands. Perhaps a couple of buttons – but not many. Possibly pulse and pedometer sensors. It’s not a smartphone on your wrist; it’s a remote control (and a constrained remote display) for your phone. I believe it’ll be handy for some scenarios, but it certainly won’t replace smartphones themselves anytime soon, nor will it become a device used by the general populace – not unless it comes free in the box with each iPhone (it won’t).
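
To make that “remote control for your phone” framing concrete, here’s a minimal sketch of the narrow surface such an accessory could drive. MPRemoteCommandCenter is the real iOS transport-control API (the same one headset buttons and the lock screen use); the PlaybackEngine protocol and registerRemoteCommands function are hypothetical stand-ins for whatever a music app actually uses.

```swift
import MediaPlayer

// Hypothetical stand-in for the app's actual playback machinery.
protocol PlaybackEngine {
    func play()
    func pause()
    func skipForward()
}

// Register the handful of transport commands a paired accessory can trigger
// through the system. Each command is a single, narrow action – not
// general-purpose input.
func registerRemoteCommands(for engine: PlaybackEngine) {
    let center = MPRemoteCommandCenter.shared()

    _ = center.playCommand.addTarget { _ in
        engine.play()
        return .success
    }
    _ = center.pauseCommand.addTarget { _ in
        engine.pause()
        return .success
    }
    _ = center.nextTrackCommand.addTarget { _ in
        engine.skipForward()
        return .success
    }
}
```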

I think we’re in the early dawn of how we interact with devices and the world around us. I’m not trying to be overly cynical – I think we’ll see massive innovation over time, and see computing become more ubiquitous and spread throughout a network of devices around and on us.

For now, I don’t believe that any “iWatch” will be a stellar success – at least in the short run – but it could become one as it evolves over time to provide interfaces we can’t fathom today.


21 Mar 13

What’s your definition of Minimum Viable Product?

At lunch the other day, a friend and I were discussing the buzzword bingo of “development methodologies” (everybody’s got one).

In particular, we homed in on Minimum Viable Product (MVP) as an all-but-gibberish term, because it means something different to everyone.

How can you possibly define what an MVP is, when each of us approaches MVP with preconceived biases about what is or isn’t viable? One man’s MVP is another’s nightmare. Let me explain.

For Amazon, the original Kindle, with its flickering page turn, was an MVP. Amazon, famous for shipping… “cost-centric” products and services, has traditionally been willing to leave some sharp edges in the product. For the Kindle, this meant flickering page turns were okay. It meant that Amazon Web Services (AWS) didn’t need a great portal or useful management tools. Until their hand was forced on all three by competitors. Amazon’s MVP includes all the features they believe it needs, whether or not they’re fully baked or usable – whether or not the product still has metaphoric splinters coming off from where the saw blade of feature decisions cut it. This often works because Amazon’s core customer segment, like Walmart’s, tends to be value-driven rather than user-experience-driven.

For Google, MVP means shipping minimal products that they either call “beta” or that behave like one, tuning them, and re-releasing them. In many ways this model works, as long as customers are realistic about which features they actually use. For Google Apps, this means applications that behave largely like Microsoft Office but include only a fraction of the functionality (enough to meet the needs of a broad category of users). However, Google has traditionally pushed these products out early, attempting to evolve them over time. If any of the three companies I mention here actually implements MVP as I believe it is commonly understood, it is Google. Release, innovate, repeat. Google will sometimes put out products just to try them, and cull them later if the direction was wrong. If you’re careful about how often you do this, that’s fine. If you’re constantly tuning by turning off services that some segment of your customers depends on, it can cost you serious customer goodwill, as we recently saw with Google Reader (though I doubt that event will really harm Google in the long run).

It has been interesting for me to watch Google build their own Nexus phones, where MVP obviously can’t work the same way. You can innovate hardware Release over Release (RoR), but you can’t improve a bad hardware compromise after the fact – you can only retouch the software inside. Google has learned this. I think Amazon learned it after the original Kindle, but even the Fire HD was marred a bit by hardware design choices, like a power button placed where it was too easy to accidentally hit while reading. But Amazon is learning.

For Apple, I believe MVP means shipping products that make conscious choices about which features are even there. With the original iPhone, Apple was given grief because it wasn’t 3G (only to be berated years later because the 3GS, 4, and 4S continued to be just 3G). Apple doesn’t include NFC. They don’t have hardware or software to let you “bump” phones. They only recently added any sort of “wallet” functionality… The list goes on and on. Armchair pundits berate Apple for being “late” (in the pundits’ eyes) with technology that others like Samsung have been trying to mainstream for one to three hardware/software cycles. Sometimes they are late. But sometimes they’re “on time”. When you look at something like 3G or 4G, it is critical that you get it working with all of the carriers you want to support it, and all of their networks. If you don’t, users get ticked off because the device doesn’t “just work”. During the Windows XP era, that was a core mantra of Jim Allchin’s – “It just works”. I have to believe that internally, Apple often follows the same mantra.

So things like NFC or QR codes (now seemingly dying) – which, as much fun nerd porn as they are, aren’t consumer-usable or viable everywhere yet – aren’t in Apple’s hardware. To Apple, part of the M in MVP seems to be the hardware itself: include only the hardware that is absolutely necessary, nothing more, and unless the scenario can work ubiquitously, it gets shelved for a future derivation of the device. The software works similarly: Apple has been curtailing some software (Messages, for example) on legacy OS X versions, enabling it only on the new version. Including new hardware and software only once the scenarios are perfect, and only in new devices or software – rather than throwing it in early and improving on it later – can in many ways be seen as a forcing function to encourage movement to a new device (as Siri was with the 4S).

I’ve seen lots of geeks complain that Apple is stalling out. They look at the Apple TV, where Apple doesn’t have voice, doesn’t have an app ecosystem, doesn’t have this or that… Many people complain that they’re too slow. I believe quite the opposite: rather than falling for the “spaghetti on the wall” feature matrix we’ve seen Samsung fall for (just look at the Galaxy S4 and the features it touts), Apple takes time – perhaps too much time, according to some – to assess the direction of the market. Apple knows the whole board they are playing, while competitors don’t. To borrow from Wayne Gretzky, they “skate to where the puck is going to be, not where it has been.” Most competitors seem more than happy to try to “out-feature” Apple with new devices, even when those features aren’t very usable or very functional in the real world. I think they’re losing sight of what their goal should be – building great experiences for their users – and instead believe their brass ring is “more features than Apple”. This results in a nerd-porn arms race, adding features that aren’t ready for prime time, or that are usable by only a small percentage of users.

Looking back at the Amazon example I gave earlier, I want you to think about something. That flicker on page turn… Would Apple ever have shipped that? Would Google? Would you?

I think that developing an MVP of hardware or software (or, these days, generally both) is quite complex, and it requires the team making the decision to have a holistic view of what is most important to the entire team, to the customer, and to the long-term success of the product line and the company – features, quality, or date. What is viable to you? What’s the bare minimum? What would you rather leave on the cutting room floor? Finesse, finish, or features?

Given the choice, would you rather have a device with some rough edges but lots of value (it’s “cheap”, in many senses of the word)? A device that leads the market technically, but may not be completely finished either? Or a device that feels “old” to technophiles, but is usable by technophobes?

What does MVP mean to you?