06
Nov 14

Is Office for mobile devices free?

As soon as I saw today’s news, I thought that there would be confusion about what “Office for tablets and smartphones going free” would mean. There certainly has been.

Office for iOS and Android smartphones and tablets is indeed free, within certain bounds. I’m going to attempt to succinctly delineate the cases in which it is, and is not, free.

Office is free for you to use on your smartphone or tablet if, and only if:

  1. You are not using it for commercial purposes, and
  2. You are not performing “advanced editing”.

If you want to use the advanced editing features of Office for your smartphone or tablet as defined in the link above, you need one of the following:

  • An Office 365 Personal or Home subscription
  • A commercial Office 365 subscription which includes Office 365 ProPlus (the desktop suite).*

If you’re using Office on your smartphone or tablet for any commercial purpose, you need the following:

  • A commercial Office 365 subscription which includes Office 365 ProPlus (the desktop suite).*
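
Purely as an illustration – a hypothetical helper, not an official licensing tool – the rules above reduce to a simple decision:

```python
def office_mobile_license_needed(commercial_use: bool, advanced_editing: bool) -> str:
    """Sketch of the Office for iOS/Android licensing boundaries as described above.

    Hypothetical helper for illustration only; it just encodes the free/paid
    cases this post walks through.
    """
    if commercial_use:
        # Any commercial use requires a commercial Office 365 subscription
        # that includes the desktop suite (Office 365 ProPlus or equivalent).
        return "Commercial Office 365 subscription including the desktop suite"
    if advanced_editing:
        # Non-commercial advanced editing requires a consumer subscription,
        # or a qualifying commercial subscription.
        return "Office 365 Personal/Home (or a qualifying commercial subscription)"
    # Non-commercial viewing and basic editing: free.
    return "Free"
```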

For consumers, this change is great, and convenient. You’ll be able to use Office for basic edits on almost any mobile device for free. For commercial organizations, I’m concerned about how they can prevent this becoming a large license compliance issue when employees bring their own iPads in to work.

For your reference, here are the license agreements for Excel for iOS, PowerPoint for iOS, and Word for iOS.

*I wanted to add a footnote here to clarify one ambiguity. The new “Business” Office 365 plans don’t technically include Office 365 ProPlus – they are more akin to an “Office 365 Standard”, which appears to have no overarching branding. Regardless, if you have Office 365 Business or Office 365 Business Premium, which include the desktop suite, you also have rights to the Office mobile applications.

Learn more about how to properly license Office for smartphones and tablets at a Directions on Microsoft Licensing Boot Camp. The next event is in Seattle on Dec. 8-9, 2014. We’ll cover the latest info on Office 365, Windows Per User licensing, and much more.


07
Sep 14

On the death of files and folders

As I write this, I’m on a plane at 30,000+ feet, headed to Chicago. Seatmates include a couple from Toronto headed home from a cruise to Alaska. The husband and I talk technology a bit, and he mentions that his wife particularly enjoys sending letters as they travel. He and I both smile as we consider the novelty in 2014 of taking a piece of paper, writing thoughts to friends and family, and putting it in an envelope to travel around the world to be warmly received by the recipient.

Both Windows and Mac computers today are centered around the classic files-and-folders metaphor we’ve all worked with for decades. From the beginning of the computer, mankind has strained to graft metaphors from the physical world onto our digital environments: the desktop, the briefcase, files that look like paper, folders that look like hanging file folders. Even today, as the use of removable media decreases, we hang on to the floppy diskette icon – a symbol that means nothing to the pre-teens of today – to command an application to “write” data to physical storage.

Why?

It’s time to stop using metaphors from the physical world – or at least to stop sending “files” to collaborators in order to have them receive work we deign to share with them.

Writing this post involves me eating a bit of crow – but only a bit. Prior to my leaving Microsoft in 2004, I had a rather… heated… conversation with a member of the WinFS team about a topic that is remarkably close to this. WinFS was an attempt to take files as we knew them and treat them as “objects”. In short, WinFS would take the legacy .ppt files as you knew them, and deserialize (decompose) them into a giant central data store within Windows based upon SQL Server, allowing you to search, organize, and move them in an easier manner. But a fundamental question I could never get answered by that team (the core of my heated conversation) was how that data would be shared with people external to your computer. WinFS would always have to serialize the data back out into a .ppt file (or some other “container”) in order to be sent to someone else. The WinFS team sought to convert everything on your system into a URL, as well – so you would have navigated the local file system almost as if your local machine were a Web server, rather than using the local file and folder hierarchy that we had all become used to since the earliest versions of Windows or the Mac.

So as I look back on WinFS, some of the ideas were right, but in classic Microsoft form, at best it may have been a bit of premature innovation, and at worst it may have been nerd porn relatively disconnected from actual user scenarios and use cases.

From the dawn of the iPhone, power users have complained that iOS lacked something as simple as a file explorer/file picker. This wasn’t an error on Apple’s part; a significant part of Apple’s ease of use (largely aped by Android, and by Windows at least in WinRT and Windows Phone applications) comes from abstracting away the legacy file-and-folder bird’s nest of Windows, the Mac, etc.

As we enter the fall cavalcade of consumer devices ahead of the holiday, one truth appears plainly clear: standalone “cloud storage” as we know it is largely headed for the economic off-ramp. The three main platform players have now made cloud storage a platform pillar, not an opportunity to be filled by partners. Apple (iCloud Drive), Google (Google Drive), and Microsoft (OneDrive and OneDrive for Business – their consumer and business offerings, respectively) have all placed it firmly within their respective platforms. Lock-in now isn’t just about the device or the OS; it’s about where your files live, as that can help create a platform network effect (AT&T Friends and Family, but in the cloud). I know that for me, my entire family is iOS-based. I can send a link to iCloud Drive files to any member of my family and know they can see the photo I took or the words I wrote.

But that’s just it. Regardless of how my file is stored in Apple’s, Google’s, or Microsoft’s hosted storage, I share it through a link. Every “document” envelope as we knew it in the past is now a URL, with applications on each device capable of opening their file content.

Moreover, today’s worker generally wants their work:

  1. Saved automatically
  2. Backed up to the cloud automatically (within reason, and protected accordingly)
  3. Versioned and revertible
  4. Accessible anywhere
  5. Coauthoring capable (work with one or more colleagues concurrently without needing to save and exchange a “file”)
As these sorts of features become ubiquitous across productivity tools, the line between a “file” and a “URL” becomes increasingly blurred – and the more, well, the more our computers start acting just like the WinFS team wanted them to over a decade ago.

If you look at the typical user’s desktop, it’s a dumping ground of documents. It’s a mess. So are their favorites/bookmarks, music, videos, and any other “file type” they have.

On the Mac, iTunes (music metadata), iPhoto (face, EXIF, and date info), and now the Finder itself (properties and now tags) are a complete mess of metadata. A colleague in the Longhorn Client Product Management Group was responsible for owning the photo experience for WinFS. Even then, I think I crushed his spirit by pointing out what a pain in the ass it was going to be for users to enter all of the metadata for their photos as they returned from trips, in order to make the photos anything more than a digital shoebox that sits under the bed.

I’m going to tell all the nerds in the world a secret. Ready? Users don’t screw around entering metadata. So anything you build that is metadata-centric, but doesn’t populate the metadata for the user, is… largely unused.

I mention this because, as we move toward vendor-centered repositories of our documents, it becomes an opportunity for vendors to do much of what WinFS wanted to do and help users catalog and organize their data – but it has to be done almost automatically for them. I’m somewhat excited about Microsoft’s Delve (née Oslo), primarily because if it is done right (and if/when Google offers a similar feature), users will be able to discover content across the enterprise that can help them with their job. The written word will in so many ways become a properly archived, searchable, and collaboration-ready tool for businesses (and, ideally, for users themselves).

Part of the direction I think we need to see is tools that become better at organizing and cataloging our information as we create it, and at keeping track of the lineage of the written word and digital information. Create a file using a given template? That should be easily visible. Take a trip with family members? Photos should be easily stitched together into a searchable family album.
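
As a toy illustration of that last point – purely a sketch with hypothetical paths and album naming, not any vendor’s actual feature – grouping a folder of photos into trip albums using the EXIF dates the camera already recorded might look something like this:

```python
# Toy sketch: build "trip" albums from EXIF dates the camera already wrote,
# so the user never types metadata by hand. Assumes the Pillow library;
# the folder path and album naming are hypothetical.
from collections import defaultdict
from datetime import datetime, timedelta
from pathlib import Path

from PIL import Image


def photo_date(path: Path) -> datetime | None:
    """Read the capture date from a photo's EXIF block, if present."""
    exif = Image.open(path).getexif()
    raw = exif.get(36867) or exif.get(306)  # DateTimeOriginal, else DateTime
    return datetime.strptime(raw, "%Y:%m:%d %H:%M:%S") if raw else None


def group_into_trips(photo_dir: Path, gap_days: int = 3) -> dict[str, list[Path]]:
    """Cluster photos into albums; a gap of several days starts a new trip."""
    dated = sorted(
        (taken, p) for p in photo_dir.glob("*.jpg")
        if (taken := photo_date(p)) is not None
    )
    albums: dict[str, list[Path]] = defaultdict(list)
    trip_start = last_seen = None
    for taken, path in dated:
        if last_seen is None or taken - last_seen > timedelta(days=gap_days):
            trip_start = taken  # long gap: a new trip/album begins here
        last_seen = taken
        albums[trip_start.strftime("Trip of %B %Y")].append(path)
    return dict(albums)
```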

Power users, of course, want to feel a sense of control over the files and folders on their computing devices (some of them even enjoy filling in metadata fields). These are the same users who complained loudly that iOS didn’t have a Finder or traditional file picker, and who persuaded Microsoft to add a file explorer of sorts to Windows Phone, even as Windows 8 and Microsoft’s OneDrive and OneDrive for Business services began pushing the legacy Windows File Explorer out of view. There’s a good likelihood that next year’s release of Windows 9 could see the legacy Win32 desktop disappear on touch-centric Windows devices (much like Windows Phone 8.x, where Win32 still technically exists, but is kept out of view). I firmly expect this move will (to say it gently) irk Windows power users. These are the same type of users who freaked out when Apple removed the Save functionality from Pages/Numbers/Keynote. Yet that approach is now commonplace in the productivity suites of all of the “big 3” productivity players (Microsoft, Google, and Apple), where real-time coauthoring requires an abstraction of the traditional “Save” verb we all became used to in the 1980s. For Windows to succeed as a touch environment as approachable to novices as iOS, it will mean jettisoning a visible Win32 and the File Explorer. With this, OneDrive and the simplified file pickers in Windows become the centerpiece of how users interact with local files.

I’m not saying that files and folders will disappear tomorrow, or that they’ll ever really disappear entirely. But increasingly, especially in collaboration-based use cases, the file and folder metaphors will fall by the wayside, replaced by Web-based experiences and the use of URLs, with dedicated platform-specific local, mobile, or online apps interacting with them.


06
Jul 13

The iWatch – boom or bust?

In my wife’s family, there is a term used to describe how many people can comfortably work in a kitchen at the same time. The measurement is described in “butts”, as in “this is a one-butt kitchen”, or the common, but not very helpful “1.5 butt kitchen”. Most American kitchens aren’t more than 2 butts. But I digress.

I bring this up for the following reason. There is a certain level of utility that you can exploit in a kitchen as it exists, and no more. You cannot take the typical American kitchen and shove 4 grown adults in it and expect them to be productive simultaneously. You also cannot take a single oven, with two racks or not, and roast two turkeys – it just doesn’t work.

It’s my firm belief that this idea – the idea of a “canvas size” – applies to almost any work surface we come across, from a kitchen or the appliances therein, and beyond. But there is one place I find it applies incredibly well: to modern digital devices.

The other day, I took out four of my Apple devices, and sat them side-by-side in increasing size order, and pondered a bit.

  • First was my old-school Nano – the older square design without a click-wheel that everyone loved the idea of making a watch out of.
  • Second was my iPhone 5.
  • Third, my iPad 2.
  • Finally, my 13″ Retina MacBook Pro.

It’s really fascinating when you stop to look at tactile surfaces sorted like this. While the MacBook Pro has a massively larger screen than the iPhone 5, the touch-surface of the TrackPad is only marginally larger than that of the iPhone. I’ve discussed touch and digits before, but the recent discussion of the “iWatch” has me pondering this yet again.

While many people are bullish on Google Glass (disregarding the high-end price that is sure to come down someday) or see the appeal of an Apple “iWatch”, I’m not so sure at this point. For some reason, the idea of a smart watch (aside from as a token peripheral), or an augmented reality headset like Glass doesn’t fly for me.

That generation of iPod Nano was a neat device, and worked all right – but not great – as a watch. Among the key problems that touchscreen Nano had when strapped on as a watch?

  1. It was huge – in the same ungainly manner as Microsoft’s SPOT watches, Suunto watches, or (the king of schlock), Swatch Pop watches.
  2. It had no WiFi or Bluetooth, so it couldn’t easily be synced to any other media collection.

Even outside of its use as a watch, for as huge as it was, the UI was hamstrung in terms of touch. Navigation of this model was unintuitive and clumsy – one of the reasons I think Apple went back to a larger display on the current Nano.

I feel like many people who get excited about Google Glass or the “iWatch” are in love with the idea of wearables, without thinking about the state of technology and – more importantly, simple physical limitations. Let’s discard Google Glass for a bit, and focus on the iWatch.

I mentioned how the Nano model used as a watch was big, for its size (stay with me). But simply because of screen real-estate, it was limited to one-finger input. Navigating the UI of this model can get rather frustrating, so it’s handy that it doesn’t matter which finger you use. <rimshot/>

Because of their physical canvas size available for touch, each of the devices I mentioned above has different bounds of what kinds of gestures it can support:

  • iPod Nano – Single finger (generally index, while holding with other index/thumb)
  • iPhone 5 – Two fingers (generally index and thumb, while holding with other hand)
  • iPad 2 – Up to five fingers for gesturing, up to 8/10 for typing if your hands are small enough.
  • MacBook Pro – Up to five fingers for gesturing (though the 5-finger “pinch” gesture works with only 4 as well).

I don’t have an iPad Mini, but for a long time I was cynical about the device for anything but an e-reader, due to the fact that it can’t be used with two hands for typing. Apparently there are enough people just using it as an e-reader or typing with their thumbs that they don’t mind the limitations.

So if we look at the size constraints of the Nano and ponder an “iWatch”, just what kind of I/O could it even offer? The tiny Nano wasn’t designed first as a watch – so the bezel was overly large, it featured a clip on the back, it needed a 30-pin connector and headphone jack… You could eliminate all of those with work – though the headphone jack would likely need to stay for now. But even with a slightly larger display, an “iWatch” would still be limited to the following types of input:

  1. A single finger (or a stylus – not likely from Apple).
  2. Voice (both through a direct microphone and through the phone, like Glass).

Though it could support other Bluetooth peripherals, I expect that they’ll pair to the iPhone or iPod Touch, rather than the watch itself – and the input would be monitoring, not keyboard/mouse/touchpad. The idea of watching someone try to type significant text on a smart watch screen with an Apple Bluetooth keyboard is rather amusing, frankly. Even more critically, I imagine that an “iWatch” would use Bluetooth Low Energy in order to not require charging every single day. It’d limit what it could connect to, but that’s pretty much a required tradeoff in my book.

In terms of output, it would again be limited to a screen about the same size as the old Nano, or smaller. AirPlay in or out isn’t likely.

My cynicism about the “iWatch” is based primarily on the limited utility I see for the device. In many ways, if Apple makes the device, I see it being largely limited to a status indicator for the iPhone/iPod Touch/iPad that it is “paired” with. Likely serving to provide push notifications for mail/messaging/phone calls, or very simple I/O control for certain apps on the phone. For example, taking Siri commands, play/pause/forward for Pandora or Spotify, tracking your calendar, tasks, or mapping directions, etc. But as I’ve discussed before, and above, the “iWatch” would likely be a poor candidate for long-form text entry, whether typed or dictated. (Dictate a blog post or book through Siri? I’ll poke my eyes with a sharp stick instead, thanks.) For some reason, some people are fascinated by the Dick Tracy approach of issuing commands to your watch (or your glasses, or your shoe phone). But the small screen of the “iWatch” means it will be good for very narrow input, and very limited output. I like Siri a lot, and use it for some very specific tasks. But it will be a while before it or any other voice command is suitable for anything but short-form command-response tasks. Looking back at Glass, Google’s voice command there may be nominally better, but again, will likely be most useful as an augmented reality heads-up-display/recorder.

Perhaps the low interest I have in the “iWatch”, Pebble Watch, or Google Glass can be traced back to my post discussing live tiles a few weeks ago. While I think there is some value to be had with an interconnected watch – or smartphone command peripherals like this, I think people are so in love with the idea that they’re not necessarily seeing how constrained the utility actually will be. One finger. Voice command. Perhaps a couple of buttons – but not many. Possibly pulse and pedometer. It’s not a smartphone on your wrist, it’s a remote control (and a constrained remote display) for your phone. I believe it’ll be handy for some scenarios, but it certainly won’t replace smartphones themselves anytime soon, nor will it become a device used by the general populace – not unless it comes free in the box with each iPhone (it won’t).

I think we’re in the early dawn of how we interact with devices and the world around us. I’m not trying to be overly cynical – I think we’ll see massive innovation over time, and see computing become more ubiquitous and spread throughout a network of devices around and on us.

For now, I don’t believe that any “iWatch” will be a stellar success – at least in the short run – but it could become one as it evolves over time to provide interfaces we can’t fathom today.


14
May 13

The Cloud is the App is the Cloud.

During the last week, I have had an incredible number of conversations about Office 365 with press, customers, and peers. It’s apparent that, as has happened with Microsoft products many times before at v3.0, version 3.0 of their hosted services is the one that could put some points on the board, if not take the lead in the game.

But one thing has been painfully clear to me for quite some time, and the last week only serves to reinforce it. As I’ve mentioned before, there’s not only confusion about Microsoft’s on-premises and hosted offerings, but confusion about what Office 365 even is. The definitions are squishy, and Microsoft isn’t doing a great job of really articulating what Office 365 brings to the table. Many assume that Office 365 is primarily about the Office client applications (when in fact only the premium business editions of Office 365 even include the desktop suite!). Many others assume that Office 365 is only hosted services and Web-based applications, along the lines of Google Apps for Business.

The truth is, there’s a medley of Office 365 editions among the 4 Office 365 “families” (Small Business, Midsize Business, Enterprise/Academic/Government, and Home Premium). But one thing is true – Office 365 is about hosted services (Exchange Online/Lync Online/SharePoint Online for businesses, or Outlook.com/Skype/SkyDrive for consumers), and – predominantly – the Office desktop application suite.

I bring this up because many people point at native applications and Web applications and say that there is a chasm growing… an unending rift that threatens to tear apart the ecosystem. I disagree. I think it is quite the opposite. Web apps (“cloud apps” if you like) and native apps (“apps”) are colliding at high speed. Even today it isn’t really that easy to tell them apart, and it’s only going to get harder.

When Adobe announced their Cloud Connect service last week, some people said there wasn’t much “cloud” about it. In general, I agree. To that same end, one can point a finger at Office 365 and say, “that’s not cloud either” because to deliver the most full-featured experience, it relies upon a so-called “fat client” locally installed on each endpoint, even though for a business, a huge amount of the value, and a large amount of the cost, is coming from the cloud services that those apps connect to.

To me, this is much ado about nothing. It’s true that one can’t call Office 365 (or Cloud Connect) a 100% cloud solution; at least in the case of Office, each version of Microsoft’s hosted services has come closer than the one before to delivering much of the value of a cloud service, but it continues to rely on those local bits rather than running the entire application through a Web browser. With Office, this is quite intentional. The day Office runs as well on the Web as it does on Windows is the day that Microsoft announces they’re shutting down the Windows division.

But what’s interesting is that as we discuss/debate whether Microsoft’s and Adobe’s offerings are indeed “cloudy enough”, as they strive to provide more thick apps as a service, Google is working on the opposite: applications that run in the browser, but exploit more local resources. When we look at the high-speed collision of Android into ChromeOS, as well as Microsoft’s convergence of Web development into the WinRT application framework, this all begins to – as a goal – make sense.

In 1995, as the Web was dawning, it wasn’t about applications. It was about sites. It gradually became about applications and APIs – about getting things done, with the Web, not our new local networks, as the sole communication medium. Conversely, even the iPhone began with a very finite suite of actions that a user could perform. One screen of apps that Apple provided, and extensibility only by pinning Websites to the Home screen. Nothing that actually exploited the native power and functionality of the phone to help users complete tasks more readily. Apple eventually provided the full SDK that enabled native, local applications, which would still often connect out to the Internet to perform their role – when the Internet was available.

Windows has largely always been about “fat client” applications, even going so far as to have the now quite old – but once new and novel – Remote Desktop Protocol to enable fat clients to become light-ish weight, as long as a network connection back to the server (or eventually desktop) running the application was available.

I bring these examples up because the idea of “cloud applications” or cloud services is, as I noted, becoming squishy and hard to explicitly define, though I have to personally consider whether I really care that deeply about when applications are or are not cloudy (or are partly cloudy?).

Users buy (or use) applications because they have a specific task they need to complete. Users don’t care what framework the application is written in, what languages were used, what operating system any back-end of the application is running on, or what Web server it is connecting to.

What users do care about is getting the task done that led them to that application to begin with. Importantly, they need productivity wherever it can be available. With applications that are cloud-only, when you have a slow, or nonexistent Internet connection, you are… dead. You have no productivity. Flying on a plane but editing a Word document? You need a fat client. Whether it’s Google Apps for Business running on a Chromebook (with caching), QuickOffice on an iPad, or Office 2013 Pro Plus running on a Windows 7 laptop, without some local logic and file caching, you’re SOL at 39,000 feet without an Internet connection.

Conversely, if you are solely using Microsoft Office (or Pages) and you’re editing that important doc at an airport that happens to have WiFi, before a flight that does not, you might be SOL if you don’t sync the document to the Web and then accidentally leave your laptop on board after the flight, never to be seen again. Once upon a time, productivity meant storing files locally only, or hand-pushing files to the Web. Both Office 2013 and Apple’s iWork (through iCloud) now offer great synchronization.

The point is that there is value to having a thicker client:

  • Can take advantage of local hardware, data, and services.
  • Can perform some level of its role offline.

But there is value to taking advantage of the Web:

  • Saved state from the application can be recovered from any other device with the application and the correct credentials.
  • Can hook into other services and APIs available over the Web, pull in additional data sources, and collaborate with additional users inside or outside the organization.
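
A minimal sketch of how those two sets of strengths combine in an “offline-first” design – with a hypothetical endpoint and cache path, not any vendor’s actual sync client – might look like this: every edit is written locally first, then pushed to hosted storage whenever a connection is available.

```python
# Offline-first sketch: save every edit locally so work survives a dead
# connection, and push to hosted storage when the network comes back.
# The cloud endpoint and cache path here are hypothetical placeholders.
import json
import time
from pathlib import Path

import requests  # assumed available; any HTTP client would do

LOCAL_CACHE = Path.home() / ".mydocs-cache"   # hypothetical local store
CLOUD_URL = "https://example.com/api/docs"    # hypothetical sync endpoint


def save(doc_id: str, content: str) -> None:
    """Always write locally first; mark the change as pending upload."""
    LOCAL_CACHE.mkdir(exist_ok=True)
    record = {"doc_id": doc_id, "content": content,
              "saved_at": time.time(), "synced": False}
    (LOCAL_CACHE / f"{doc_id}.json").write_text(json.dumps(record))


def sync_pending() -> None:
    """Upload anything not yet synced; failures simply wait for the next attempt."""
    for path in LOCAL_CACHE.glob("*.json"):
        record = json.loads(path.read_text())
        if record["synced"]:
            continue
        try:
            resp = requests.put(f"{CLOUD_URL}/{record['doc_id']}", json=record, timeout=5)
            resp.raise_for_status()
        except requests.RequestException:
            return  # offline (or server trouble): keep the local copy, retry later
        record["synced"] = True
        path.write_text(json.dumps(record))
```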

But I believe that the merits of both mean that the future is in applications that are both local and cloudy – across the board. Many people are bullish that Chromebooks are the future. Many people think Chromebooks are bull. I think the truth is somewhere in the middle. As desktop productivity evolves, it will have deeper and deeper tentacles out to the Web – for storage and backup, for extensibility, and more. Conversely, as purely Web-based productivity evolves, expect the opposite. It will continue to have greater local storage and more ability to exploit local device capabilities, as we’re seeing Chrome and ChromeOS do.

Office 365 isn’t a cloud-only service in most tiers. Nor do I ever really expect it to be. Frankly, though, Google Apps isn’t really a cloud-only service today – and I don’t expect it to go any direction except towards a more offline capable story as well. Web apps and native apps aren’t a binary switch. We won’t have one or the other in the future. Before too long, most Web apps will have a local component, and most local applications will have a Web component. The best part is that when we reach this point, “cloud” will mean even less than it means today.


21
Mar 13

What’s your definition of Minimum Viable Product?

At lunch the other day, a friend and I were discussing the buzzword bingo of “development methodologies” (everybody’s got one).

In particular, we homed in on Minimum Viable Product (MVP) as an all-but-gibberish term, because it means something different to everyone.

How can you possibly define what is an MVP, when each one of us approaches MVP with predisposed biases of what is viable or not? One man’s MVP is another’s nightmare. Let me explain.

For Amazon, the original Kindle, with its flickering page turn, was an MVP. Amazon, famous for shipping… “cost-centric” products and services, was traditionally willing to leave some sharp edges in the product. For the Kindle, this meant flickering page turns were okay. It meant that Amazon Web Services (AWS) didn’t need a great portal, or useful management tools. Until their hand was forced on all three by competitors. Amazon’s MVP includes all the features they believe it needs, whether they’re fully baked or usable, or whether the product still has metaphoric splinters coming off from where the saw blade of feature decisions cut it. This often works because Amazon’s core customer segment, like Walmart’s, tends to be value-driven, rather than user-experience driven.

For Google, MVP means shipping minimal products that they either call “Beta”, or that behave like a beta, tuning them, and re-releasing them. In many ways, this model works, as long as customers are realistic about what features they actually use. For Google Apps, this means applications that behave largely like Microsoft Office, but include only a fraction of the functionality (enough to meet the needs of a broad category of users). However, Google has traditionally pushed these products out early in order to attempt to evolve them over time. I believe that if any company of the three I mention here actually implements MVP as I believe it to be commonly understood, it is Google. Release, innovate, repeat. Google will sometimes put out products just to try them, and cull them later if the direction was wrong. If you’re careful about how often you do this, that’s fine. If you’re constantly tuning by turning off services that some segment of your customers depend on, it can cost you serious customer goodwill, as we recently saw with Google Reader (though I doubt in the long run that event will really harm Google). It has been interesting for me to watch Google build their own Nexus phones, where MVP obviously can’t work the same. You can innovate hardware Release over Release (RoR), but you can’t ever improve a bad hardware compromise after the fact – just retouch the software inside. Google has learned this. I think Amazon learned it after the original Kindle, but even the Fire HD was marred a bit by hardware design choices, like a power button that made it too easy to turn the device off while reading. But Amazon is learning.

For Apple, I believe MVP means shipping products that make conscious choices about what features are even there. With the original iPhone, Apple was given grief because it wasn’t 3G (only years later to be berated because the 3GS, 4, and 4S continued to just be 3G). Apple doesn’t include NFC. They don’t have hardware or software to let you “bump” phones. They only recently added any sort of “wallet” functionality… The list goes on and on. Armchair pundits berate Apple because they are “late” (in the pundit’s eyes) with technology that others like Samsung have been trying to mainstream for 1-3 hardware/software cycles. Sometimes they are late. But sometimes they’re “on-time”. When you look at something like 3G or 4G, it is critical that you get it working with all of the carriers you want to support it, and all of their networks. If you don’t, users get ticked because the device doesn’t “just work”. During Windows XP, that was a core mantra of Jim Allchin’s – “It just works”. I have to believe that internally, Apple often follows this same mantra. So things like NFC or QR codes (now seemingly dying) – which as much as they are fun nerd porn, aren’t consumer usable or viable everywhere yet – aren’t in Apple’s hardware. To Apple, part of the M in MVP seems to be the hardware itself – only include the hardware that is absolutely necessary – nothing more – and unless the scenario can work ubiquitously, it gets shelved for a future derivation of the device. The software works similarly, where Apple has been curtailing some software (Messages, for example) for legacy OS X versions, only enabling it on the new version. Including new hardware and software only as the scenarios are perfect, and only in new devices or software, rather than throwing it in early and improving on it later, can in many ways be seen as a forcing function to encourage movement to a new device (as Siri was with the 4S).

I’ve seen lots of geeks complain that Apple is stalling out. They look at Apple TV, where Apple doesn’t have voice, doesn’t have an app ecosystem, doesn’t have this or that… Many people complain that they’re too slow. I believe quite the opposite: that Apple, rather than falling for the “spaghetti on the wall” feature matrix we’ve seen Samsung fall for (just look at the Galaxy S4 and the features it touts), takes time – perhaps too much time, according to some people – to assess the direction of the market. Apple knows the whole board they are playing, where competitors don’t. To paraphrase Wayne Gretzky, they “skate to where the puck is going to be, not where it has been.” Most competitors seem more than happy to try and “out-feature” Apple with new devices, even when those features aren’t very usable or very functional in the real world. I think they’re losing sight of what their goal should be, which is building great experiences for their users, and instead believing their brass ring is “more features than Apple”. This results in a nerd porn arms race, adding features that aren’t ready for prime time, or are usable by only a small percentage of users.

Looking back at the Amazon example I gave early on, I want you to think about something. That flicker on page turn… Would Apple have ever shipped that? Would Google? Would you?

I think that developing an MVP of hardware or software (or generally both, today) is quite complex, and requires the team making the decision to have a holistic view about what is most important to the entire team, to the customer, and to the long-term success of your product line and your company – features, quality, or date. What is viable to you? What’s the bare minimum? What would you rather leave on the cutting room floor? Finesse, finish, or features?

Given the choice, would you rather have a device with some rough edges but lots of value (it’s “cheap”, in many senses of the word)? A device that leads the market technically, but may not be completely finished either? A device that feels “old” to technophiles, but is usable by technophobes?

What does MVP mean to you?


10
Jun 11

It’s the attach rate, stupid!

For over a year, I’ve struggled to quantify something that I’ve felt was a truism in the iPhone vs. Android battle. I still can’t fully quantify it with evidence, but I think the market is beginning to bear out what I’ve thought was the case.

For a long time, I’ve believed that the consumers who buy Android devices and the consumers who buy iOS devices (I’m talking Android phones versus the iPhone, primarily) are fundamentally different types of consumers. That’s not to say that there aren’t some similarities, or that there’s no crossover – but I believe this to be the case.

Fred Wilson, a very successful VC, has stated that the future belongs to Android (or at least that there are going to be a lot of Android devices around). Similarly, a WSJ reporter wrote a piece (subscription required) stating that the declining price of the average Android device will force Apple’s hand and make them lower the price of the iPhone.

Let’s state these theses:

  1. A crapload of Android devices in the market is good. My question – for whom? Google? App authors? Telcos? Device manufacturers (OEMs)?
  2. Android’s falling price will cause Apple to have to match it. My question – why? I don’t believe that Android consumers are Apple consumers, and vice versa.

I contend that they are both wrong.

In 2007, Apple shocked the world by releasing a phone. A REALLY EXPENSIVE phone. But this phone did something important. Every phone before it had been a device seemingly designed by committee, to meet the business goals of a wireless telco. This one was designed for the consumer first. Apple’s first foray into cell phones (the ROKR E1, in 2005) used badge engineering and iTunes compatibility to try to make an impact. It wasn’t an Apple product at all. The impact was a dull thud.

Apple surely learned a lot in that exercise, first of which I believe was that a device that promised any Apple experience needed to be a full Apple experience. The backstory is now the stuff of legend, but in partner AT&T (not Apple’s first choice, but one lucky/wise enough to go along with Apple’s approach), Apple found an ally willing to cede control of the device experience in exchange for temporary exclusivity on the device. I contend that this act, right here – putting Apple’s design first over everything else – is what founded the basis of Apple’s success with the iPhone, and why both of the theses above are wrong.

How Apple’s cycle progresses:

  1. Apple delivers what customers want (in 2007, a phone designed for consumers by Apple, not by and for the telco)
  2. Customers (with, generally, higher than average discretionary income) pay a premium for devices
  3. App vendors arrive
  4. Customers pay a premium for apps
  5. App vendors thrive (process repeats as Apple expands the capabilities of iOS annually or faster)

Step 1 for Apple was delivering the first iPhone. Remember, this phone had NO 3rd party apps at launch. It had the ability to pin web pages to the home screen, and these could be designed to be “application-ish”. No dev ecosystem or tools, no App Store, no sales revenue. Oh, and it also had a very premium price of $599, and was locked to AT&T’s network.

But it had a touch-driven user interface, accelerometers, a very usable web browser, powerful email client, a camera, iTunes media integration and an Apple fit and finish to the device and software that recalled what Mac fans were used to.

That’s where we were in 2007. People paid through the nose to get a phone that put some aspect of design in front of telco business requirements.

Those initial customers – and a crazy, excited jailbreaking community who pushed the bounds of the platform further and further (and still do) – encouraged Apple to reconsider their “App story”, and later open the App Store. Tons of vendors arrived. Lots probably failed, a few probably succeeded, and a handful exploded to incredible success. Remember, these are games and apps for $0.99 to $9.99-ish. I don’t care who you are. If you paid the reduced $399 price for the first-gen iPhone, or slightly less for the 3G and 3GS when they came out, even $9.99 isn’t going to break the bank for an amazing game or app – in an era when a console game sold for 5 times that. And that was the high end for apps; few sell for more than that. App vendors grew, took chances on new apps that pushed what the platform could do, and they made money for it. Apple took this success and refined each successive version of the iPhone, which has not dropped in price (for the current top-end phone) since the 3GS. Combine this with the reasonably stable dev ecosystem (strong dev tools, a relatively consistent set of devices to support) and it creates a strong market, and a happy place for ISVs to try to make money from paid apps for iOS.

iPhone customers pay a premium for their phones – and I contend that same ethos passes through to the iTunes Store. Apple customers buy far more content and apps than Android customers do. So while it’s well and good that Fred sees a glut of Android devices, I don’t see that as a good thing – at least not for Android. And unlike the WSJ reporter, I don’t see it doing anything to the iPhone’s price, either. The iPhone’s content attach rate – the spend per consumer per phone – is what created, and continues to rapidly grow, the Apple app ecosystem, Apple’s marketshare, and Apple’s revenue figures. Why does Apple give away iOS, and effectively give away OS X Lion? Because Apple doesn’t make money in operating systems anymore. They’re a content company. Usable hardware and a usable OS are a wormhole into the content portal (and the sale of their own apps) where Apple makes more and more of their revenue. In gaming consoles, where you lose considerable money upfront, the key is attach rate. Apple has made mobile phones no different, except they aren’t hemorrhaging money to establish and continue to grow their market share. Through increased spend, prototypical Apple consumers continue the virtuous cycle of market expansion. The question is whether Apple can sustain that enough as the market expands beyond prototypical/legacy “Apple” consumers to consumers who may be more thrifty.
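
To make the attach-rate argument concrete with deliberately made-up numbers (these illustrate the logic only; they are not real market data), compare a smaller installed base that pays for apps with a much larger one that mostly doesn’t:

```python
# Purely illustrative, invented numbers: the point is that per-user spend
# (attach rate) can matter more to an app ecosystem than raw device count.
def ecosystem_app_revenue(devices: int, paid_attach_rate: float, avg_spend: float) -> float:
    """Revenue available to app vendors = devices x share who buy apps x average spend."""
    return devices * paid_attach_rate * avg_spend

# Hypothetical scenario A: fewer devices, but buyers who pay for apps.
premium = ecosystem_app_revenue(devices=100_000_000, paid_attach_rate=0.60, avg_spend=40.0)

# Hypothetical scenario B: many more devices, mostly free/ad-supported downloads.
volume = ecosystem_app_revenue(devices=250_000_000, paid_attach_rate=0.10, avg_spend=5.0)

print(premium, volume)  # 2.4e9 vs 1.25e8 - the smaller installed base pays vendors far more
```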

Let’s take a look at Android now.

How Google’s cycle progresses:

  1. Google delivers what telcos want (a free operating system. Actually, Google pays the telco)
  2. Telcos lock down/fork devices, flood market (craplets on the desktop, custom shells, locked firmware, no updates)
  3. Bottom drops out of Android handset market (due to oversupply)
  4. Budget-sensitive/feature phone buyers buy Android (when your customers love you for your free email, that may also be an indicator of how willing they are to spend actual money)
  5. App vendors fail to thrive on anything but ad-subsidized apps

Yeah, I know, a bunch of you are going to get ticked at me here. “What do you mean delivered what the telcos want? Android is open source, you eediot!”

Sure. Yeah. For you and your friends that enjoy mucking with source, that’s all well and good. Consumers don’t give a crap about open source. But carriers have fallen in love with Android because it’s cheap (they get their own OS to even customize down to a source level) and Google pays them due to revenue Google will get off of the Internet traffic from the handsets. So OEMs got paid to put an open source OS on devices instead of blowing their own money to build their own OS as some had before, and many of them put custom shells and glued down applications – tricks OEMs used to pull with Windows CE/Mobile years ago. Most importantly, many phones ended up hosing consumers, with devices abandoned months after release, provided with few if any updates. Any enhancements Google added to future versions, and more importantly, any security fixes, are unavailable to a huge category of consumers.

So now we’ve got telcos all trying to grab a sword and take a swipe at Apple, trying one device, failing, trying another, failing, trying another… flooding the market with devices that are effectively identical to consumers, and many are, frankly, cheap. Failing to succeed at a 1:1 price point with the iPhone, telcos begin to cut prices and take the devices downmarket. This fails to dissuade the typical iPhone buyer, who typically has more discretionary income to spend (or waste, depending on your POV). Now, as many, including Fred, point out, this leads to a shipload of Android devices hitting the market. Um. Yay?

If you have a ton of devices coming out, flooding the market, driving prices down, you begin to attract a different category of consumer. Beyond the open source devotee, I argue that Android now attracts consumers who historically might have only gone for a feature phone. But because they are generally more budget/value conscious, they buy few if any commercial apps. The few statistics we get out of Google on Android app sales reflect this hypothesis – far more free, ad-driven apps appear to be downloaded on Android than for-pay apps.

This of course doesn’t bother Google any at first, because they’re getting ad traffic from the phone itself, and now from more and more ads in apps. But the telcos hose themselves by forcing prices down, and hurt themselves and their OEM partners as the bottom falls out of the Android market. Congratulations, you’ve just replaced feature phones with Android, and there are a lot of them. But Fred’s wrong. That’s not good for app vendors. Sales are good for them, not a skinny vig off of in-app ad revenue. It’s great for Google, though. Compare this to Apple, where ISVs are broadly making money, and Apple is making considerable revenue off of device sales, content sales, and app sales – and the constant sales of millions of iPhones would seem to indicate that the WSJ reporter is incorrect: consumers are still willing to pay a premium for a consumer-driven experience.

The vicious cycle continues here for Google – though we see Google trying to wrest control back from the telcos in the form of Android anti-fragmentation clauses in contracts to try and prevent further disintegration of the Android experience.

For Apple, the platform begat the ecosystem, which pulled in the developers and attracted more consumers into the ecosystem to buy apps. As I’ve said before, a platform is nothing without apps. If a device doesn’t have any more use cases than the phone or PC you’ve already got, why would you switch? Google may glut the market with devices, but without an app ecosystem of consumers willing to pay for apps, few commercial app ISVs will try, and those fewer apps will pull in less and less non-ad-driven revenue. Great, so Google gets ad revenue, consumers get any app they want as long as it has an ad, and they get the user interface that their OEM designed, not the one Google did. Android is a Trojan horse that harms the telcos and OEMs more than they imagine. Making more and more devices. Selling them for less and less. Combine that with the fact that Android users consume more bandwidth (again, great for Google, bad for the telco), and it sounds like a recipe for harming yourself with consumers who want more and more from you for less and less, and competitors striking back at you with devices that look, act, and feel just like yours, or better. No wonder the telcos are trying to lock down Android. Sounds more and more like 2006 all over again.

The question is, as Microsoft tries to break into this market, what happens to them? More on that in a bit.


07
Feb 11

Hey kids, let’s go to Dubuque!

When you travel somewhere, especially somewhere new, somewhere eclectic – do you ever buy your airline ticket, hop on the plane, and eagerly look forward to planning your activities once you arrive?

No. No, you don’t. You plan a trip, buy tickets, and get everything lined up long before you go. It’s been my contention for some time that buying a new computing device – smartphone, tablet/slate, or other – is just like taking a trip. Also, unlike years ago, when a new computer was guaranteed to come with Windows and run all the old apps that for some reason we hang on to like hoarders on a TV show, today’s new devices come with a Baskin-Robbins assortment of operating systems – none of which will run Windows applications as-is (and that’s fine, as long as enough other apps are actually available for the device being considered).

With all due respect to the people of Dubuque, I call the act of buying a device without regard to how you’ll actually use it “taking a trip to Dubuque”. I have been to Dubuque once, briefly, while moving cross-country, but I can’t speak with authority as to the activities that avail themselves there (I’m sure there are some fun and interesting things to do). But having come from a similarly small town in Montana with a less catchy name, I think Dubuque works better as a destination that you’re going to want to plan for before you arrive, or you might be a little bored.

I was a fan of Microsoft’s Tablet PC platform when it first came on the scene – in fact, my main computer at Microsoft for almost two years was a Motion Computing “slate” device (not a convertible, though I did order a Motion USB keyboard too). Unfortunately, my experience was that handwriting recognition, though handy, wasn’t perfect – and with my horrible handwriting, it resulted in an archived database of my handwriting, not anything searchable or digitally usable. In essence, OneNote and a few drawing applications (I didn’t have Photoshop, but surely it would be useful as well) were the only real applications that took advantage of the Tablet PC platform. That hasn’t changed much. Today, the main reason why you’d buy a Tablet PC running Windows 7 is for pen input, not broad consumer scenarios (Motion Computing, which still makes great hardware, has become solely focused on medical and services for exactly this reason). Though Windows 7 actually does have full multi-touch gesture support, most people don’t even know this, as witnessed during a recent webinar we had at work, where people asked when Microsoft would introduce a version of Windows with touch support (they already do!) – and few applications make the most of it. I haven’t tried using Microsoft Office 2010 with a touch-focused PC, but I can’t imagine it being a great fit. Office, to date, is written to be driven via a mouse (or a stylus acting as a proxy for a mouse). Touch requires a very different user interface design.

The iPad was successful from day 1 because it took advantage of the entire stable of iPhone applications, simply doubling their resolution (to varying success), and used that as a cantilever to motivate developers to build iPad-optimized applications. No Android slate has established anywhere near the same market, most likely because of this aspect – when you get the device, what do you do with it? Sure. You’ll browse the web and check email. What else? How many consumers really want to pay $800+, plus data plans, for a device that can just check email and browse the web? That’s not very viable. Today, HP announced new, pretty good-looking all-in-one TouchSmart devices. Though one section of that article mentions them being consumer focused, the article ends with a fizzle, stating the systems are “designed with the ‘hospitality, retail, and health care’ industries in mind”. Yes, that’s right. Without a stable of consumer-focused multi-touch applications, devices like this, as great as they may sound at first glance, become just simple all-in-one PCs for most, and touch-based only when damned into a career within a vertical industry, with one or more in-house applications written just for touch that they’ll run day in and day out until the device is retired.

It’s quite unfortunate how touch hasn’t taken off in Windows. ISVs don’t write apps because there aren’t enough touch-based Windows computers and there’s no way to monetize them with the ease, and to the degree, that the Apple App Store has enabled; and yet people don’t buy touch-based Windows PCs for the same reason they don’t buy 3D TVs – it’s a trip to Dubuque. Like most consumers, I’m not going to buy a ticket there until we’ve got some clear plans of what we’re going to do on the trip.


14
Dec 10

App Ideas – Parking finder

Name: Parking finder

Product: Mobile maps (iPhone, Android, Windows Phone 7, any other mobile device)
Problem: When looking up directions to a destination – why not provide parking resources too?
Proposed solution: You’re looking up directions to a theatre, pub, or some other venue that you want to go to – and almost any mapping software can get you there. But if you’re traveling to any densely populated area, such as downtown in a major city, a theme park, or other major destination – getting you there is only half of the battle. Where do you park?

  1. When looking up directions, you should be able to specify “include parking” as an option, or set it as the default for your mapping product.
  2. Type in the destination.
  3. Click Route
  4. The directions include steps to get you to the destination by offering you nearby parking, which you can select and then be offered walking/bus/transit directions to get to your destination.
  5. Bonus points:
    1. Easy: Include options to categorize available parking by type:
      1. Street|Lot|Garage
      2. Free|Pay
      3. Cash|Credit
    2. Harder: Include pricing information
    3. Hardest: Include availability, and even offer the option to reserve a space using a credit card/Paypal.

Next time you go to a restaurant or concert, and you find parking a challenge, listen to the feedback around you – you’re not alone. I’ve noticed it’s a common thread that people have difficulty finding parking near their destination.
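
As a rough sketch of what the data behind a parking-aware route might look like – hypothetical field names and a made-up helper, not any real mapping or parking API – the categories and bonus points above could map onto something like this:

```python
# Hypothetical data model for a parking-aware routing result. Field names,
# enums, and the helper are illustrative only; no real parking API exists here.
from dataclasses import dataclass, field
from enum import Enum


class ParkingKind(Enum):
    STREET = "street"
    LOT = "lot"
    GARAGE = "garage"


class Payment(Enum):
    FREE = "free"
    CASH = "cash"
    CREDIT = "credit"


@dataclass
class ParkingOption:
    name: str
    kind: ParkingKind
    payments: list[Payment]
    walk_minutes: int                     # walking time from the spot to the venue
    hourly_price: float | None = None     # "harder" bonus: pricing, when known
    spaces_available: int | None = None   # "hardest" bonus: live availability


@dataclass
class RouteWithParking:
    destination: str
    driving_steps: list[str]
    parking: list[ParkingOption] = field(default_factory=list)

    def best_free_options(self) -> list[ParkingOption]:
        """Filter to free parking, nearest walk first."""
        free = [p for p in self.parking if Payment.FREE in p.payments]
        return sorted(free, key=lambda p: p.walk_minutes)
```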


11
Dec 10

App Ideas – Route builder

This is the first post in a planned series outlining ideas either for modifications to existing products or for entirely new products. Having been a product manager or program manager for almost 10 years, I have random ideas strike me at a moment’s notice, but I can’t productize everything I dream up. If I post an idea here, it is public domain.

Name: Route builder
Product: Mobile maps (iPhone, Android, Windows Phone 7, any other mobile device)
Problem: When you need to run three errands, why can you only put in one destination?
Proposed solution: Say you need to go to Target, your Chase bank branch, and a Hallmark store. Sure, if you’re in your home town, or going to stores you always use, it would be of limited use. But when going to stores, parks, or other destinations you don’t normally visit, or when traveling to other cities, it would be useful.

  1. Click Build Route
  2. Type in each of the destinations. I envision a spot for a single destination, with a + to append additional destinations.
  3. Click Route
  4. Route builder finds the most efficient route to visit all three of those destinations.
  5. Bonus points:
    1. I should be able to tell my mapping app my home address, work address, and add addresses of family members by way of the address book, allowing me to use them as a destination (or if I’m at one of them, the source).
    2. With any route, the user should be able to specify that they want to complete a round trip to their current location. In doing so, the route could be optimized either by the order I need or want to visit them, or by the most efficient route.
    3. I should be able to save a route for access later if I want to.
    4. This could easily be modified to append additional destinations after you’re on your way to destination 1.

As an iPhone owner, I’ve often wished for this functionality in the iPhone’s built in Maps application – and I doubt I’m alone.