17
Dec 13

Goodbye, Facebook

As I posted on Facebook earlier today. Don’t worry, FB, I’m still not using G+ either, as you two rapidly collide into each other.

I’m not going to make this complicated, Facebook. It’s not me, it’s you.

I liked it when we first met. I thought it was cool how you’d help me find friends, family, co-workers I hadn’t talked to for years, even some people I’ve known since preschool. That was nice, and you didn’t try to grab my wallet every time a friend joined, like some of the “social networks” that came before you did (looking at you, Classmates).

But over the years, you’ve gotten a little bit creepy, and you rarely tell me anything new or important anymore. In fact, as a “social network”, you don’t really do much to tell me what family and friends are actually up to. My wall isn’t about what’s important to me; it’s ads and links from Upworthy, ThinkProgress, and other sites that have learned how to game the social graph to become front and center. Your content is now just as worthless as Google’s results were when Demand Media and others gamed SEO to backfill the Web with crap content.

I’m not exactly sure what demographic you’re trying to tune Facebook for, and it sure seems like you may not know either.

So with that, Facebook, I’m going to have to let you go. I’ve downloaded my archive (man, we did have some good times), and tomorrow afternoon, I’m pulling the plug. If you ever need to find me, I’m easy enough to find on the Web, email, and Twitter.

Take care, Facebook. I hope you figure out what the heck you want to be when you grow up.

Wes Miller


27
Nov 13

Resistance is Futile or: GenTriFicatiOn

The vocal minority. You’ve heard of them, but who are they?

Companies often seek to change their status quo by modifying how they do business. Generally, this is a nice way of saying they just want more. More what, you ask? Traditionally, it meant they simply wanted more money – raising the price of the goods they sell (or lowering the prices they pay to suppliers or partners) – to increase revenue or decrease operating expenses, respectively.

In today’s world, personally identifiable information (PII) isn’t just data; it’s a currency, and one that is invaluable to advertisers. While Google was the first to really succeed in this economy (of sorts), Facebook, Adobe, Microsoft, and anybody else with skin in the Internet advertising or analytics game is in the same position today. Their ask is an ever-increasing cross-section of your identity; in exchange, they offer you “free” services. We saw it with Facebook and its PII land grabs, beginning in earnest in 2010, and we’re seeing it now with the encroachment of Google+ across Google sites where legacy communities aren’t very welcoming to the G+ GenTriFicatiOn.

Whether you’re talking about raising prices (or reducing expenses) or asking for increasingly accurate PII, these price uplifts (or gazumps) are rarely greeted warmly. In fact, there’s usually a vocal minority that speaks out and fights the change.

On Twitter yesterday, Taylor Buley asked if the uproar due to YouTube’s shift to Google+ could generate enough momentum for a real YouTube competitor.

I responded to Taylor at the time that I didn’t think it could. Back in 2010, when Facebook made their (at that time) largest shift in privacy policy, there was a rather large outcry by people bothered by the changes. The alternative network Diaspora was launched (and failed) out of this outcry.

There comes a point where these outcries turn into a PR problem. But this PR problem is usually short-lived. In the end, only two things can happen:

  1. The change is reversed (unlikely, as it causes a strategic retreat and a tactical reassessment).
  2. The turbulence subsides, the majority of users are retained, and some of the vocal minority are lost.

I consciously chose the term GenTriFicatiOn when I was describing Google+ earlier. Google is trying to build a community of happy PII sharers. But a lot of Google’s legacy community citizens don’t fit that mold. Google’s services are provided “free” in exchange for the price that they (Google) deems adequate. If you don’t want to pay that price, Google seems happy to see you exit the community.

Google today, like Facebook several years ago, is in the position of the chef with a frog in the pot: slowly turning up the heat, and effectively excommunicating users who aren’t going to be willing participants in the Google of Tomorrow. Facebook most likely flushed out its vocal privacy critics several years ago. Consider this Google Trends chart on the query “Facebook privacy”. While there is regular churn on the topic, the high-water-mark event (H) aligns nicely with the most contentious (to that date) privacy changes Facebook made, back in 2010.

[Google Trends chart for the query “Facebook privacy”]

When Google shut down Google Reader earlier this year, there was a huge outcry. However, Google obviously knew the value that Google Reader users provided in terms of PII sharing before it shut down the site. (Answer? Not much.) The result? A huge outcry followed by a deafening thud. Google didn’t lose much of what it was after: those data-sharing, Google-loving users. See the Google Trends chart of the Google Reader outcry below. Towards the right we can see the initial outcry, followed most likely by discussion of alternatives/replacements and… resignation.

[Google Trends chart for the query “Google Reader”]

When these sites increase their PII cost to end users (let’s call these end users producers, not consumers), they’re taking a conscious gamble. The sites are hoping that the number of users who won’t care about their privacy exceeds the number of users who do. In general, they’re likely right, especially if they carefully, consciously execute these steps one by one, and are aware of which ones will be the largest minefields. Of those Google properties remaining to be “Plussed”, Google Voice is likely the most contentious, although YouTube was also pretty likely to generate pushback, as it did. Again, those vocal users not happy with the changes aren’t going to be good Google+ users, so if Google+ is where Google believes their future lies, it’s in their best interest to churn those users out anyway.


22
Nov 13

Mutually Assured Distraction

Have you recently updated an app on your computer or your smartphone (or accessed your favorite Web app), and been faced with the arrival of:

  1. New features out of the blue
  2. Changed behavior for existing features
  3. A release that removes or breaks a feature you frequently use
  4. A user interface change that completely modifies the way the app works?

If so, you might be a victim of mutually assured distraction (MAD) – also known as competitive cheese moving.

Once upon a time, software companies released software on semi-predictable schedules, with a modicum of cheese moving. User interface elements might have been moved, but users familiar with the application (or sibling applications) could find their way around with some degree of ease.

However, with the arrival of milestone-driven and Web-based software, we increasingly find ourselves facing a world where applications we are comfortable with are rapidly, somewhat inexplicably, shifting on us (quick apps?). Faced with increasing competition and the agile approaches used by competitors, more and more (and larger and larger) software companies are pushing out software that’s sort of done, sort of usable, and sort of documented.

Mutually assured distraction allows company A to volley out a marketing message when it hits its milestone and releases, only to be answered when company B (and company C, D, ad nauseam) releases its own milestone weeks or months later – and the process repeats. With each milestone burp of a release, little nuanced changes arrive, and it is up to the end user to figure out what changed – whether the implementation of their favorite checkbox feature from company B works better than company A’s implementation did a month and a half ago. Or whether it’s still even there.

The problem with MAD is the position it puts end users in (not to mention the organizations/employers that still support them, as these applications still often have to be used for collaboration between two or more employees – that is, people have to get work done).

Adding “value” all the time may seem like a boon for the end user. But it really isn’t. Understanding the features of an application as it exists today is hard enough, and the reality is that no end user has the neurons available (or the desire) to keep track of all the changes coming to the application. They just want to get things done, using software and hardware that just works.

It’s one thing when you add a completely new feature that doesn’t really shift the way the app works for end users. It’s something else entirely when you remove or modify functionality that users depend upon and are comfortable using. When you do that, you’re violating a cardinal rule of building software:

Don’t shit on your end user’s desk.

Yes, it seems simple enough. People don’t like surprises. They don’t like it when you move things around just so you can say, “Look! We changed things! We improved it! LOOK AT THE VALUE YOU’RE GETTING!!!”

If you’re going to make your development milestones visible to end users, you darn well better give them some clue about what features you plan to add back (and ideally, some timeframe for when you plan to do so). I think this increasingly industry-wide move to faster and faster releases of key software applications creates an unsustainable cadence, where users can never be fully productive with the application, and anyone responsible for supporting, deploying, or licensing applications for them is in for just as much pain, or more.


17
Sep 13

No, that new application you’re hearing about won’t replace Microsoft Office.

For two weeks straight, I’ve seen prognostications that <application> from <competitor> will replace Microsoft Office.

No. Nothing will replace Microsoft Office anytime soon – at least not for a huge chunk of business users. I know, I know… strong words – but let me explain.

A single user who needs to simply compose their thoughts for personal use, or sometimes share them with one or two other users, might be able to do so with a third-party Office document editor – whether they save or export as an Office document, or insist that the recipients simply read a proprietary format (including OpenDocument). But as soon as you have multiple users exchanging documents, embedding additional Office documents, using reviewing/track changes, or other complex Office features, these documents begin to fray and fall apart at the seams.

I typically see three use cases for Microsoft Office in a multiuser office setting:

  1. Simple Office document exchange between two or more users.
  2. Complex Office document exchange (use of “deep features” in Office).
  3. Custom Office document workflow between two or more users.

Even I have said in the press that the lack of Microsoft Office on the iPad has created an opportunity. However, that opportunity isn’t explicitly an opportunity for competitors. More often than not, it’s created an opportunity for the user in the sense that they haven’t had Office for the entire time they’ve had an iPad, so either they’ve simply “gone without” Office, or found alternative tools (most likely either a Web-based productivity suite or a productivity suite for their device that doesn’t include feature parity with Office for Windows or the Mac).

The users who have likely had the most “success” (using the term loosely) with replacing Office are likely the individual users I mentioned early on who are simply using Office documents as containers, not using any Office specific features to much depth, and can likely survive just using the document export features in Google Docs, iWork, or any other Web/mobile productivity suite not from Microsoft. Admittedly, Microsoft surely sees this scenario, and as such has made the Office Web Apps for consumers freely available and interconnected with SkyDrive.

For users who are simply throwing documents back and forth, but not relying either on deep features in Office document formats or the Office applications, there’s a possibility that they can switch to Google Docs, iWork, or another Office suite. But if an organization has been using Office for some time, odds are there are documents and document templates they rely upon that require actual Microsoft Office applications or even require applications that interoperate with Office, but have no direct competitor on non-Windows platforms or the Web (see Access, Visio, or InfoPath).

You’ll often hear “document fidelity” discussed when the topic of Microsoft Office comes up. This is an important thing to understand. If I give you a complex Word document (doc or docx) to edit, and ask you to use track changes and send it back, I’m going to be a bit upset if you a) send it back with the changes inline because your alternative word processor doesn’t support track changes, b) mangle the document because some formatting I used wasn’t understood by your alternative word processor, or c) send it back in a .garble document or some other format that Word doesn’t understand. Microsoft Office documents – both the original formats and the new XML-based formats – are the lingua franca of office productivity. Third-party tools may be able to open them. What they do with them from that point on is anybody’s guess.

Surely at some point, you’ve found a Web page that was interesting to you but was in a foreign language. If you translated it using Bing or Google, you got a result that was close to, but not an exact match for, what a human translator would have produced. More importantly, if you translate the result back to the source language, it no longer matches the original text. This is the same thing that happens with Microsoft Office documents (or WordPerfect documents in some professional fields – even today). If you want to tick people off, or annoy them to the point of generating passive-aggressive behavior, screw up the formatting or the document type of an Office document that you’re supposed to look at and hand back to them.

For many organizations today, Office isn’t something they can just swap out – they depend on features and formatting capabilities buried in the Office applications, features that sometimes it seems even Microsoft forgets are there (like Word outlining). When you must send Office documents back and forth between users and have the formatting and document type remain consistent, there are few choices other than… Office. I’ve tried numerous third-party Web and mobile Office suites, and haven’t found one that doesn’t break documents here or there (often in undetectable ways), or that only supports <feature x> after converting the document into some other proprietary format.

The final scenario is the third case. Here, you’re talking about actual server-side code (SharePoint or other) or custom Office code that reads the Office document and could actually break if a document is incorrectly formatted or submitted as the wrong document type. Much like a user who is expecting a well-formatted document to be returned from review, applications centered around client- or server-side consumption of Office documents don’t handle bad formatting or incorrect document types well (though they respond logically, rather than emotionally as many users would).

I think Office, like Windows, is at an interesting inflection point. While some consumers and a smaller percentage of businesses may want to consider not using Microsoft Office (and a small number may actually be able to), their ability to do so is directly related to how broadly they use Office documents today, and how deeply the features they depend upon reach into the document format and type. In addition, many Web apps are a no-op for truly mobile users, who need the ability to work completely offline – something that Office 365, being a streamed but completely installed version of Office 2013, can do quite well. For most organizations, replacing Office with <application> is about as likely in the short term as replacing Windows with a Mac, an iPad, or a Chromebook. It’s possible, but you may be looking at ripping out deeply embedded line-of-business applications the organization has depended upon for years, just to say you got rid of Office. You’re also usually then buying into someone else’s locked-in hardware ecosystem or subscription-based software ecosystem.

I think there is opportunity for someone to do an Office suite better. But I don’t think most vendors so far are focused on that. Instead, most seem to be largely aping Office with locally installed or mobile apps, or aping Office with light-featured Web apps. Nobody is really pushing the boundaries and making collaboration better – they’re largely re-creating what we’ve been working with for 20 years. So what eventually replaces Office? I’m not sure yet – but I don’t think it looks like envelopes of text sent from one user to another, or individual silos stored in a proprietary collaboration storage bin.


06
Jul 13

The iWatch – boom or bust?

In my wife’s family, there is a term used to describe how many people can comfortably work in a kitchen at the same time. The measurement is described in “butts”, as in “this is a one-butt kitchen”, or the common, but not very helpful “1.5 butt kitchen”. Most American kitchens aren’t more than 2 butts. But I digress.

I bring this up for the following reason. There is a certain level of utility that you can exploit in a kitchen as it exists, and no more. You cannot take the typical American kitchen and shove 4 grown adults in it and expect them to be productive simultaneously. You also cannot take a single oven, with two racks or not, and roast two turkeys – it just doesn’t work.

It’s my firm belief that this idea – the idea of a “canvas size” – applies to almost any work surface we come across, from a kitchen or the appliances therein, and beyond. But there is one place I find it applies incredibly well: modern digital devices.

The other day, I took out four of my Apple devices, set them side by side in increasing size order, and pondered a bit.

  • First was my old-school Nano; the older square design without a click-wheel that everyone loved the idea of making a watch out of.
  • Second was my iPhone 5.
  • Third, my iPad 2.
  • Finally, my 13″ Retina MacBook Pro.

It’s really fascinating when you stop to look at tactile surfaces sorted like this. While the MacBook Pro has a massively larger screen than the iPhone 5, the touch-surface of the TrackPad is only marginally larger than that of the iPhone. I’ve discussed touch and digits before, but the recent discussion of the “iWatch” has me pondering this yet again.

While many people are bullish on Google Glass (disregarding the high-end price that is sure to come down someday) or see the appeal of an Apple “iWatch”, I’m not so sure at this point. For some reason, the idea of a smart watch (aside from as a token peripheral), or an augmented reality headset like Glass doesn’t fly for me.

That generation of iPod Nano was a neat device, and worked alright – but not great – as a watch. Among the key problems the original iOS Nano had when strapped down as a watch?

  1. It was huge – in the same ungainly manner as Microsoft’s SPOT watches, Suunto watches, or (the king of schlock), Swatch Pop watches.
  2. It had no WiFi or Bluetooth, so it couldn’t easily be synced to any other media collection.

Outside of use as a watch, for as huge as it was, the UI was hamstrung in terms of touch. Navigation of this model was unintuitive and clumsy – one of the reasons I think Apple went back to a larger display on the current Nano.

I feel like many people who get excited about Google Glass or the “iWatch” are in love with the idea of wearables, without thinking about the state of technology and – more importantly, simple physical limitations. Let’s discard Google Glass for a bit, and focus on the iWatch.

I mentioned how the Nano model used as a watch was big, for its size (stay with me). But simply because of screen real-estate, it was limited to one-finger input. Navigating the UI of this model can get rather frustrating, so it’s handy that it doesn’t matter which finger you use. <rimshot/>

Because of their physical canvas size available for touch, each of the devices I mentioned above has different bounds of what kinds of gestures it can support:

  • iPod Nano – Single finger (generally index, while holding with other index/thumb)
  • iPhone 5 – Two fingers (generally index and thumb, while holding with other hand)
  • iPad 2 – Up to five fingers for gesturing, up to 8/10 for typing if your hands are small enough.
  • MacBook Pro – Up to five fingers for gesturing (though the 5-finger “pinch” gesture works with only 4 as well).

I don’t have an iPad Mini, but for a long time I was cynical about the device for anything but e-reading, because it can’t be used with two hands for typing. Apparently enough people are just using it as an e-reader, or typing with their thumbs, that they don’t mind the limitations.

So if we look at the size constraints of the Nano and ponder an “iWatch”, just what kind of I/O could it even offer? The tiny Nano wasn’t designed first as a watch – so the bezel was overly large, it featured a clip on the back, it needed a 30-pin connector and headphone jack… You could eliminate all of those with work – though the headphone jack would likely need to stay for now. But even with a slightly larger display, an “iWatch” would still be limited to the following types of input:

  1. A single finger (or a stylus – not likely from Apple).
  2. Voice (both through a direct microphone and through the phone, like Glass).

Though it could support other Bluetooth peripherals, I expect that they’ll pair to the iPhone or iPod Touch, rather than the watch itself – and the input would be monitoring, not keyboard/mouse/touchpad. The idea of watching someone try to type significant text on a smart watch screen with an Apple Bluetooth keyboard is rather amusing, frankly. Even more critically, I imagine that an “iWatch” would use Bluetooth Low Energy in order to not require charging every single day. It’d limit what it could connect to, but that’s pretty much a required tradeoff in my book.

In terms of output, it would again be limited to a screen about the same size as the old Nano, or smaller. AirPlay in or out isn’t likely.

My cynicism about the “iWatch” is based primarily around the limited utility I see for the device. In many ways, if Apple makes the device, I see it being largely limited to a status indicator for the iPhone/iPod Touch/iPad that it is “paired” with – likely serving to provide push notifications for mail/messaging/phone calls, or very simple I/O control for certain apps on the phone: taking Siri commands; play/pause/forward for Pandora or Spotify; tracking your calendar, tasks, or mapping directions; etc. But as I’ve discussed before, and above, the “iWatch” would likely be a poor candidate for long-form text entry, whether typed or dictated. (Dictate a blog post or book through Siri? I’ll poke my eyes with a sharp stick instead, thanks.) For some reason, some people are fascinated by the Dick Tracy approach of issuing commands to your watch (or your glasses, or your shoe phone). But the small screen of the “iWatch” means it will be good for very narrow input, and very limited output. I like Siri a lot, and use it for some very specific tasks. But it will be a while before it or any other voice command system is suitable for anything but short-form command-response tasks. Looking back at Glass, Google’s voice command in Glass may be nominally better, but again, it will likely be most useful as an augmented reality heads-up display/recorder.

Perhaps the low interest I have in the “iWatch”, Pebble Watch, or Google Glass can be traced back to my post discussing live tiles a few weeks ago. While I think there is some value to be had with an interconnected watch – or smartphone command peripherals like it – I think people are so in love with the idea that they’re not necessarily seeing how constrained the utility will actually be. One finger. Voice command. Perhaps a couple of buttons – but not many. Possibly pulse and pedometer. It’s not a smartphone on your wrist; it’s a remote control (and a constrained remote display) for your phone. I believe it’ll be handy for some scenarios, but it certainly won’t replace smartphones themselves anytime soon, nor will it become a device used by the general populace – not unless it comes free in the box with each iPhone (it won’t).

I think we’re in the early dawn of how we interact with devices and the world around us. I’m not trying to be overly cynical – I think we’ll see massive innovation over time, and see computing become more ubiquitous and spread throughout a network of devices around and on us.

For now, I don’t believe any “iWatch” will be a stellar success – at least in the short run – but it could become one as it evolves over time to provide interfaces we can’t fathom today.


22
May 13

Beware of strangers bearing subscriptions

Stop for a second and think about everything you subscribe to – the things you pay for monthly or annually, where if you stopped paying, some service would be discontinued.

The list probably includes everything from utilities to reading material, and most likely a streaming or media service like Netflix or Hulu, or a subscription to Amazon Prime, Xbox Live or iTunes Match.

I’ve been noticing a tendency for seemingly everything to move towards subscriptions. Frankly, it irritates me and I’m not really excited about the idea.

I understand and accept that natural gas, electricity, waste management, and (ick) even insurance need to be paid for regularly so we can maintain a certain lifestyle. But the tendency to treat software as a utility, while somewhat logical, isn’t necessarily a win for the consumer or the business (it depends on the package being offered, and how often you would upgrade if you weren’t being offered a subscription).

That puzzle, of course, depends on the consumer or business not bothering to do the math – just assuming the subscription is a better deal (or getting befuddled trying to decode the comparison) and subscribing anyway. Consumers, and frankly many businesses, are not great at doing that math. Many subscriptions are also – literally – incomparable with any peer perpetual license. Comparing Office 365 and Office 2013 for consumers is actually relatively easy. Even comparing simple business licensing of Office 365 vs. on-premises isn’t that hard. Trying to do it in a large business, where it can intertwine with an Enterprise Agreement (enterprise-wide licensing agreement), is horribly complex.
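
To make that math concrete, here’s the kind of back-of-the-envelope comparison I mean – a sketch in Python with illustrative prices only (roughly what consumer Office cost as of this writing; plug in your own numbers). The break-even point hinges entirely on how often you’d actually buy a new perpetual version:

    # Illustrative numbers only -- not a pricing quote.
    subscription_per_year = 99.99   # annual subscription (Office 365-style)
    perpetual_license = 139.99      # one-time purchase (Office 2013-style)

    for years_between_upgrades in (1, 2, 3, 4, 5):
        perpetual_per_year = perpetual_license / years_between_upgrades
        winner = "subscription" if subscription_per_year < perpetual_per_year else "perpetual"
        print(f"Upgrade every {years_between_upgrades} year(s): perpetual works out to "
              f"${perpetual_per_year:.2f}/year -> {winner} is cheaper")

If you’d genuinely buy every release, the subscription wins; skip even one upgrade cycle and the perpetual license pulls ahead. That’s the math the evergreen model quietly hopes you never do.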

Most subscriptions are offered in the hope that they will become an evergreen – something that automatically renews on a monthly or annual basis. Most of these are, frankly, awful, in my opinion. Let me explain why.

Recall the label on the outside of many packaged foods in the US. You know the one. Think about the serving size. This is the soda bottle or bag of chips that says 2.5 servings, though most consumers will drink or eat the whole thing in one sitting. Consumers (and again, many non-IT business decision makers) are not really great at doing the long-term accounting here. A little Hulu here. A little Amazon Prime there. An iTunes Match subscription. Add on Office 365… Eventually, all these little numbers add up to big numbers. But like calorie counting, people often lose track of the recurring costs they’re signing up for. We wonder why America has a debt problem? Because we eat consumer services like there’s no bill at the end of the meal.

You don’t need to count every calorie – but man, you need to be aware before you have a problem.

I’ve become a big fan over the last several years of Willard Cochrane, an economist who spent most of his life analyzing and writing about the American family farm. Cochrane described the never-ending treadmill that farmers are forced onto – now known eponymously as “Cochrane’s Treadmill”. Simplistically, it can be described as follows.

When Farm A buys a new technology that gives it a higher yield, the added supply forces down the market price of the commodity it produces. Farm B is then forced to buy that same technology to improve its own yield, just to maintain the income it had before Farm A adopted the technology.

By acquiring the technology, Farm A starts an unwinnable race, in which it is (economically) pitted against Farm B in trying to make more money, generally from the same amount of land. Effectively, it is mutually assured destruction. Work harder, pay more, earn less.
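
A toy numeric sketch makes the mechanics plain (hypothetical prices and yields, not Cochrane’s data):

    # Toy numbers, purely to illustrate the treadmill's mechanics.
    price = 5.00              # $/bushel before anyone adopts
    yield_a = yield_b = 100   # bushels per farm, per season

    print(f"Before: each farm earns ${price * yield_a:,.2f}")

    yield_a *= 1.20   # Farm A adopts: 20% higher yield
    price *= 0.90     # the extra supply pushes the market price down ~10%

    print(f"After:  Farm A earns ${price * yield_a:,.2f}")  # $540.00 -- ahead, for now
    print(f"        Farm B earns ${price * yield_b:,.2f}")  # $450.00 -- now behind

Farm B adopts to catch up, supply rises again, the price drops further, and the cycle repeats – everyone runs faster just to stay in place.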

I’ve been spending a lot of time recently trying to simplify my life. I’ve been working to remove software, hardware, and services that add complexity, rather than simplicity, to my life. As humans, we often buy things on a whim thinking (incorrectly), “this new <thing> will dramatically improve my life”. After all, the commercial told you it would! Often this isn’t the case.

Without getting off on an environmentalist hippie trip here, I’d like to circle back to farming for a second. Agricultural giants like Monsanto have inserted themselves into the farming input cycle in a very aggressive way. If we go back 100 years, farmers didn’t pay an industrial concern every year for pesticides, and they most certainly didn’t pay them an annual license fee for seeds (farmers are forbidden to save licensed genetically modified seeds every year, as they have done for millennia). As a result, farmers are not only creating a genetic monoculture that is likely more susceptible to disease, but they are subscribing to annual licensure of the seed and most likely an ever-increasing dosage of pesticides in order to defend against plants, insects, or other pests that have developed defenses against them. It is Cochrane’s Treadmill defined. Even worse, if a farmer wanted to discontinue use of the licensed seed, it’s unclear to me if they actually could. Monsanto has aggressively gone after farmers who may have even accidentally planted their seeds due to contamination. Can a farmer actually quit using licensed seed and not pay for it next year? I don’t know the answer.

I bring this up because I believe it exemplifies the risks of subscriptions in general. Rather than a perpetual use right (farmers saving seed every year), farmers are licensing an annual subscription with no escape hatch. Imagine subscribing to a Software-as-a-Service (SaaS) offering and never being able to quit it. Whether in the form of carrots – “sweeteners” of sorts added to many subscriptions (such as the much more liberal five-device use rights of Office 365) – or sticks (virtualization or license reassignment rights only available with Microsoft Software Assurance), there are explicit risks in jumping into almost any piece of software without carefully examining both the short-term use rights and long-term availability rights. It may appear I’m picking on Microsoft here. I’m not doing so intentionally – I’m just intimately, painfully, aware of how they license software. This could apply to Adobe, Oracle, or likely any ISV… and even some IHVs.

Google exemplifies another side of this: you can’t really be certain how long they will continue to offer a service. Whether it’s discontinuing consumer-grade services like Reader, or discontinuing the free tier of Apps for Business – before subscribing to Google’s services, an organization should not only raise questions around privacy and security, but also consider the long-term viability of the service. “Will Google keep this service alive in the future?” Perhaps that sounds cynical – but I believe it’s a legitimate concern. If you’re moving yourself or your business to a subscription service (heck, even a free one), you owe it to yourself to try to ascertain how long you’ve got before you can’t count on that service anymore.

While I may be an Apple fan, and Apple doesn’t seem to be as bullish on subscriptions, one can point to the hardware upgrade gravy train that they have created and see that it’s really just a hardware subscription. If you want the latest software and services from Apple, you have to buy a new phone, tablet, laptop, or desktop within Apple’s set intervals or be left behind. Businesses that are increasing their use of Apple technology – whether they pay for it or leave it to the employee to pay for – should be careful too. Staying up-to-date, including staying secure, with Apple generally means staying relatively up-to-date with hardware.

In The Development of American Agriculture, Cochrane reasoned that <profits> “will be captured by the business firm in financial control”, and would no longer go to farmers. Where initially the farm ecosystem consisted of supplier (farmer) and consumer, industrial agriculture giants have inserted themselves into the process of commodity creation – more and more industrialists demanding a growing annual cut from the income of (already struggling) American farmers.

Whether we’re talking seeds/pesticides, software, utilities, or any other subscription, there is a risk and a benefit that should be clearly understood. But I believe that even more than “this year”, where the immediate gratification is like consuming the 2.5 servings I mentioned earlier, both consumers and especially businesses need to think long-term: “Where will this service be in 3 years?”, “Will we be paying more and getting less?”, “If we go there, can we get out? How?”

When you subscribe to anything, you’re not taking on a product, you’re taking on a partner. Your ability to take on that partner depends upon your current financial position and your obligations to that partner, both now and in the future. While many businesses may well find that the risk/benefit analysis of a given subscription works out in the subscriber’s favor (if they really use the service regularly, and it provides an invaluable function that can’t be built internally or delivered by perpetually licensed technology), I believe that companies should be cautious about taking on “subscription weight” without sufficiently examining and understanding 1) how much they really need the services offered by that subscription, 2) what the short-term benefits and long-term costs of the subscription really are, 3) the risks of subscriptions (cost increases and service volatility among them), and 4) how that subscription compares, in terms of use rights, costs, and risks, with any custom-developed or perpetually licensed offering that can perform similar tasks.

If it seems like I’m anti-subscription, I guess you could say I am. If you want a cut of my income, earn it. Most evergreen subscriptions aren’t worth it to me. I think too many consumers and businesses fall prey to the idea that “just subscribing” sounds easier than building and owning a solution, or buying a perpetually licensed one – so they go that route, and wind up stuck there.


14
May 13

The Cloud is the App is the Cloud.

During the last week, I have had an incredible number of conversations about Office 365 with press, customers, and peers. It’s apparent that with version 3.0 of their hosted services – as has happened many times before with Microsoft at v3.0 – this is the release that could put some points on the board, if not take a lead in the game.

But one thing has been painfully clear to me for quite some time, and the last week only serves to reinforce it. As I’ve mentioned before, there’s not only confusion about Microsoft’s on-premises and hosted offerings, but confusion about what Office 365 even is. The definitions are squishy, and Microsoft isn’t doing a great job of articulating what Office 365 brings to the table. Many assume that Office 365 is primarily about the Office client applications (when in fact only the premium business editions of Office 365 even include the desktop suite!). Many others assume that Office 365 is only hosted services and Web-based applications, along the lines of Google Apps for Business.

The truth is, there’s a medley of Office 365 editions among the 4 Office 365 “families” (Small Business, Midsize Business, Enterprise/Academic/Government, and Home Premium). But one thing is true – Office 365 is about hosted services (Exchange Online/Lync Online/SharePoint Online for businesses, or Outlook.com/Skype/SkyDrive for consumers), and – predominantly – the Office desktop application suite.

I bring this up because many people point at native applications and Web applications and say that there is a chasm growing… an unending rift that threatens to tear apart the ecosystem. I disagree. I think it is quite the opposite. Web apps (“cloud apps” if you like) and native apps (“apps”) are colliding at high speed. Even today it isn’t really that easy to tell them apart, and it’s only going to get harder.

When Adobe announced their Creative Cloud-only future last week, some people said there wasn’t much “cloud” about it. In general, I agree. To that same end, one can point a finger at Office 365 and say, “that’s not cloud either”, because to deliver the most full-featured experience, it relies upon a so-called “fat client” locally installed on each endpoint – even though, for a business, a huge amount of the value (and a large amount of the cost) comes from the cloud services those apps connect to.

To me, this is much ado about nothing. While it’s true that one can’t call Office 365 (or Creative Cloud) a 100% cloud solution, at least in the case of Office, each version of Microsoft’s hosted services has come closer than the one before to delivering much of the value of a cloud service, while continuing to rely on those local bits rather than running the entire application through a Web browser. With Office, this is quite intentional. The day Office runs as well on the Web as it does on Windows is the day Microsoft announces they’re shutting down the Windows division.

But what’s interesting is that as we discuss/debate whether Microsoft’s and Adobe’s offerings are indeed “cloudy enough” as they strive to deliver more thick apps as a service, Google is working on the opposite: applications that run in the browser, but exploit more local resources. When we look at the high-speed collision of Android into ChromeOS, as well as Microsoft’s convergence of Web development into the WinRT application framework, this all begins to – as a goal – make sense.

In 1995, as the Web was dawning, it wasn’t about applications. It was about sites. It gradually became about applications and APIs – about getting things done, with the Web, not our new local networks, as the sole communication medium. Conversely, even the iPhone began with a very finite suite of actions that a user could perform. One screen of apps that Apple provided, and extensibility only by pinning Websites to the Home screen. Nothing that actually exploited the native power and functionality of the phone to help users complete tasks more readily. Apple eventually provided the full SDK that enabled native, local applications, which would still often connect out to the Internet to perform their role – when the Internet was available.

Windows has largely always been about “fat client” applications, even going so far as to have the now quite old – but once new and novel – Remote Desktop Protocol to enable fat clients to become light-ish weight, as long as a network connection back to the server (or eventually desktop) running the application was available.

I bring these examples up because the idea of “cloud applications” or cloud services is, as I noted, becoming squishy and hard to explicitly define, though I have to personally consider whether I really care that deeply about when applications are or are not cloudy (or are partly cloudy?).

Users buy (or use) applications because they have a specific task they need to complete. Users don’t care what framework the application is written in, what languages were used, what operating system any back-end of the application is running on, or what Web server it is connecting to.

What users do care about is getting the task done that led them to that application to begin with. Importantly, they need productivity wherever it can be available. With applications that are cloud-only, when you have a slow, or nonexistent Internet connection, you are… dead. You have no productivity. Flying on a plane but editing a Word document? You need a fat client. Whether it’s Google Apps for Business running on a Chromebook (with caching), QuickOffice on an iPad, or Office 2013 Pro Plus running on a Windows 7 laptop, without some local logic and file caching, you’re SOL at 39,000 feet without an Internet connection.

Conversely, if you are solely using Microsoft Office (or Pages), and you’re editing that important doc at an airport that happens to have WiFi before a flight that does not, you might be SOL if you don’t sync the document to the Web and then accidentally leave your laptop on board the flight, never to be seen again. Once upon a time, productivity meant storing files locally only, or hand-pushing files to the Web. Both Office 2013 and Apple’s iWork (through iCloud) now offer great synchronization.

The point is that there is value to having a thicker client:

  • Can take advantage of local hardware, data, and services.
  • Can perform some level of role offline.

But there is value to taking advantage of the Web:

  • Saved state from application can be recovered from any other device with the application and correct credentials.
  • Can hook into other services and APIs available over the Web, pull in additional data sources, and collaborate with additional users inside or outside the organization.

But I believe the merits of both mean that the future is in applications that are both local and cloudy – across the board. Many people are bullish that Chromebooks are the future. Many people think Chromebooks are bull. I think the truth is somewhere in the middle. As desktop productivity evolves, it will have deeper and deeper tentacles out to the Web – for storage and backup, for extensibility, and more. Conversely, as purely Web-based productivity evolves, expect the opposite: it will gain greater local storage and more ability to exploit local device capabilities, as we’re seeing Chrome and ChromeOS do.
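
What does “local and cloudy” look like in practice? Here’s a minimal sketch (hypothetical file and function names, not any vendor’s actual API): every edit lands in a local store first, so the app keeps working offline, and a sync pass pushes unsynced changes to the Web whenever a connection shows up.

    import json
    import os

    LOCAL_STORE = "drafts.json"  # hypothetical local cache file

    def save_draft(doc_id, text):
        """Write locally first -- this still works at 39,000 feet."""
        drafts = {}
        if os.path.exists(LOCAL_STORE):
            with open(LOCAL_STORE) as f:
                drafts = json.load(f)
        drafts[doc_id] = {"text": text, "synced": False}
        with open(LOCAL_STORE, "w") as f:
            json.dump(drafts, f)

    def sync_drafts(upload):
        """Push unsynced drafts to the Web when a connection appears.

        `upload` is a callable(doc_id, text) -> bool supplied by whatever
        cloud back end you talk to; True means the push succeeded.
        """
        if not os.path.exists(LOCAL_STORE):
            return
        with open(LOCAL_STORE) as f:
            drafts = json.load(f)
        for doc_id, draft in drafts.items():
            if not draft["synced"] and upload(doc_id, draft["text"]):
                draft["synced"] = True
        with open(LOCAL_STORE, "w") as f:
            json.dump(drafts, f)

The Web side gives you the reverse of this loop: pull the saved state down on any other device with the right credentials. Office 2013’s SkyDrive sync and iWork’s iCloud sync are, at heart, polished versions of the same pattern.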

Office 365 isn’t a cloud-only service in most tiers. Nor do I ever really expect it to be. Frankly, though, Google Apps isn’t really a cloud-only service today – and I don’t expect it to go any direction except towards a more offline capable story as well. Web apps and native apps aren’t a binary switch. We won’t have one or the other in the future. Before too long, most Web apps will have a local component, and most local applications will have a Web component. The best part is that when we reach this point, “cloud” will mean even less than it means today.


24
Mar 13

One release away from irrelevance

A few weeks ago on Twitter, I said something about Apple, and someone replied back something akin to, “Apple is only one release away from irrelevance.”

Ah, but you see… we all are. In terms of sustainability, if you believe “we get this version released, and we win”, you lose. Whether you have competitors today, or you have a market that is principally yours, if there is enough opportunity for you, there’s enough appeal for someone else to enter it too.

A book I recently read discussed the first-generation Ford Taurus. Started at the cusp of the 1980s, after a decade of largely mediocre vehicles from Ford, the Taurus (and a handful of other vehicles that arrived around the same time) changed the aesthetic experience we expected from cars. The book’s author notes that Ford had largely stopped using its blue oval insignia during the 1970s, perhaps out of concern that the vehicles didn’t live up to the quality values the blue oval should represent. Thing is, you very clearly get the picture that as the vehicle neared completion, the team “hit the wall”, in marathoning parlance. They shipped, congratulated each other, and moved on to other projects. Rather than turning around and immediately beginning work on the next model to iterate the design and own the market, they stalled out for nearly a decade, only to make the same massive run to get the next iteration of the vehicle out the door (documented in yet another book). But I digress.

Many people often ask who Microsoft’s biggest competitor is. It isn’t Oracle. It isn’t startups. It’s Microsoft. Every 2-5 years, Microsoft replaces (and sometimes displaces) its own shipped X-1 products with new versions. If those new versions don’t include enough features and value for customers to feel they are getting their money’s worth, they’ll stall out on older versions. We’ve seen this with Windows, where many businesses – and consumers – have stalled out on a 12-year-old OS because “it’s good enough”, or with Office 2003, because not only is it “good enough”, but the Ribbon (and its half-completed existence in Office 2007) scared away many customers. It’s gotten better in each iteration since – but the key question is always, “is there enough value in there to pull customers forward?”

I believe that the first thing you have to firmly grasp in technology – or really in business as a whole – is that nothing is forever. You must figure out how to out-innovate yourself, to evolve and grow, even if it means jettisoning or submarining entire product lines – in order to create new ones that can take you forward again. Or disappear.

I’ve been rather surprised when I’ve said this, how defensive some people have gotten. Most people don’t like to ponder their own mortality. They sure don’t like to ponder the mortality of their employer or the platform that they build their business upon. But I think it is imperative that people begin doing exactly that.

There will come a day when we will likely talk about every tech giant of today in the past tense. Many may survive, but will be shadows (red dwarves, as I said on Twitter last night) of themselves. Look at how many tech giants of the 1970s-1990s are gone today – or are now largely services-driven organizations rather than technological innovators.

When that follower said that Apple was only one release away from irrelevance, I replied back with something similar to, “Almost every company is. It’s just a question of whether they realize it and act on it or not.”


21
Mar 13

What’s your definition of Minimum Viable Product?

At lunch the other day, a friend and I were discussing the buzzword bingo of “development methodologies” (everybody’s got one).

In particular, we homed in on Minimum Viable Product (MVP) as an all-but-gibberish term, because it means something different to everyone.

How can you possibly define what is an MVP, when each one of us approaches MVP with predisposed biases of what is viable or not? One man’s MVP is another’s nightmare. Let me explain.

For Amazon, the original Kindle, with its flickering page turn, was an MVP. Amazon, famous for shipping… “cost-centric” products and services, was traditionally willing to leave some sharp edges in the product. For the Kindle, this meant flickering page turns were okay. It meant that Amazon Web Services (AWS) didn’t need a great portal or useful management tools. Until their hand was forced on all three by competitors. Amazon’s MVP includes all the features they believe it needs, whether or not they’re fully baked or usable – whether or not the product still has metaphoric splinters coming off from where the saw blade of feature decisions cut it. This often works because Amazon’s core customer segment, like Walmart’s, tends to be value-driven, rather than user-experience driven.

For Google, MVP means shipping minimal products that they either call “Beta” or that behave like a beta, tuning them, and re-releasing them. In many ways, this model works, as long as customers are realistic about what features they actually use. For Google Apps, this means applications that behave largely like Microsoft Office, but include only a fraction of the functionality (enough to meet the needs of a broad category of users). Google has traditionally pushed these products out early in order to evolve them over time. I believe that if any company of the three I mention here actually implements MVP as it is commonly understood, it is Google. Release, innovate, repeat. Google will sometimes put out products just to try them, and cull them later if the direction was wrong. If you’re careful about how often you do this, that’s fine. If you’re constantly tuning by turning off services that some segment of your customers depends on, it can cost you serious customer goodwill, as we recently saw with Google Reader (though I doubt that event will really harm Google in the long run). It has been interesting to watch Google build their own Nexus phones, where MVP obviously can’t work the same way. You can innovate hardware Release over Release (RoR), but you can’t ever improve a bad hardware compromise after the fact – just retouch the software inside. Google has learned this. I think Amazon learned it after the original Kindle, but even the Fire HD was marred a bit by hardware design choices, like a power button that was too easy to hit while reading. But Amazon is learning.

For Apple, I believe MVP means shipping products that make conscious choices about what features are even there. With the original iPhone, Apple was given grief because it wasn’t 3G (only years later to be berated because the 3GS, 4, and 4S continued to just be 3G). Apple doesn’t include NFC. They don’t have hardware or software to let you “bump” phones. They only recently added any sort of “wallet” functionality… The list goes on and on. Armchair pundits berate Apple because they are “late” (in the pundit’s eyes) with technology that others like Samsung have been trying to mainstream for 1-3 hardware/software cycles. Sometimes they are late. But sometimes they’re “on-time”. When you look at something like 3G or 4G, it is critical that you get it working with all of the carriers you want to support it, and all of their networks. If you don’t, users get ticked because the device doesn’t “just work”. During Windows XP, that was a core mantra of Jim Allchin’s – “It just works”. I have to believe that internally, Apple often follows this same mantra. So things like NFC or QR codes (now seemingly dying) – which as much as they are fun nerd porn, aren’t consumer usable or viable everywhere yet – aren’t in Apple’s hardware. To Apple, part of the M in MVP seems to be the hardware itself – only include the hardware that is absolutely necessary – nothing more – and unless the scenario can work ubiquitously, it gets shelved for a future derivation of the device. The software works similarly, where Apple has been curtailing some software (Messages, for example) for legacy OS X versions, only enabling it on the new version. Including new hardware and software only as the scenarios are perfect, and only in new devices or software, rather than throwing it in early and improving on it later, can in many ways be seen as a forcing function to encourage movement to a new device (as Siri was with the 4S).

I’ve seen lots of geeks complain that Apple is stalling out. They look at Apple TV, where Apple doesn’t have voice, doesn’t have an app ecosystem, doesn’t have this or that… Many people complain that they’re too slow. I believe quite the opposite: that Apple, rather than falling for the “spaghetti on the wall” feature matrix we’ve seen Samsung fall for (just look at the Galaxy S4 and the features it touts), takes time – perhaps too much time, according to some – to assess the direction of the market. Apple knows the whole board they are playing, where competitors don’t. To paraphrase Wayne Gretzky, they “skate to where the puck is going to be, not where it has been.” Most competitors seem more than happy to try to “out-feature” Apple with new devices, even when those features aren’t very usable or very functional in the real world. I think they’re losing sight of what their goal should be – building great experiences for their users – and instead believing their brass ring is “more features than Apple”. This results in a nerd porn arms race, adding features that aren’t ready for prime time, or aren’t usable by all but a small percentage of users.

Looking back at the Amazon example I gave early on, I want you to think about something. That flicker on page turn… Would Apple have ever shipped that? Would Google? Would you?

I think that developing an MVP of hardware or software (or generally both, today) is quite complex, and requires the team making the decision to have a holistic view about what is most important to the entire team, to the customer, and to the long-term success of your product line and your company – features, quality, or date. What is viable to you? What’s the bare minimum? What would you rather leave on the cutting room floor? Finesse, finish, or features?

Given the choice, would you rather have a device with some rough edges but lots of value (it’s “cheap”, in many senses of the word)? A device that leads the market technically, but may not be completely finished either? Or a device that feels “old” to technophiles, but is usable by technophobes?

What does MVP mean to you?


06
Mar 13

Windows desktop apps through an iPad? You fell victim to one of the classic blunders!

I ran across a piece yesterday discussing one hospital’s lack of success with iPads and BYOD. My curiosity piqued, I examined the piece looking for where the project failed. Interestingly, but not surprisingly, it seemed that it fell apart not on the iPad, and not with their legacy application, but in the symphony (or more realistically, the cacophony) of the two together. I can’t be certain that the hospital’s solution is using Virtual Desktop Infrastructure (VDI) or Remote Desktop (RD, formerly Terminal Services) to run a legacy Windows “desktop” application remotely, but it sure sounds like it.

I’ve mentioned before that I believe trying to bring your legacy applications – applications designed for large displays, a keyboard, and a mouse, running on Windows 7/Windows Server 2008 R2 and earlier – to the touch-centric world of Windows 8 and Windows RT is doomed to fail. iPads are no better. In fact, they’re worse. You have no option for a mouse on an iPad, and no vendor-provided keyboard solution (versus the Surface’s two keyboard options, which are – take them or leave them – keyboards, complete with trackpads). Add in the licensing and technical complexity of using VDI, and you have a recipe for disappointment.

If you don’t have the time or the funds to redesign your Windows application, but VDI or RD make sense for you, use Windows clients, Surfaces, dumb terminals with keyboards or mice – even Chromebooks were suggested by a follower on Twitter. All possibly valid options. But don’t use an iPad. Putting an iPad (or a keyboardless Surface or other Windows or Android tablet) in between your users and a legacy Windows desktop application is a sure-fire recipe for user frustration and disappointment. Either build secure, small-screen, touch-savvy native or Web applications designed for the tasks your users need to complete, ready to run on tablets and smartphone, or stick with legacy Windows applications – don’t try to duct tape the two worlds together for the primary application environment you provide to your users, if all they have are touch tablets.