07
Sep 15

How I learned to stop worrying and love the cloud

For years, companies have regularly asked me for my opinion on using cloud-based services. For the longest time, my response was something like, “You should investigate what types of services might fit best for your business,” followed by a selection of caveats about privacy, risk, and compliance, since their information would be stored off-premises.

But I’ve decided to change my tune.

Beginning now, I’m going to simply start telling them to use cloud where it makes sense, but use the same procedures for privacy, risk, and compliance that they use on-premises.

See what I did there?

The problem is that we’ve treated the cloud (née hosted services) as something distinctly different from the way we do things on-premises. But… is it really? Should it be?

It’s hard to find a company today that doesn’t do some form of outsourcing. You’re trusting people who don’t work “for” you with some of your company’s key secrets. Every company I can think of does it. If you don’t want to trust a contract-based employee with your secrets, you don’t give them access, right? Deny them access to your network, key server, or file shares (or SharePoint servers<ahem/>). Protect documents with things like Azure Rights Management. Encrypt data that needs to be protected.
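
To make that last point about encryption a bit more concrete, here’s a minimal sketch of file-level encryption in Python using the cryptography package’s Fernet recipe – the file name and the key handling are purely illustrative assumptions, and this isn’t a stand-in for Azure Rights Management or any other product mentioned above.

```python
# Minimal sketch: encrypt a sensitive file before it leaves your control.
# Assumes the 'cryptography' package is installed (pip install cryptography);
# key storage, rotation, and distribution are deliberately out of scope here.
from cryptography.fernet import Fernet

def encrypt_file(path: str, key: bytes) -> str:
    """Encrypt the file at `path`, write `<path>.enc`, and return the new path."""
    fernet = Fernet(key)
    with open(path, "rb") as f:
        ciphertext = fernet.encrypt(f.read())
    out_path = path + ".enc"
    with open(out_path, "wb") as f:
        f.write(ciphertext)
    return out_path

if __name__ == "__main__":
    key = Fernet.generate_key()                 # in practice, from a key vault, not inline
    print(encrypt_file("salaries.xlsx", key))   # hypothetical file name
```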

These are all things that you should have been doing anyway, even before you might have had any of your data or operations off-premises. If you had contract/contingent staff, those systems should have been properly secured in order to prevent <ahem/> an overzealous admin (see link above) from liberating information that they shouldn’t really have access to. Microsoft and Amazon (and others, to a lesser extent at this point) have been putting a lot of effort into securing your data while it lives within their clouds, and that’s going to continue over the next 2-5 years, to the point where, honestly, with a little investment in tech and process – and likely a handful of new subscription services that you won’t be able to leave – you’ll be able to secure data better than you can in your infrastructure today.

Yeah. I said it.

A lot of orgs talk big about how awesome their on-premises infrastructure is, and how uncompromisingly secure it is. And that’s nice. Some of them are right. Many of them aren’t. In the end, in addition to systems and employees you can name, you’re probably relying on a human element of contractors, vendors, part-time employees, “air-gapped” systems that really aren’t, sketchy apps that should have been retired years ago, and security software that promised the world but can’t really secure a Tupperware container. We assume that cloud is something distinctly different from on-premises outsourcing of labor. But it isn’t really that different. The only difference is that today, unsecured (or unsecurable) data may have to leave your premises. That will improve over time, if you work at it. The perimeter, as it has with smartphones since 2007, will shift to let you secure data flow between systems you own, and on systems you own – whether those live on physical hardware in your datacenter, or in AWS or Azure. But it means recognizing this perimeter shift – and working to reinforce that new perimeter in terms of security and auditing.

Today, we tend to fear cloud because it is foreign. It’s not what we’re all used to. Yet. Within the next 10 years, that will change. It probably already has changed within the periphery (aka the rogue edges) of your organization today. Current technology lets users deploy “personal cloud” tools – whether for business intelligence, synchronization, desktop access, or more – without letting you have veto power, unless you own and audit the entirety of your network (and any telecom access), and have admin access to all PCs. And you don’t.

The future involves IT being proactive about providing cloud access ahead of rogue users: deciding where to be more liberal about access to tools than orgs are used to, and being able to secure perimeters that you may not even be aware of. Otherwise, you get dragged along on the choose-your-own-adventure that your employees decide on for you.


21
Aug 15

The curse of the second mover

When I lived in Alaska, there was an obnoxious shirt that I used to see all the time, with a group of sled dogs pictured on it. The cutesy saying on it was, “If you’re not the lead dog, the view never changes.” While driving home last night and considering multiple tech marketplaces today, it came to mind.

Consider the following. If you were:

  1. Building an application for phones and tablets today, whose OS would you build it for first?
  2. Building a peripheral device for smartphones, what device platform would you build it for?
  3. Selling music today, whose digital music store would you make sure it was in first?
  4. Selling a movie today, whose digital video store would you make sure it was in first?
  5. Publishing a book, whose digital book store would you make sure it was in first?

Unless you’ve got a lot of time or money on your hands, and feel like dealing with the bureaucracy of multiple stores, the answer to all of the above is going to be exactly the same.

Except that last one.

If you’re building apps, smartphone peripherals, or selling music or movies, you’re probably building for Apple first. If you’re publishing or self-publishing a book, you’re probably going to Amazon first. One could argue that you might go to Amazon with music or a movie – but I’m not sure that’s true – at least if you wanted to actually sell full-fare copies vs. getting them placed on Prime Music/Prime Instant Video.

In the list above, that doesn’t tell a great tale for second movers. If you’re building a marketplace, you’ve got to offer some form of exceptional value over Apple (or Amazon for 5) in order to dethrone them. You’ve also got to offer something to consumers to get them to use your technology, and something to content purveyors/device manufacturers to get them to invest in your platform(s).

For the first three, Apple won those markets through pure first mover advantage.

The early arrival of the iPhone and iOS, and the premium buyers who purchase them, ensure that 1 & 2 will be answered “Apple”. The early arrival of the iPod, iTunes, and “Steve’s compromise”, allowing iTunes on Windows – as horrible as the software was/is – ensures that iTunes Music is still the answer to 3.

Video is a squishy one – as the market is meandering between streaming content (Netflix/Hulu), over-the-top (OTT) video services like Amazon Instant Video, MLB At Bat, HBO Now, etc., and direct purchase video like iTunes or Google Play. But the wide availability of Apple TV devices, entrenchment of iTunes in the life of lots of music consumers, and disposable income mean that a video content purveyor is highly likely to hit iTunes first – as we often see happen with movies today.

The last one is the most interesting though.

If we look at eBooks, something interesting happened. Amazon wasn’t the first mover – not by a long shot. Microsoft made their Reader software available back in 2000. But their device strategy wasn’t harmonized with the ideas from the team building the software. It was all based around using your desktop (ew), chunky laptop (eventually chunky tablet), or Windows Pocket PC device for reading. Basically, it was trying to sell eBooks as a way to read content on Windows, not really trying to sell eBooks themselves. Amazon revealed their first Kindle in 2007. (This was the first in a line of devices that I personally loathe, because of the screen quality and flicker when you change pages.) Apple revealed the iPad, and rapidly launched iBooks in 2010, eventually taking it to the iPhone and OS X. But the first two generations of iPad were expensive, chunky devices to try and read on, and iBooks not being available on the iPhone and OS X at first didn’t help. (Microsoft finally put down the Reader products in 2012, just ahead of the arrival of the best Windows tablets…<sigh/>)

So even though Apple has a strong device story today, and a strong content play in so many other areas, they are playing (at best) second fiddle in eBooks. They tout strong numbers of active iBooks users… but since every user of iOS and OS X can be an iBooks user, those numbers mean little without book sales numbers behind them. Although Amazon’s value-driven marketplace may not be the healthiest place for authors to publish their wares, it appears to be the number one place by far, without much potential for it to be displaced anytime soon.

If your platform isn’t the leader for a specific type of content, pulling ahead from second place is going to be quite difficult, unless you’ve somehow found a silver bullet. If you’re in third, you have an incredible battle ahead.


12
Feb 15

Bring your own stuff – Out of control?

The college I went to had very small cells… I mean dorm rooms. Two people to a small concrete-walled room, with a closet, bed, and desk that mounted to the walls. The RA on my floor (we’ll call him “Roy”) was a real stickler about making us obey the rules – no televisions or refrigerators unless they were rented from the overpriced facility in our dorm. After all, he didn’t want anybody creating a fire hazard.

But in his room? A large bench grinder and a sanding table, among other toys. Perhaps it was a double standard… but he was the boss of the floor – and nobody in the administration knew about it.

Inside of almost every company, there are several types of Roy, bringing in toys that could potentially harm the workplace. Most likely, the harm will come in the form of data loss or a breach, not a fire as it might if they brought in a bench grinder. But I’m really starting to get concerned that too many companies aren’t mindful of the volume of toys that their own Roys have been bringing in.

Basically, there are three types of things that employees are bringing in through rogue or personal purchasing:

  • Smartphones, tablets, and other mobile devices (BYOD)
  • Standalone software as a service
  • Other cloud services

It’s obvious that we’ve moved to a world where employees are often using their own personal phones or tablets for work – whether it becomes their main device or not. But the level of auditing and manageability offered by these devices, and the level of controls that organizations are actively enforcing on them, all leave a lot to be desired. I can’t fathom the number of personal devices today, most of them likely equipped with no passcode or a weak one, that are currently storing documents that they shouldn’t be. That document that was supposed to be kept only on the server… That billing spreadsheet with employee salaries or patient SSNs… all stored on someone’s phone, with a horrible PIN if one at all, waiting for it to be lost or stolen.

Many “freemium” apps/services offer just enough rope for an employee to hang their employer with. Sign up with your work credentials and work with colleagues – but your management can’t do anything to manage those accounts without (often) paying.

Finally, we have developers and IT admins bringing in what we’ll call “rogue cloud”. Backing up servers to Azure… spinning up VMs in AWS… all with the convenience of a credit card. Employees with the best of intentions can smurf their way through, never getting caught by internal procedures or accounting. A colleague tells a story about a CFO asking, “Why are your developers buying so many books?” The CFO was, of course, asking about Amazon Web Services, but had no idea, since the charges were small, irregular amounts every month across different developers, from Amazon.com. I worry that the move towards “microservices” and cloud will result in stacks that nobody understands, that run from on-premises to one or more clouds – without an end-to-end design or security review around them.
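
If you want to start putting numbers on the “rogue cloud” spend you already know about, one rough approach is to pull it straight from the provider’s billing APIs. Here’s a hedged sketch using boto3 and AWS’s Cost Explorer API to total a month’s spend by service – the dates are placeholders, it assumes credentials for a consolidated/payer account are already configured, and it obviously can’t see accounts IT doesn’t yet know exist.

```python
# Rough sketch: summarize one month of AWS spend by service via Cost Explorer.
# Assumes boto3 is installed and credentials for the paying account are configured;
# accounts nobody has told IT about will, of course, not show up here.
import boto3

def monthly_spend_by_service(start: str, end: str) -> dict:
    ce = boto3.client("ce")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},   # ISO dates, e.g. "2015-01-01"
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    totals = {}
    for period in resp["ResultsByTime"]:
        for group in period["Groups"]:
            service = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            totals[service] = totals.get(service, 0.0) + amount
    return totals

if __name__ == "__main__":
    for service, cost in sorted(monthly_spend_by_service("2015-01-01", "2015-02-01").items()):
        print(f"{service}: ${cost:,.2f}")
```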

Whether we’re talking about employees bringing devices, applications, or cloud services, the overarching problem here is the lack of oversight that so many businesses seem to have over these rapidly growing and evolving technologies, and the few working options they have to remediate them. In fact, many freemium services are feeding on this exact problem, and building business models around it. “I’m going to give your employees a tool that will solve a problem they’re having. But in order for you to solve the new problem that your employees will create by using it, you’ll need to buy yet another tool, likely for everybody.”

If you aren’t thinking about the devices, applications, and services that your employees are bringing in without you knowing, or without you managing them, you really might want to go take a look and see what kinds of remodeling they’ve been doing to your infrastructure without you noticing. Want to manage, secure, integrate, audit, review, or properly license the technology your employees are already using? You may need to get your wallet ready.


17
Jun 14

Is the Web really free?

When was the last time you paid to read a piece of content on the Web?

Most likely, it’s been a while. The users of the Web have become used to the idea that Web content is (more or less) free. And outside of sites that put paywalls up, that indeed appears to be the case.

But is the Web really free?

I’ve had lots of conversations lately about personal privacy, cookies, tracking, and “getting scroogled”. Some with technical colleagues, some with non-technical friends. The common thread is that most people (that world full of normal people, not the world that many of my technical readers likely live in) have no idea what sort of information they give up when they use the Web. They have no idea what kind of personal information they’re sharing when they click <accept> on that new mobile app that wants to upload their (Exif geo-encoded) photos, that wants to track their position, or wants to harmlessly upload their phone’s address book to help “make their app experience better”.
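
As a small illustration of how much an app can learn from “just a photo,” here’s a sketch that reads the GPS coordinates most smartphones embed in a photo’s Exif data, using Pillow. The file name is a placeholder, and it assumes a reasonably recent Pillow where Exif rational values convert cleanly with float().

```python
# Small sketch: pull GPS coordinates out of a JPEG's Exif metadata.
# Assumes Pillow is installed (pip install Pillow) and that the photo carries
# a GPSInfo block, as most smartphone photos do unless the user opts out.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_coordinates(path: str):
    exif = Image.open(path)._getexif() or {}
    gps_raw = exif.get(34853)                   # 34853 is the Exif GPSInfo tag
    if not gps_raw:
        return None
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}

    def to_degrees(dms, ref):
        degrees, minutes, seconds = (float(v) for v in dms)
        value = degrees + minutes / 60 + seconds / 3600
        return -value if ref in ("S", "W") else value

    return (to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

if __name__ == "__main__":
    print(gps_coordinates("IMG_0001.jpg"))      # hypothetical photo
```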

My day job involves me understanding technology at a pretty deep level, being pretty familiar with licensing terms, and previous lives have made me deeply immersed in the world of both privacy and security. As a result, it terrifies me to see the crap that typical users will click past in a licensing agreement to get to the dancing pigs. But Pavlov proved this all long ago, and the dancing pigs problem has highlighted it for years, to no avail. Click-through software licenses exist primarily as a legal CYA, and terms of service agreements full of legalese gibberish could just as well say that people have to eat a sock if they agree to the terms – they’ll still agree to them (because they won’t read them).

On Twitter, the account for Reputation.com posted the following:

A few days later, they posted this:

I responded to the first post with the statement that accurate search results have intrinsic value to users, but most users can’t actually quantify a loss of privacy. What did I mean by that? I mean that most normal people will tell you they value their privacy if you ask them, but if you take away the free niblets all over the Web that they get for giving up their privacy little by little, they’ll quickly backpedal on how important privacy really is.

Imagine the response if you told a friend, family member, or colleague that you had a report/blog/study you were working on, and asked them, “Hey, I’m going to shoulder-surf you for a day and write down which Websites you visit, how often and how long you visit them, and who you send email to, okay?” In most cases, they’d tell you no, or tell you that you’re being weird.

Then ask them how much you’d need to pay them in order for them to let you shoulder-surf. Now they’ll be creeped out.

Finally, tell them you installed software on their computer last week, so you’ve already got the data you need – is it okay if you use that for your report? Now they’re probably going to completely overreact, and maybe even get angry (so tell them you were kidding).

More than two years ago, I discussed why do-not-track would stall out and die, and in fact, it has. This was completely predictable, and I would have been completely shocked if this hadn’t happened. It’s because there is one thing that makes the Web work at all: the cycle of micropayments of personally identifiable information (PII) that, in appropriate quantities, allows advertisers (and advertising companies) to tune their advertising. In short, everything you do is up for grabs on the Web to help profile you (and ideally, sell you something). Some might argue that you searching for “schnauzer sweaters” isn’t PII. The NSA would beg to differ. Metadata is just as valuable as the data itself, if not more so, for uniquely identifying an individual.

When Facebook tweaked privacy settings to begin “liberating” personal information, it was all about tuning advertising. When we search using Google (or Bing, or Yahoo), we’re explicitly profiling ourselves for advertisers. The free Web as we know it is sort of a mirage. The content appears free, but isn’t. Back in the late 1990s, the idea of micropayments was thrown about, and has in my opinion come and gone. But it is far from dead. It just never arrived in the form that people expected. Early on, the idea was that individuals might pay a dollar here for a news story, a few dollars there for a video, a penny to send an email, etc. Personally, I never saw that idea actually taking off, primarily because the e-payment infrastructure wasn’t really there, and partially because, well, consumers are cheap and won’t pay for almost anything.

In 1997, Nathan Myhrvold, Microsoft’s CTO, had a different take. Nathan said, “Nobody gets a vig on content on the Internet today… The question is whether this will remain true.”

Indeed, putting aside his patent endeavors, Nathan’s reading of the tea leaves at that time was very telling. My contention is that while users indeed won’t pay cash (payments or micropayments) for the activities they perform on the Web, they’re more than willing to pay for their use of the Web with picopayments of personal information.

If you were to ask a non-technical user how much they would expect to be paid for an advertiser to know their home address, how many children they have, or what the ages of their children are, or that they suffer from psoriasis, most people would be pretty uncomfortable (even discounting the psoriasis). People like to assume, incorrectly, that their privacy is theirs, and that the little lock icon on their browser protects all of the niblets of data that matter. While it conceptually does protect most of the really high financial value parts of an individual’s life (your bank account, your credit card numbers, and social security numbers), it doesn’t stop the numerous entities across the Web from profiling you. Countless crumbs you leave around the Web allow you to be identified, and though they may not expose your financial privacy, they do expose your personal privacy for advertisers to peruse. It’s easy enough for Facebook (through the ubiquitous Like button) or Google (through search, Analytics, and AdSense) to know your gender, age, marital/parental status, any medical or social issues you’re having, what political party you favor, and what you were looking at on that one site where you almost placed an order, but wound up abandoning it.

If you could truly visualize all of the personal attributes you’ve silently shared with the various ad players through your use of the Web, you’d probably be quite uncomfortable with the resulting diagram. Luckily for advertisers, you can’t see it, and you can’t really undo it even if you could understand it all. Sure, there are ways to obfuscate it, or you could stay off the Web entirely. For most people, that’s not a tradeoff they’re willing to make.

The problem here is that human beings, as a general rule, stink at assessing intangible risk, and even when it is demonstrated to us in no uncertain terms, we do little to rectify it. Free search engines that value your privacy exist. Why don’t people switch? Conditioning to Google and the expected search result quality, and sheer laziness (most likely some combination of the two). Why didn’t people flock from Facebook to Diaspora or other alternatives when Facebook screwed with privacy options? Laziness, convenience, and most likely, the presence of a perceived valuable network of connections.

It’s one thing to look over a cliff and sense danger. But as the dancing pigs phenomenon (or the behavior of most adolescents/young adults, and some adults on Facebook) demonstrates, a little lost privacy here and a little lost privacy there is like the metaphoric frog in a pot. Over time it may not feel like it’s gotten warmer to you. But little by little, we’ve all sold our privacy away to keep the Web “free”.


20
May 14

Engage or die

I’m pretty lucky. For now, this is the view from my office window. You see all those boats? I get to look out at the water, and those boats, all the time (sun, rain, or snow). But those boats… honestly, I see most of those boats probably hundreds of days per year more than their owners do. I’d bet there’s a large number of them that haven’t moved in years.

The old adage goes “The two happiest days in a boat owner’s life are the day he buys it, and the day he sells it.”

All too often, the tools that we acquire in order to solve our problems or “make our lives better” actually add new problems or new burdens to our lives instead. At least that’s what I have found. You buy the best hand mixer you can find, but the gearing breaks after a year and the beaters won’t stay in, so you have to buy a new one. You buy a new task-tracking application, but the act of changing your work process to accommodate it actually results in lower efficiency than simply using lined paper with a daily list of tasks. As a friend says about the whole Getting Things Done (GTD) methodology, “All you have to do is change the way you work, and it will completely change the way you work.”

Perhaps that’s an unfair criticism of GTD, but the point stands for many tools or technologies. If the investment required to take advantage of, and maintain, a given tool exceeds the value returned by it (the efficiency it provides), it’s not really worth acquiring or using.

Technology promises you the world, but too often the best part of using it winds up being the moment you cut yourself taking it out of the hermetically sealed package it was shipped in from China. Marketing will never tell you about the sharp edges, only the parts of the product that work within the narrow scenarios product management understood and defined.

Whether it’s software or hardware, I’ve spent a lot of time over the last year or so working to eliminate tools that fail to make me more productive or reduce day-to-day friction in my work or personal life. Basically looking around, pondering, “how often do I use this tool?”, and discarding it if the answer isn’t “often” or “all the time.” Tangentially, if there’s a tool that I rarely use, but it’s the best option for what it does, I’ll keep it around. PaperKarma is a good example of this, because there’s honestly no other tool that does what it does.

However, a lot of software and hardware that I might’ve found indispensable at one point is open for consideration, and I’m tired of being a technology pack-rat. If a tool isn’t something that I really want to (or have to) use all the time, if there’s no reason to keep it around, then why should I keep it? If it’s taking up space on my phone, tablet, or computer, but I never use it, why would I keep it at all?

As technology moves forward at a breakneck pace, with new-model smartphones, tablets, and related peripherals for both arriving at incredible speed and with amazing frequency, we all have to make considered choices about when to acquire technology, when to retire it, and when to replace it. Similarly, as software purveyors all move to make you part of their own walled app and content gardens, and mimic or pass each other, they also must fight to maintain relevance in the mind of their users every day.

This is why we see Microsoft building applications for iOS and Android, along with Web-based Office applications – to try and address scenarios that Apple and Google already do. It’s why we saw Apple do a reset on the iWork applications and add Web-based versions (to give PC users something to work with). Finally, it’s why we see Google building Hangout plug-ins for Outlook. Each is trying to inject its tools into a workflow where it is a foreign player.

The problem with this is that it is well-intended, but can only be modestly successful at best. As with the comment about GTD, you have to organically become a part of a user’s workflow. You can’t insert yourself into the space with your own workflow and expect to succeed. A great example of this is Apple’s iWork applications, where users on Macs are trying to collaborate with Microsoft Office users on Windows or Mac. Pages won’t seamlessly interact with Word documents – it always wants to save as a Pages document. The end result is that users are constantly frustrated throwing the documents back and forth, and will usually wind up caving and simply using Office.

Tools, whether hardware, or more likely software, that want to succeed over the long run must follow the below “rules of engagement”:

  1. Solve an actual problem faced by your potential users
  2. Seamlessly inject yourself into the workflow of the user and any collaborators the user must work with to solve that problem
  3. Deliver enough value such that users must engage regularly with your application
  4. Don’t create more friction than you remove for your users.

For me, I find that games are easily dismissed. They never solve a real problem, and are an idle-time consumer. Entertain the user or be dismissed and discarded. I downloaded a couple of photo synchronization apps, in the hopes that one could solve my fundamental annoyances with iPhoto. Both claimed to synchronize all of your photos from your iOS devices to their cloud. The problems with this were two-fold.

  1. They didn’t reliably synchronize on their own in the background. Both regularly nagged me to open the app so it could sync
  2. They synchronized to a cloud service, when I’ve already made a significant investment in iPhoto.

In the end, I stopped using both apps. They didn’t help me with the task I wanted to accomplish, and in fact made it more burdensome for the little value they did provide.

My primary action item out of this post, then, is a call to action for product managers (or anybody designing app[lication]s):

Make your app easy to learn, easy to engage with, friction-free, and valuable. You may think that the scenario you’ve decided to solve is invaluable, but it may actually be nerd porn that most users couldn’t care less about. Nerd porn, as I define it, is the set of features that the geeks creating a technology add to it, but that most normal users never care about (or never miss if they’re omitted).

Solving a real-world problem with a general-use application means doing so in a simple, trivial, non-technical manner, and doing it in a way that makes users fall in love with the tool. It makes them want to engage with it as a tool that feels irreplaceable – that they couldn’t live without. When you’re building a tool (app/hardware/software or other), make your tool truly engaging and frictionless, or prepare to watch users acquire it, attempt to use it, and abandon it – with your business potential going along with it.


12
Mar 14

The trouble with DaaS

I recently read a blog post entitled DaaS is a Non-Starter, discussing how Desktop as a Service (DaaS) is, as the title says, a non-starter. I’ll have to admit, I agree. I’m a bit of a naysayer about DaaS, just as I have long been about VDI itself.

In talking with a colleague the other day, as well as customers at a recent licensing boot camp, it sure seems like VDI, like “enterprise social”, is a burger with a whole lot of bun, and not as much meat as you might hope for (given your investment). The promise, as I believe it to be, is that by centralizing your desktops, you get better manageability. To a degree, I believe that to be true. To a huge degree, I don’t. It really comes down to how standardized you make your desktops, how centrally you manage user document storage, and how much sway your users have (are they admins, or can they install their own Win32 apps?).

With VDI, the problem is, well… money. First, you have server hardware and software costs; second, you have the appropriate storage and networking to actually execute a VDI implementation; and third, you have to spend the money to hire people who can glue it all together into an end-user experience that isn’t horrible. It feels to me that a lot of businesses fall in love with VDI (true client OS-based VDI) without taking the complete cost into account.

With DaaS, you pay a certain amount per month, and your users can access a standardized desktop image hosted on a service provider’s server and infrastructure – which is created and managed by them. The OS here is actually usually Windows Server, not a Windows desktop OS – I’ll discuss that in a second. But as far as infrastructure, using DaaS from a service provider means you usually don’t have to invest the cash in corporate standard Windows desktops or laptops (or Windows Server hardware if you’re trying VDI on-premises), or the high-end networking and storage, or the people to glue that architecture together. Your users, in turn, get (theoretically) the benefits of VDI, regardless of what device they come at it with (a personally owned PC, tablet, whatever).

However, as with any *aaS, you’re then at the mercy of your DaaS purveyor. In turn, you’re also at the mercy of their licensing limitations where Windows is concerned. This is why most of them run Windows Server; it’s the only version of Windows that can generally be made available by hosting providers, and Windows desktop OSs can’t be. You also have to live within the constraints of their DaaS implementation (HW/SW availability, infrastructure, performance, architecture, etc.). To date, most DaaS offerings I’ve seen have focused on “get up and running fast!”, not “we’ll work with you to make sure your business needs are solved!”.

Andre’s blog post, mentioned at the beginning of my post here, really hit the nail on the head. In particular, he mentioned good points about enterprise applications, access to files and folders the user needs, adequate bandwidth for real-world use, and DaaS vs. VDI.

To me, the main point here is that with DaaS, your service provider, not you, gets to call a lot of the shots, and not many of them consider the end-to-end user workflow necessary for your business.

Your users need to get tasks done, wherever they are. Fine. Can they get access to their applications that live on-premises, through VDI in the cloud, from a tablet at the airport? How about their files? Does your DaaS require a secondary logon, or does it support SSO from their tablet or other non-company owned/managed device? How fat of a pipe is necessary for your users before they get frustrated? How close can your DaaS come to on-premises functionality (as if the user were sitting at an actual PC with an actual keyboard and mouse, or touch)?

On Twitter, I mentioned to Andre that Microsoft’s own entry into the DaaS space would surely change the game. I don’t know anything (officially or unofficially) here, but it has been long suspected that Microsoft has planned their own DaaS offering.

When you combine the technologies available in Windows Server 2012 R2, Windows Azure, and Office 365, the scenario for a Microsoft DaaS actually starts to become pretty amazing. There are implementation costs to get all of this deployed, mind you – including licensing and deployment/migration. That isn’t free. But it might be worth it if DaaS sounds compelling and I’m right about Microsoft’s approach.

Microsoft’s changes to Active Directory in Server 2012 R2 (AD FS, the Web Application Proxy [WAP]) mean that users can get to AD from wherever they are, and Office 365 and third party services (including a Microsoft DaaS) can have seamless SSO.

Workplace Join can provide that SSO experience, even from a Windows 7, iOS, or Samsung Knox device, and the business can control which assets and applications the user can connect to, even if they’re on the inside of the firewall and the user is not (through WAP, mentioned previously), or available through another third party.

Work Folders enables synchronized access to files and folders that are stored on-premises in Windows file shares, to user devices. This could conceptually be extended to work with a Microsoft (or third-party) DaaS as well, and I have to think OneDrive for Business could be made to work as well given the right VDI/DaaS model.

In a DaaS, applications the user needs could be provided through App-V, RemoteApp running from an on-premises Remote Desktop server (a bit of redundancy, I know), or again, published out through WAP so users could connect to them as if the DaaS servers were on-premises.

When you add in Office 365, it continues building out the solution, since users can again be authenticated using their AD credentials, and OneDrive for Business can provide synchronization to their work PCs and DaaS, or access on their personally owned device.

Performance is of course a key bottleneck here, assuming all of the above pieces are in place, and work as advertised (and beyond). Microsoft’s RemoteFX technology has been advancing in terms of offering a desktop-like experience regardless of the device (and is now supported by Microsoft’s recently acquired RDP clients for OS X, iOS, and Android). While Remote Desktop requires a relatively robust connection to the servers, it degrades relatively gracefully, and can be tuned down for connections with bandwidth/latency issues.

All in all, while I’m still a doubter about VDI, and I think there’s a lot of duct tape you’d need to put in place for a DaaS to be the practical solution to user productivity that many vendors are trying to sell it as, there is promise here, and given the right vendor, things could get interesting.


02
Dec 13

Jeff Bezos on Disruption

In general, the 60 Minutes interview of Jeff Bezos felt largely like a marketing piece. But what Bezos says at 13:30 is great.

“Companies have short lifespans… And Amazon will be disrupted one day…
I don’t worry about it because I know it is inevitable. Companies come and go. And the companies that are the shiniest and most important of any era, you wait a few decades and they’re gone.” – Jeff Bezos on 60 Minutes, Dec. 1, 2013

06
Jul 13

The iWatch – boom or bust?

In my wife’s family, there is a term used to describe how many people can comfortably work in a kitchen at the same time. The measurement is described in “butts”, as in “this is a one-butt kitchen”, or the common, but not very helpful “1.5 butt kitchen”. Most American kitchens aren’t more than 2 butts. But I digress.

I bring this up for the following reason. There is a certain level of utility that you can exploit in a kitchen as it exists, and no more. You cannot take the typical American kitchen and shove 4 grown adults in it and expect them to be productive simultaneously. You also cannot take a single oven, with two racks or not, and roast two turkeys – it just doesn’t work.

It’s my firm belief that this idea – the idea of a “canvas size” – applies to almost any work surface we come across, from a kitchen or the appliances therein, and beyond. But there is one place that I find it applies incredibly well: to modern digital devices.

The other day, I took out four of my Apple devices, set them side by side in increasing size order, and pondered a bit.

  • First was my old-school Nano; the older square design without a click-wheel that everyone loved the idea of making a watch out of.
  • Second was my iPhone 5.
  • Third, my iPad 2.
  • Finally, my 13″ Retina MacBook Pro.

It’s really fascinating when you stop to look at tactile surfaces sorted like this. While the MacBook Pro has a massively larger screen than the iPhone 5, the touch-surface of the TrackPad is only marginally larger than that of the iPhone. I’ve discussed touch and digits before, but the recent discussion of the “iWatch” has me pondering this yet again.

While many people are bullish on Google Glass (disregarding the high-end price that is sure to come down someday) or see the appeal of an Apple “iWatch”, I’m not so sure at this point. For some reason, the idea of a smart watch (aside from as a token peripheral), or an augmented reality headset like Glass doesn’t fly for me.

That generation iPod Nano was a neat device, and worked alright – but not great – as a watch. Among the key problems the original iOS Nano had when strapped down as a watch?

  1. It was huge – in the same ungainly manner as Microsoft’s SPOT watches, Suunto watches, or (the king of schlock), Swatch Pop watches.
  2. It had no WiFi or Bluetooth, so couldn’t easily be synched to any other media collection.

Outside of use as a watch, for as huge as it was, the UI was hamstrung in terms of touch. I believe navigation of this model was unintuitive and clumsy – one of the reasons I think Apple went back to a larger display on the current Nano.

I feel like many people who get excited about Google Glass or the “iWatch” are in love with the idea of wearables, without thinking about the state of technology and – more importantly, simple physical limitations. Let’s discard Google Glass for a bit, and focus on the iWatch.

I mentioned how the Nano model used as a watch was big, for its size (stay with me). But simply because of screen real-estate, it was limited to one-finger input. Navigating the UI of this model can get rather frustrating, so it’s handy that it doesn’t matter which finger you use. <rimshot/>

Because of their physical canvas size available for touch, each of the devices I mentioned above has different bounds of what kinds of gestures it can support:

  • iPod Nano – Single finger (generally index, while holding with other index/thumb)
  • iPhone 5 – Two fingers (generally index and thumb, while holding with other hand)
  • iPad 2 – Up to five fingers for gesturing, up to 8/10 for typing if your hands are small enough.
  • MacBook Pro – Up to five fingers for gesturing (though the 5-finger “pinch” gesture works with only 4 as well).

I don’t have an iPad Mini, but for a long time I was cynical about the device for anything but use as an e-reader, due to the fact that it can’t be used with two hands for typing. Apparently there are enough people just using it as an e-reader or typing with thumbs that they don’t mind the limitations.

So if we look at the size constraints of the Nano and ponder an “iWatch”, just what kind of I/O could it even offer? The tiny Nano wasn’t designed first as a watch – so the bezel was overly large, it featured a clip on the back, it needed a 30-pin connector and headphone jack… You could eliminate all of those with work – though the headphone jack would likely need to stay for now. But even with a slightly larger display, an “iWatch” would still be limited to the following types of input:

  1. A single finger (or a stylus – not likely from Apple).
  2. Voice (both through a direct microphone and through the phone, like Glass).

Though it could support other Bluetooth peripherals, I expect that they’ll pair to the iPhone or iPod Touch, rather than the watch itself – and the input would be monitoring, not keyboard/mouse/touchpad. The idea of watching someone try to type significant text on a smart watch screen with an Apple Bluetooth keyboard is rather amusing, frankly. Even more critically, I imagine that an “iWatch” would use Bluetooth Low Energy in order to not require charging every single day. It’d limit what it could connect to, but that’s pretty much a required tradeoff in my book.

In terms of output, it would again be limited to a screen about the same size as the old Nano, or smaller. AirPlay in or out isn’t likely.

My cynicism about the “iWatch” is based primarily around the limited utility I see for the device. In many ways, if Apple makes the device, I see it being largely limited to a status indicator for the iPhone/iPod Touch/iPad that it is “paired” with. Likely serving to provide push notifications for mail/messaging/phone calls, or very simple I/O control for certain apps on the phone. For example, taking Siri commands, play/pause/forward for Pandora or Spotify, tracking your calendar, tasks, or mapping directions, etc. But as I’ve discussed before, and above, the “iWatch” would likely be a poor candidate for long-form text entry, whether typed or dictated. (Dictate a blog post or book through Siri? I’ll poke my eyes with a sharp stick instead, thanks.) For some reason, some people are fascinated by the Dick Tracy approach of issuing commands to your watch (or your glasses, or your shoe phone). But the small screen of the “iWatch” means it will be good for very narrow input, and very limited output. I like Siri a lot, and use it for some very specific tasks. But it will be a while before it or any other voice command is suitable for anything but short-form command-response tasks. Looking back at Glass, Google’s voice command there may be nominally better, but again, will likely be most useful as an augmented reality heads-up-display/recorder.

Perhaps the low interest I have in the “iWatch”, Pebble Watch, or Google Glass can be traced back to my post discussing live tiles a few weeks ago. While I think there is some value to be had with an interconnected watch – or smartphone command peripherals like this – I think people are so in love with the idea that they’re not necessarily seeing how constrained the utility actually will be. One finger. Voice command. Perhaps a couple of buttons – but not many. Possibly pulse and pedometer. It’s not a smartphone on your wrist, it’s a remote control (and a constrained remote display) for your phone. I believe it’ll be handy for some scenarios, but it certainly won’t replace smartphones themselves anytime soon, nor will it become a device used by the general populace – not unless it comes free in the box with each iPhone (it won’t).

I think we’re in the early dawn of how we interact with devices and the world around us. I’m not trying to be overly cynical – I think we’ll see massive innovation over time, and see computing become more ubiquitous and spread throughout a network of devices around and on us.

For now, I don’t believe that any “iWatch” will be a stellar success – at least in the short run – but it could become one as it evolves over time to provide interfaces we can’t fathom today.


22
May 13

Beware of strangers bearing subscriptions

Stop for a second and think about everything you subscribe to. These are the things that you pay for monthly or annually – things where, if you stopped paying, some service would be discontinued.

The list probably includes everything from utilities to reading material, and most likely a streaming or media service like Netflix or Hulu, or a subscription to Amazon Prime, Xbox Live or iTunes Match.

I’ve been noticing a tendency for seemingly everything to move towards subscriptions. Frankly, it irritates me and I’m not really excited about the idea.

I understand and accept that natural gas, electricity, waste management, and (ick) even insurance need to be paid for regularly so we can maintain a certain lifestyle. But the tendency to treat software as a utility, while somewhat logical, isn’t necessarily a win for the consumer or the business (it depends on the package being offered, and how often you would upgrade if you weren’t being offered a subscription).

That puzzle, of course, depends on the consumer or business not bothering to do the math – just assuming it’s a better deal (or getting befuddled trying to decode the comparison) and subscribing anyway. Consumers, and frankly many businesses, are not great at doing that math. Many subscriptions are also – literally – incomparable with any peer perpetual license. Trying to compare Office 365 and Office 2013 for consumers is actually relatively easy. Even comparing simple business licensing of Office 365 vs. on-premises isn’t that hard. Trying to do it in a large business, where it can intertwine with an Enterprise Agreement (enterprise-wide licensing agreement), is horribly complex and hard to compare.
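
To show the kind of math I mean, here’s a toy break-even sketch in Python. The prices are made-up placeholders, not actual Office (or anyone else’s) pricing, and a real comparison also has to weigh the extra use rights and services bundled into a subscription.

```python
# Toy break-even calculation: subscription vs. perpetual license.
# All numbers are hypothetical placeholders; plug in real quotes and your own
# realistic upgrade cadence before drawing any conclusions.
def breakeven_months(perpetual_price: float, monthly_fee: float) -> float:
    """Months of subscription payments that add up to one perpetual purchase."""
    return perpetual_price / monthly_fee

def cheaper_option(perpetual_price: float, monthly_fee: float,
                   months_until_you_would_upgrade_anyway: int) -> str:
    subscription_total = monthly_fee * months_until_you_would_upgrade_anyway
    return "subscription" if subscription_total < perpetual_price else "perpetual"

if __name__ == "__main__":
    # e.g. a $140 perpetual license vs. a $7/month subscription (placeholders)
    print(breakeven_months(140.0, 7.0))     # 20.0 months to break even
    print(cheaper_option(140.0, 7.0, 36))   # "perpetual" if you'd keep it for 3 years
```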

Most subscriptions are offered in the hope that they will become an evergreen – something that automatically renews on a monthly or annual basis. Most of these are, frankly, awful, in my opinion. Let me explain why.

Recall the label on the outside of many packaged foods in the US. You know the one. Think about the serving size. This is the soda bottle or bag of chips where it says 2.5 servings, though most consumers will drink or eat the whole thing at one sitting. Consumers (and again, many non-IT business decision makers) are not really great about doing the long-term accounting here. A little Hulu here. A little Amazon Prime there. An iTunes Match subscription. Add on Office 365… Eventually, all these little numbers add up to big numbers. But like calorie counting, people often lose track of the sunk costs they’re signing up for. We wonder why America has a debt problem? Because we eat consumer services like there’s no bill at the end of the meal.

You don’t need to count every calorie – but man, you need to be aware before you have a problem.

I’ve become a big fan over the last several years of Willard Cochrane, an economist who spent most of his life analyzing and writing about the American family farm. Cochrane gave his name to “Cochrane’s Treadmill”, which describes the never-ending treadmill that farmers are forced onto. Simplistically, Cochrane’s Treadmill can be described as follows.

When Farm A buys a new technology that gives it a higher yield, it forces down the market price of the commodity it produces. Farm B is then forced to buy that new technology to improve its own yield, just to maintain the income it had before Farm A bought that technology.

By acquiring the technology, Farm A starts an unwinnable race, where farmer A is (economically) pitted against farmer B in trying to make more money, generally from the same amount of land. Effectively, it is mutually assured destruction. Work harder, pay more, earn less.

I’ve been spending a lot of time recently trying to simplify my life. I’ve been working to remove software, hardware, and services that add complexity, rather than simplicity, to my life. As humans, we often buy things on a whim thinking (incorrectly), “this new <thing> will dramatically improve my life”. After all, the commercial told you it would! Often this isn’t the case.

Without getting off on an environmentalist hippie trip here, I’d like to circle back to farming for a second. Agricultural giants like Monsanto have inserted themselves into the farming input cycle in a very aggressive way. If we go back 100 years, farmers didn’t pay an industrial concern every year for pesticides, and they most certainly didn’t pay them an annual license fee for seeds (farmers are forbidden to save licensed genetically modified seeds every year, as they have done for millennia). As a result, farmers are not only creating a genetic monoculture that is likely more susceptible to disease, but they are subscribing to annual licensure of the seed and most likely an ever-increasing dosage of pesticides in order to defend against plants, insects, or other pests that have developed defenses against them. It is Cochrane’s Treadmill defined. Even worse, if a farmer wanted to discontinue use of the licensed seed, it’s unclear to me if they actually could. Monsanto has aggressively gone after farmers who may have even accidentally planted their seeds due to contamination. Can a farmer actually quit using licensed seed and not pay for it next year? I don’t know the answer.

I bring this up because I believe that it exemplifies the risks of subscriptions in general. Rather than a perpetual use right (farmers saving seed every year), farmers are licensing an annual subscription with no escape hatch. Imagine subscribing to a Software-as-a-Service (SaaS) offering and never being able to quit it. Whether in the form of carrots – “sweeteners” of sorts added to many subscriptions (such as the much more liberal five-device use rights of Office 365) – or sticks (virtualization or license reassignment rights only available with Microsoft Software Assurance), there are explicit risks in jumping into using almost any piece of software without carefully examining both the short-term use rights and long-term availability rights. It may appear I’m picking on Microsoft here. I’m not doing so intentionally – I’m just intimately, painfully, aware of how they license software. This could apply to Adobe, Oracle, or likely any ISV… and even some IHVs.

Google exemplifies another side of this, where you can’t really be certain how long they will continue to offer a service. Whether it’s discontinuing consumer-grade services like Reader, or discontinuing the free level of Apps for Business, before subscribing to Google’s services an organization should generally not only raise questions around privacy and security, but also consider the long-term viability of the service. “Will Google keep this service alive in the future?” Perhaps that sounds cynical – but I believe it’s a legitimate concern. If you’re moving yourself or your business to a subscription service (heck, even a free one), you owe it to yourself to try and ascertain how long you’ve got before you can’t even count on that service anymore.

While I may be an Apple fan, and Apple doesn’t seem to be as bullish on subscriptions, one can point to the hardware upgrade gravy train that they have created and see that it’s really just a hardware subscription. If you want the latest software and services from Apple, you have to buy a new phone, tablet, laptop, or desktop within Apple’s set intervals or be left behind. Businesses that are increasing their use of Apple technology – whether they pay for it or leave it to the employee to pay for – should be careful too. Staying up-to-date, including staying secure, with Apple generally means staying relatively up-to-date with hardware.

In The Development of American Agriculture, Cochrane reasoned that <profits> “will be captured by the business firm in financial control”, and would no longer go to farmers. Where initially the farm ecosystem consisted of supplier (farmer) and consumer, industrial agriculture giants have inserted themselves into the process of commodity creation – more and more industrialists demanding a growing annual cut from the income of (already struggling) American farmers.

Whether we’re talking seeds/pesticides, software, utilities, or any other subscription, there is a risk and a benefit that should be clearly understood. But I believe that even more than “this year”, where the immediate gratification is like consuming the 2.5 servings I mentioned earlier, both consumers and especially businesses need to think long-term: “Where will this service be in 3 years?”, “Will we be paying more and getting less?”, “If we go there, can we get out? How?”

When you subscribe to anything, you’re not taking on a product, you’re taking on a partner. Your ability to take on that partner depends upon your current financial position and your obligations to that partner, both now and in the future. While many businesses can surely find that the risk/benefit analysis of a given subscription works out in the subscriber’s favor (if they are really using the service regularly, and it provides an invaluable function that can’t be built internally or completed by perpetually licensed technology), I believe that companies should be cautious about taking on “subscription weight” without sufficiently examining and understanding 1) how much they really need the services offered by that subscription, 2) what the short-term benefits and long-term costs of the subscription really are, 3) the risks of subscriptions (cost increases and service volatility among them), and 4) how that subscription compares in terms of use rights, costs, and risks, with any custom-developed or perpetually licensed offering that can perform similar tasks.

If it seems like I’m anti-subscription, I guess you could say I am. If you want a cut of my income, earn it. Most evergreen subscriptions aren’t worth it to me. I think too many consumers and businesses fall prey to the idea that “just subscribing”, rather than building and owning a solution or buying a perpetually licensed one, sounds easier, so they go that route – and wind up stuck there.


21
Mar 13

What’s your definition of Minimum Viable Product?

At lunch the other day, a friend and I were discussing the buzzword bingo of “development methodologies” (everybody’s got one).

In particular, we homed in on Minimum Viable Product (MVP) as being an all-but-gibberish term, because it means something different to everyone.

How can you possibly define what is an MVP, when each one of us approaches MVP with predisposed biases of what is viable or not? One man’s MVP is another’s nightmare. Let me explain.

For Amazon, the original Kindle, with its flickering page turn, was an MVP. Amazon, famous for shipping… “cost-centric” products and services, was traditionally willing to leave some sharp edges in the product. For the Kindle, this meant flickering page turns were okay. It meant that Amazon Web Services (AWS) didn’t need a great portal, or useful management tools. Until their hand was forced on all three by competitors. Amazon’s MVP includes all the features they believe it needs, whether or not they’re fully baked or usable, and whether or not the product still has metaphoric splinters coming off from where the saw blade of feature decisions cut it. This often works because Amazon’s core customer segment, like Walmart’s, tends to be value-driven, rather than user-experience driven.

For Google, MVP means shipping minimal products that they either call “Beta”, or that behave like a beta, tuning them, and re-releasing them. In many ways, this model works, as long as customers are realistic about what features they actually use. For Google Apps, this means applications that behave largely like Microsoft Office, but include only a fraction of the functionality (enough to meet the needs of a broad category of users). However, Google traditionally pushed these products out early in order to attempt to evolve them over time. I believe that if any company of the three I mention here actually implements MVP as I believe it to be commonly understood, it is Google. Release, innovate, repeat. Google will sometimes put out products just to try them, and cull them later if the direction was wrong. If you’re careful about how often you do this, that’s fine. If you’re constantly tuning by turning off services that some segment of your customers depend on, it can cost you serious customer goodwill, as we recently saw with Google Reader (though I doubt in the long run that event will really harm Google). It has been interesting for me to watch Google build their own Nexus phones, where MVP obviously can’t work the same. You can innovate hardware Release over Release (RoR), but you can’t ever improve a bad hardware compromise after the fact – just retouch the software inside. Google has learned this. I think Amazon learned it after the original Kindle, but even the Fire HD was marred a bit by hardware design choices, like a power button that was too easy to hit while reading. But Amazon is learning.

For Apple, I believe MVP means shipping products that make conscious choices about what features are even there. With the original iPhone, Apple was given grief because it wasn’t 3G (only years later to be berated because the 3GS, 4, and 4S continued to just be 3G). Apple doesn’t include NFC. They don’t have hardware or software to let you “bump” phones. They only recently added any sort of “wallet” functionality… The list goes on and on. Armchair pundits berate Apple because they are “late” (in the pundit’s eyes) with technology that others like Samsung have been trying to mainstream for 1-3 hardware/software cycles. Sometimes they are late. But sometimes they’re “on-time”. When you look at something like 3G or 4G, it is critical that you get it working with all of the carriers you want to support it, and all of their networks. If you don’t, users get ticked because the device doesn’t “just work”. During Windows XP, that was a core mantra of Jim Allchin’s – “It just works”. I have to believe that internally, Apple often follows this same mantra. So things like NFC or QR codes (now seemingly dying) – which as much as they are fun nerd porn, aren’t consumer usable or viable everywhere yet – aren’t in Apple’s hardware. To Apple, part of the M in MVP seems to be the hardware itself – only include the hardware that is absolutely necessary – nothing more – and unless the scenario can work ubiquitously, it gets shelved for a future derivation of the device. The software works similarly, where Apple has been curtailing some software (Messages, for example) for legacy OS X versions, only enabling it on the new version. Including new hardware and software only as the scenarios are perfect, and only in new devices or software, rather than throwing it in early and improving on it later, can in many ways be seen as a forcing function to encourage movement to a new device (as Siri was with the 4S).

I’ve seen lots of geeks complain that Apple is stalling out. They look at Apple TV, where Apple doesn’t have voice, doesn’t have an app ecosystem, doesn’t have this or that… Many people complain that they’re too slow. I believe quite the opposite: that Apple, rather than falling for the “spaghetti on the wall” feature matrix we’ve seen Samsung fall for (just look at the Galaxy S4 and the features it touts), takes time – perhaps too much time, according to some people – to assess the direction of the market. Apple knows the whole board they are playing, where competitors don’t. To paraphrase Wayne Gretzky, they “skate to where the puck is going to be, not where it has been.” Most competitors seem more than happy to try and “out-feature” Apple with new devices, even when those features aren’t very usable or very functional in the real world. I think they’re losing sight of what their goal should be, which is building great experiences for their users, and instead believing their brass ring is “more features than Apple”. This results in a nerd porn arms race, adding features that aren’t ready for prime time, or aren’t usable by all but a small percentage of users.

Looking back at the Amazon example I gave early on, I want you to think about something. That flicker on page turn… Would Apple have ever shipped that? Would Google? Would you?

I think that developing an MVP of hardware or software (or generally both, today) is quite complex, and requires the team making the decision to have a holistic view about what is most important to the entire team, to the customer, and to the long-term success of your product line and your company – features, quality, or date. What is viable to you? What’s the bare minimum? What would you rather leave on the cutting room floor? Finesse, finish, or features?

Given the choice would you rather have a device with some rough edges but lots of value (it’s “cheap”, in many senses of the word)? A device that leads the market technically, but may not be completely finished either? A device that feels “old” to technophiles, but is usable by technophobes?

What does MVP mean to you?