Oct 15

Simulated gambling in the App Store? The only winning move is not to play.

Since the arrival of Apple’s iPhone App Store, the company has elected to keep the platform, shall we say, “Family Friendly”.

While the guidelines for developers who elect to sell their software through the App Store are always evolving, they seem much more constant and consistent than when the store first opened. In general, it’s still about keeping it a warm, fuzzy place, while allowing some evolution so the App Store can grow and thrive. Apps which violate the terms include those that offer pornography, violence (simulated or otherwise), defamatory or offensive content targeted at a given race, ethnicity, or culture, or other objectionable content. What’s objectionable? Ask Apple, as they use the Potter Stewart school of content screening. Things like Metadata+ and Ephemeral+, which provide information unavailable anywhere else, but which could be found “unpleasant” by some, are not available on Apple’s platforms. Personally, I find the justification for that ridiculous, but that’s a matter for another day.

Instead, I want to talk about “simulated gambling” games. This week, I found myself on the App Store search page, and noticed the string “777” among the Trending Searches. As someone who flies regularly (but doesn’t gamble), I was curious what this even was. I clicked the link, and couldn’t have been more disappointed: an endless parade of “simulated” slot machine games.

What’s really both fascinating and terrifying to me is how much the Trending Searches space seems to include “simulated gambling” titles at night, and how many of the Top Grossing apps in the App Store are simulated gambling.

I really dislike that much of the iOS ecosystem has become overgrown with free-to-play (F2P) apps and games. I’ve started referring to these as free-to-p(l)ay instead, because they generally require you to pay if you want to actually get to the most desirable content or levels in the title. I’ve only ever interacted with a handful of F2P games, and as a general rule, they are basically a Skinner box that conditions the user into paying for content in order to receive gratification.
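The Skinner-box dynamic described above is, in behavioral terms, a variable-ratio reinforcement schedule: the reward arrives after an unpredictable number of actions, which is the schedule most resistant to extinction. A minimal, purely illustrative simulation of that mechanic (the win probability and all numbers here are made-up assumptions, not data from any real title):

```python
import random

def spins_until_reward(p_win=0.15, rng=None):
    """Simulate a variable-ratio schedule: each spin independently
    'pays out' with probability p_win; return the number of spins
    it took to reach a reward."""
    rng = rng or random.Random()
    spins = 1
    while rng.random() >= p_win:
        spins += 1
    return spins

# The unpredictable gaps between rewards are what make the loop compelling:
rng = random.Random(42)
gaps = [spins_until_reward(0.15, rng) for _ in range(10)]
print(gaps)  # a mix of quick wins and long droughts
```

The "near-win, keep spinning" loop that real and simulated slots share falls directly out of this schedule: the player can never predict which spin pays, so every spin feels like it might.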

Here’s where the problems begin, though. I believe there are basically two ways to classify titles in the App Store that offer in-app purchase (IAP):

  1. À la carte IAP
  2. Bottomless IAP

À la carte IAP apps offer one price for entry (either free or some base price), and then a set menu of items that enable a set collection of functionality within, or connected to, the app. For example, a drawing app could offer a set of pens or brushes, or a range of colors. The point is, a given amount of currency buys you a set piece of functionality. One could argue that subscriptions purchased via IAP are ongoing, but I contend that is still a set amount of currency over time.

Bottomless IAP apps, on the other hand, have an almost endless supply of offers to exchange currency for downloadable content, “lives”, “coins”, or other virtual (but financially worthless) tchotchkes to help you progress within the app (game). While the apps may cap how much can be spent over time, many offer ridiculously expensive IAP items that can be repeatedly purchased – ideal for targeting and manipulating impressionable individuals. These are the IAP titles we’ve all heard about, where people of all ages get duped into paying real dollars without realizing how big a financial hole they’ve dug for themselves. Many of these simulated gambling titles offer IAP items of up to US$99!

I have two problems with “simulated gambling” apps in the store. In reverse priority order:

  1. They might be violating numerous gaming laws around the world.
  2. They are preying upon people, including those dealing with real-world gambling addiction problems.

As a general rule, Apple’s guidelines on apps that include gambling state:

“Apps that offer real money gaming (e.g. sports betting, poker, casino games, horse racing) or lotteries must have necessary licensing and permissions in the locations where the App is used, must be restricted to those locations, and must be free on the App Store.”

“Apps that use IAP to purchase credit or currency to use in conjunction with real money gaming will be rejected”

So “gaming” apps like simulated slot machines occupy an interesting wedge. They ride a fine line, seemingly all following the first guideline by making themselves free to download, but with the opportunity for the consumer to bleed out significant cash through bottomless IAP. They can never convert any winnings in the app to actual real-world winnings, or they’d arguably violate the second term.

Now here’s where things get interesting. Let’s take a look at that first term more closely. These apps are supposed to be licensed according to the location where they are used. This is a distinct problem to me. Though these are “simulated gambling” titles, I believe that since they are simulated slot machines (among other categories of gambling available in the App Store), they should follow the rules of the jurisdiction where they are used.

Thing is, many jurisdictions have very specific rules on what the payout must be for a given device used for a given category of game. For example, the Las Vegas Strip has rules on the percentage of cash that must be paid out to gamblers, which ranges from 88.06% (penny slots) to 93.69% (US$1 slots). Arguably, the old line that “the house always wins” isn’t completely true. But statistically, the winner isn’t going to be you, either.
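Those return-to-player percentages translate directly into an expected loss per dollar wagered, which is worth making concrete. A quick back-of-the-envelope sketch (the percentages are the Strip figures quoted above; the $100 wagering total is just an example):

```python
def expected_loss(total_wagered, rtp):
    """Expected player loss given a return-to-player (RTP) percentage.
    E.g. an RTP of 88.06% means the house keeps 11.94% of money
    wagered, on average, over the long run."""
    return total_wagered * (1 - rtp / 100)

# Using the Las Vegas Strip figures cited above, per $100 wagered:
print(expected_loss(100, 88.06))  # penny slots: ~$11.94 expected loss
print(expected_loss(100, 93.69))  # US$1 slots:  ~$6.31 expected loss
```

A regulated machine at least bounds the house’s edge this way; a “simulated” slot with bottomless IAP has an effective RTP of zero, since nothing ever pays out.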

But these games are all “simulated”. There is literally no opportunity for payout. Any winnings are generally returned as an opportunity to play again. There are no winnings. None. Arguably, by being a “simulation”, these titles do not need to abide by payout terms within the locales they are being used. But as they aren’t real gaming, I personally feel Apple should reconsider having this category of title in the store at all.

Gambling addiction is a real thing. People get sucked in by the lure of easy money, and can quickly lose more than they had to begin with. The National Council on Problem Gambling has an interesting survey, the 2013 National Survey of Problem Gambling Services, which discusses how much money is spent on gambling addiction services across the U.S. The App Store lets consumers link credit cards to IAP. By offering bottomless IAP, these titles are effectively allowed to shake out the wallets of vulnerable consumers to an extent they cannot financially bear.

The problem with these “games” is that they play upon the same emotions as real slot machines, luring the consumer into wasting untold dollars on a game that is completely unwinnable, financially. To quote the movie WarGames, “The only winning move is not to play.”

My contention is that if games offering simulated gambling must be allowed on the App Store at all, they should not be allowed to offer bottomless IAP – or perhaps any IAP at all. Take a look at this game review from a user of top-tier “simulated slot” game Slotomania from Playtika Games (a division of Caesars Interactive Entertainment):

That makes me so sad. These games offer no redeeming value (literally). From Playtika’s own Terms of Service (linked incorrectly within their App Store entry, by the way):

“The Service may include an opportunity to purchase virtual, in-game currency (“Coins”) that may require you to pay a fee using real money to obtain the Coins. Coins can never be redeemed for real money, goods, or any other item of monetary value from Playtika or any other party. You understand that you have no right or title in the virtual in-game items, spins or Coins.”

These titles are all about killing time while burning your wallet at the same time. They’re about taking money from the easily impressionable – youth, adults, retirees… across the board. When I posted on Twitter about simulated gambling, a follower of mine replied with the following:

“grandma uses her iPad 2 almost exclusively for slot apps. :*(“

If Apple is going to hold up the App Store as a family friendly place for commerce, with reasonable consumer protections, I think they need to re-examine what role, if any, simulated gambling apps with IAP are allowed to play there.

Sep 15

The Apple Watch is perfect. On paper.

This week, I’m doing something that I don’t remember ever actually doing before. I’m taking back an Apple device, for a refund.

After spending less than a week with the Apple Watch, I have to say, I’m disappointed. A bit in the device, but more in Apple. The software is simply not done. Perhaps it’s my use of a 5s as the host device for it. Perhaps my expectations are too high. Perhaps I’m right, that it’s not ready for prime time. Regardless, it’s definitely not worth the price of entry in the device’s current condition. As Nilay Patel said, “If you like toys, you’ll like it.”

As I checked out at a grocery store this week and performed my first Apple Pay transaction, the following interchange happened between the cashier and myself:

Her: “Ooh. Is that it (the Watch)?”

Me: “Yes.”

Her: “How do you like it?”

Me: “It’s okay. I’ve only had it for about a day.”

Her: “What can it do?”

Me: <silence/>

I hesitated, struggling to list the things the Watch could do that were relevant to me. It was in that moment, I think, that I switched from “I think I’ll return it” to “I’m going to return it.” I understand that most normals are apparently quite happy with their Watches, and that only technophiles (if you can still call me that) like myself find all the foibles in the way the device works.

The Watch isn’t without positive attributes. I just don’t feel that they outweigh the negatives.

What’s good:

  • As a bauble, it is gorgeous. I bought the stainless Watch, with the new, more traditional Saddle Brown Classic Buckle. As a piece of jewelry, I think it looks really good. (Although my 14YO would tell you that my free opinions on style are worth what you pay me for them.)
  • As a watch, it’s pretty good. I mean, it keeps time, and the interchangeable faces are fun for a bit.
  • Given the space, the user interface works pretty well.
  • When it all works, there are some neat conveniences that you can’t do (or can’t do as easily) with an iPhone. Apple Pay and other Wallet (née Passbook) features on your wrist are handy. But not “OMG!” useful.
  • There’s a pretty amazing supply of Apple Watch apps that already exist. (See caveat to this, below.)

I was really hoping I could come up with some more positive aspects here. But honestly, I’ve run out already.

So… what’s bad about Apple Watch? In no explicit order:

  • It is very expensive, for what it does. My mind boggles that Apple has sold any of the Watch Edition models.
  • Updating sucked. It took over an hour and a half to install the 500MB update from my iPhone. That is inexcusable.
  • It’s heavy. I’ve got tiny T-rex arms, but the weight of the Watch on my ulnar styloid (the bump of bone on the outside of your wrist) was painful after only a few minutes.
  • It’s slow. In tandem with my 5s, there are far too many beach balls waiting for apps to launch. This may get better over time as apps are updated for the new version of the OS. But I fear that it may be indicative of the real resource constraints on such a small device. Time will tell.
  • I had hoped that Watch would make my iPhone better – that is, add utility to my phone. Instead, because of the app model, it made my phone’s battery life horrible.
  • The version of the SpringBoard shell used by the Apple Watch is atrocious. I have small fingers, so I don’t have much trouble selecting apps. But the UI of the Watch comes the closest to being the “sea of icons” on iOS that Microsoft derided for so long. Doing anything rapidly on the watch with this UI is… complicated.
  • Too many app developers don’t seem to understand what the Watch is, and is not, ideal for. I guess that’s both a good and bad thing. But to the caveat I mentioned earlier, there are a lot of apps for the Watch – many of which aren’t even on Windows Phone. But there’s a lot of crap – it seems many developers are lost in the wilderness.
  • It shows every single fingerprint you place on the face.
  • The packaging for the Apple Watch is… overwhelming. There’s plastic on plastic on plastic. Wrapping device subcomponents in one-time use plastic is horrifically wasteful.

The former product manager (and former development manager) in me sees how we arrived at this point. The Apple Watch team was established long ago, and started on their project. At one point, pressure from above, from outside, from investors, who knows… forced Apple to push up a launch date. The hardware was reasonably ready. But the software was a hot mess.

Traditionally, Apple excelled when they discarded features that weren’t ready, even if competitors already did them in a half-assed way – winning over consumers by delivering those features later when they’re actually ready. Unfortunately, you often get a product manager in the mix that pushes for a feature, even if it can’t really be implemented well or reliably. The Apple Watch feels like this. It offers a mix of checkbox features that, yes, you can argue, kind of work. But they don’t have the finish that they should. The software doesn’t respect the hardware. In fact, it’s giving a middle finger to the hardware. Even WatchOS 2 fails to deliver adequate finish. The list of features that the Watch promises sound nifty. But actually living with the Watch is disappointing. It isn’t what it should be, given the Apple brand on the outside. I expect better from Apple. Maybe next time.

Aug 15

The curse of the second mover

When I lived in Alaska, there was an obnoxious shirt that I used to see all the time, with a group of sled dogs pictured on it. The cutesy saying on it was, “If you’re not the lead dog, the view never changes.” While driving home last night and considering multiple tech marketplaces today, it came to mind.

Consider the following. If you were:

  1. Building an application for phones and tablets today, whose OS would you build it for first?
  2. Building a peripheral device for smartphones, what device platform would you build it for?
  3. Selling music today, whose digital music store would you make sure it was in first?
  4. Selling a movie today, whose digital video store would you make sure it was in first?
  5. Publishing a book, whose digital book store would you make sure it was in first?

Unless you’ve got a lot of time or money on your hands, and feel like dealing with the bureaucracy of multiple stores, the answer to all of the above is going to be exactly the same.

Except that last one.

If you’re building apps, smartphone peripherals, or selling music or movies, you’re probably building for Apple first. If you’re publishing or self-publishing a book, you’re probably going to Amazon first. One could argue that you might go to Amazon with music or a movie – but I’m not sure that’s true – at least if you wanted to actually sell full-fare copies vs. getting them placed on Prime Music/Prime Instant Video.

The list above doesn’t tell a great tale for second movers. If you’re building a marketplace, you’ve got to offer some form of exceptional value over Apple (or Amazon, for 5) in order to dethrone them. You’ve also got to offer something to consumers to get them to use your technology, and to content purveyors/device manufacturers to get them to invest in your platform(s).

For the first three, Apple won those markets through pure first mover advantage.

The early arrival of the iPhone and iOS, and the premium buyers who purchase them, ensure that 1 & 2 will be answered “Apple”. The early arrival of the iPod, iTunes, and “Steve’s compromise”, allowing iTunes on Windows – as horrible as the software was/is – ensures that iTunes Music is still the answer to 3.

Video is a squishy one – as the market is meandering between streaming content (Netflix/Hulu), over-the-top (OTT) video services like Amazon Instant Video, MLB At Bat, HBO Now, etc., and direct purchase video like iTunes or Google Play. But the wide availability of Apple TV devices, entrenchment of iTunes in the life of lots of music consumers, and disposable income mean that a video content purveyor is highly likely to hit iTunes first – as we often see happen with movies today.

The last one is the most interesting though.

If we look at eBooks, something interesting happened. Amazon wasn’t the first mover – not by a long shot. Microsoft made their Reader software available back in 2000. But their device strategy wasn’t harmonized with the ideas from the team building the software. It was all based around using your desktop (ew), chunky laptop (eventually chunky tablet), or Windows Pocket PC device for reading. Basically, it was trying to sell eBooks as a way to read content on Windows, not really trying to sell eBooks themselves. Amazon revealed their first Kindle in 2007. (This was the first in a line of devices that I personally loathe, because of the screen quality and flicker when you change pages.) Apple revealed the iPad, and rapidly launched iBooks, in 2010, eventually taking it to the iPhone and OS X. But the first two generations of iPad were expensive, chunky devices to try to read on, and iBooks not being available on the iPhone or OS X at launch didn’t help. (Microsoft finally put down the Reader products in 2012, just ahead of the arrival of the best Windows tablets… <sigh/>)

So even though Apple has a strong device story today, and a strong content play in so many other areas, they are (at least) second fiddle in eBooks. They tout strong numbers of active iBooks users… but since every user of iOS and OS X can be an iBooks user, those numbers mean little without book sales numbers behind them. Although Amazon’s value-driven marketplace may not be the healthiest place for authors to publish their wares, it appears to be the number one place by far, without much potential for it to be displaced anytime soon.

If your platform isn’t the leader for a specific type of content, pulling ahead from second place is going to be quite difficult, unless you’ve somehow found a silver bullet. If you’re in third, you have an incredible battle ahead.

Aug 15

Continuum vs. Continuity – Seven letters is all they have in common

It’s become apparent that there’s some confusion between Microsoft’s Continuum feature in Windows 10, and Apple’s Continuity feature in OS X. I’ve even heard technical people get them confused.

But to be honest, the letters comprising “Continu” are basically all they have in common. In addition to their different (but confusingly similar) names, each feature is exclusive to its respective platform, and the two perform completely different tasks – tasks that are interesting to consider in light of how each company makes money.

Apple’s Continuity functionality, which arrived first, on OS X Yosemite late in 2014, allows you to hand off tasks between multiple Apple devices. Start a FaceTime call on your iPhone, finish it on your Mac. Start a Pages document on your Mac, finish it on your iPad. If they’re on the same Wi-Fi network, it “just works”. The Handoff feature that switches between the two devices works by showing an icon for the respective app you were using, which lets you pick up the app on the other device. Switching from iOS to OS X is easy. Going the other way is a pain in the butt, IMHO, largely because of how iOS presents the app icon on the iOS lock screen.

Microsoft’s Continuum functionality, which arrived in one form with Windows 10 in July, and will arrive in a different (yet similar) form with Windows 10 Mobile later this year, lets the OS adapt to the use case of the device you’re on. On Windows 10 PC editions, you can switch Tablet Mode off and on, or if the hardware provides it, it can switch automatically if you allow it. Windows 10 in Tablet Mode is strikingly similar to, but different from, Windows 8.1. Tablet Mode delivers a full-screen Start screen, and full-screen applications by default. Turning Tablet Mode off results in a Start menu and windowed applications, much like Windows 7.

When Windows 10 Mobile arrives later this year, the included incarnation of Continuum will allow phones that support the feature to connect to external displays in a couple of ways. The user will see an experience that will look like Windows 10 with Tablet mode off, and windowed universal apps. While it won’t run legacy Windows applications, this means a Windows 10 Mobile device could act as a desktop PC for a user that can live within the constraints of the Universal application ecosystem.

Both of these pieces of functionality (I’m somewhat hesitant to call either of them “features”, but I digress) provide strategic value for Apple, and Microsoft, respectively. But the value that they provide is different, as I mentioned earlier.

Continuity is sold as a “convenience” feature. But it’s really a great vehicle for hardware lock-in and upsell. It only works with iOS and OS X devices, so it requires that you use Apple hardware and iCloud. In short: Continuity is intended to help sell you more Apple hardware. Shocker, I know.

Continuum, on the other hand, is designed to be more of a “flexibility” feature. It adds value to the device you’re on, even if that is the only Windows device you own. Yes, it’s designed to be a feature that could help sell PCs and phones too – but the value is delivered independently, on each device you own.

With Windows 8.x, your desktop PC had to have the tablet-based features of the OS, even if they worked against your workflow. Your tablet couldn’t adapt well if you plugged it into an external display and tried to use it as a desktop. Your phone was… well… a phone. Continuum is intended to help users make the most of any individual Windows device, however they use it. Want a phone or tablet to be a desktop and act like it? Sure. Want a desktop to deliver a desktop-like experience and a tablet to deliver a tablet-like experience? No problem. Like Continuity, Continuum is platform-specific, and features like Continuum for Windows 10 Mobile will require all-new hardware. This Fall’s hardware season will likely continue to bring many new convertibles that switch automatically, helping to make the most of the feature – and helping to sell new hardware.

Software vendors made Continuity-like functionality before Apple did it, and that’ll surely continue. We’ll see more and more device to device bridging in Android and Windows. However, Apple has an advantage here, with their premium consumer, and owning their entire hardware and software stack.

People have asked me for years if I see Apple making features that look like Continuum. I don’t. At least not by trying to make OS X into iOS. We may see Apple try to bridge the tablet and small laptop market in a few weeks with an iOS device that can act like a laptop, but arguably that customer wouldn’t be a MacBook (Air) customer anyway. It’ll be interesting to see how the iPad evolves/collides into the low-end laptop market.

Hopefully, if you were confused about these two features, this helps clarify what they are – and that they’re actually completely different features, designed to accomplish completely different tasks.

Feb 15

Bring your own stuff – Out of control?

The college I went to had very small cells… I mean dorm rooms. Two people to a small concrete-walled room, with a closet, bed, and desk that mounted to the walls. The RA on my floor (we’ll call him “Roy”) was a real stickler about making us obey the rules – no televisions or refrigerators unless they were rented from the overpriced facility in our dorm. After all, he didn’t want anybody creating a fire hazard.

But in his room? A large bench grinder and a sanding table, among other toys. Perhaps it was a double standard… but he was the boss of the floor – and nobody in the administration knew about it.

Inside of almost every company, there are several types of Roy, bringing in toys that could potentially harm the workplace. Most likely, the harm will come in the form of data loss or a breach, not a fire as it might if they brought in a bench grinder. But I’m really starting to get concerned that too many companies aren’t mindful of the volume of toys that their own Roys have been bringing in.

Basically, there are three types of things that employees are bringing in through rogue or personal purchasing:

  • Smartphones, tablets, and other mobile devices (BYOD)
  • Standalone software as a service
  • Other cloud services

It’s obvious that we’ve moved to a world where employees are often using their own personal phones or tablets for work – whether it becomes their main device or not. But the level of auditing and manageability offered by these devices, and the level of controls that organizations are actively enforcing on them, all leave a lot to be desired. I can’t fathom the number of personal devices today, most of them likely equipped with no passcode or a weak one, that are currently storing documents they shouldn’t be. That document that was supposed to be kept only on the server… that billing spreadsheet with employee salaries or patient SSNs… all stored on someone’s phone, with a horrible PIN if one at all, waiting to be lost or stolen.

Many “freemium” apps/services offer just enough rope for an employee to hang their employer with. Employees sign up with their work credentials and collaborate with colleagues – but management can’t do anything to manage those accounts without (often) paying.

Finally, we have developers and IT admins bringing in what we’ll call “rogue cloud”. Backing up servers to Azure… spinning up VMs in AWS… all with the convenience of a credit card. Employees with the best of intentions can smurf their way through without getting caught by internal procedures or accounting. A colleague tells a story about a CFO asking, “Why are your developers buying so many books?” The CFO was, of course, asking about Amazon Web Services, but had no idea, since the charges were small, irregular amounts every month, across different developers, from Amazon.com. I worry that the move toward “microservices” and cloud will result in stacks that nobody understands, running from on-premises to one or more clouds – without an end-to-end design or security review around them.
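The smurfing pattern in that anecdote – many small, irregular charges that individually slip under review – is straightforward to screen for once you aggregate a ledger by vendor. A minimal illustrative sketch (the thresholds, vendor names, and ledger entries are all made-up assumptions, not real accounting rules):

```python
from collections import defaultdict

def flag_smurfed_vendors(transactions, min_count=5, max_amount=200.0):
    """Flag vendors with many small charges that individually evade
    expense-review thresholds but add up over a period.
    `transactions` is a list of (vendor, amount) tuples."""
    by_vendor = defaultdict(list)
    for vendor, amount in transactions:
        if amount <= max_amount:  # only "small" charges are suspicious here
            by_vendor[vendor].append(amount)
    # Report vendors with at least min_count small charges, and their total.
    return {v: sum(a) for v, a in by_vendor.items() if len(a) >= min_count}

# "Why are your developers buying so many books?"
ledger = [("Amazon.com", 23.50), ("Amazon.com", 61.20), ("Amazon.com", 12.75),
          ("Amazon.com", 88.00), ("Amazon.com", 45.10), ("OfficeMart", 150.00)]
print(flag_smurfed_vendors(ledger))  # flags Amazon.com: five small charges, ~$230 total
```

Even a crude report like this would have surfaced the “books” pattern months earlier; the point is that someone has to be looking at the aggregate at all.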

Whether we’re talking about employees bringing devices, applications, or cloud services, the overarching problem here is the lack of oversight that so many businesses seem to have over these rapidly growing and evolving technologies, and the few working options they have to remediate them. In fact, many freemium services are feeding on this exact problem, and building business models around it. “I’m going to give your employees a tool that will solve a problem they’re having. But in order for you to solve the new problem that your employees will create by using it, you’ll need to buy yet another tool, likely for everybody.”

If you aren’t thinking about the devices, applications, and services that your employees are bringing in without you knowing, or without you managing them, you really might want to go take a look and see what kinds of remodeling they’ve been doing to your infrastructure without you noticing. Want to manage, secure, integrate, audit, review, or properly license the technology your employees are already using? You may need to get your wallet ready.

Dec 14

Mobile devices or cloud as a solution to the enterprise security pandemic? Half right.

This is a response to Steven Sinofsky’s blog post, “Why Sony’s Breach Matters”. While I agree with parts of his thesis – the parts about layers of complexity leaving us where we are, and secure, legacy-free mobile OSes helping alleviate this on the client side – I’m not sure I agree with his points about the cloud being a path forward, at least in any near term, or to the degree of precision he alludes to.

The bad news is that the Sony breach is not unique. Not by a long shot. It’s not the limit. It’s really the beginning. It’s the shot across the bow for companies that will let them see one example of just how bad this can get. Of course, they should’ve been paying attention to Target, Home Depot, Michaels, and more by this point already.

Instead, the Sony breach is emblematic of the security breaking point that has become increasingly visible over the last two years. It would be the limit if the industry turned a corner tomorrow and treated security as its first objective. But it won’t. I believe what I’ve said before – the poor security practices demonstrated by Sony aren’t unique. They’re typical of how too many organizations treat security. Instead of trying to secure systems, they grease the skids just well enough to meet their compliance bar, turning a blind eye to security that’s just “too hard”.

While the FBI has been making the Sony attack sound rather unique, the only unique aspect of this one, IMHO, is the scale of success it appears to have achieved. This same attack could be replayed pretty easily. A dab of social engineering… a selection of well-chosen exploits (they’re not that hard to get), and Windows’ own management infrastructure appears to have been used to distribute it.

I don’t necessarily see cloud computing yet as the holy grail that you do. Mobile? Perhaps.

The personal examples you discussed were all interesting, but indeed indicative of more of a duct-tape approach, similar to what we had to do with some things in Windows XP during the security push that led up to XP SP2, after XP SP1 failed to fill the holes in the hull of the ship. A lot of really key efforts, like running as non-admin, just couldn’t be made to work with XP in a short timeframe – they had to be pushed to Vista (where they honestly still hurt users) or Windows 7, where the effort could be taken to really make them work for users from the ground up. But again, much of this was building foundations around the Win32 legacy, which was getting a bit sickly in a world with ubiquitous networking and everyone running as admin.

I completely agree as well that we’re long past adding speed bumps. It is immediately apparent, based upon almost every breach I can recall over the past year, that management complexity as a security vector played a significant part in each of them.

If you can’t manage it, you can’t secure it. No matter how many compliance regs the government or your industry throws at you. It’s quite the Gordian knot. Fun stuff.

I think we also completely agree about how the surface area exposed by today’s systems is to blame for where we are today as well. See my recent Twitter posts. As I mentioned, “systems inherently grow to become so complex nobody understands them.” – whether you’re talking about programmers, PMs, sysadmins, or compliance auditors.

I’m inclined to agree with your point about social and the vulnerabilities of layer 8, and yet we also do live in a world where most adults know not to stick a fork into an AC outlet. (Children are another matter.)

Technology needs to be more resilient to user-error or malignant exploitation, until we can actually solve the dancing pigs problem where it begins. Mobile solves part of that problem.

When Microsoft was building UAC during the Longhorn -> Vista timeframe, Mark Russinovich and I were both frustrated that Microsoft wasn’t really doing anything with Vista to really nail security down, so we built a whitelisting app at Winternals to do this for Windows moving forward. (Unfortunately, Protection Manager was crushed for parts after our acquisition, and AppLocker was/is too cumbersome to accomplish this for Win32.) Outside of the longshot of ditching the Intel processor architecture completely, whitelisting is the only thing that can save Win32 from the security mayhem it is experiencing at the moment.


I do agree that moving to hosted IaaS really does nothing for an organization, except perhaps drive them to reduce costs in a way that on-premises hosting can’t.

But I guess if there was one statement in particular that I would call out in your blog as something I heartily disagree with, it’s this part:


“Everyone has moved up the stack and as a result the surface area dramatically reduced and complexity removed. It is also a reality that the cloud companies are going to be security first in terms of everything they do and in their ability to hire and maintain the most sophisticated cyber security groups. With these companies, security is an existential quality of the whole company and that is felt by every single person in the entire company.”


This is a wonderful goal, and it’ll be great for startups that have no legacy codebase (and don’t bring in hundreds of open-source or shared libraries that none of their dev team understands down to the bottom of the stack). But most existing companies can’t do what they should, and cut back the overgrowth in their systems.

I believe pretty firmly that what I’ve seen in the industry over the decade since I left Microsoft is also, unfortunately, the norm: management – as demonstrated by Sony’s leadership in that interview – will all too often let costs win over security.


For organizations that can redesign for a PaaS world, the promise offered by Azure was indeed what you’ve suggested – that designing new services and new applications for a Web-first world can lead to much more well-designed, refined, manageable, and securable applications and systems overall. But the problem is that that model only works well for new applications – not applications that stack refinement over legacy goo that nobody understands. So really, clean room apps only.

The slow uptake of Azure’s PaaS offerings unfortunately demonstrates that this is the exception, and an ideal, not necessarily anything that we can expect to see become the norm in the near future.


Also, while Web developers may not be integrating random bits of executable code into their applications, the amount of code reuse across the Web threatens to do the same, although the security perimeter is winnowed down to the browser and PII shared within it. Web developers can and do grab shared .js libraries off the Web in a heartbeat.

Do they understand the perimeter of these files? Absolutely not. No way.

Are the risks here as big as those posed by an unsecured Win32 perimeter? Absolutely not – but I wouldn’t trivialize them either.

There are no more OS hooks, but I’m terrified about how JS is evolving to mimic many of the worst behaviors that Win32 picked up over the years. The surface has changed, as you said – but the risks – loss of personal information, loss of data, phishing, DDoS – are so strikingly similar, especially as we move to a “thicker”, more app-centric Web.


Overall, I think we are in for some changes, and I agree with what I believe you’ve said both in your blog and on Twitter: that modern mobile OSes with a perimeter designed into them are the only safe path forward. The path towards a secure Web application perimeter seems less clear, far less immediate, and perhaps less explicit than your post seemed to allude to.


There is much that organizations can learn from the Sony breach.


But will they?


Dec 14

Shareholder Shackles

Recently, Michael Dell wrote about the after-effects of taking his company private. I think his words are quite telling:

“I’d say we got it right. Privatization has unleashed the passion of our team members who have the freedom to focus first on innovating for customers in a way that was not always possible when striving to meet the quarterly demands of Wall Street.”, and “The single most important thing a company can do is invest and innovate to help customers succeed…”

Early on in my career at Microsoft, executives would often exclaim “our employees are our best asset.” By the time I left in 2004, however, it was pointedly clear that “shareholder value!” was the priority of the day. The problem is, most rank-and-file employees aren’t significant shareholders. In essence, executive leadership’s number one priority wasn’t building great products or retaining great employees, but making money for shareholders. That’s toxic.

I distinctly recall the day in 2003 when SteveB held an all-hands meeting where the move to deliver a dividend was announced for the first time. He was ecstatic, as he should have been. It was a huge jab in the side of institutional investors that had been pushing him to pass the cash hoard on to them. As the second most significant shareholder at the time, he of course reaped a financial windfall.

But most employees? They held some stock, sure. But not massive quantities. So this was, in effect, taking the cash that employees had worked their asses off to earn, and chucking it out at shareholders, whose most significant contribution had been cash, invested to try and get the stock – stuck in a dead calm for years (and for years after) – moving up.

After Steve announced the dividend in the “town hall” meeting that day, he asked if there were any questions from the room full of employees physically present there. There were no questions. Literally zero questions. For some reason, he seemed surprised.

I was watching the event from my office with a colleague, now also separated from Microsoft. I turned to him and asked, “Do you know why there are no questions?” He replied “no”, and I stated, “Because this change he just announced means effectively nothing to more than 95% of the people in that room.”

I’m not a big fan of the stock market – especially on short-term investments. I’m okay with you getting a return on a longer-term investment that you’ve held while a company grows. I think market pressures can lead a company to prioritize and deemphasize the wrong things just to appease the vocal masses. Fire a CEO and lose their institutional knowledge? SURE! (Not that every CEO change is all good or all bad.) Give you the cash instead of investing it in new products, technologies, people and processes to grow the business? SURE! But I’m really not a fan of fair-weather shareholders coming along and pushing for cash back on an investment they just made. Employees sweat their asses off for years building the business in order to get equity that takes years again to vest, and shareholders get the cash for doing almost nothing. Alrighty then. That makes sense.

While Tim Cook has taken some steps to appease certain drive-by activist investors who bloviate about wanting more cash through more significant dividends or bigger buybacks, he has pushed back as well, and has also been explicitly outspoken when people challenge the company’s priorities.

One can argue that Microsoft’s flat stock price from 2001-2013 was the cause of the reprioritization and capitulation to investors, but one can also argue that significant holdings by executives tainted priorities as well, shifting the focus from innovation to shareholder value.

While Microsoft’s financial results do generally continue to move in a positive direction, I personally worry that too much of that growth could be coming from price increases, not net-new sales. It’s always hard to decode which is which, as prices have generally been rising, and the underlying numbers aren’t always terrifically clear (I’m being kind).

As organizations grow, and sales get tight, you have two choices to make money. You 1) get new customers, or 2) charge your existing customers more.

The first position is easy, as long as you’re experiencing organic sales to new customers, or you’re adding new products and services that don’t completely replace, but can and likely do erode, prior products in order to deliver longer-term growth opportunities for the business as a whole.

Most companies, over time, plateau and move into the second position and have to tighten the belt. It just happens. There’s just only so far you can go in terms of obtaining new customers for your existing products and services or building new products and services that risk your existing lines. This is far from unique to Microsoft. It’s a common occurrence. As this article in The New Yorker shows, United is doing this as well (and they’re certainly not alone). Even JetBlue is facing the music and chopping up their previously equitable seating plans to accommodate a push for earnings growth.

Read that last section quoting Hayes very carefully again: “long-term plan to drive shareholder returns through new and existing initiatives.” and “We believe the plan laid out today benefits our three key stakeholders … It delivers improved, sustainable profitability for our investors, the best travel experience for our customers and ensures a strong, healthy company for our crewmembers.”

Just breathe in those priorities for a moment. It’s not about the customers that pay the bills (and he left out “our highest paying” in the statement about customers). It’s not about the employees that keep the planes flying and on time. Nope. It’s about shareholder value. Effectively all about shareholder value. I would argue those priorities are completely ass-backwards. I’m also not sure I concur that it ensures a strong, healthy company for the long term, either. JetBlue has many dedicated fliers due to the distinctly premium, yet price-conscious, product it has delivered from the beginning. JetBlue may now find it very difficult to retain existing customers. Sure, they’ll make money. But a lot of people who used to prefer JetBlue are now likely to not be so preferential.

My personal opinion is that Michael Dell is spot on – the benefit of being a private company is that, now that he survived the ordeal of re-privatizing his company, he can ignore the market at large, and do what’s best for the company. Rather than focusing on short-term goals quarter to quarter, and worrying about a certain year’s fourth quarter being slightly down over the previous year’s, he, his leadership team, and his employees can focus on building products and services that customers will buy because they solve a problem for them.

I worry about a world where the “effectiveness” of a CEO is in any way judged by the stock price. It’s a bullshit measurement. Price growth doesn’t gauge whether the company will be alive or dead in 5, 10, or 15 years. It doesn’t gauge whether a CEO is willing to put a product line on a funeral pyre so a new one can grow in its place. Most importantly, it doesn’t gauge whether a company’s sales pipeline is organically growing or not in any form.

When you focus on just pleasing the cacophony of shareholders, you get hung up on driving earnings up at all costs. This is the price a public company faces.

When you focus on just driving earnings up at all costs, you get hung up on driving numbers that may well not be in line with the long-term goals of your company. This is the price a public company faces.

Build great products and services. Kick ass. Take names. Watch customers buy your tools to solve their problems. When shareholders with no immediate concern for your company other than how you’ll pad their wallet come knocking, as long as you’re making a profit, invest that cash in future growth for your company, and tell them you’re too busy building great things to talk.

Nov 14

Is Office for mobile devices free?

As soon as I saw today’s news, I thought that there would be confusion about what “Office for tablets and smartphones going free” would mean. There certainly has been.

Office for iOS and Android smartphones and tablets is indeed free, within certain bounds. I’m going to attempt to succinctly delineate the cases under which it is, and is not, free.

Office is free for you to use on your smartphone or tablet if, and only if:

  1. You are not using it for commercial purposes.
  2. You are not performing “advanced editing”.

If you want to use the advanced editing features of Office for your smartphone or tablet as defined in the link above, you need one of the following:

  • An Office 365 Personal or Home subscription
  • A commercial Office 365 subscription which includes Office 365 ProPlus (the desktop suite).*

If you’re using Office on your smartphone or tablet for any commercial purpose, you need the following:

  • A commercial Office 365 subscription which includes Office 365 ProPlus (the desktop suite).*
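Distilled, the rules above amount to a small decision procedure. Here’s a minimal sketch in Python; the function name and return strings are mine and purely illustrative – the authoritative terms are in the license agreements themselves:

```python
def office_mobile_license_needed(commercial_use: bool, advanced_editing: bool) -> str:
    """Illustrative summary of the Office for iOS/Android licensing rules.

    Names and return values are this sketch's own, not Microsoft's terms.
    """
    if commercial_use:
        # Any commercial use requires a commercial Office 365 subscription
        # that includes the desktop suite, regardless of editing depth.
        return "commercial Office 365 subscription including the desktop suite"
    if advanced_editing:
        # Consumers only need a subscription for "advanced editing" features.
        return "Office 365 Personal or Home subscription"
    # Non-commercial use with basic editing: free.
    return "free"
```

Note that commercial use trumps everything: even basic edits at work fall outside the free tier.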

For consumers, this change is great, and convenient. You’ll be able to use Office for basic edits on almost any mobile device for free. For commercial organizations, I’m concerned about how they can prevent this becoming a large license compliance issue when employees bring their own iPads in to work.

For your reference, here are the license agreements for Excel for iOS, PowerPoint for iOS, and Word for iOS.

*I wanted to add a footnote here to clarify one ambiguity. The new “Business” Office 365 plans don’t technically include Office 365 ProPlus – they are more akin to an “Office 365 Standard”, but appear to have no overarching branding. Regardless, if you have Office 365 Business or Office 365 Business Premium, which include the desktop suite, you also have rights to the Office mobile applications.

Learn more about how to properly license Office for smartphones and tablets at a Directions on Microsoft Licensing Boot Camp. The next event is in Seattle, Dec. 8-9, 2014. We’ll cover the latest info on Office 365, Windows Per User licensing, and much more.

Oct 14

On the Design of Toasterfridges

On my flight today, I rewatched the documentary Objectified. I’ve seen it a few times before, but it has been several years. While I don’t jibe with 100% of the sentiment of the documentary, it made me think a bit about design, as I was headed to Dallas. In particular, it made me consider Apple, Microsoft, and Google, and their dramatically different approaches to design – which are in fact a reflection of the end goal of each of the companies.

One of my favorite moments in the piece is Jony Ive’s section, early on. I’ve mentioned this one before. If you haven’t read that earlier blog post, you might want to before you read on.

Let’s pause for a moment and consider Apple, Microsoft, and Google. What does each make?

  • Apple – Makes hardware.
  • Microsoft – Makes software.
  • Google – Makes information from data.

Where does each one make the brunt of its money?

  • Apple – Consumer hardware and content.
  • Microsoft – Enterprise software licensing.
  • Google – Advertising.

What does each one want more of from the user?

  • Apple – Buy more of their devices and more content.
  • Microsoft – Use their software, everywhere.
  • Google – Share more of your information.

You can also argue that Apple makes software, Microsoft makes hardware, and Google makes both. Some of you will surely do so. But at the end of the day, software is a hobby for Apple to sell more hardware and content (witness the price of their OS and productivity apps), hardware is a hobby for Microsoft to try and sell more software and content, and hardware and software are both hobbies for Google to try and get you more firmly entrenched into their data ecosystem.

Some people were apparently quite sad that Apple didn’t introduce a ~12” so-called “iPad Pro” at their recent October event. People expecting such a device were hoping for a removable keyboard, perhaps like Microsoft’s Surface (ARM) and Surface Pro (Intel) devices. There were hopes that such a device would be the best of both worlds… a large professional-grade tablet (because those are selling well) and a laptop of sorts, featuring side-by-side application windows, as have been available on Windows nearly forever, and on many Android devices for some time. In many senses, it would be Apple’s own version of the Surface Pro 3 with Windows 8.1 on it. Reporters have insisted, and keep insisting, that Apple’s future will be based upon making a Surface clone of sorts. I’m not so sure.

I have a request for you. Either to yourself, in the comments below, or on Twitter, consider the following. When was the last time (since the era of Steve Jobs’ return) that you saw Apple hardware lean away, in order to let the software compromise it? Certainly, the hardware may defer to the software, as Ive says earlier about the screen and touch on the iPhone; but the role of the hardware is omnipresent – even if you don’t notice it.

I’ve often wondered what Microsoft’s tablets would look like today if Microsoft didn’t own Office as well as Windows; if they weren’t so interested in preserving the role of both at the same time. Could the device have been a pure tablet that deferred to touch, and didn’t try so hard to be a laptop? Could it have done better in such a scenario?

Much has been said about the “lapability” of the Surface family of devices. I really couldn’t disagree more.

More than one person I know has used either a cardboard platform or other… <ahem> surface as a flattop for their Surface to rest upon while sitting on their lap. I’ve seen innumerable reporters contort themselves while sitting in chairs at conferences to balance the device between the ultra-thin keyboards and the kickstand. A colleague recently stopped using his Surface Pro 2 because he was tired of the posture required to use the device while it is on your lap. It may be an acceptable tablet, especially in Surface Pro 3 guise – but I don’t agree that it’s a very good “laptop”.

The younger people that follow me on Twitter or read this blog may not get all of these examples, but hopefully will get several. Consider all of the following devices (that actually existed).

  • TV/VCR combination
  • TV/DVD combination
  • Stand mixers with pasta-making attachments
  • Smart televisions
  • Swiss Army Knife

Each of these devices has something in common. Absent a better name to apply to it, I will call that property toasterfridgality. Sure. Toasterfridge was a slam that Tim Cook came up with to describe Microsoft’s Surface devices. But regardless of the semi-derogatory term, the point is, I believe, valid.

Each of the devices above compromises the integrity with which it performs one or more roles in order to try and perform two or more roles. The same is true of Microsoft’s Surface and Surface Pro line.

For Microsoft, it was imperative that the Surface and Surface Pro devices, while tablets first and foremost (witness the fact that they are sold sans keyboard), be able to run Office and the rest of Win32 that couldn’t be ported in time for Windows 8 – even if it meant a sacrifice of software usability in order to do so. Microsoft’s fixation on selling the devices not as tablets but as laptop replacements (even though they come with no keyboard) leads to a real incongruity. There’s the device Microsoft made, the device consumers want, and the way Microsoft is trying to sell it. Even taking price out of the equation, is there any wonder that Surface sales struggled until Surface Pro 3?

Lenovo more harmoniously balances their toasterfridgality. Their design always seems to focus first on the device being a laptop – then on how to incorporate touch. (And on some models, “tabletude”.) Take, for example, the Lenovo ThinkPad Yoga or Lenovo ThinkPad Helix. These devices are laptops, with a comprehensive hinge that enables them to have some role as a tablet while not completely sacrificing… well… lapability. In short, the focus is on the hinge, not on the keyboard.

To view the other end of the toasterfridge spectrum, check out the Asus Padfone X, a device that tries to be your tablet by glomming on a smartphone. I’m a pretty strong believer that the idea of “cartridge”-style computing isn’t the future, as I’ve also said before. Building devices that integrate with each other to transmogrify into a new role sounds good. But it’s horrible. It results in a device that performs two or more roles, but isn’t particularly good at any of them. It’s a DVD/VCR combo all over again. The phone breaks, and now you don’t have either device anymore. If there were such a model that converted your phone into a desktop, one can only imagine how awesome it would be reporting to work on Monday, having lost your “work brain” by dropping your phone into the river.

I invite you to reconsider the task I asked of you earlier: to tell me where Apple’s hardware defers to the software. Admittedly, one can make the case that Apple is constantly deferring the software to the hardware; just try and find an actual fan of iTunes or the Podcasts app, or witness Apple’s recent software quality issues (a problem not unique to Apple). But software itself isn’t their highest priority; it’s the marriage of that software and the hardware (sometimes compromising them both a bit). Look at the iPhone 6 Plus and the iPad Air 2. Look how Apple moved – or completely removed – switchgear on them to align with both use cases (big phones are held differently) and evolving priorities (switches break, and the role of the side-switch on iOS devices is now completely made redundant by software).

Sidebar: Many people, including me, have complained that iOS devices still start at 16GB of storage. This is ridiculous. With the bloat of iOS, the requirements for upgrading, and any sort of content acquisition by their users, these devices will be junk before the end of CY2016. Apple, of course, has made cohesive design, not upgradeability, paramount in their iOS devices. This has earned them plenty of low scores for repairability and consumer serviceability/upgradeability in reviews. I think it is irresponsible of Apple, given that they have no upgradeability story, to sell these devices with 16GB. The minimum on any new iOS device should be 32GB. The lack of upgradeability, or of the ability to add peripherals, is often touted by those dissing Apple as a limitation of the platform. It’s true. They are limitations. But these limitations, and a tight, cohesive hardware design, are what let these devices have value 4 years after you buy them. I recently got $100 credit from AT&T for my daughter’s iPhone 4 (from June, 2010). A device that I had used for two years, she had used for two more, and it still worked. It was just gasping for air under the weight of iOS 6, let alone iOS 7 (and the iPhone 4 can’t run 8). There is a reason why these devices aren’t upgradeable. Adding upgradeability means building the device with serviceability in mind, and compromising the integrity of the whole device just to make it expandable. I have no issue with Apple making devices non-serviceable by the user for their lifespan, as I believe it tends to result in devices that actually last longer, rather than falling apart when screws unwind and battery or memory doors stop staying seated.

I’ve had several friends mention a few recent tablets and the fact that the USB ports on the devices are very prone to failure. This isn’t news to me. In 2002, when I was working to make Windows boot from USB, I had a Motion Computing M1200 tablet. Due to constant insertion and removal of UFDs for testing and creation, both of the USB ports on the tablet had come unseated from the motherboard and were useless. Motion wanted over $700 to repair a year-old (admittedly somewhat abused) tablet. With <ahem> persuasion from an executive at Microsoft, Motion agreed to repair it for me for free. But this forever highlighted to me that more ports aren’t necessarily something to be looked at in a positive light. The more things you add, the more complex the design becomes, and the more likely it becomes that one of these overwrought features – added to please a product manager with a list of competitive boxes to check – will lead to a disappointed customer, product support issues and costs, or both. USB was never designed to have plugs inserted and removed willy-nilly (as Lightning and the now-dead Apple 30-pin connector were), and I don’t think most boards are manufactured to have devices inserted and removed as often (and perhaps as haphazardly) as they are on modern PC tablets.

Every day, we use things made of components. These aren’t experiences, and they aren’t really even designed (at least not with any kind of cohesive aesthetic). Consider the last time you used a Windows-based ATM or point-of-sale/point-of-service device. It may not seem fair that I’m glomming Windows into this, but Windows XP Embedded helped democratize embedded devices, and allowed cheap devices to handle cash and digital currency, rent DVDs on demand, and power heretofore unimaginable self-service soda fountains.

But there’s a distinct feel of toasterfridge every time I use one of these devices. You feel the sharp edges where the subcomponents it is made of come together (but don’t align) – where the designer compromised the design of the whole in order to accommodate the needs of the subcomponents.

The least favorite device I use with any regularity is the Windows-based ATM at my credit union. It has all of the following components:

  • A display screen (which at least supports touch)
  • An input slot for your ATM/credit/debit card
  • A numeric keypad
  • An input slot for one or more checks or cash
  • An output slot for cash
  • An output slot for receipts

As you use this device, there are a handful of pain points that will start to drive you crazy if you actually consider the way you use it. When I say left or right, I mean in relation to the display.

  • The input slot for your card is on the right side.
  • The input slot for checks is on the left side.
  • The receipt printer is on the right side.
  • The output slots for cash are both below.

Arguably, there is no need for a keypad given that there is a touchscreen; but users with low vision would probably disagree with that. Besides, my credit union has not completely replaced the role of the keypad with the touchscreen. Entering PINs, for example, still requires the keypad.

So to deposit a check, you first put in your card (right), enter your PIN (below), specify your transaction type (on-screen), and deposit a stack of checks (no envelope, which is nice) on the left. Then you wait, get your receipt (top right), and get your card (next down on the right). My favorite part is that the ATM starts beeping at you to retrieve your card before it has released it.

This may all seem like a pedantic rant. But my primary point is that every day, we use devices that prioritize the business needs, requirements, or limitations of their creator or assembler, rather than their end user.

Some say that good design begins with the idea of creating experiences rather than products. I am inclined to agree with this ideology, one that I’ve also evangelized before. But to me, the most important role in designing a product is to pick the thing that your product will do best, and do that thing. If it can easily adapt to take on another role without compromising the first role? Then do that too. If adding the new features means compromising the product? Then it is probably time to make an additional product. I must admit – people who clamor for an Apple iPad Pro that would be a bit of (big) tablet and (small) notebook confuse me a bit. I have a 2013 iPad Retina Mini and a 2013 Retina MacBook Pro. Each device serves a specific purpose and does it exceptionally well.

I write for a living. I can never envision doing that just on an iPad, let alone my Mini (or even without the much larger Acer display that my rMBP connects to). In the same vein, I can’t really visualize myself laying down, turning on some music, and reading an eBook on my Mac. Yes. I had to pay twice to get these two different experiences. But if the alternative is getting a device that compromises both experiences just to save a bit of money? I don’t get that.

Sep 14

On the death of files and folders

As I write this, I’m on a plane at 30,000+ feet, headed to Chicago. Seatmates include a couple from Toronto headed home from a cruise to Alaska. The husband and I talk technology a bit, and he mentions that his wife particularly enjoys sending letters as they travel. He and I both smile as we consider the novelty in 2014 of taking a piece of paper, writing thoughts to friends and family, and putting it in an envelope to travel around the world to be warmly received by the recipient.

Both Windows and Mac computers today are centered around the classic files and folders nomenclature we’ve all worked with for decades. From the beginning of the computer, mankind has struggled to insert metaphors from the physical world into our digital environments. The desktop, the briefcase, files that look like paper, folders that look like hanging file folders. Even today as the use of removable media decreases, we hang on to the floppy diskette icon, a symbol that means nothing to pre-teens of today, to command an application to “write” data to physical storage.


It’s time to stop using metaphors from the physical world – or at least to stop sending “files” to collaborators in order to have them receive work we deign to share with them.

Writing this post involves me eating a bit of crow – but only a bit. Prior to me leaving Microsoft in 2004, I had a rather… heated… conversation with a member of the WinFS team about a topic that is remarkably close to this. WinFS was an attempt to take files as we knew them and treat them as “objects”. In short, WinFS would take the legacy .ppt files as you knew them, and deserialize (decompose) them into a giant central data store within Windows based upon SQL Server, allowing you to search, organize, and move them in an easier manner. But a fundamental question I could never get answered by that team (the core of my heated conversation) was how that data would be shared with people external to your computer. WinFS would always have to serialize the data back out into a .ppt file (or some other “container”) in order to be sent to someone else. The WinFS team sought to convert everything on your system into a URL, as well – so you would have navigated the local file system almost as if your local machine was a Web server rather than using the local file and folder hierarchy that we had all become used to since the earliest versions of Windows or the Mac.
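The round-trip problem at the heart of that heated conversation can be sketched in a few lines of Python. This is purely illustrative – none of these class or method names come from the actual WinFS API:

```python
class ItemStore:
    """A toy stand-in for a WinFS-style object store (hypothetical API)."""

    def __init__(self):
        self.items = {}

    def ingest(self, filename: str, data: bytes):
        # "Deserialize" (decompose) the container file into a queryable item.
        self.items[filename] = {"content": data, "tags": set()}

    def tag(self, filename: str, label: str):
        # Rich metadata is easy to attach once the file is an item...
        self.items[filename]["tags"].add(label)

    def export(self, filename: str) -> bytes:
        # ...but to share with anyone outside the machine, the store must
        # serialize the item back into its legacy container (.ppt, etc.),
        # and the locally attached metadata doesn't travel with it.
        return self.items[filename]["content"]

store = ItemStore()
store.ingest("deck.ppt", b"...slides...")
store.tag("deck.ppt", "Q4 review")
payload = store.export("deck.ppt")  # a plain old .ppt byte stream again
```

The export step is the point: however clever the store, anything leaving the machine had to become an ordinary file again, which is exactly the question that team could never answer for me.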

So as I look back on WinFS, some of the ideas were right, but in classic Microsoft form, at best it may have been a bit of premature innovation, and at worst it may have been nerd porn relatively disconnected from actual user scenarios and use cases.

From the dawn of the iPhone, power users have complained that iOS lacked something as simple as a file explorer/file picker. This wasn’t an error on Apple’s part; a significant percentage of Apple’s ease of use (largely aped by Android, and by Windows, at least in WinRT and Windows Phone applications) comes from abstracting away the legacy file-and-folder bird’s nest of Windows, the Mac, etc.

As we enter the fall cavalcade of consumer devices ahead of the holiday, one truth appears plainly clear: standalone “cloud storage” as we know it is largely headed for the economic off-ramp. The three main platform players have now made cloud storage a platform pillar, not an opportunity to be filled by partners. Apple (iCloud Drive), Google (Google Drive), and Microsoft (OneDrive and OneDrive for Business – their consumer and business offerings, respectively) have all placed it firmly within their respective platforms. Lock-in now isn’t just about the device or the OS; it’s about where your files live, as that can help create a platform network effect (AT&T Friends and Family, but in the cloud). I know for me, my entire family is iOS-based. I can send a link to iCloud Drive files to any member of my family and know they can see the photo I took or the words I wrote.

But that’s just it. Regardless of how my file is stored in Apple’s, Google’s, or Microsoft’s hosted storage, I share it through a link. Every “document” envelope as we knew it in the past is now a URL, with applications on each device capable of opening their file content.

Moreover, today’s worker generally wants their work:

  1. Saved automatically
  2. Backed up to the cloud automatically (within reason, and protected accordingly)
  3. Versioned and revertible
  4. Accessible anywhere
  5. Coauthoring capable (work with one or more colleagues concurrently without needing to save and exchange a “file”)

As these sorts of features become ubiquitous across productivity tools, the line between a “file” and a “URL” becomes increasingly blurred, and our computers start acting, well, just like the WinFS team wanted them to over a decade ago.

If you look at the typical user’s desktop, it’s a dumping ground of documents. It’s a mess. So are their favorites/bookmarks, music, videos, and any other “file type” they have.

On the Mac, iTunes (music metadata), iPhoto (face, EXIF, and date info), and now the Finder itself (properties, and now tags) are a complete mess of metadata. A colleague in the Longhorn Client Product Management Group was responsible for owning the photo experience for WinFS. Even then, I think I crushed his spirit by pointing out what a pain in the ass it was going to be for users to enter all of the metadata for their photos as they returned from trips, in order to make the photos anything more than a digital shoebox that sits under the bed.

I’m going to tell all the nerds in the world a secret. Ready? Users don’t screw around entering metadata. So anything you build that is metadata-centric and doesn’t populate the metadata for the user is… largely unused.

I mention this because, as we move toward vendor-centered repositories of our documents, vendors have an opportunity to do much of what WinFS wanted to do and help users catalog and organize their data; but it has to be done almost automatically for them. I’m somewhat excited about Microsoft’s Delve (née Oslo), primarily because if it is done right (and if/when Google offers a similar feature), users will be able to discover content across the enterprise that can help them with their jobs. The written word will, in so many ways, become a properly archived, searchable, and collaboration-ready tool for businesses (and, ideally, for users themselves).

Part of the direction I think we need to see is tools that become better about organizing and cataloging our information as we create it, and keeping track of the lineage of the written word and digital information. Create a file using a given template? That should be easily visible. Take a trip with family members? Photos should be easily stitched together into a searchable family album.

Power users, of course, want to feel a sense of control over the files and folders on their computing devices (some of them even enjoy filling in metadata fields). These are the same users who complained loudly that iOS didn’t have a Finder or traditional file picker, and who persuaded Microsoft to add a file explorer of sorts to Windows Phone, as Windows 8 and Microsoft’s OneDrive and OneDrive for Business services began blurring out the legacy Windows File Explorer. There’s a good likelihood that next year’s release of Windows 9 could see the legacy Win32 desktop disappear on touch-centric Windows devices (much like Windows Phone 8.x, where Win32 still technically exists but is kept out of view). I firmly expect this move will (to put it gently) irk Windows power users. These are the same type of users who freaked out when Apple removed the Save functionality from Pages/Numbers/Keynote. Yet that approach is now commonplace across the productivity suites of all of the “big three” productivity players (Microsoft, Google, and Apple), where real-time coauthoring requires an abstraction of the traditional “Save” verb we all became used to in the 1980s. For Windows to be as novice-approachable a touch environment as iOS is, it means jettisoning a visible Win32 and the File Explorer. With that, OneDrive and the simplified file pickers in Windows become the centerpiece of how users interact with local files.

I’m not saying that files and folders will disappear tomorrow, or that they’ll ever really disappear entirely. But increasingly, especially in collaboration-based use cases, the file and folder metaphors will fall by the wayside, replaced by Web-based experiences and URLs, with dedicated platform-specific local, mobile, or online apps interacting with them.