22
May 15

Farewell, floppy diskette

I never would have imagined myself in an arm-wrestling match with the floppy disk drive. But sitting where I did in Windows setup, that’s exactly what happened. A few times.

When I started at Microsoft, a boot floppy was critical to setting up a new machine. Not by the time I was in setup, though. Since Remote Installation Services (RIS) could start with a completely blank machine, and you could now boot a system to WinPE using a CD, there were two good-sized nails in the floppy diskette’s coffin.

Windows XP was actually the first version of Windows that didn’t ship with boot floppies. It only shipped with a CD. While you could download a tool that would build boot floppies for you, most computers that XP happily ran on supported CD boot by that time. The writing was on the wall for the floppy diskette. In the months after XP released, Bill Gates made an appearance on the American television sitcom Frasier. Early in the episode, a caller asks whether they need diskettes to install Windows XP. For those of us on the team, it was amusing. Unfortunately, the reality was that behind the scenes, there were some issues with customers whose systems didn’t boot from CD, or didn’t boot properly, anyway. We made it through most of those birthing pains, though.

It was both a bit amusing and a bit frustrating to watch OEMs during the early days of Windows XP; while customers often said, “I want a legacy free system”, they didn’t know what that really meant. By “legacy free”, customers usually meant they wanted to abandon all of the legacy connectors (ports) and peripherals used on computers before USB started to hit its stride with Windows 98.

While USB had replaced serial for mice – which were at one time primarily serial devices – the serial port, parallel port, and floppy disk controller often came integrated together in the computer. We saw some OEMs drop the parallel port, and eventually the floppy diskette, but still include a serial port – at least inside the case – for when you needed to debug the computer. When a Windows machine has software problems, you often hook it up to a debugger, an application on another computer, where a developer can “step through” the programming code to figure out what is misbehaving. When Windows XP shipped, a serial cable connection was the primary way to debug. Tucking that serial port inside the computer’s case made the system seem more legacy free than it actually was – consumers thought it was legacy free when it technically wasn’t. PCs often needed BIOS updates, too – and even when they shipped with Windows XP, these PCs would still usually boot to an MS-DOS diskette in order to update the BIOS.

My arrival in the Windows division was timely; when I started, USB Flash Drives (UFDs) were just beginning to catch on, but had very little storage space, and the cheapest ones were slow and unreliable. 32MB and 64MB drives were around, but still not commonplace. In early 2002, the idea of USB booting an OS began circulating around the Web, and I talked with a few developers within The Firm about it. Unfortunately, there wasn’t a good understanding of what would need to happen for it to work, nor was the UFD hardware really there yet. I tabled the idea for a year, but came back to it every once in a while, trying to research the missing parts.

As I tinkered with it, I found that while many computers supported boot from USB, they only supported USB floppy drives (a ramshackle device that had come about, and largely survived for another 5-10 years, because we were unable to make key changes to Windows that would have helped kill it). I started working with a couple of people around Microsoft to try and glue the pieces together to get WinPE booting from a UFD. I was able to find a PC that would try to boot from the drive, but it failed because the drive wasn’t prepared for boot the way a hard disk normally would be. I worked with a developer from the Windows kernel team and one of our architects to get a disk formatted correctly. Windows didn’t like to format UFDs as bootable because they were removable drives; even Windows to Go in Windows 8.1 today boots from special UFDs which are exceptionally fast, and actually lie to the operating system about being removable disks. Finally, I worked with another developer who knew the USB stack when we hit a few issues booting. By early 2003, we had a pretty reliable prototype that worked on my Motion Computing Tablet PC.
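
For the curious, here’s a minimal sketch of what “prepared for boot the way a hard disk would be” boils down to in practice today – mine for illustration, not anything that shipped. It assumes Windows, an elevated prompt, and that disk 1 really is your UFD (it wipes that disk); Python is only here to drive Windows’ stock diskpart tool.

```python
# A minimal sketch of preparing a UFD to boot like a hard disk:
# a clean partition table, an active primary partition, and a FAT32
# format. Assumes an elevated prompt, and that DISK_NUMBER is the UFD.
# This WIPES that disk -- verify the number with diskpart's "list disk".
import subprocess
import tempfile

DISK_NUMBER = 1  # assumption: change to match your UFD

DISKPART_SCRIPT = f"""\
select disk {DISK_NUMBER}
clean
create partition primary
select partition 1
active
format fs=fat32 quick
assign
"""

def prepare_ufd() -> None:
    # diskpart reads its commands from a script file passed via /s.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(DISKPART_SCRIPT)
        script_path = f.name
    subprocess.run(["diskpart", "/s", script_path], check=True)

if __name__ == "__main__":
    prepare_ufd()
```

From there, writing the boot sector (bootsect.exe handles this on modern systems) and copying the WinPE files onto the volume gets you roughly to where our 2003 prototype landed – a UFD that a BIOS supporting USB hard disk boot will happily start.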

Getting USB boot working with Windows was one of the most enjoyable features I ever worked on, although it wasn’t a formal project in my review goals (brilliant!). USB boot was even fun to talk about, amongst co-workers and Microsoft field employees. You could mention the idea to people and they just got it. We were finally killing the floppy diskette. This was going to be the new way to boot and repair a PC. Evangelists, OEM representatives, and UFD vendors came out of the woodwork to try and help us get the effort tested and working. One UFD manufacturer gave me a stash of 128MB and larger drives – very expensive at the time – to prepare and hand out to major PC OEMs. It gave us a way to test, and gave the UFD vendor some face time with the OEMs.

For a while, I had a shoebox full of UFDs in my office which were used for testing; teammates from the Windows team would often email or stop by asking to get a UFD prepped so they could boot from it. I helped field employees get it working so many times that for a while, my nickname from some in the Microsoft field was “thumbdrive”, one of the many terms used to refer to UFDs.

Though we weren’t able to get UFD booting locked in as an official feature until Windows Vista, OEMs used it before then, and it began to go mainstream. Today, you’d be hard pressed to find a modern PC that can’t boot from UFD, though the experience of getting there is a bit of a pain, since the PC boot experience, even with new EFI firmware, still (frankly) sucks.

Computers boot from their HDD nearly all the time. But when something goes wrong, or you want to reinstall, you have to boot from something else: a UFD, CD/DVD, a PXE server like RIS/WDS, or sometimes an external HDD. Telling your Windows computer what to boot from if something happens is a pain. You have to hit a certain key sequence that is often unique to each OEM. Then you often have to hit yet another key (like F12) to PXE boot. It’s a user experience only a geek could love. One of my ideas was to try and make it easier not only for Windows to update the BIOS itself, but for the user to say what they wanted to boot the PC from – before they shut it down, or by selecting from a pretty list of icons or a set of keys, like Macs can. Unfortunately, this effort largely stalled out for over a decade until Microsoft delivered a better recovery, boot, and firmware experience with their Surface tablets. Time will tell whether we’re headed towards a world where this isn’t such a nuisance anymore.

It’s actually somewhat amusing how much of my work revolved around hardware even though I worked in an area of Windows that only made software. But if there was one commonly requested design change that I wish I could have accommodated but never got done, it was F6 from UFD. Let me explain.

When you install Windows, it attempts to use the drivers it ships with on the CD to begin copying Windows down onto the HDD, or to connect over the network to start setup through RIS.

This approach worked alright, but it had one little problem which became significant. Not long after Windows XP shipped, new categories of networking and storage devices began arriving on high-end computers and rapidly making their way downmarket; these all required new drivers in order for Windows to work. Unfortunately, none of these drivers were “in the box” (on the Windows CD), as we liked to say. While Windows Server had often needed special drivers to install on some high-end storage controllers, this was really a new problem for the Windows consumer client. All of a sudden we didn’t have drivers on the CD for the devices that were shipping on a rapidly increasing number of new PCs.

In other words, even with a new computer and a stock Windows XP CD in your hand, you might never get it working. You needed another computer and a floppy diskette to get the ball rolling.

Early on, Windows XP’s setup asks you to press the keyboard’s F6 function key if you have special drivers to install. If it can’t find the network and you’re installing from CD, you’ll be okay through setup – but then you have no way to add new drivers or connect to Windows Update. If you were installing through RIS and you had no appropriate network driver, setup would fail. Similarly, if you had no driver for the storage controller on your PC, it wouldn’t ever find a HDD where it could install Windows – so it would terminally fail too. It wasn’t pretty.

Here’s where it gets ugly. As I mentioned, we were entering an era where OEMs wanted to ship, and often were shipping, those legacy-free PCs. These computers often had no built-in floppy diskette – which was the only place we could look for F6 drivers at the time. As a result, not long after we shipped Windows XP, we got a series of design change requests (DCRs) from OEMs and large customers to make it so Windows setup could search any attached UFD for drivers as well. While this idea sounds easy, it isn’t. It meant adding USB code to the Windows kernel so it could search for the drives very early on, before Windows itself had actually loaded and started the normal USB stack. While we could consider doing this for a full release of Windows, it wasn’t something that we could easily do in a service pack – and all of this came to a head in 2002.

Dell was the first company to ever request that we add UFD F6 support. I worked with the kernel team, and we had to say no – the risk of breaking a key part of Windows setup was too great for a service pack or a hotfix, because of the complexity of the change, as I mentioned. Later, a very large bank requested it as well, and we had to say no again. In a twist of fate, at Winternals I would later become friends with one of the people who had triggered that request, back when he was working on a project onsite at that bank.

Not adding UFD F6 support was, I believe, a mistake. I should have pushed harder, and we should have bitten the bullet and tested it. Because we didn’t, a weird little cottage industry of USB floppy diskette drives continued for probably a decade longer than it should have.

So it was, several years after I left, that the much-maligned Windows Vista brought both USB boot of WinPE and USB F6 support, so you could install the operating system on hardware whose drivers were newer than Windows XP, and not need a floppy diskette drive to get through setup.

As I sit here writing this, it’s interesting to consider the death of CD/DVD media (“shiny media”, as I often call it) on mainstream computers today. When Apple dropped shiny media on the MacBook Air, people called them nuts – much as they did when Apple dropped the floppy diskette on the original iMac years before. As tablets and Ultrabooks have finally dropped shiny media drives, there’s an odd echo of the floppy drive from years ago. Where external floppy drives were needed for specific scenarios (recovery and deployment), external shiny media drives are still used today for movies, some storage, and installation of legacy software. But in a few years, shiny media will be all but dead – replaced by ubiquitous high-speed wired and wireless networking and pervasive USB storage. Funny to see the circle completed.


21
May 15

Comments closed

I’m tired of filtering out spam from the comments. As a result, if you want to comment on a post, find me on Twitter.

Thanks for reading.


12
Feb 15

Bring your own stuff – Out of control?

The college I went to had very small cells… I mean dorm rooms. Two people to a small concrete-walled room, with a closet, bed, and desk that mounted to the walls. The RA on my floor (we’ll call him “Roy”) was a real stickler about making us obey the rules – no televisions or refrigerators unless they were rented from the overpriced facility in our dorm. After all, he didn’t want anybody creating a fire hazard.

But in his room? A large bench grinder and a sanding table, among other toys. Perhaps it was a double standard… but he was the boss of the floor – and nobody in the administration knew about it.

Inside of almost every company, there are several types of Roy, bringing in toys that could potentially harm the workplace. Most likely, the harm will come in the form of data loss or a breach, not a fire, as it might if they brought in a bench grinder. But I’m really starting to get concerned that too many companies aren’t mindful of the volume of toys that their own Roys have been bringing in.

Basically, there are three types of things that employees are bringing in through rogue or personal purchasing:

  • Smartphones, tablets, and other mobile devices (BYOD)
  • Standalone software as a service
  • Other cloud services

It’s obvious that we’ve moved to a world where employees are often using their own personal phones or tablets for work – whether it becomes their main device or not. But the level of auditing and manageability offered by these devices, and the level of controls that organizations are actively enforcing on them, all leave a lot to be desired. I can’t fathom the number of personal devices today, most of them likely equipped with no passcode or a weak one, that are currently storing documents that they shouldn’t be. That document that was supposed to be kept only on the server… That billing spreadsheet with employee salaries or patient SSNs… all stored on someone’s phone, with a horrible PIN if one at all, just waiting to be lost or stolen.

Many “freemium” apps/services offer just enough rope for an employee to hang their employer with. Employees sign up with their work credentials and work with colleagues – but management cannot do anything to manage those accounts without (often) paying.

Finally, we have developers and IT admins bringing in what we’ll call “rogue cloud”. Backing up servers to Azure… spinning up VMs in AWS… all with the convenience of a credit card. Employees with the best of intentions can smurf their way through without getting caught by internal procedures or accounting. A colleague tells a story about a CFO asking, “Why are your developers buying so many books?” The CFO was, of course, asking about Amazon Web Services, but had no idea, since the charges were small, irregular amounts every month, across different developers, all from Amazon.com. I worry that the move towards “microservices” and cloud will result in stacks that nobody understands, running from on-premises to one or more clouds – without an end-to-end design or security review around them.
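
To make the “books” anecdote concrete, here’s a toy sketch of the kind of vendor roll-up that would have answered that CFO’s question; the records, field names, and thresholds are all fabricated for illustration.

```python
# Toy illustration: each charge below looks like an employee buying
# "books"; rolled up by vendor across employees and months, the rogue
# cloud spend stands out. All data here is fabricated.
from collections import defaultdict

expenses = [
    ("dev1", "Amazon.com", "2015-01", 23.17),
    ("dev2", "Amazon.com", "2015-01", 41.80),
    ("dev1", "Amazon.com", "2015-02", 19.04),
    ("dev3", "Amazon.com", "2015-02", 37.55),
    ("dev4", "OfficeMax", "2015-02", 214.99),
]

totals = defaultdict(lambda: {"amount": 0.0, "employees": set(), "months": set()})
for employee, vendor, month, amount in expenses:
    totals[vendor]["amount"] += amount
    totals[vendor]["employees"].add(employee)
    totals[vendor]["months"].add(month)

# Flag small-but-recurring charges spread across several people.
for vendor, t in totals.items():
    if len(t["employees"]) >= 3 and len(t["months"]) >= 2:
        print(f"Possible rogue cloud spend: {vendor} – "
              f"${t['amount']:.2f} across {len(t['employees'])} employees")
```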

Whether we’re talking about employees bringing devices, applications, or cloud services, the overarching problem here is the lack of oversight that so many businesses seem to have over these rapidly growing and evolving technologies, and the few working options they have to remediate them. In fact, many freemium services are feeding on this exact problem, and building business models around it. “I’m going to give your employees a tool that will solve a problem they’re having. But in order for you to solve the new problem that your employees will create by using it, you’ll need to buy yet another tool, likely for everybody.”

If you aren’t thinking about the devices, applications, and services that your employees are bringing in without you knowing, or without you managing them, you really might want to go take a look and see what kinds of remodeling they’ve been doing to your infrastructure without you noticing. Want to manage, secure, integrate, audit, review, or properly license the technology your employees are already using? You may need to get your wallet ready.


24
Dec 14

Mobile devices or cloud as a solution to the enterprise security pandemic? Half right.

This is a response to Steven Sinofsky’s blog post, “Why Sony’s Breach Matters”. While I agree with parts of his thesis – the parts about layers of complexity leaving us where we are, and about secured, legacy-free mobile OS’s helping alleviate this on the client side – I’m not sure I agree with his points about the cloud being a path forward, at least in any near term, or to the degree of precision he alludes to.

The bad news is that the Sony breach is not unique. Not by a long shot. It’s not the limit. It’s really the beginning – the shot across the bow that lets companies see one example of just how bad this can get. Of course, they should’ve been paying attention to Target, Home Depot, Michaels, and more by this point already.

Instead, the Sony breach is emblematic of the security breaking point that has become increasingly visible over the last two years. It would be the limit if the industry turned a corner tomorrow and treated security as its first objective. But it won’t. I believe what I’ve said before – the poor security practices demonstrated by Sony aren’t unique. They’re typical of how too many organizations treat security. Instead of trying to secure systems, they grease the skids just well enough to meet their compliance bar, turning a blind eye to security that’s just “too hard”.

While the FBI has been making the Sony attack sound rather unique, the only unique aspect of this one, IMHO, is the scale of success it appears to have achieved. This same attack could be replayed pretty easily: a dab of social engineering… a selection of well-chosen exploits (they’re not that hard to get)… and Windows’ own management infrastructure, which appears to have been used to distribute it.

I don’t necessarily see cloud computing yet as the holy grail that you do. Mobile? Perhaps.

The personal examples you discussed were all interesting, but indeed indicative of more of a duct-tape approach, similar to what we had to do with some things in Windows XP during the security push that led up to XPSP2, after XPSP1 failed to fill the holes in the hull of the ship. A lot of really key efforts, like running as non-admin, just couldn’t be made to work with XP in a short timeframe – they had to be pushed to Vista (where they honestly still hurt users) or Windows 7, where the effort could be taken to really make them work for users from the ground up. But again, much of this was building foundations around the Win32 legacy, which was getting a bit sickly in a world with ubiquitous networking and everyone running as admin.

I completely agree as well that we’re long past adding speed bumps. It is immediately apparent, based upon almost every breach I can recall over the past year, that management complexity as a security vector played a significant part in each of them.

If you can’t manage it, you can’t secure it. No matter how many compliance regs the government or your industry throws at you. It’s quite the Gordian knot. Fun stuff.

I think we also completely agree that the surface area exposed by today’s systems is to blame for where we are today. See my recent Twitter posts. As I mentioned, “systems inherently grow to become so complex nobody understands them” – whether you’re talking about programmers, PMs, sysadmins, or compliance auditors.

I’m inclined to agree with your point about social and the vulnerabilities of layer 8, and yet we also do live in a world where most adults know not to stick a fork into an AC outlet. (Children are another matter.)

Technology needs to be more resilient to user error or malignant exploitation, until we can actually solve the dancing pigs problem where it begins. Mobile solves part of that problem.

When Microsoft was building UAC during Longhorn -> Vista, Mark Russinovich and I were both frustrated that Microsoft wasn’t really doing anything with Vista to nail security down, so we built a whitelisting app at Winternals to do this for Windows moving forward. (Unfortunately, Protection Manager was crushed for parts after our acquisition, and AppLocker was/is too cumbersome to accomplish this for Win32.) Outside of the longshot of ditching the Intel processor architecture completely, whitelisting is the only thing that can save Win32 from the security mayhem it is experiencing at the moment.

I do agree that moving to hosted IaaS really does nothing for an organization, except perhaps drive them to reduce costs in a way that on-premises hosting can’t.

But I guess if there was one statement in particular that I would call out in your blog as something I heartily disagree with, it’s this part:

“Everyone has moved up the stack and as a result the surface area dramatically reduced and complexity removed. It is also a reality that the cloud companies are going to be security first in terms of everything they do and in their ability to hire and maintain the most sophisticated cyber security groups. With these companies, security is an existential quality of the whole company and that is felt by every single person in the entire company.”

This is a wonderful goal, and it’ll be great for startups that have no legacy codebase (and don’t bring in hundreds of open-source or shared libraries that none of their dev team understands down to the bottom of the stack). But most existing companies can’t do what they should and cut back the overgrowth in their systems.

I believe pretty firmly that what I’ve seen in the industry over the decade since I left Microsoft is also, unfortunately, the norm: management – as demonstrated by Sony’s leadership in that interview – will all too often let costs win over security.

For organizations that can redesign for a PaaS world, the promise offered by Azure was indeed what you’ve suggested – that designing new services and new applications for a Web-first world can lead to much better-designed, refined, manageable, and securable applications and systems overall. But the problem is that that model only works well for new applications – not applications that stack refinement over legacy goo that nobody understands. So really, clean-room apps only.

The slow uptake of Azure’s PaaS offerings unfortunately demonstrates that this is the exception – an ideal, not necessarily anything that we can expect to see become the norm in the near future.

Also, while Web developers may not be integrating random bits of executable code into their applications, the amount of code reuse across the Web threatens to do the same, although the security perimeter is winnowed down to the browser and PII shared within it. Web developers can and do grab shared .js libraries off the Web in a heartbeat.

Do they understand the perimeter of these files? Absolutely not. No way.

Are the risks here as big as those posed by an unsecured Win32 perimeter? Absolutely not – but I wouldn’t trivialize them either.

There are no more OS hooks, but I’m terrified about how JS is evolving to mimic many of the worst behaviors that Win32 picked up over the years. The surface has changed, as you said – but the risks – loss of personal information, loss of data, phishing, DDoS – are so strikingly similar, especially as we move to a “thicker”, more app-centric Web.

Overall, I think we are in for some changes, and I agree with what I believe you’ve said both in your blog and on Twitter, that modern mobile OS’s with a perimeter designed in them are the only safe path forward. The path towards a secure Web application perimeter seems less clear, far less immediate, and perhaps less explicit than your post seemed to allude to.

There is much that organizations can learn from the Sony breach.

But will they?


15
Dec 14

Who shot Sony?

I’m curious about the identity of the group that broke in to Sony, apparently caused massive damage, and compromised a considerable amount of information that belongs to the company.

For some reason, journalists aren’t focusing on this, however. Probably because it doesn’t generate the clicks and ad views that publishing embarrassing emails, salary disclosures, and documented poor security practices do. Instead, they’re primarily focusing on revealing Sony’s confidential information, conveniently provided in multiple, semi-regular doc dumps by the party behind the breach.

Sony’s lawyers recently sent several publications a cease & desist letter to get reporters to stop publishing the leaked information, since Sony “does not consent to your possession, review, copying, dissemination, publication, uploading, downloading or making any use” of the documents. There’s been quite a stir that in doing this, Sony is likely invoking the Streisand effect, and the letter will probably backfire, resulting in more, not less, coverage of the information.

In information available long before the breach, Sony’s executive director of information security was quoted as saying, “it’s a valid business decision to accept the risk” of a security breach. “I will not invest $10 million to avoid a possible $1 million loss.” Given that sort of security posture, it’s not surprising that even though he was able to talk an auditor out of dinging them for SOX compliance, Sony organizations have faced not one, but two rather devastating hacks in recent years.

So it would seem that Sony’s management is likely to blame for leaving doors open by reinforcing poor security practices and actually fighting back against well-intentioned compliance efforts (thus proving what I’ve long said, “Compliance and security can go hand in hand. But security is never achieved by stamping a system as ‘compliant’.”)

It’s also obvious that the group that hacked into Sony (perhaps with the assistance of existing or previous employees), compromised confidential information, and destroyed systems deserves a huge amount of blame for the negative effects Sony is currently experiencing. Again, if Sony had proper security in place (and execs more interested in security than rubber-stamping systems), perhaps these people wouldn’t have stood a chance. In terms of media coverage, this is what I’d like to know more about. Who actually broke in?

However, years from now, when people are looking back at the broad damage caused by the breach and the leaked information, I believe it’ll be important to really note who caused the most damage to Sony over the long run. Yes, the people who broke in started it all. But the journalists taking advantage of the document dumps have caused, and will continue to cause, significant damage to Sony. For myself, from now on, I’m only linking to and reposting articles that appear to be using information that has not been sourced from the breach.

I’m no longer feeding the clickbait machine that enthusiastically awaits the next doc drop of Sony confidential information, like a vulture ready to pick at them while they’re weak and expose the inner dysfunction of an organization (not something unique to Sony – every org has some level of dysfunction).

On Twitter this morning, I pondered whether the NYT would be so enthusiastic and supportive about the journalistic value of confidential info regularly pushed out by hackers if they themselves had been breached, and it was their secrets, their dysfunction, their personal information, their source lists that were being taken advantage of to generate ad views.

For some reason, I have to think the answer is no. So why are journalists so enthusiastic about kicking Sony while they’re down after a breach?


03
Dec 14

Shareholder Shackles

Recently, Michael Dell wrote about the after-effects of taking his company private. I think his words are quite telling:

“I’d say we got it right. Privatization has unleashed the passion of our team members who have the freedom to focus first on innovating for customers in a way that was not always possible when striving to meet the quarterly demands of Wall Street.”, and “The single most important thing a company can do is invest and innovate to help customers succeed…”

Early on in my career at Microsoft, executives would often exclaim “our employees are our best asset.” By the time I left in 2004, however, it was pointedly clear that “shareholder value!” was the priority of the day. Problem is, most rank-and-file employees aren’t significant shareholders. In essence, executive leadership’s number one priority wasn’t building great products or retaining great employees, but making money for shareholders. That’s toxic.

I distinctly recall the day in 2003 when SteveB held an all-hands meeting where the move to deliver a dividend was announced for the first time. He was ecstatic, as he should have been. It was a huge jab in the side of institutional investors that had been pushing him to pass the cash hoard on to them. As the second most significant shareholder at the time, he of course enjoyed a financial windfall.

But most employees? They held some stock, sure. But not massive quantities. So this was, in effect, taking the cash that employees had worked their asses off to earn, and chucking it out at shareholders – whose most significant contribution had been cash invested to try and get the stock, stuck in a dead calm for years (and for years after), moving up.

After Steve announced the dividend in the “town hall” meeting that day, he asked if there were any questions from the room full of employees physically present there. There were no questions. Literally zero questions. For some reason, he seemed surprised.

I was watching the event from my office with a colleague, now also separated from Microsoft. I turned to him and asked, “Do you know why there are no questions?” He replied “no”, and I stated, “Because this change he just announced means effectively nothing to more than 95% of the people in that room.”

I’m not a big fan of the stock market – especially on short-term investments. I’m okay with you getting a return on a longer-term investment that you’ve held while a company grows. I think market pressures can lead a company to prioritize and deemphasize the wrong things just to appease the vocal masses. Fire a CEO and lose their institutional knowledge? SURE! (Not that every CEO change is all good or all bad.) Give you the cash instead of investing it in new products, technologies, people and processes to grow the business? SURE! But I’m really not a fan of fair-weather shareholders coming along and pushing for cash back on an investment they just made. Employees sweat their asses off for years building the business in order to get equity that takes years again to vest, and shareholders get the cash for doing almost nothing. Alrighty then. That makes sense.

While Tim Cook has taken some steps to appease certain drive-by activist investors who bloviate about wanting more cash through more significant dividends or bigger buybacks, he has pushed back as well, and has also been explicitly outspoken when people challenge the company’s priorities.

One can argue that Microsoft’s flat stock price from 2001-2013 was the cause of the reprioritization and capitulation to investors, but one can also argue that significant holdings by executives could have tainted the priorities, shifting the focus from innovation to shareholder value.

While Microsoft’s financial results do generally continue to move in a positive direction, I personally worry that too much of that growth could be coming in part from price increases, not net-new sales. It’s always hard to tell which is which, as prices have generally been rising, and the underlying numbers generating them aren’t always terrifically clear to decode (I’m being kind).

As organizations grow, and sales get tight, you have two choices to make money. You 1) get new customers, or 2) charge your existing customers more.

The first position is easy, as long as you’re experiencing organic sales to new customers, or you’re adding new products and services that don’t completely replace, but can and likely do erode, prior products in order to deliver longer-term growth opportunities for the business as a whole.

Most companies, over time, plateau, move into the second position, and have to tighten the belt. It just happens. There’s only so far you can go in terms of obtaining new customers for your existing products and services, or building new products and services that risk your existing lines. This is far from unique to Microsoft. It’s a common occurrence. As this article in The New Yorker shows, United is doing this as well (and they’re certainly not alone). Even JetBlue is facing the music and chopping up their previously equitable seating plans to accommodate a push for earnings growth.

Read that last section quoting Hayes very carefully again: “long-term plan to drive shareholder returns through new and existing initiatives.” and “We believe the plan laid out today benefits our three key stakeholders … It delivers improved, sustainable profitability for our investors, the best travel experience for our customers and ensures a strong, healthy company for our crewmembers.”

Just breathe in those priorities for a moment. It’s not about the customers that pay the bills (and he left out “our highest-paying” in the statement about customers). It’s not about the employees that keep the planes flying and on time. Nope. It’s about shareholder value. Effectively all about shareholder value. I would argue those priorities are completely ass-backwards. I’m also not sure I concur that it ensures a strong, healthy company for the long term, either. JetBlue has many dedicated fliers due to the distinct premium, yet price-conscious, product it has delivered from the beginning. I think JetBlue will find it very difficult to retain those existing customers. Sure, they’ll make money. But a lot of people who used to prefer JetBlue are now likely to not be so preferential.

My personal opinion is that Michael Dell is spot on – the benefit of being a private company is that, now that he survived the ordeal of re-privatizing his company, he can ignore the market at large, and do what’s best for the company. Rather than focusing on short-term goals quarter to quarter, and worrying about a certain year’s fourth quarter being slightly down over the previous year’s, he, his leadership team, and his employees can focus on building products and services that customers will buy because they solve a problem for them.

I worry about a world where the “effectiveness” of a CEO is in any way judged by the stock price. It’s a bullshit measurement. Price growth doesn’t gauge whether the company will be alive or dead in 5, 10, or 15 years. It doesn’t gauge whether a CEO is willing to put a product line on a funeral pyre so a new one can grow in its place. Most importantly, it doesn’t gauge whether a company’s sales pipeline is organically growing or not in any form.

When you focus on just pleasing the cacophony of shareholders, you get hung up on driving earnings up at all costs. This is the price a public company faces.

When you focus on just driving earnings up at all costs, you get hung up on driving numbers that may well not be in line with the long-term goals of your company. This is the price a public company faces.

Build great products and services. Kick ass. Take names. Watch customers buy your tools to solve their problems. When shareholders with no immediate concern for your company other than how you’ll pad their wallet come knocking, as long as you’re making a profit, invest that cash in future growth for your company, and tell them you’re too busy building great things to talk.


06
Nov 14

Is Office for mobile devices free?

As soon as I saw today’s news, I thought that there would be confusion about what “Office for tablets and smartphones going free” would mean. There certainly has been.

Office for iOS and Android smartphones and tablets is indeed free, within certain bounds. I’m going to attempt to succinctly delineate the cases under which it is, and is not, free.

Office is free for you to use on your smartphone or tablet if, and only if:

  1. You are not using it for commercial purposes
  2. You are not performing “advanced editing”.

If you want to use the advanced editing features of Office for your smartphone or tablet as defined in the link above, you need one of the following:

  • An Office 365 Personal or Home subscription
  • A commercial Office 365 subscription which includes Office 365 ProPlus (the desktop suite).*

If you’re using Office on your smartphone or tablet for any commercial purpose, you need the following:

  • A commercial Office 365 subscription which includes Office 365 ProPlus (the desktop suite).*
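
If it helps to see those rules spelled out, here’s a toy sketch of my reading of them – a summary for illustration, not official licensing guidance from Microsoft:

```python
def office_mobile_license(commercial_use: bool, advanced_editing: bool) -> str:
    """My reading of the rules above -- a toy summary, not official guidance."""
    if commercial_use:
        # Any commercial use requires a commercial subscription that
        # includes the desktop suite (Office 365 ProPlus or equivalent).
        return "commercial Office 365 subscription including the desktop suite"
    if advanced_editing:
        # Consumers unlocking advanced editing need Personal or Home.
        return "Office 365 Personal or Home subscription"
    return "free"

print(office_mobile_license(False, False))  # free
print(office_mobile_license(False, True))   # Office 365 Personal or Home subscription
print(office_mobile_license(True, False))   # commercial subscription w/ desktop suite
```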

For consumers, this change is great, and convenient. You’ll be able to use Office for basic edits on almost any mobile device for free. For commercial organizations, I’m concerned about how they can prevent this from becoming a large license compliance issue when employees bring their own iPads in to work.

For your reference, here are the license agreements for Excel for iOS, PowerPoint for iOS, and Word for iOS.

*I wanted to add a footnote here to clarify one vagary. The new “Business” Office 365 plans don’t technically include Office 365 ProPlus – they are more akin to an “Office 365 Standard”, but there appears to be no overarching branding. Regardless, if you have Office 365 Business or Office 365 Business Premium, which include the desktop suite, you also have rights to the Office mobile applications.

Learn more about how to properly license Office for smartphones and tablets at a Directions on Microsoft Licensing Boot Camp. Next event is Seattle, on Dec. 8-9, 2014. We’ll cover the latest info on Office 365, Windows Per User licensing, and much more.


19
Oct 14

On the Design of Toasterfridges

On my flight today, I rewatched the documentary Objectified. I’ve seen it a few times before, but it has been several years. While I don’t jibe with 100% of the sentiment of the documentary, it made me think a bit about design, as I was headed to Dallas. In particular, it made me consider Apple, Microsoft, and Google, and their dramatically different approaches to design – which are in fact a reflection of the end goal of each of the companies.

One of my favorite moments in the piece is Jony Ive’s section, early on. I’ve mentioned this one before. If you haven’t read that earlier blog post, you might want to before you read on.

Let’s pause for a moment and consider Apple, Microsoft, and Google. What does each make?

  • Apple – Makes hardware.
  • Microsoft – Makes software.
  • Google – Makes information from data.

Where does each one make the brunt of its money?

  • Apple – Consumer hardware and content.
  • Microsoft – Enterprise software licensing.
  • Google – Advertising.

What does each one want more of from the user?

  • Apple – Buy more of their devices and more content.
  • Microsoft – Use their software, everywhere.
  • Google – Share more of your information.

You can also argue that Apple makes software, Microsoft makes hardware, and Google makes both. Some of you will surely do so. But at the end of the day, software is a hobby for Apple to sell more hardware and content (witness the price of their OS and productivity apps), hardware is a hobby for Microsoft to try and sell more software and content, and hardware and software are both hobbies for Google to try and get you more firmly entrenched into their data ecosystem.

Some people were apparently quite sad that Apple didn’t introduce a ~12” so-called “iPad Pro” at their recent October event. People expecting such a device were hoping for a removable keyboard, perhaps like Microsoft’s Surface (ARM) and Surface Pro (Intel) devices. There were hopes that such a device would be the best of both worlds… a large professional-grade tablet (because those are selling well) and a laptop of sorts, and that it would feature side-by-side application windows, as have been available on Windows nearly forever, and on many Android devices for some time. In many senses, it would be Apple’s own version of the Surface Pro 3 with Windows 8.1 on it. Reporters have insisted, and keep insisting, that Apple’s future will be based upon making a Surface clone of sorts. I’m not so sure.

I have a request for you. Either to yourself, in the comments below, or on Twitter, consider the following. When was the last time (since the era of Steve Jobs’ return) that you saw Apple hardware lean away, in order to let the software compromise it? Certainly, the hardware may defer to the software, as Ive says earlier about the screen and touch on the iPhone; but the role of the hardware is omnipresent – even if you don’t notice it.

I’ve often wondered what Microsoft’s tablets would look like today if Microsoft didn’t own Office as well as Windows; if they weren’t so interested in preserving the role of both at the same time. Could the device have been a pure tablet that deferred to touch, and didn’t try so hard to be a laptop? Could it have done better in such a scenario?

Much has been said about the “lapability” of the Surface family of devices. I really couldn’t disagree more.

More than one person I know has used either a cardboard platform or other… <ahem> surface as a flattop for their Surface to rest upon while sitting on their lap. I’ve seen innumerable reporters contort themselves while sitting in chairs at conferences to balance the device between the ultra-thin keyboards and the kickstand. A colleague recently stopped using his Surface Pro 2 because he was tired of the posture required to use the device while it is on your lap. It may be an acceptable tablet, especially in Surface Pro 3 guise – but I don’t agree that it’s a very good “laptop”.

The younger people that follow me on Twitter or read this blog may not get all of these examples, but hopefully will get several. Consider all of the following devices (that actually existed).

  • TV/VCR combination
  • TV/DVD combination
  • Stand mixers with pasta-making attachments
  • Smart televisions
  • Swiss Army Knife

Each of these devices has something in common. Absent a better name to apply to it, I will call that property toasterfridgality. Sure. Toasterfridge was a slam that Tim Cook came up with to describe Microsoft’s Surface devices. But regardless of the semi-derogatory term, the point is, I believe, valid.

Each of the devices above compromises the integrity with which it performs one or more roles in order to try and perform two or more roles. The same is true of Microsoft’s Surface and Surface Pro line.

For Microsoft, it was imperative that the Surface and Surface Pro devices, while tablets first and foremost (witness the fact that they are sold sans keyboard), be able to run Office and the rest of Win32 that couldn’t be ported in time for Windows 8 – even if it meant a sacrifice of software usability in order to do so. Microsoft’s fixation on selling the devices not as tablets but as laptop replacements (even though they come with no keyboard) leads to a real incongruity. There’s the device Microsoft made, the device consumers want, and the way Microsoft is trying to sell it. Even taking price out of the equation, is there any wonder that Surface sales struggled until Surface Pro 3?

Lenovo more harmoniously balances their toasterfridgality. Their design always seems to focus first on the device being a laptop – then on how to incorporate touch. (And on some models, “tabletude”.) Take, for example, the Lenovo ThinkPad Yoga or Lenovo ThinkPad Helix. These devices are laptops, with a comprehensive hinge that enables them to have some role as a tablet while not completely sacrificing… well… lapability. In short, the focus is on the hinge, not on the keyboard.

To view the other end of the toasterfridge spectrum, check out the Asus Padfone X, a device that tries to be your tablet by glomming on a smartphone. I’m a pretty strong believer that the idea of “cartridge”-style computing isn’t the future, as I’ve also said before. Building devices that integrate with each other to transmogrify into a new role sounds good. But it’s horrible. It results in a device that performs two or more roles, but isn’t particularly good at any of them. It’s a DVD/VCR combo all over again. The phone breaks, and now you don’t have either device anymore. If there were such a model that converted your phone into a desktop, one can only imagine how awesome it would be reporting to work on Monday, having lost your “work brain” by dropping your phone into the river.

I invite you to reconsider the task I asked of you earlier, to tell me where Apple’s hardware defers to the software. Admittedly, one can make the case that Apple is constantly deferring the software to the hardware; just try and find an actual fan of iTunes or the Podcasts app, or witness Apple’s recent software quality issues (a problem not unique to Apple). But software itself isn’t their highest priority; it’s the marriage of that software and the hardware (sometimes compromising them both a bit). Look at the iPhone 6 Plus and the iPad Air 2. Look how Apple moved – or completely removed – switchgear on them to align with both use cases (big phones are held differently) and evolving priorities (switches break, and the role of the side switch in iOS devices is now completely made redundant by software).

Sidebar: Many people, including me, have complained that iOS devices start at 16GB of storage now. This is ridiculous. With the bloat of iOS, requirements for upgrading, and any sort of content acquisition by their users, these devices will be junk before the end of CY2016. Apple, of course, has made cohesive design, not upgradeability, paramount in their iOS devices. This has earned them plenty of low scores for repairability and consumer serviceability/upgradeability in reviews. I think it is irresponsible of Apple, given that they have no upgradeability story, to sell these devices with 16GB. The minimum on any new iOS device should be 32GB. Upgradeability, or the ability to add peripherals, is often touted by those dissing Apple as limitations of the platform. It’s true. They are limitations. But these limitations, and a tight, cohesive hardware design, are what let these devices have value 4 years after you buy them. I recently got $100 credit from AT&T for my daughter’s iPhone 4 (from June, 2010). A device that I had used for two years, she had used for two more, and it still worked. It was just gasping for air under the weight of iOS 6, let alone iOS 7 (and the iPhone 4 can’t run 8). There is a reason why these devices aren’t upgradeable. Adding upgradeability means building the device with serviceability in mind, and compromising the integrity of the whole device just to make it expandable. I have no issue with Apple making devices user non-serviceable for their lifespan, as I believe it tends to result in devices that actually last longer rather than falling apart when screws unwind and battery or memory doors stop staying seated.

I’ve had several friends mention a few recent tablets and the fact that USB ports on the devices are very prone to failure. This isn’t new to me. In 2002, when I was working to make Windows boot from USB, I had a Motion Computing M1200 tablet. Due to constant insertion and removal of UFDs for testing and creation, both of the USB ports on the tablet had come unseated off of the motherboard and were useless. Motion wanted over $700 to repair a year-old (admittedly somewhat abused) tablet. With <ahem> persuasion from an executive at Microsoft, Motion agreed to repair it for me for free. But this forever highlighted to me that more ports aren’t necessarily something to be looked at in a positive light. The more things you add, the more complex the design becomes, and the more likely it becomes that one of these overwrought features – added to please a product manager who has a list of competitive boxes to check – will lead to a disappointed customer, product support issues and costs, or both. USB was never originally designed to have plugs inserted and removed willy-nilly (as Lightning and the now-dead Apple 30-pin connector were), and I don’t think most boards are manufactured to have devices inserted and removed as often (and perhaps as haphazardly) as they are on modern PC tablets.

Every day, we use things made of components. These aren’t experiences, and they aren’t really even designed (at least not with any kind of cohesive aesthetic). Consider the last time you used a Windows-based ATM or point-of-sale/point-of-service device. It may not seem fair that I’m glomming Windows into this, but Windows XP Embedded helped democratize embedded devices, and allowed cheap devices to handle cash and digital currency, rent DVDs on demand, and power heretofore unimaginable self-service soda fountains.

But there’s a distinct feel of toasterfridge every time I use one of these devices. You feel the sharp edges where the subcomponents it is made of come together (but don’t align) – where the designer compromised the design of the whole in order to accommodate the needs of the subcomponents.

The least favorite device I use with any regularity is the Windows-based ATM at my credit union. It has all of the following components:

  • A display screen (which at least supports touch)
  • An input slot for your ATM/credit/debit card
  • A numeric keypad
  • An input slot for one or more checks or cash
  • An output slot for cash
  • An output slot for receipts

As you use this device, there are a handful of pain points that will start to drive you crazy if you actually consider the way you use it. When I say left or right, I mean in relation to the display.

  • The input slot for your card is on the right side.
  • The input slot for checks is on the left side.
  • The receipt printer is on the right side.
  • The output slots for cash are both below.

Arguably, there is no need for a keypad given that there is a touchscreen; but users with low vision would probably disagree with that. Besides, my credit union has not completely replaced the role of the keypad with the touchscreen. Entering PINs, for example, still requires the keypad.

So to deposit a check, you first put in your card (right), enter your PIN (below), specify your transaction type (on-screen), and deposit a stack of checks (no envelope, which is nice) on the left. Wait, get your receipt (top right), and get your card (next down on the right). My favorite part is that the ATM starts beeping at you to retrieve your card before it has released it.

This may all seem like a pedantic rant. But my primary point is that every day, we use devices that prioritize the business needs, requirements, or limitations of their creator or assembler, rather than their end user.

Some say that good design begins with the idea of creating experiences rather than products. I am inclined to agree with this ideology, one that I’ve also evangelized before. But to me, the most important role in designing a product is to pick the thing that your product will do best, and do that thing. If it can easily adapt to take on another role without compromising the first role? Then do that too. If adding the new features means compromising the product? Then it is probably time to make an additional product. I must admit – people who clamor for an Apple iPad Pro that would be a bit of (big) tablet and (small) notebook confuse me a bit. I have a 2013 iPad Retina Mini and a 2013 Retina MacBook Pro. Each device serves a specific purpose and does it exceptionally well.

I write for a living. I can never envision doing that just on an iPad, let alone my Mini (or even without the much larger Acer display that my rMBP connects to). In the same vein, I can’t really visualize myself laying down, turning on some music, and reading an eBook on my Mac. Yes. I had to pay twice to get these two different experiences. But if the alternative is getting a device that compromises both experiences just to save a bit of money? I don’t get that.


12
Oct 14

It is past time to stop the rash of retail credit card “breaches”

When you go shopping at Home Depot or Lowe’s, there are often tall ladders, saws, key cutters, and forklifts around the shopping floor. As a general rule, most of these tools aren’t for your use at all. You’re supposed to call over an employee if you need any of these tools to be used. Why? Because of risk and liability, of course. You aren’t trained to use these tools, and the insurance that the company holds would never cover its liability if you were injured or died while operating them.

Over the past year, we have seen a colossal failure of American retail and restaurant establishments to adequately secure their point-of-sale (POS) systems. If you’ve somehow missed them all, Brian Krebs’ coverage serves as a good list of many of the major events.

As I’ve watched company after company fall prey to seemingly the same modus operandi as every company before, it has frustrated me more and more. When I wrote You have a management problem, my intention was to highlight that there seems to be a fundamental disconnect between how organizations assess risk and how they secure key applications (and systems). But I think it’s actually worse than that.

If you’re a board member or CEO of a company in the US, and the CIO and CSO of the organizations you manage haven’t asked their staff the following question yet, there’s something fundamentally wrong.

That question every C-level in the US should be asking? “What happened at Target, Michaels, P.F. Chang’s, etc. – what have we done to ensure that our POS systems are adequately defended from this sort of easy exploitation?”

This is the most important question that any CIO and CSO in this country should be asking this year. They should be regularly asking this question, reviewing the threat models from within their organization created by staff to answer it, and performing the work necessary to validate they have adequately secured their POS infrastructure. This should not be a one time thing. It should be how the organization regularly operates.

My worry is that within too many orgs, people are either a) not asking this question because they don’t know to ask it, b) dangerously assuming that they are secure, or c) so busy that nobody who knows better feels empowered to pull the emergency brake and bring the train to a standstill to truly examine the comprehensive security footing of their systems.

Don’t listen to people if they just reply by telling you that the systems are secure because, “We’re PCI compliant.” They’re ducking the responsibility of securing these systems through the often translucent facade of compliance.

Compliance and security can go hand in hand. But security is never achieved by stamping a system as “compliant”.

Security is achieved by understanding your entire security posture, through threat modeling. For any retailer, restaurateur, or hospitality organization in the US, this means you need to understand how you’re protecting the most valuable piece of information that your customers will be sharing with you, their ridiculously insecure 16-digit, magnetically encoded credit card/debit card number. Not their name. Not their email address. Their card number.

While it does take time to secure systems, and some of these exploits that have taken place over 2014 (such as Home Depot) may have even begun before Target discovered and publicized the attack on their systems, we are well past the point where any organization in the US should just be saying, “That was <insert already exploited retailer name>, we have a much more secure infrastructure.” If you’ve got a threat model that proves that, great. But what we’re seeing demonstrated time and again as these “breaches” are announced is that organizations that thought they were secure, were not actually secure.

During 2002, when I was in the Windows organization, we had, as some say, a “come to Jesus” moment. I don’t mean that expression to offend anyone, but there are few expressions that can adequately convey the fundamental shift that happened. We were all excitedly working on several upcoming versions of Windows, having just sort of battened down some of the hatches that had popped open in XP’s original security perimeter, with XPSP1.

But due to several major vulnerabilities and exploits in a row, we were ordered (by Bill) to stop engineering completely, and for two months, all we were allowed to work on were tasks related to the Secure Windows Initiative and making Windows more secure, from the bottom up, by threat modeling the entire attack surface of the operating system. It cost Microsoft an immense amount of money and time. But had we not done so, customers would have cost the company far more over time as they gave up on the operating system due to insecurity at the OS level. It was an exercise in investing in proactive security in order to offset future risk – whether to Microsoft, to our customers, or to our customers’ customers.

I realize that IT budgets are thin today. I realize that organizations face more pressure than ever to do more with less. But short of laws holding executives financially responsible for losses incurred on their watch, I’m not sure what will stop the ongoing saga of these largely inexcusable “breaches” we keep seeing. If your organization doesn’t have the resources to secure the technology you have, either hire the staff that can or stop using technology. I’m not kidding. Grab the knucklebusters and some carbonless paper and start taking credit cards like it’s the 1980s again.

The other day, someone on Twitter noted that the recent spate of attacks shouldn’t really be called “breaches,” but should instead be called skimming attacks. Most of these attacks have worked by using RAM scrapers – an approach first seen around 2009 that hit the big time in 2013. A RAM scraper is a Windows executable (which, <ahem>, isn’t supposed to be there) that scans memory (RAM) on POS systems for track data as it is read off magnetically swiped US credit cards. This laughably simple stunt is the key to effectively all of the breaches (which I will refer to from here on out as skimming attacks). A piece of software that should never be on those systems, let alone be able to run on them, is freely scanning memory for data which, arguably, should be safe there, even though it is not encrypted.
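To make the mechanics concrete, here’s a minimal, defense-minded sketch in Python – a hypothetical audit script with an invented dump file name, not any actual scraper’s code. The whole trick amounts to running a regular expression for Track 2 data over a memory capture:

```python
import re

# Track 2 data (the magnetic stripe format the scrapers hunt for) is
# roughly: ';' start sentinel, a 13-19 digit PAN, '=' separator, a
# 4-digit expiry (YYMM), then service code/discretionary data, '?' end.
TRACK2 = re.compile(rb";(\d{13,19})=(\d{4})\d{3,}\?")

def find_track_data(dump_path):
    """Scan a raw memory dump for plausible unencrypted Track 2 data."""
    with open(dump_path, "rb") as f:
        dump = f.read()
    for match in TRACK2.finditer(dump):
        pan, expiry = match.group(1), match.group(2)
        # Mask all but the last four digits before reporting anything.
        yield b"*" * (len(pan) - 4) + pan[-4:], expiry

# Hypothetical usage: auditing a dump captured from a lab POS machine.
for masked_pan, expiry in find_track_data("pos_memory.dmp"):
    print(masked_pan.decode(), expiry.decode())
```

If a pattern that simple can find card data in your POS system’s memory, so can the bad guys’ executable.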

But here we are. With these RAM scrapers violating law #2 of the 10 Immutable Laws of Security, these POS systems are obviously not secured as well as Microsoft, the POS manufacturer, or the VAR that installed them would like – and obviously everyone, including the retailer, assumed they were. These RAM scrapers are usually custom-crafted enough to evade detection by (questionably useful) antivirus software. More importantly, many indications are that these systems were certified as PCI-DSS compliant in the exact configuration in which they were later compromised. That points to a fundamental flaw in the compliance definition, the tools, and/or the auditing – and to some fundamental holes in how these systems are presently defended against exploitation.

As someone who helped ship Windows XP (and contributed a tiny bit to Embedded, a sister team to ours), I am saddened to see these skimming attacks happen. As someone who helped build two application whitelisting products, I feel even worse, because… they didn’t need to happen.

Windows XP Embedded leaves support in January of 2016. It’s not dead, and it can be secured properly (though organizations should absolutely be well down the road of planning what they will replace XPE with). In embedded POS devices, Windows and Linux suffer the same flaw: platform ubiquity. I can write a piece of malware that runs on my Windows desktop or on a Linux system, and it will run perfectly well on the corresponding POS systems if they aren’t secured properly.

The bad guys always take advantage of the broadest, weakest link. It’s the reason Adobe Flash, Acrobat, and Java are the points they go after on Windows and OS X: the operating systems are hardened enough up the stack that these unmanageable runtimes become the hole that exploit shellcode often pole-vaults through.

In many of these retail POS skimming attacks, remote maintenance software (used to access a Windows desktop remotely), often secured with a poor password, has been the means used to get code onto these systems. This exploit vector isn’t unique to retail, either. I guarantee you there are similarly easy opportunities for exploitation in critical infrastructure, in the US and beyond.

There are so many levels of wrong here. To start with, these systems:

  1. Shouldn’t have remote access software on them.
  2. Shouldn’t have the ability to run every arbitrary binary that is put on them.

These systems shouldn’t have any remote access software on them at all. If they must, that software should require physical, not merely password-based, authentication. These systems should be sealed and single-purpose, and should use AppLocker or third-party software to ensure that only the Windows (or Linux, as appropriate) applications, drivers, and services explicitly authorized to run on them can do so. If organizations cannot invest in the technology to properly secure these systems, or do not have the skills to do so, they should either hire staff skilled in securing them, cease using PC-based technology and go back to legacy technology, or examine using managed iOS or Windows RT-based devices that can be more readily locked down to run only approved applications.
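To illustrate the allow-list principle – this is not how AppLocker itself is configured; AppLocker policy lives in the OS and is enforced there – here’s a minimal Python sketch with hypothetical hashes and paths. Nothing runs unless its hash is explicitly approved:

```python
import hashlib
import subprocess

# Hypothetical allow list: the SHA-256 hashes of the only binaries this
# single-purpose system is ever permitted to launch.
ALLOWED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def launch_if_allowed(path, *args):
    """Refuse to launch any binary whose hash isn't explicitly allowed."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest not in ALLOWED_HASHES:
        raise PermissionError(f"{path} is not on the allow list")
    return subprocess.run([path, *args], check=True)

# A dropped RAM scraper fails this check no matter how it arrived.
launch_if_allowed("/opt/pos/register_app")  # hypothetical path
```

Real enforcement has to live below the applications, in the OS itself; a wrapper like this only illustrates the default-deny posture these systems need.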


07
Sep 14

On the death of files and folders

As I write this, I’m on a plane at 30,000+ feet, headed to Chicago. Seatmates include a couple from Toronto headed home from a cruise to Alaska. The husband and I talk technology a bit, and he mentions that his wife particularly enjoys sending letters as they travel. He and I both smile as we consider the novelty in 2014 of taking a piece of paper, writing thoughts to friends and family, and putting it in an envelope to travel around the world to be warmly received by the recipient.

Both Windows and Mac computers today are centered around the classic files and folders nomenclature we’ve all worked with for decades. From the beginning of the computer, mankind has struggled to insert metaphors from the physical world into our digital environments. The desktop, the briefcase, files that look like paper, folders that look like hanging file folders. Even today as the use of removable media decreases, we hang on to the floppy diskette icon, a symbol that means nothing to pre-teens of today, to command an application to “write” data to physical storage.

Why?

It’s time to stop using metaphors from the physical world – or at least to stop sending “files” to collaborators in order to have them receive work we deign to share with them.

Writing this post involves me eating a bit of crow – but only a bit. Before I left Microsoft in 2004, I had a rather… heated… conversation with a member of the WinFS team about a topic remarkably close to this. WinFS was an attempt to take files as we knew them and treat them as “objects.” In short, WinFS would take legacy .ppt files and deserialize (decompose) them into a giant central data store within Windows, based upon SQL Server, allowing you to search, organize, and move them more easily. But a fundamental question I could never get that team to answer (the core of my heated conversation) was how that data would be shared with people external to your computer. WinFS would always have to serialize the data back out into a .ppt file (or some other “container”) in order for it to be sent to someone else. The WinFS team also sought to convert everything on your system into a URL – so you would have navigated the local file system almost as if your machine were a Web server, rather than using the local file and folder hierarchy we had all become used to since the earliest versions of Windows and the Mac.
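A toy sketch of the round trip I kept pressing them on (Python, with invented names – the real WinFS sat on SQL Server, and real .ppt files are vastly richer): decomposed items are easy to query locally, but the moment you share, you’re packing a container file again.

```python
import json
import sqlite3
import zipfile

# Stand-in for a WinFS-style store: a "document" decomposed into rows.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE slides (deck TEXT, position INTEGER, body TEXT)")
db.executemany(
    "INSERT INTO slides VALUES (?, ?, ?)",
    [("pitch", 1, "Title slide"), ("pitch", 2, "The ask")],
)

# Querying the decomposed data locally is easy and pleasant...
rows = db.execute(
    "SELECT position, body FROM slides WHERE deck = ? ORDER BY position",
    ("pitch",),
).fetchall()

# ...but to hand the deck to someone else, it must be serialized back
# into a self-contained container file. The envelope never goes away.
with zipfile.ZipFile("pitch_deck.zip", "w") as container:
    container.writestr("slides.json", json.dumps(rows))
```

The container never actually dies; it just hides until the data has to leave the machine.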

So as I look back on WinFS, some of the ideas were right, but in classic Microsoft form, at best it may have been a bit of premature innovation, and at worst it may have been nerd porn relatively disconnected from actual user scenarios and use cases.

From the dawn of the iPhone, power users have complained that iOS lacked something as simple as a file explorer/file picker. This wasn’t an error on Apple’s part; a significant part of Apple’s ease of use (an approach largely aped by Android, and by Windows in WinRT and Windows Phone applications) comes from abstracting away the legacy file-and-folder bird’s nest of Windows, the Mac, etc.

As we enter the fall cavalcade of consumer devices ahead of the holidays, one truth appears plainly clear: standalone “cloud storage” as we know it is largely headed for the economic off-ramp. The three main platform players have now made cloud storage a platform pillar, not an opportunity to be filled by partners. Apple (iCloud Drive), Google (Google Drive), and Microsoft (OneDrive and OneDrive for Business – their consumer and business offerings, respectively) have all placed storage firmly within their respective platforms. Lock-in now isn’t just about the device or the OS; it’s about where your files live, because that helps create a platform network effect (AT&T Friends and Family, but in the cloud). For me, my entire family is iOS based: I can send a link to files on iCloud Drive to any member of my family and know they can see the photo I took or the words I wrote.

But that’s just it. Regardless of whether my file is stored in Apple’s, Google’s, or Microsoft’s hosted storage, I share it through a link. Every “document” envelope as we knew it in the past is now a URL, with applications on each device capable of opening its content.
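Mechanically, “sharing a link” looks something like this minimal sketch (Python, with an invented domain and signing scheme – the real services are far more elaborate): the document stays put in hosted storage, and what travels is a URL that grants access to it.

```python
import hashlib
import hmac

SECRET = b"server-side-signing-key"  # hypothetical signing key

def make_share_link(doc_id: str) -> str:
    """Mint a shareable URL for a document kept in hosted storage.

    The document itself never travels; only this link does. The HMAC
    tag keeps recipients from guessing their way to other documents.
    """
    tag = hmac.new(SECRET, doc_id.encode(), hashlib.sha256).hexdigest()[:16]
    return f"https://drive.example.com/d/{doc_id}?sig={tag}"

print(make_share_link("family-photos-2014"))
```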

Moreover, today’s worker generally wants their work:

  1. Saved automatically
  2. Backed up to the cloud automatically (within reason, and protected accordingly)
  3. Versioned and revertible
  4. Accessible anywhere
  5. Coauthoring capable (work with one or more colleagues concurrently without needing to save and exchange a “file”)
As these sorts of features become ubiquitous across productivity tools, the line between a “file” and a “URL” becomes increasingly blurred – and the more it blurs, the more our computers start acting just like the WinFS team wanted them to over a decade ago.
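Two of those wants – automatic saving and versioning (items 1 and 3 in the list above) – are mechanically simple. A minimal local sketch in Python (invented folder names; the real services keep versions in the cloud): every save writes a new immutable version, so “revert” is just reading an older one.

```python
import time
from pathlib import Path

VERSIONS = Path("versions")  # hypothetical stand-in for cloud storage
VERSIONS.mkdir(exist_ok=True)

def autosave(doc_name: str, content: str) -> Path:
    """Write an immutable, timestamped version; never overwrite."""
    version = VERSIONS / f"{doc_name}.{time.time_ns()}.txt"
    version.write_text(content)
    return version

def history(doc_name: str) -> list:
    """All saved versions, oldest first; reverting is just reading one."""
    return sorted(VERSIONS.glob(f"{doc_name}.*.txt"))

autosave("letter", "Dear family,")
autosave("letter", "Dear family, greetings from 30,000 feet.")
print(history("letter")[-1].read_text())  # the latest version
```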

If you look at the typical user’s desktop, it’s a dumping ground of documents. It’s a mess. So are their favorites/bookmarks, music, videos, and any other “file type” they have.

On the Mac, iTunes (music metadata), iPhoto (face, EXIF, and date info), and now the Finder itself (properties, and now tags) are a complete mess of metadata. A colleague in the Longhorn Client Product Management Group owned the photo experience for WinFS. Even then, I think I crushed his spirit by pointing out what a pain in the ass it was going to be for users to enter all of the metadata for their photos as they returned from trips, in order to make those photos anything more than a digital shoebox that sits under the bed.

I’m going to tell all the nerds in the world a secret. Ready? Users don’t screw around entering metadata. So anything you build that is metadata-centric and doesn’t populate the metadata for the user is… largely unused.

I mention this because, as we move toward vendor-centered repositories for our documents, vendors have an opportunity to do much of what WinFS wanted to do and help users catalog and organize their data – but it has to be done almost automatically for them. I’m somewhat excited about Microsoft’s Delve (née Oslo), primarily because if it is done right (and if/when Google offers a similar feature), users will be able to discover content across the enterprise that can help them with their jobs. The written word will in so many ways become a properly archived, searchable, collaboration-ready tool for businesses (and, ideally, for users themselves).

Part of the direction I think we need to see is tools that get better at organizing and cataloging our information as we create it, and at keeping track of the lineage of the written word and other digital information. Create a file using a given template? That should be easily visible. Take a trip with family members? The photos should be easily stitched together into a searchable family album.
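Here’s a minimal sketch of what “almost automatically” can look like (Python, assuming the Pillow imaging library, with invented folder names): file each photo into a monthly album using the date the camera already wrote into EXIF, asking the user for nothing.

```python
from pathlib import Path
from PIL import Image  # assumes the Pillow imaging library

PHOTOS = Path("camera_roll")  # hypothetical import folder
ALBUMS = Path("albums")       # hypothetical destination

def auto_file(photo: Path) -> Path:
    """File a photo into a YYYY-MM album using EXIF the camera wrote.

    EXIF tag 306 is DateTime ("YYYY:MM:DD HH:MM:SS"); no user-entered
    metadata required.
    """
    taken = Image.open(photo).getexif().get(306, "undated")
    album = ALBUMS / taken[:7].replace(":", "-")
    album.mkdir(parents=True, exist_ok=True)
    target = album / photo.name
    photo.rename(target)
    return target

for photo in PHOTOS.glob("*.jpg"):
    auto_file(photo)
```

The shoebox organizes itself; the user just takes the pictures.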

Power users, of course, want to feel a sense of control over the files and folders on their computing devices (some of them even enjoy filling in metadata fields). These are the same users who complained loudly that iOS didn’t have a Finder or a traditional file picker, and who persuaded Microsoft to add a file explorer of sorts to Windows Phone as Windows 8 and Microsoft’s OneDrive and OneDrive for Business services began blurring out the legacy Windows File Explorer. There’s a good likelihood that next year’s release of Windows 9 could see the legacy Win32 desktop disappear on touch-centric Windows devices (much like Windows Phone 8.x, where Win32 still technically exists but is kept out of view). I firmly expect this move will (to put it gently) irk Windows power users. These are the same users who freaked out when Apple removed the Save functionality from Pages/Numbers/Keynote. Yet that approach is now commonplace across the productivity suites of all of the “big 3” players (Microsoft, Google, and Apple), where real-time coauthoring requires abstracting away the traditional “Save” verb we’ve all been used to since the 1980s. For Windows to succeed as a novice-approachable touch environment the way iOS has, it means jettisoning a visible Win32 and the File Explorer. With that, OneDrive and the simplified file pickers in Windows become the centerpiece of how users interact with local files.

I’m not saying that files and folders will disappear tomorrow, or that they’ll ever really disappear entirely. But increasingly, especially in collaboration-based use cases, the file and folder metaphors will fall by the wayside, replaced by Web-based experiences and by URLs, with dedicated platform-specific local, mobile, or online apps interacting with them.