22
Aug 17

A few thoughts on Windows 10 S…

A few months ago, before Microsoft announced their new Surface Laptop or Windows 10 S, I had several conversations with reporters and friends about what might be coming. In particular, some early reports had hinted that this might be a revision of Windows, something designed for robustness. Some thought it might be more Chromebook-like. Given my daughters’ experiences with Chromebooks, those last two sentences taken together read like an oxymoron. But I digress. What arrived, Windows 10 S (AKA “Windows 10 Pro in S Mode”), wasn’t a revision or really much of a refinement. It was a nuanced iteration of Windows 10 Pro, with built-in Device Guard policies, and some carefully crafted changes to the underlying OS infrastructure.

Putting the Surface Laptop aside for now (it’s not my laptop, and I’m not its customer), Windows 10 S seems to me to be an OS full of peculiar compromises, with a narrow set of benefits for end users, at least at this time.

I saw this tweet go by on Twitter a bit ago, and several more followed, discussing the shortcomings of Windows 10 S.

In most conversations I’ve had with reporters recently about Windows, I’ve reemphasized my point that what most customers want isn’t “an OS that does <foo>”. They want a toaster.

What do I mean by that? Think about a typical four-slice toaster:

You use it Sunday morning. It toasts.
You use it Monday morning. It toasts.
You use it Wednesday morning. It toasts.

This is what a huge percentage of the populace wants. A toaster. Normals want it. Schools want it. Frankly, I think most of IT wants it too, because they’re constantly being asked to do more, and given less money to do it with.

The era of tinkering with PCs being fun for normals, and even some technical people, has passed.

So with that in mind, what’s wrong with Windows 10 S? Nothing, I guess. In a way, it is at least a more toasterish model for Windows than we’ve seen before. It’s constrained, and attempts to put a perimeter around the Windows desktop OS to reduce the risk posed by the very features of the OS itself.

I encourage you to read Piotr’s thread, above, before reading further.

Windows 10 S is not:

  • A new edition of Windows (or version, for that matter). It’s effectively a specially configured installation of Windows 10 Pro
  • Redesigned for use with touch or tablets, any more than 10 itself is
  • Cloud-backup enabled or cloud recoverable (this one is a shame, IMHO)
  • Free of Win32 and the quirks and challenges that it brings.

Those last two are important. On the third point: consumers with iOS devices today are generally used to toaster-like experiences when it comes to backing up and recovering their devices (yes, exceptions exist) – to iCloud ideally, or to a Mac or PC in certain circumstances. The last one matters because most of the troublesome battery life issues that hit lightweight, low-energy Windows devices can easily be traced back to the cumbersome baggage of Win32 itself, and to Win32 applications engineered for a time when energy was cheap, PCs were plugged in all the time, and everything was about processor power.

So if Windows 10 S isn’t “all new”, what is it?

Technologically, Windows 10 S is designed for the future. Or at least the future Microsoft wants:

  • It offers almost all features of Pro, and can be easily “upgraded” to Pro
  • It natively supports Azure Active Directory domain join and authentication as Pro does, but does not support joining Active Directory at all
  • It supports Windows Store applications only (UWP, and Desktop Bridge apps if crafted correctly); Win32 applications that aren’t in-box and approved by Microsoft simply can’t run
  • It is secure by default, at least to the degree that the Store-only restriction above and the built-in Device Guard policies can deliver (a conceptual sketch of that policy check follows below).
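Conceptually, that lockdown boils down to a code integrity allowlist: before a binary runs, its signature or file hash is checked against policy, and anything outside the policy is refused. Here’s a minimal, purely illustrative sketch of that kind of check – the publisher names and rules are hypothetical, and real Device Guard (Windows Defender Application Control) policies are XML documents enforced by the kernel, not a user-mode script:

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

# Hypothetical policy: allow binaries signed by these publishers, or whose file
# hash is on an explicit allow list. This is only a model of the idea.
ALLOWED_PUBLISHERS = {"Microsoft Windows", "Microsoft Windows Store"}
ALLOWED_HASHES = set()  # hashes of specifically approved Win32 binaries, if any


@dataclass
class Binary:
    path: str
    publisher: Optional[str]  # None if the file is unsigned


def file_sha256(path: str) -> str:
    """Hash the file contents, standing in for the file/catalog hash a real policy uses."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def policy_allows(binary: Binary) -> bool:
    """Return True if this (hypothetical) policy would let the binary run."""
    if binary.publisher in ALLOWED_PUBLISHERS:
        return True
    return file_sha256(binary.path) in ALLOWED_HASHES


# An unsigned download, or anything signed by a publisher outside the policy,
# simply doesn't run:
#   policy_allows(Binary("C:/Tools/random_download.exe", None))  -> False
```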

So it’s an OS that supports the directory, app store, and legacy-app distribution models of the future.

A question I’ve been asked several times is, “why no AD join?” Initially I was just going with the “it’s the directory of the future” theory. But there’s more to it. From the day that AD and Group Policy came into Windows, there was an ongoing struggle in terms of performance and cost. Ask anyone who had a Windows 2000 PC how long they had to wait when they logged on every day. A giant chunk of that was Active Directory. Over time, Windows added increasing amounts of messaging to tell you what the OS was doing during logon.

If you go back and look at the 10 S reveal, logon performance was a touted feature. I’ve even seen people on Twitter say that’s why they like 10 S better. Why is it faster? I’m sure there are some other reasons as well, but I’m certain that completely obliterating AD integration delivered a huge chunk of that performance win.

When I look at 10 S then, particularly the Device Guard-based security, the defenestration of Active Directory, and the use of Pro as an underlying OS rather than a new edition, 10 S feels… kind of like a science experiment that escaped the lab. Frankly, Device Guard always kind of looked that way to me too.

But there’s another angle here too, and it’s kind of a weird one.

I don’t know how much Microsoft is selling Windows 10 S to OEMs for, but price is clearly a factor here. Some have assumed that because it’s based on Pro, 10 S must cost OEMs the same as Pro, or at least as much as Home. It is not clear whether either is actually the case.

When announced, Microsoft stated that it would ship on PCs starting at US$189. As I said, price is clearly a factor. Given the fact that a one-time upgrade from 10 S to Pro costs US$49, it seems pretty apparent to me that with 10 S, Microsoft has shifted some costs for Pro that used to be borne by OEMs to consumers. While this US$49 upgrade is basically moot for the remainder of this calendar year, eventually it must be considered, as consumers (and some businesses) will need to pay if they require Pro-only functionality.

So the net effect then is that Windows 10 S devices can be cheaper, at least up-front, than Windows 10 Pro devices (and maybe Home). Users who need Pro can “upgrade” to it.

Here’s where I think this gets really interesting. Before too long, we can expect to see ARM-based devices running Windows 10. I think these devices will likely come with 10 S on them, resulting in lower purchase prices, as well as a reduced attack surface if users don’t actually need to run their own library of Win32 applications. In a way then, “Windows 10 S on ARM” offers most of the actual value that Windows RT ever delivered, but goes far further, by supporting Desktop Bridge applications and a complete upgrade to Pro with support for x86 Win32 applications.

Consumers could pay for the upgrade to Pro if they need to run full Win32, or need to upgrade the device to Enterprise for work. In this scenario, I imagine that Chrome will likely be the reason why a number of 10 S users pay for an upgrade.

Just as with the vaguely unannounced “Windows 10 Pro for Workstations”, there’s always a reason why these changes occur, and a strategic objective that Microsoft has planned. For me, I think that 10 S, especially with a pilot launch on Microsoft’s own Surface Laptop hardware, is pretty clearly a sign of a few directions where the company wants to go.

 


08
Dec 16

Windows 10 on ARM. What does it mean?

Yesterday, when I heard the news from Microsoft’s WinHEC announcements stating, “Windows 10 is coming to ARM through a partnership with Qualcomm”, my brain went through a set of loops, trying to get what this really was, and what it really meant.

Sure, most of us have seen the leaks over the past few weeks about x86 on ARM, but I hadn’t seen enough to find much signal in the noise as to what this was.

But now that I’ve thought about it, most of it makes sense, and if we view the holistic Windows 10 brand as step 1, this is step 2 of blurring the line of what a Windows PC is.

Before we look forward, a bit of history is important. Windows RT was a complex equation to try and reduce – that is, why did it fail? The hardware was expensive, it wasn’t <ahem/> real Windows, it couldn’t run legacy applications at all, and the value proposition and branding were very confusing. Wait. Was I talking about Windows RT, or Windows on Itanium? Hah. Tricked you – it applies to both of them. But let’s let sleeping dogs lie.

So if the lack of support for legacy Windows applications is a problem, and ARM processors are getting faster, how best to address this? With Windows 10, the last version of Windows – now arriving in a complex amalgam that will be ARM64 native, but will run x86 Win32 applications through emulation.

Let’s take a look at a couple of things here, in terms of Q&A. I have received no briefing from Microsoft on this technology – I’m going to make some suppositions here.

Question 1: What is meant by x86 Win32 applications? Everything? How about 64-bit Win32 applications?

This is actually pretty straightforward. It is, as the name would imply, x86 Win32 applications. That means the majority of the legacy applications written during the lifetime of Windows (those capable of running on 32-bit Windows 10 on x86) should work when running on 64-bit Windows 10 on ARM. In general, unless there are some hardware shenanigans performed by the software, I assume that most applications will work. In many ways, I see this emulation behaving sort of like WOW64 running 32-bit Win32 applications on x64 (AMD64) systems, albeit with very different internals.

Question 2: Ah, so this is virtualization?

No, this is emulation. You’re tricking x86 Win32 applications into thinking they’re running on a (low-powered) x86 processor.
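To make the distinction concrete, here’s a toy illustration of what emulation means: software on the host fetches, decodes, and executes the guest’s instructions, so the guest code never learns it isn’t on its native CPU. This is a made-up two-register machine, nothing like the real x86-on-ARM layer (which reportedly translates whole blocks of x86 code into ARM64 and caches the results rather than interpreting instruction by instruction), but the principle is the same:

```python
# A deliberately tiny "guest CPU": two registers, three opcodes. Each guest
# instruction is interpreted in software on the host, so the guest program
# never knows (or cares) what the host's real architecture is.
def emulate(program):
    regs = {"a": 0, "b": 0}
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "load":      # load <reg>, <immediate>
            regs[args[0]] = args[1]
        elif op == "add":     # add <dst>, <src>
            regs[args[0]] += regs[args[1]]
        elif op == "halt":
            break
        pc += 1
    return regs


# "Guest" program: a = 2; b = 40; a = a + b
print(emulate([("load", "a", 2), ("load", "b", 40), ("add", "a", "b"), ("halt",)]))
# -> {'a': 42, 'b': 40}
```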

Question 3: Why only 32-bit?

See a few of the next answers for a crucial piece of this, but in short: to save space. You could arguably add support for Win64 (x64, 64-bit) Windows desktop applications as well, but this would mean additional bloat for the operating system, and offer rapidly diminishing returns. You’re asking a low-powered ARM processor to run 64-bit applications and make the most of them? No. Get an x64 processor and don’t waste your money.

Question 4: What is the intent here?

As I said on Twitter this morning, “This is not the future of personal computing. This is a nod to the past.” I have written far more words than justified on why Windows on ARM faced challenges. This is, in many ways, the much-needed feature to make it succeed. However, this feature is also a subtle admission of… the need for more. In order to drive Windows the platform forward on ARM, and help birth the forthcoming generations of UWP-optimal systems, there is a need to temper that future with the reality of the past – that businesses and consumers have an utterly wacky amount of time and money involved in legacy Windows desktop applications, and… something something, cold, dead hands. Thus, we will now see the beginning of x86 support on these ARM processors, and a unified brand of Windows that addresses “How do I get this?” For consumers, it will mean a lack of confusion. Buy this PC, and it will be a great tablet when you want a tablet, but it will also run all of that old stuff.

Question 5: Why not just use Project Centennial, and recompile these old desktop apps for UWP?

First, for this to succeed, it must be point-and-shoot. No repackaging. No certificate games. No weird PowerShell scripts. No recompilation. Take my ancient printer driver, and it just works. Take my old copy of MS Money that I shouldn’t be using. It just works. Etc. We’re talking old apps that should be out to pasture. On the consumer side, there is no code, and no ISV in their right mind will spend time going back and doing the work to support something like this. On the business side, there’s likely nobody around who understands the code or wants to break it. Centennial is a great idea if you are an ISV or enterprise and you want to take your existing Win32 app and begin transmogrifying it into a UWP application through the non-trivial steps needed. But it’s certainly not always the best answer, and doesn’t do the same thing this will.

Question 6: Wait. So won’t I be able to get ransomware too, then?

I would have to assume the answer to that is… yes. However, it is important to note that Terry showed off Windows 10 Enterprise edition in yesterday’s demo. Why does that matter? Because there, you have the option to use Device Guard to lock down these PCs that will ship with OEM Windows. That is one step, for orgs willing to pay for Enterprise. I also assume that there will be an option to turn off the Win32 layer through configuration and GPO.

Question 7: So this is like Virtual PC on PowerPC Macs?

Not exactly. That’s a fine example of emulation, but that was Windows stacked on top of the Mac OS. This looks to be, as it should be, a more side-by-side emulation. Run a UWP app, and all of your resources are running on the ARM side natively.  Run a legacy app, and all your resources are running on the x86 side. Again, the experience should be much like running 32-bit applications on 64-bit Windows, without directory tricks to do it. That’s certainly what I saw in Terry’s demo. Importantly, this means a couple of things. First, you service the whole thing together. This isn’t a VM, and doesn’t require additional steps to service it. Second, where Terry mentions “CHPE = Compiled Hybrid Portable Executable” here, unless I’m misunderstanding, he’s saying that Windows 10 on ARM is basically running fat binaries. It’s two, two, two OS’s in one.

Question 8: Wait. What does that mean?

Well, if I’m understanding their direction correctly, the build includes resources for ARM64 (A64) and x86 in one binary. Meaning that you only need to service one binary to service both… modes? of the OS. Significantly, this also means some on-disk bloat. You’re going to need more space for this to work, as you’ve basically got two installs of the OS glued together. It’s also why you don’t get x64 support: if my theory above holds, adding Win64 would… do amazing things to your remaining disk space.
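If that reading of CHPE is right, the idea resembles a “fat” (universal) binary: one image carrying a slice per architecture, with the loader picking whichever slice fits the host. A hypothetical sketch of that selection logic – not the actual PE/CHPE format – which also shows why every additional architecture costs roughly another full copy’s worth of disk:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Slice:
    arch: str    # e.g. "arm64" or "x86"
    code: bytes  # that architecture's compiled bits


@dataclass
class FatBinary:
    slices: List[Slice]  # one slice per supported architecture

    def size(self) -> int:
        # On-disk cost is roughly one full copy of the code per architecture.
        return sum(len(s.code) for s in self.slices)

    def slice_for(self, host_arch: str) -> Slice:
        # The loader picks the slice that matches (or can be emulated on) the host.
        for s in self.slices:
            if s.arch == host_arch:
                return s
        raise ValueError(f"no slice for {host_arch}")


image = FatBinary([Slice("arm64", bytes(1_000_000)),
                   Slice("x86", bytes(1_000_000))])
print(image.size())                   # ~2 MB: two architectures, two copies
print(image.slice_for("arm64").arch)  # the native slice on an ARM64 host
```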

Question 9: Ah, so UWP is dead?

Heck no. If anything, as I said earlier, this helps UWP in the long run, by reestablishing what Windows is. UWP is still what developers must target if they care about selling anything new, designing for touch, or reaching the collection of devices that Microsoft is driving UWP forward on. I also can envision that this functionality only works when a device is Continuum’d. That is, when you’re docked and ready to work at your desk. This is all about legacy, and your desktop.

Question 10: Ah, so Intel processors are dead?

LOLNO. This is an ARM processor running x86 software. No x64 support. Performance may wind up being fair, but an ARM system will hardly be your destination if you want to do hardcore gaming, data work, development, run VMs… and then there’s the server side, where ARM still has a huge uphill battle ahead of it. This will fill a hole for consumers and low-mid tier knowledge workers. If you cared that the new MBP didn’t have more than 16GB of RAM, well… I digress.

Question 11: Ah, so Windows Mobile is dead?

No. At least not yet. Windows Mobile won’t include this layer, which will likely mean that it also won’t require the storage space. In the long run, a Windows-based ARM64 phone could indeed run Windows 10, and finally blur the line as to what is a Windows phone and what is a Windows PC – and also make Continuum incredibly useful.

 


27
Jun 16

Compute Stick PCs – Flash in the pan?

A few years ago, following the success of many other HDMI-connected computing devices, a new type of PC arrived – the “compute stick”. Also referred to sometimes as an HDMI PC or a stick PC, the device immediately made me scratch my head a bit.

If Windows 10 still featured a Media Center edition, I guess I could sort of see the point. But Windows, outside of Surface Hub (which seemingly runs a proprietary edition of Windows), no longer features a 10-foot UI in the box. Meaning, without third-party software and nerd-porn duct tape, it’s a computer with a TV as a display, and a very limited use case.

Unlike Continuum on Windows 10 Mobile, I’ve never had a licensing boot camp attendee ask me about compute sticks (almost none ever asked us about Windows To Go, the mode of booting Windows Enterprise edition off of USB on a random PC).

The early sticks featured 2GB of RAM or less, really limiting their use case even further. With 4GB, more modern versions will run Windows 10 well, but to what end?

I can see some cases where compute sticks might make sense for point of service, but a NUC is likely to be more affordable, powerful, and expandable, and not suffer from heat exhaustion like a compute stick is likely to.

I’ve also heard it suggested that a compute stick is a good option for the business traveler. But I don’t get that. Using a compute stick requires you to have a keyboard and pointing device with you, and to find an AC power source behind a hotel TV or in a shared workspace. Now I don’t know about you, but while I used to travel with a keyboard to use with my iPad, I don’t anymore… and I never travel with a spare pointing device. And as to finding AC power behind a hotel TV? Shoot me now.

The stick PC has some use cases, sure. Home theater where the user is willing to assemble the UX they want. But that’s nerd porn, not a primary use case, and not a long-term use case (see Media Center edition).

You eventually reach a point where, if you want a PC while you’re on the go, you should haul a PC with you. Laptops, convertibles, and tablets are ridiculously small, and you don’t always have to tote peripherals with you to make them work.

In short, I can see a very limited segment of use cases where compute sticks make sense. (Frankly, it’s a longer list than Windows To Go.) But I think in most cases, upon closer inspection, a NUC (or larger PC), Windows 10 tablet or laptop, or <gasp/> a Windows 10 Mobile device running Continuum is likely to make more sense.

 


03
Feb 16

Surface Pro and iPad Pro – incomparable

0.12 of a pound less in weight. 0.6 inches more of display, measured diagonally.

That’s all that separates the iPad Pro from the Surface Pro (lightest model of each). Add in the fact that both feature the modifier “Pro” in their name, and that they look kind of similar, and it’s hard to not invite comparisons, right? (Of course, what tablets in 2016 don’t look like tablets?)

Over the past few weeks, several reports have suggested that perhaps Apple’s Tablet Grande and Microsoft’s collection of tablet and tablet-like devices may each have affected the other’s holiday quarter sales. Given what I’ve said above, I’ve surely even implied that I might cross-shop one against the other. But man, that would be a mistake.

I’m not going to throw any more numbers at you to try and explain why the iPad Pro and Surface devices aren’t competitors, and shouldn’t be cross-shopped. Okay, only a few more; but it’ll be a minute. Before I do, let’s take a step back and consider the two product lines we’re dealing with.

The iPad Pro is physically Apple’s largest iOS device, by far. But that’s just it. It runs iOS, not OS X. It does not include a keyboard of any kind. It does not include a stylus of any kind. It can’t be used with an external pointing device, or almost any other traditional PC peripheral. (There are a handful of exceptions.)

The Surface Pro 4 is Microsoft’s most recent tablet. It is considered by many pundits to be a “detachable” tablet, which it is – if you buy the keyboard, which is not included. (As an aside, inventing a category called detachables when the bulk of devices in the category feature removable, but completely optional, keyboards seems slightly sketchy to me.) Unlike the iPad Pro, the Surface Pro 4 does include the stylus for the device. You can also connect almost any traditional PC peripheral to a Surface Pro 4 (or Surface 3, or Surface Book.)

Again, at this point, you might say, “See, look how much they have in common. 1) A tablet. 2) A standardized keyboard peripheral. 3) A Stylus.”

Sure. That’s a few similarities, but certainly not enough to say they’re the same thing. A 120 volt light fixture for use in your home and a handheld flashlight also both offer a standard way to have a light source powered by electrical energy. But you wouldn’t jumble the two together as one category, as they aren’t interchangeable at all. You use them to perform completely different tasks.

The iPad Pro can’t run any legacy applications at all. None for Windows (of course), and none for OS X. Therein lies its Achilles’ heel; it’s great at running iOS apps that have been tuned for it. But if the application you want to run isn’t there, or lacks features found in the Windows or OS X desktop variant you’d normally use (glares at you, Microsoft Word), you’re up the creek. (Here’s where someone will helpfully point out VDI, which is a bogus solution to running legacy business-critical applications that you need with any regularity.)

The Surface Pro offers a contrast at this point. It can run Universal Windows Platform (UWP) applications, AKA Windows Store apps, AKA Modern apps, AKA Metro apps. (Visualize my hand getting slapped here by platform fans for belaboring the name shifts.) And while the Surface Pro may have an even more constrained selection of platform-optimized UWP apps to choose from, if the one you want isn’t available in the Windows Store, you’ve got over two decades’ worth of Win32 applications that you can turn to.

Anybody who tells you that either the iPad Pro or the Surface Pro are “no compromise” devices is either lying to you, or they just don’t know that they’re lying to you. They’re both great devices for what they try to be. But both come with compromises.

Several people have also said that the iPad Pro is a “companion device”. But it depends upon the use case as to whether that is true or not. If you’re a hard-core Windows power user, then yes, the iPad Pro must be a companion device. If you regularly need features only offered by Outlook, Excel, Access, or similar Win32 apps of old, then the iPad Pro is not the device for you. But if every app you need is available in the App Store, you can live within the confines of the limited versions of Microsoft Office for Office 365 on the iPad Pro, or your productivity tools are all Web-accessible, then the iPad Pro might not only be a good device for you, it might actually be the only device you need. It all comes down to your own requirements. Some PC-using readers at this point will helpfully chime in that the user I’ve identified above doesn’t exist. Not true – they’re just not that user.

If a friend or family member came to me and said, “I’m trying to decide which one to buy – an iPad Pro or Surface Pro.”, I’d step them through several questions:

  1. What do you want to do with it?
  2. How much will you type on it? Will you use it on your lap?
  3. How much will you draw on it? Is this the main thing you see yourself using it for?
  4. How important is running older applications to you?
  5. How important is battery life?
  6. Do you ever want to use it with a second monitor?
  7. Do you have old peripherals that you simply can’t live without? (And what are they?)
  8. Have you bought or ripped a lot of audio or video content in formats that Apple won’t let you easily use anymore? (And how important is that to you?)

These questions will each have a wide variety of answers – in particular question 1. (Question 2 is a trap, as the need to use the device as a true laptop will lead most away from either the iPad Pro or the Surface Pro.) But these questions can easily steer the conversation, and their decision, in the right direction.

I mentioned that I would throw a few more numbers at you:

  • US$1,028.99 and
  • US$1,067.00

These are the base prices for a Surface Pro 4 (Core m3) and iPad Pro, respectively, equipped with a stylus and keyboard. Just a few cups of Starbucks apart from each other. The Surface Pro 4 can go wildly north of this price, depending upon CPU options (iPad Pro offers none) or storage options (iPad Pro only offers one). The iPad Pro also offers cellular connectivity for an additional charge in the premium storage model (not available in the Surface Pro). My point is, at this base price they’re close to each other, but that closeness is little more than a coincidence. It invites comparisons, but deciding between these devices based purely on price is a fool’s errand.
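(If memory serves, those totals break down as roughly US$899 for the Core m3 Surface Pro 4 plus US$129.99 for the Type Cover, and US$799 for the base iPad Pro plus US$99 for the Apple Pencil and US$169 for the Smart Keyboard – figures from memory at the time of writing, so check current pricing before you shop.)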

The more you want the Surface Pro 4 (or a Surface Book) to act like a workstation PC, the more you will pay. But therein lies the rub; it can be a workstation too – the iPad Pro can’t ever be. Conversely, the iPad Pro can be a great tablet, one that offers few compromises as a tablet – you can read on it, it has a phenomenal stylus experience for artists, and it’s a great, big, blank canvas for whatever you want to run on it (if you can run it). But it will never run legacy software.

The iPad Pro may be your ideal device if:

  1. You want a tablet that puts power optimization ahead of everything else
  2. Every application you need is available in the App Store
  3. They are available in an iPad Pro-optimized form
  4. The available version of the app has all of the features you need
  5. All of your media content is in Apple formats or available through applications blessed by Apple.

The Surface Pro may be your ideal device if:

  1. You want a tablet that is a traditional Windows PC first and foremost
  2. Enough of the applications you want to run on it as a tablet are available in the Windows Store
  3. They support features like Snap and resizing when the app is running on the desktop
  4. You need to run more full-featured, older, or more power hungry applications, or applications that cannot live within the sandboxed confines of an “app store” platform
  5. You have media content (or apps) that are in formats or categories that Apple will not bless, but will run on Windows.

Since the introduction of both devices last year, many people have been comparing and contrasting these two “Pro” devices. I think that doing so is a disservice. In general, a consumer who cross-shops the two devices and buys the wrong one will wind up sorely disappointed. It’s much better to figure out what you really want to do with the device, and buy the option that will actually meet your personal requirements.


22
Sep 15

You have the right… to reverse engineer

This NYTimes article about the VW diesel issue and the DMCA made me think about how, 10 years ago next month, the Digital Millennium Copyright Act (DMCA) almost kept Mark Russinovich from disclosing the Sony BMG Rootkit. While the DMCA provides exceptions for reporting security vulnerabilities, it does nothing to allow for reporting breaches of… integrity.

I believe that we need to consider an expansion of how researchers are permitted to, without question, reverse engineer certain systems. While entities need a level of protection in terms of their copyright and their ability to protect their IP, VW’s behavior highlights the risks to all of us when commercial entities can ship black-box code and ensure nobody can question it – technically or legally.

In October of 2005, Mark learned that putting a particular Sony BMG CD in a Windows computer would result in it installing a rootkit. Simplistically, a rootkit is a piece of software – usually installed by malicious individuals – that sits at a low level within the operating system and returns forged results when a piece of software at a higher level asks the operating system to perform an action. Rootkits are usually put in place to allow malware to hide. In this case, the rootkit was being put in place to prevent CDs from being copied. Basically, a lame attempt at digital rights management (DRM) gone too far.
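To make the “forged results” idea concrete, here’s a deliberately tiny user-mode toy – nothing like a real kernel-level rootkit, and the hidden-name prefix is simply the one the Sony BMG rootkit was reported to use – showing how a hooked API can quietly filter what higher-level code gets to see:

```python
import os

HIDE_PREFIX = "$sys$"  # prefix the Sony BMG rootkit reportedly hid from directory listings

_real_listdir = os.listdir


def _hooked_listdir(path="."):
    # Forged results: anything the "rootkit" wants hidden is silently filtered
    # out before the caller ever sees it.
    return [name for name in _real_listdir(path) if not name.startswith(HIDE_PREFIX)]


# Higher-level code calling os.listdir() now gets the doctored view.
os.listdir = _hooked_listdir
```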

In late October, Mark researched this, and prepped a blog post outlining what was going on. We talked at length, as he was concerned that his debugging and disclosure of the rootkit might violate the DMCA, a piece of legislation put in place to protect copyrights and prevent reverse engineering of DRM software, among other things. So in essence, to stop exactly what Mark had done. I read over the DMCA several times during the last week of October, and although I’m not a lawyer, I was pretty satisfied that Mark’s actions fit smack dab within the part of the DMCA that was placed there to enable security professionals to diagnose and report security holes. The rootkit that Sony BMG had used to “protect” their CD media had several issues in it, and was indeed creating security holes that were endangering the integrity of Windows systems where the software had unwittingly been installed.

Mark decided to go ahead and publish the blog post announcing the rootkit on October 31, 2005 – Halloween. Within 48 hours, Mark was being pulled in on television interviews and quoted in major press publications, and over the next several months he was repeatedly a headline on Slashdot, the open-source-focused news site – an interesting occurrence for someone who had spent almost his entire career in the Windows realm.

The Sony BMG disclosure was very important – but it almost never happened. Exceptions that allow reverse engineering are great. But security isn’t the only kind of integrity that researchers need to diagnose today. I don’t think we should tolerate laws that keep researchers from ensuring our systems are secure, and that they operate the way that we’ve been told they do.


07
Sep 15

How I learned to stop worrying and love the cloud

For years, companies have regularly asked me for my opinion on using cloud-based services. For the longest time, my response was along the lines of, “You should investigate what types of services might fit best for your business,” followed by a selection of caveats reminding them about privacy, risk, and compliance, since their information would be stored off-premises.

But I’ve decided to change my tune.

Beginning now, I’m going to simply start telling them to use cloud where it makes sense, but use the same procedures for privacy, risk, and compliance that they use on-premises.

See what I did there?

The problem is that we’ve treated the cloud (née hosted services) as something distinctly different from the way we do things on-premises. But… is it really? Should it be?

It’s hard to find a company today that doesn’t do some form of outsourcing. You’re trusting people who don’t work “for” you with some of your company’s key secrets. Every company I can think of does it. If you don’t want to trust a contract-based employee with your secrets, you don’t give them access, right? Deny them access to your network, key servers, or file shares (or SharePoint servers<ahem/>). Protect documents with things like Azure Rights Management. Encrypt data that needs to be protected.

These are all things that you should have been doing anyway, even before you might have had any of your data or operations off-premises. If you had contract/contingent staff, those systems should have been properly secured in order to prevent <ahem/> an overzealous admin (see link above) from liberating information that they shouldn’t really have access to. Microsoft and Amazon (and others, to a lesser extent at this point) have been putting a lot of effort into securing your data while it lives within their clouds, and that’s going to continue over the next 2-5 years, to the point where, honestly, with a little investment in tech and process – and likely a handful of new subscription services that you won’t be able to leave – you’ll be able to secure data better than you can in your infrastructure today.

Yeah. I said it.

A lot of orgs talk big about how awesome their on-premises infrastructure is, and how uncompromisingly secure it is. And that’s nice. Some of them are right. Many of them aren’t. In the end, in addition to systems and employees you can name, you’re probably relying on a human element of contractors, vendors, part-time employees, “air-gapped” systems that really aren’t, sketchy apps that should have been retired years ago, and security software that promised the world, but that can’t really even secure a Tupperware container. We assume that cloud is something distinctly different from on-premises outsourcing of labor. But it isn’t really that different. The only difference is that today, unsecured (or unsecurable) data may have to leave your premises. That will improve over time, if you work at it. The perimeter, as it has for smartphones since 2007, will shift to let you secure data flowing between systems you own and residing on systems you own – whether those live on physical hardware in your datacenter, or in AWS or Azure. But it means recognizing this perimeter shift – and working to reinforce that new perimeter in terms of security and auditing.

Today, we tend to fear cloud because it is foreign. It’s not what we’re all used to. Yet. Within the next 10 years, that will change. It probably already has changed within the periphery (aka the rogue edges) of your organization today. Current technology lets users deploy “personal cloud” tools – business intelligence, synchronization, desktop access, and more – without giving you veto power, unless you own and audit the entirety of your network (and any telecom access), and have admin access to all PCs. And you don’t.

The future involves IT being proactive about providing cloud access ahead of rogue users. Deciding where to be more liberal about access to tools than orgs are used to, and being able to secure perimeters that you may not even be aware of. Otherwise, you get dragged along on the choose-your-own-adventure that your employees decide on for you.


21
Aug 15

The curse of the second mover

When I lived in Alaska, there was an obnoxious shirt that I used to see all the time, with a group of sled dogs pictured on it. The cutesy saying on it was, “If you’re not the lead dog, the view never changes.” It came to mind while I was driving home last night, thinking about several of today’s tech marketplaces.

Consider the following. If you were:

  1. Building an application for phones and tablets today, whose OS would you build it for first?
  2. Building a peripheral device for smartphones, what device platform would you build it for?
  3. Selling music today, whose digital music store would you make sure it was in first?
  4. Selling a movie today, whose digital video store would you make sure it was in first?
  5. Publishing a book, whose digital book store would you make sure it was in first?

Unless you’ve got a lot of time or money on your hands, and feel like dealing with the bureaucracy of multiple stores, the answer to all of the above is going to be exactly the same.

Except that last one.

If you’re building apps, smartphone peripherals, or selling music or movies, you’re probably building for Apple first. If you’re publishing or self-publishing a book, you’re probably going to Amazon first. One could argue that you might go to Amazon with music or a movie – but I’m not sure that’s true – at least if you wanted to actually sell full-fare copies vs. getting them placed on Prime Music/Prime Instant Video.

That list doesn’t tell a great tale for second movers. If you’re building a marketplace, you’ve got to offer some form of exceptional value over Apple (or Amazon, for number 5) in order to dethrone them. You’ve also got to offer something to consumers to get them to use your technology, and something to content purveyors and device manufacturers to get them to invest in your platform(s).

For the first three, Apple won those markets through pure first mover advantage.

The early arrival of the iPhone and iOS, and the premium buyers who purchase them, ensure that 1 & 2 will be answered “Apple”. The early arrival of the iPod, iTunes, and “Steve’s compromise”, allowing iTunes on Windows – as horrible as the software was/is – ensures that iTunes Music is still the answer to 3.

Video is a squishy one – as the market is meandering between streaming content (Netflix/Hulu), over-the-top (OTT) video services like Amazon Instant Video, MLB At Bat, HBO Now, etc., and direct purchase video like iTunes or Google Play. But the wide availability of Apple TV devices, entrenchment of iTunes in the life of lots of music consumers, and disposable income mean that a video content purveyor is highly likely to hit iTunes first – as we often see happen with movies today.

The last one is the most interesting though.

If we look at eBooks, something interesting happened. Amazon wasn’t the first mover – not by a long shot. Microsoft made their Reader software available back in 2000. But their device strategy wasn’t harmonized with the ideas from the team building the software. It was all based around using your desktop (ew), chunky laptop (eventually chunky tablet), or Windows Pocket PC device for reading. Basically, it was trying to sell eBooks as a way to read content on Windows, not really trying to sell eBooks themselves. Amazon revealed their first Kindle in 2007. (This was the first in a line of devices that I personally loathe, because of the screen quality and flicker when you change pages.) Apple revealed the iPad, and rapidly launched iBooks in 2010, eventually taking it to the iPhone and OS X. But the first two generations of iPad were expensive, chunky devices to try and read on, and iBooks not being available on the iPhone and OS X at first didn’t help. (Microsoft finally put down the Reader products in 2012, just ahead of the arrival of the best Windows tablets…<sigh/>) So even though Apple has a strong device story today, and a strong content play in so many other areas, they are (at best) second fiddle in eBooks. They tout strong numbers of active iBooks users… but since every user of iOS and OS X can be an iBooks user, numbers mean little without book sales numbers behind them. Although Amazon’s value-driven marketplace may not be the healthiest place for authors to publish their wares, it appears to be the number one place by far, without much potential for it to be displaced anytime soon.

If your platform isn’t in the leader for a specific type of content, pulling ahead from second place is going to be quite difficult, unless you’ve somehow found some silver bullet. If you’re in third, you have an incredible battle ahead.


22
May 15

Farewell, floppy diskette

I never would have imagined myself in an arm-wrestling match with the floppy disk drive. But sitting where I did in Windows setup, that’s exactly what happened. A few times.

When I had started at Microsoft, a boot floppy was critical to setting up a new machine. Not by the time I was in setup. Since Remote Installation Services (RIS) could start with a completely blank machine, and you could now boot a system to WinPE using a CD, there were two good-sized nails in the floppy diskette’s coffin.

Windows XP was actually the first version of Windows that didn’t ship with boot floppies. It only shipped with a CD. While you could download a tool that would build boot floppies for you, most computers that XP happily ran on supported CD boot by that time. The writing was on the wall for the floppy diskette. In the months after XP released, Bill Gates made an appearance on the American television sitcom Frasier. Early in the episode, a caller asks whether they need diskettes to install Windows XP. For those of us on the team, it was amusing. Unfortunately, the reality was that behind the scenes, there were some issues with customers whose systems didn’t boot from CD, or didn’t boot properly, anyway. We made it through most of those birthing pains, though.

It was both a bit amusing and a bit frustrating to watch OEMs during the early days of Windows XP; while customers often said, “I want a legacy free system”, they didn’t know what that really meant. By “legacy free”, customers usually meant they wanted to abandon all of the legacy connectors (ports) and peripherals used on computers before USB started to hit its stride with Windows 98.

While USB had replaced serial in terms of mice – which were at one time primarily serial – the serial port, parallel port, and floppy disk controller often came integrated together in the computer. We saw some OEMs not include a parallel port, and eventually not include a floppy diskette drive, but still include a serial port – at least inside – for when you needed to debug the computer. When a Windows machine has software problems, you often hook it up to a debugger, an application on another computer, where the developer can “step through” the programming code to figure out what is misbehaving. When Windows XP shipped, a serial cable connection was the primary way to debug. Often, to make the system seem more legacy free than it actually was, this serial port was tucked inside the computer’s case – which made consumers “think” it was legacy free when it technically wasn’t. PCs often needed BIOS updates, too – and even PCs that shipped with Windows XP would still usually boot to an MS-DOS diskette in order to update the BIOS.

My arrival in the Windows division was timely; when I started, USB Flash Drives (UFDs) were just beginning to catch on, but had very little storage space, and the cheapest ones were slow and unreliable. 32MB and 64MB drives were around, but still not commonplace. In early 2002, the idea of USB booting an OS began circling around the Web, and I talked with a few developers within The Firm about it. Unfortunately, there wasn’t a good understanding of what would need to happen for it to work, nor was the UFD hardware really there yet. I tabled the idea for a year, but came back to it every once in a while, trying to research the missing parts.

As I tinkered with it, I found that while many computers supported boot from USB, they only supported USB floppy drives (a ramshackle device that had come about, and largely survived for another 5-10 years, because we were unable to make key changes to Windows that would have helped kill it). I started working with a couple of people around Microsoft to try and glue the pieces together to get WinPE booting from a UFD. I was able to find a PC that would try to boot from the disk, but fail because the disk wasn’t prepared for boot the way a hard disk normally would be. I worked with a developer from the Windows kernel team and one of our architects to get a disk formatted correctly. Windows didn’t like to format UFDs as bootable because they were removable drives; even Windows To Go in Windows 8.1 today boots from special UFDs which are exceptionally fast, and actually lie to the operating system about being removable disks. Finally, I worked with another developer who knew the USB stack when we hit a few issues booting. By early 2003, we had a pretty reliable prototype that worked on my Motion Computing Tablet PC.
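For context on what “prepared for boot the way a hard disk normally would be” means: a BIOS-bootable disk carries a master boot record in its first 512-byte sector, ending in the 0x55AA signature, with one partition entry flagged active (0x80). Here’s a small, illustrative check of a raw disk image for those markers – the real work back then involved partitioning and boot-sector tooling, not a script like this:

```python
import sys

MBR_SIZE = 512
PART_TABLE_OFFSET = 0x1BE      # four 16-byte partition entries start here
BOOT_SIGNATURE = b"\x55\xaa"   # last two bytes of a valid MBR


def looks_bios_bootable(image_path: str) -> bool:
    """Check a raw disk image for an MBR boot signature and an active partition."""
    with open(image_path, "rb") as f:
        mbr = f.read(MBR_SIZE)
    if len(mbr) < MBR_SIZE or mbr[510:512] != BOOT_SIGNATURE:
        return False
    # Byte 0 of each partition entry is the boot indicator; 0x80 marks it active.
    return any(mbr[PART_TABLE_OFFSET + 16 * i] == 0x80 for i in range(4))


if __name__ == "__main__":
    print(looks_bios_bootable(sys.argv[1]))
```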

Getting USB boot working with Windows was one of the most enjoyable features I ever worked on, although it wasn’t a formal project in my review goals (brilliant!). USB boot was even fun to talk about, amongst co-workers and Microsoft field employees. You could mention the idea to people and they just got it. We were finally killing the floppy diskette. This was going to be the new way to boot and repair a PC. Evangelists, OEM representatives, and UFD vendors came out of the woodwork to try and help us get the effort tested and working. One UFD manufacturer gave me a stash of 128MB and larger drives – very expensive at the time – to prepare and hand out to major PC OEMs. It gave us a way to test, and gave the UFD vendor some face time with the OEMs.

For a while, I had a shoebox full of UFDs in my office which were used for testing; teammates from the Windows team would often email or stop by asking to get a UFD prepped so they could boot from it. I helped field employees get it working so many times that for a while, my nickname from some in the Microsoft field was “thumbdrive”, one of the many terms used to refer to UFDs.

Though we never were able to get UFD booting locked in as an official feature until Windows Vista, OEMs used it before then, and it began to go mainstream. Today, you’d be hard pressed to find a modern PC that can’t boot from UFD, though the experience of getting there is a bit of a pain, since the PC boot experience, even with new EFI firmware, still (frankly) sucks.

Computers normally boot from their HDD. But when something goes wrong, or you want to reinstall, you have to boot from something else; a UFD, CD/DVD, PXE server like RIS/WDS, or sometimes an external HDD. Telling your Windows computer what to boot from if something happens is a pain. You have to hit a certain key sequence that is often unique to each OEM. Then you often have to hit yet another key (like F12) to PXE boot. It’s a user experience only a geek could love. One of my ideas was to try and make it easier not only for Windows to update the BIOS itself, but for the user to more easily say what they wanted to boot the PC from (before they shut it down, or by selecting from a pretty list of icons or with a set of keys – like Macs can do). Unfortunately, this effort largely stalled out for over a decade until Microsoft delivered a better recovery, boot, and firmware experience with their Surface tablets. Time will tell whether we’re headed towards a world where this isn’t such a nuisance anymore.

It’s actually somewhat amusing how much of my work revolved around hardware even though I worked in an area of Windows which only made software. But if there was one commonly requested design change that I wish I could have accommodated but couldn’t ever get done, it was F6 from UFD. Let me explain.

When you install Windows, it attempts to use the drivers it ships with on the CD to begin copying Windows down onto the HDD, or to connect over the network to start setup through RIS.

This approach worked alright, but it had one little problem which became significant. Not long after Windows XP shipped, new categories of networking and storage devices began arriving on high-end computers and rapidly making their way downmarket; these all required new drivers in order for Windows to work. Unfortunately, none of these drivers were “in the box” (on the Windows CD) as we liked to say. While Windows Server often needed special drivers to install on some high-end storage controllers before, this was really a new problem for the Windows consumer client. All of a sudden we didn’t have drivers on the CD for the devices that were shipping on a rapidly increasing number of new PCs.

In other words, even with a new computer and a stock Windows XP CD in your hand, you might never get it working. You needed another computer and a floppy diskette to get the ball rolling.

Early on during Windows XP’s setup, it asks you to press the keyboard’s F6 function key if you have special drivers to install. If it can’t find the network and you’re installing from CD, you’ll be okay through setup – but then you have no way to add new drivers or connect to Windows Update. If you were installing through RIS and you had no appropriate network driver, setup would fail. Similarly, if you had no driver for the storage controller on your PC, it wouldn’t ever find a HDD where it could install Windows – so it would terminally fail too. It wasn’t pretty.

Here’s where it gets ugly. As I mentioned, we were entering an era where OEMs wanted to ship, and often were shipping, those legacy-free PCs. These computers often had no built-in floppy diskette drive – which was the only place we could look for F6 drivers at the time. As a result, not long after we shipped Windows XP, we got a series of design change requests (DCRs) from OEMs and large customers to make it so Windows setup could search any attached UFD for drivers as well. While this idea sounds easy, it isn’t. This meant having to add USB code into the Windows kernel so it could search for the drives very early on, before Windows itself had actually loaded and started the normal USB stack. While we could consider doing this for a full release of Windows, it wasn’t something that we could easily do in a service pack – and all of this came to a head in 2002.

Dell was the first company to ever request that we add UFD F6 support. I worked with the kernel team, and we had to say no – the risk of breaking a key part of Windows setup was too great for a service pack or a hotfix, because of the complexity of the change, as I mentioned. Later, a very large bank requested it as well. We had to say no then as well. In a twist of fate, at Winternals I would later become friends with one of the people who had triggered that request, back when he was working on a project onsite at that bank.

Not adding UFD F6 support was, I believe, a mistake. I should have pushed harder, and we should have bitten the bullet and tested it. As a result of our not doing so, a weird little cottage industry of USB floppy diskette drives continued for probably a decade longer than it should have.

So it was, several years after I left, that the much-maligned Windows Vista brought both USB boot of WinPE and USB F6 support, so you could install the operating system on hardware that needed drivers newer than those Windows XP shipped with, and not need a floppy diskette drive to get through setup.

As I sit here writing this, it’s interesting to consider the death of CD/DVD media (“shiny media”, as I often call it) on mainstream computers today. When Apple dropped shiny media from the MacBook Air, people called them nuts – much as they did when Apple dropped the floppy diskette from the original iMac years before. As tablets and Ultrabooks have finally dropped shiny media drives, there’s an odd echo of the floppy drive from years ago. Where external floppy drives were needed for specific scenarios (recovery and deployment), external shiny media drives are still used today for movies, some storage, and installation of legacy software. But in a few years, shiny media will be all but dead – replaced by ubiquitous high-speed wired and wireless networking and pervasive USB storage. Funny to see the circle completed.


12
Feb 15

Bring your own stuff – Out of control?

The college I went to had very small cells… I mean dorm rooms. Two people to a small concrete-walled room, with a closet, bed, and desk that mounted to the walls. The RA on my floor (we’ll call him “Roy”) was a real stickler about making us obey the rules – no televisions or refrigerators unless they were rented from the overpriced facility in our dorm. After all, he didn’t want anybody creating a fire hazard.

But in his room? A large bench grinder and a sanding table, among other toys. Perhaps it was a double standard… but he was the boss of the floor – and nobody in the administration knew about it.

Inside of almost every company, there are several types of Roy, bringing in toys that could potentially harm the workplace. Most likely, the harm will come in the form of data loss or a breach, not a fire as it might if they brought in a bench grinder. But I’m really starting to get concerned that too many companies aren’t mindful of the volume of toys that their own Roys have been bringing in.

Basically, there are three types of things that employees are bringing in through rogue or personal purchasing:

  • Smartphones, tablets, and other mobile devices (BYOD)
  • Standalone software as a service
  • Other cloud services

It’s obvious that we’ve moved to a world where employees are often using their own personal phones or tablets for work – whether it becomes their main device or not. But the level of auditing and manageability offered by these devices, and the level of controls that organizations are actively enforcing on them, all leave a lot to be desired. I can’t fathom the number of personal devices today, most of them likely equipped with no passcode or a weak one, that are currently storing documents that they shouldn’t be. That document that was supposed to be kept only on the server… That billing spreadsheet with employee salaries or patient SSNs… all stored on someone’s phone, with a horrible PIN if one at all, waiting for it to be lost or stolen.

Many “freemium” apps/services offer just enough rope for an employee to hang their employer with. Sign up with your work credentials and work with colleagues – but your organization can’t do anything to manage them without (often) paying.

Finally, we have developers and IT admins bringing in what we’ll call “rogue cloud”. Backing up servers to Azure… spinning up VMs in AWS… all with the convenience of a credit card. Employees with the best of intentions can smurf their way through without getting caught by internal procedures or accounting. A colleague tells a story about a CFO asking, “Why are your developers buying so many books?” The CFO was, of course, asking about Amazon Web Services, but had no idea, since the charges were small, irregular amounts every month, across different developers, from Amazon.com. I worry that the move towards “microservices” and cloud will result in stacks that nobody understands, that run from on-premises to one or more clouds – without an end-to-end design or security review around them.

Whether we’re talking about employees bringing devices, applications, or cloud services, the overarching problem here is the lack of oversight that so many businesses seem to have over these rapidly growing and evolving technologies, and the few working options they have to remediate them. In fact, many freemium services are feeding on this exact problem, and building business models around it. “I’m going to give your employees a tool that will solve a problem they’re having. But in order for you to solve the new problem that your employees will create by using it, you’ll need to buy yet another tool, likely for everybody.”

If you aren’t thinking about the devices, applications, and services that your employees are bringing in without you knowing, or without you managing them, you really might want to go take a look and see what kinds of remodeling they’ve been doing to your infrastructure without you noticing. Want to manage, secure, integrate, audit, review, or properly license the technology your employees are already using? You may need to get your wallet ready.


24
Dec 14

Mobile devices or cloud as a solution to the enterprise security pandemic? Half right.

This is a response to Steven Sinofsky’s blog post, “Why Sony’s Breach Matters”. While I agree with parts of his thesis – the parts about layers of complexity leaving us where we are, and about secured, legacy-free mobile OS’s helping alleviate this on the client side – I’m not sure I agree with his points about the cloud being a path forward, at least in any near term, or to the degree of precision he alludes to.

The bad news is that the Sony breach is not unique. Not by a long shot. It’s not the limit. It’s really the beginning. It’s the shot across the bow for companies that will let them see one example of just how bad this can get. Of course, they should’ve been paying attention to Target, Home Depot, Michaels, and more by this point already.

Instead, the Sony breach is emblematic of the security breaking point that has become increasingly visible over the last 2 years. It would be the limit if the industry turned a corner tomorrow and treated security like their first objective. But it won’t. I believe what I’ve said before – the poor security practices demonstrated by Sony aren’t unique. They’re typical of how too many organizations treat security. Instead of trying to secure systems, they grease the skids just well enough to meet their compliance bar, turning a blind eye to security that’s just “too hard”.

 

While the FBI has been making the Sony attack sound rather unique, the only unique aspect of this one, IMHO, is the scale of success it appears to have achieved. This same attack could be replayed pretty easily. A dab of social engineering… a selection of well-chosen exploits (they’re not that hard to get), and Windows’ own management infrastructure appears to have been used to distribute it.

 

I don’t necessarily see cloud computing yet as the holy grail that you do. Mobile? Perhaps.

 

The personal examples you discussed were all interesting, but indeed were indicative of more of a duct-tape approach, similar to what we had to do with some things in Windows XP during the security push that led up to XPSP2, after XPSP1 failed to fill the holes in the hull of the ship. A lot of really key efforts, like running as non-admin, just couldn’t have been made to work with XP in a short timeframe – they had to be pushed to Vista (where they honestly still hurt users) or Windows 7, where the effort could be taken to really make them work for users from the ground up. But again, much of this was building foundations around the Win32 legacy, which was getting a bit sickly in a world with ubiquitous networking and everyone running as admin.

 

I completely agree as well that we’re long past adding speed bumps. It is immediately apparent, based upon almost every breach I can recall over the past year, that management complexity as a security vector played a significant part in each of them.

If you can’t manage it, you can’t secure it. No matter how many compliance regs the government or your industry throws at you. It’s quite the Gordian knot. Fun stuff.

 

 

I think we also completely agree about how the surface area exposed by today’s systems is to blame for where we are today. See my recent Twitter posts. As I mentioned there, “systems inherently grow to become so complex nobody understands them” – whether you’re talking about programmers, PMs, sysadmins, or compliance auditors.

 

 

I’m inclined to agree with your point about social and the vulnerabilities of layer 8, and yet we also do live in a world where most adults know not to stick a fork into an AC outlet. (Children are another matter.)

Technology needs to be more resilient to user-error or malignant exploitation, until we can actually solve the dancing pigs problem where it begins. Mobile solves part of that problem.

 

When Microsoft was building UAC during Longhorn -> Vista, Mark Russinovich and I were both frustrated that Microsoft wasn’t doing anything with Vista to really nail security down, and so we built a whitelisting app at Winternals to do this for Windows moving forward. (Unfortunately, Protection Manager was crushed for parts after our acquisition, and AppLocker was/is too cumbersome to accomplish this for Win32.) Outside of the longshot of ditching the Intel processor architecture completely, whitelisting is the only thing that can save Win32 from the security mayhem it is experiencing at the moment.

 

I do agree that moving to hosted IaaS really does nothing for an organization, except perhaps drive them to reduce costs in a way that on-premises hosting can’t.

But I guess if there was one statement in particular that I would call out in your blog as something I heartily disagree with, it’s this part:

 

“Everyone has moved up the stack and as a result the surface area dramatically reduced and complexity removed. It is also a reality that the cloud companies are going to be security first in terms of everything they do and in their ability to hire and maintain the most sophisticated cyber security groups. With these companies, security is an existential quality of the whole company and that is felt by every single person in the entire company.”

 

This is a wonderful goal, and it’ll be great for startups that have no legacy codebase (and don’t bring in hundreds of open-source or shared libraries that none of their dev team understands down to the bottom of the stack). But most existing companies can’t do what they should, and cut back the overgrowth in their systems.

I believe pretty firmly that what I’ve seen in the industry over the decade since I left Microsoft is also, unfortunately, the norm – that management, as demonstrated by Sony’s leadership in that interview, will all too often let costs win over security.

 

For organizations that can redesign for a PaaS world, the promise offered by Azure was indeed what you’ve suggested – that designing new services and new applications for a Web-first world can lead to much more well-designed, refined, manageable, and securable applications and systems overall. But the problem is that that model only works well for new applications – not applications that stack refinement over legacy goo that nobody understands. So really, clean room apps only.

The slow uptake of Azure’s PaaS offerings unfortunately demonstrates that this is the exception, and an ideal, not necessarily anything that we can expect to see become the norm in the near future.

 

Also, while Web developers may not be integrating random bits of executable code into their applications, the amount of code reuse across the Web threatens to do the same, although the security perimeter is winnowed down to the browser and PII shared within it. Web developers can and do grab shared .js libraries off the Web in a heartbeat.

Do they understand the perimeter of these files? Absolutely not. No way.

Are the risks here as big as those posed by an unsecured Win32 perimeter? Absolutely not – but I wouldn’t trivialize them either.

There are no more OS hooks, but I’m terrified about how JS is evolving to mimic many of the worst behaviors that Win32 picked up over the years. The surface has changed, as you said – but the risks (loss of personal information, loss of data, phishing, DDoS) are so strikingly similar, especially as we move to a “thicker”, more app-centric Web.

 

Overall, I think we are in for some changes, and I agree with what I believe you’ve said both in your blog and on Twitter, that modern mobile OS’s with a perimeter designed in them are the only safe path forward. The path towards a secure Web application perimeter seems less clear, far less immediate, and perhaps less explicit than your post seemed to allude to.

 

There is much that organizations can learn from the Sony breach.

 

But will they?