22
Aug 17

A few thoughts on Windows 10 S…

A few months ago, before Microsoft announced their new Surface Laptop or Windows 10 S, I had several conversations with reporters and friends about what might be coming. In particular, some early reports had hinted that this might be a revision of Windows, something designed for robustness. Some thought it might be more Chromebook-like. Given my daughters’ experiences with Chromebooks, those two ideas contradict each other. But I digress. What arrived, Windows 10 S (AKA “Windows 10 Pro in S Mode”), wasn’t a revision or really much of a refinement. It was a nuanced iteration of Windows 10 Pro, with built-in Device Guard policies and some carefully crafted changes to the underlying OS infrastructure.

Putting the Surface Laptop aside for now (it’s not my laptop, and I’m not its customer), Windows 10 S seems to me to be an OS full of peculiar compromises, with a narrow set of benefits for end users, at least at this time.

I saw this tweet go by on Twitter a bit ago, and several more followed, discussing the shortcomings of Windows 10 S.

In most conversations I’ve had with reporters recently about Windows, I’ve reemphasized my point that what most customers want isn’t “an OS that does <foo>”. They want a toaster.

What do I mean by that? Think about a typical four-slice toaster:

You use it Sunday morning. It toasts.
You use it Monday morning. It toasts.
You use it Wednesday morning. It toasts.

This is what a huge percentage of the populace wants. A toaster. Normals want it. Schools want it. Most IT workers want it too; frankly, they’re constantly being asked to do more, and given less money to do it with.

The era of tinkering with PCs being fun for normals, and even some technical people, has passed.

So with that in mind, what’s wrong with Windows 10 S? Nothing, I guess. In a way, it is at least a more toasterish model for Windows than we’ve seen before. It’s constrained, and attempts to put a perimeter around the Windows desktop OS to reduce the risk posed by the very features of the OS itself.

I encourage you to read Piotr’s thread, above, before reading further.

Windows 10 S is not:

  • A new edition of Windows (or version, for that matter). It’s effectively a specially configured installation of Windows 10 Pro (one quick way to check this yourself is sketched below)
  • Redesigned for use with touch or tablets, any more than 10 itself is
  • Cloud-backup enabled or cloud recoverable (this one is a shame, IMHO)
  • Free of Win32 and the quirks and challenges that it brings

Those last two are important. Consumers with iOS devices today are generally used to toaster-like experiences when it comes to backing up and recovering their devices, ideally to iCloud, or to a Mac or PC in certain circumstances (yes, exceptions exist). The last one matters because most of the troublesome battery life issues that hit lightweight, low-energy Windows devices can be traced directly back to the cumbersome baggage of Win32 itself, and to Win32 applications engineered for a time when energy was cheap because PCs were plugged in all the time, and everything was about processor power.

So if Windows 10 S isn’t “all new”, what is it?

Technologically, Windows 10 S is designed for the future. Or at least the future Microsoft wants:

  • It offers almost all features of Pro, and can be easily “upgraded” to Pro
  • It natively supports Azure Active Directory domain join and authentication as Pro does, but does not support joining Active Directory at all
  • It supports Windows Store applications only (UWP, and Desktop Bridge apps if crafted correctly); Win32 applications are limited to those that are in-box and approved by Microsoft
  • It is secure by default, at least to the degree that the previous objective, plus the built-in Device Guard policies, can deliver

So it’s an OS that supports the directory, app store, and legacy app distribution models of the future.
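Since 10 S really is a configured-down Pro rather than a new edition, the installed edition shows up in the usual places. Here’s a minimal sketch of checking it with Python, under the assumption (widely reported from pre-release builds) that 10 S carries an EditionID of “Cloud”:

    # Hypothetical check of the installed Windows edition, read from the
    # registry values Windows itself maintains. Assumption: Windows 10 S
    # reports an EditionID of "Cloud", as pre-release builds reportedly did.
    import winreg

    with winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SOFTWARE\Microsoft\Windows NT\CurrentVersion",
    ) as key:
        product_name, _ = winreg.QueryValueEx(key, "ProductName")
        edition_id, _ = winreg.QueryValueEx(key, "EditionID")

    print(product_name, "/", edition_id)
    if edition_id == "Cloud":
        print("This looks like Windows 10 S")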

A question I’ve been asked several times is, “why no AD join?” Initially, I was just going with the “it’s the directory of the future” theory. But there’s more to it. From the day that AD and Group Policy came into Windows, there was an ongoing struggle in terms of performance and cost. Ask anyone who had a Windows 2000 PC how long they had to wait when they logged on every day. A giant chunk of that was Active Directory. Over time, Windows added increasing amounts of messaging to tell you what the OS was doing during logon.

If you go back and look at the 10 S reveal, logon performance was a touted feature. I’ve even seen people on Twitter say that’s why they like 10 S better. Why is it better? I’m sure there are some other reasons as well, but I’m certain that completely obliterating AD integration delivered a huge performance win.

When I look at 10 S then, particularly the Device Guard-based security, the defenestration of Active Directory, and the use of Pro as an underlying OS rather than a new edition, 10 S feels… kind of like a science experiment that escaped the lab. Frankly, Device Guard always kind of looked that way to me too.

But there’s another angle here too, and it’s kind of a weird one.

I don’t know how much Microsoft is selling Windows 10 S to OEMs for, but price is clearly a factor here. Some have assumed that because it’s based on Pro, 10 S costs OEMs the same as Pro, or at least as much as Home. It is not clear whether either is actually the case.

When announced, Microsoft stated that it would ship on PCs starting at US$189. As I said, price is clearly a factor. Given the fact that a one-time upgrade from 10 S to Pro costs US$49, it seems pretty apparent to me that with 10 S, Microsoft has shifted some costs for Pro that used to be borne by OEMs to consumers. While this US$49 upgrade is basically moot for the remainder of this calendar year, eventually it must be considered, as consumers (and some businesses) will need to pay if they require Pro-only functionality.

So the net effect then is that Windows 10 S devices can be cheaper, at least up-front, than Windows 10 Pro devices (and maybe Home). Users who need Pro can “upgrade” to it.

Here’s where I think this gets really interesting. Before too long, we can expect to see ARM-based devices running Windows 10. I think these devices will likely come with 10 S on them, resulting in lower purchase prices, as well as a reduced risk vector if users don’t actually need to run their own library of Win32 applications. In a way then, “Windows 10 S on ARM” offers most of the actual value that Windows RT ever delivered, and far more besides, by supporting Desktop Bridge applications and a complete upgrade to Pro with support for x86 Win32 applications.

Consumers could pay for the upgrade to Pro if they need to run full Win32, or need to upgrade the device to Enterprise for work. In this scenario, I imagine that Chrome will likely be the reason why a number of 10 S users pay for an upgrade.

Just as with the vaguely unannounced “Windows 10 Pro for Workstations”, there’s always a reason why these changes occur, and a strategic objective that Microsoft has planned. For me, 10 S, especially with a pilot launch on Microsoft’s own Surface Laptop hardware, is pretty clearly a sign of a few directions where the company wants to go.

 


08
Dec 16

Windows 10 on ARM. What does it mean?

Yesterday, when I heard the news from Microsoft’s WinHEC announcements stating, “Windows 10 is coming to ARM through a partnership with Qualcomm”, my brain went through a set of loops, trying to work out what this really was, and what it really meant.

Sure, most of us have seen the leaks over the past few weeks about x86 on ARM, but I hadn’t seen enough to find much signal in the noise as to what this was.

But now that I’ve thought about it, most of it makes sense, and if we view the holistic Windows 10 brand as step 1, this is step 2 of blurring the line of what a Windows PC is.

Before we look forward, a bit of history is important. Windows RT was a complex equation to try and reduce – that is, why did it fail? The hardware was expensive, it wasn’t <ahem/> real Windows, it couldn’t run legacy applications at all, and the value proposition and branding were very confusing. Wait. Was I talking about Windows RT, or Windows on Itanium? Hah. Tricked you – it applies to both of them. But let’s let sleeping dogs lie.

So if the lack of support for Windows legacy applications is a problem, and ARM processors are getting faster, how to best address this? Windows 10, the last version of Windows. Now available in a complex amalgam that will be ARM64 native, but run Win32 x86 applications through emulation.

Let’s take a look at a couple of things here, in terms of Q&A. I have received no briefing from Microsoft on this technology – I’m going to make some suppositions here.

Question 1: What is meant by x86 Win32 applications? Everything? How about 64-bit Win32 applications?

This is actually pretty straightforward. It is, as the name would imply, x86 Win32 applications. That means the majority of the legacy applications written during the lifetime of Windows (those capable of running on 32-bit Windows 10 on x86) should work when running on 64-bit Windows 10 on ARM. In general, unless there are some hardware shenanigans performed by the software, I assume that most applications will work. In many ways, I see this emulation behaving sort of like Win32 virtualization (WOW64) on AMD64 systems, albeit with very different internals.

Question 2: Ah, so this is virtualization?

No, this is emulation. You’re tricking x86 Win32 applications into thinking they’re running on a (low-powered) x86 processor.
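As an aside, a process can ask Windows what machine it is really running on. Here’s a minimal Python sketch, assuming a build new enough to export IsWow64Process2 (an API that appeared in Windows 10 releases after this was written):

    # Sketch: ask the OS whether the current process is being emulated,
    # and what the native machine really is. Requires Windows 10 1709+.
    import ctypes
    from ctypes import wintypes

    # Machine constants from the PE/COFF specification.
    IMAGE_FILE_MACHINE_UNKNOWN = 0x0     # in this API: "not emulated"
    IMAGE_FILE_MACHINE_I386 = 0x14C
    IMAGE_FILE_MACHINE_ARM64 = 0xAA64

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.GetCurrentProcess.restype = wintypes.HANDLE
    kernel32.IsWow64Process2.argtypes = [
        wintypes.HANDLE,
        ctypes.POINTER(wintypes.USHORT),
        ctypes.POINTER(wintypes.USHORT),
    ]
    kernel32.IsWow64Process2.restype = wintypes.BOOL

    process_machine = wintypes.USHORT()
    native_machine = wintypes.USHORT()
    if not kernel32.IsWow64Process2(
        kernel32.GetCurrentProcess(),
        ctypes.byref(process_machine),
        ctypes.byref(native_machine),
    ):
        raise ctypes.WinError(ctypes.get_last_error())

    if (process_machine.value == IMAGE_FILE_MACHINE_I386
            and native_machine.value == IMAGE_FILE_MACHINE_ARM64):
        print("x86 code being emulated on ARM64 Windows")
    elif process_machine.value == IMAGE_FILE_MACHINE_UNKNOWN:
        print("running natively; machine type 0x%X" % native_machine.value)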

Question 3: Why only 32-bit?

See a few of the next answers for a crucial piece of this answer, but in short: to save space. You could arguably add support for Win64 (x64, 64-bit) Windows desktop applications, but this would mean additional bloat for the operating system, and offer rapidly diminishing returns. You’re asking a low-powered ARM processor to really run 64-bit applications and make the most of them? No. Get an x64 processor and don’t waste your money.

Question 4: What is the intent here?

As I said on Twitter this morning, “This is not the future of personal computing. This is a nod to the past.” I have written far more words than justified on why Windows on ARM faced challenges. This is, in many ways, the much-needed feature to make it succeed. However, this feature is also a subtle admission of… the need for more. In order to drive Windows the platform forward on ARM, and help birth the forthcoming generations of UWP-optimal systems, there is a need to temper that future with the reality of the past – that businesses and consumers have an utterly wacky amount of time and money involved in legacy Windows desktop applications, and… something something, cold, dead hands. Thus, we will now see the beginning of x86 support on these ARM processors, and a unified brand of Windows that addresses “How do I get this?” For consumers, it will mean a lack of confusion. Buy this PC, and it will be a great tablet when you want a tablet, but it will also run all of that old stuff.

Question 5: Why not just use Project Centennial, and recompile these old desktop apps for UWP?

First, for this to succeed, it must be point-and-shoot. No repackaging. No certificate games. No weird PowerShell scripts. No recompilation. Take my ancient printer driver, and it just works. Take my old copy of MS Money that I shouldn’t be using. It just works. Etc. We’re talking old apps that should be out to pasture. On the consumer side, there is no source code to work with, and no ISV in their right mind will spend time going back and doing the work to support something like this. On the business side, there’s likely nobody around who understands the code or wants to break it. Centennial is a great idea if you are an ISV or enterprise and you want to take your existing Win32 app and begin transmogrifying it into a UWP application through the non-trivial steps needed. But it’s certainly not always the best answer, and it doesn’t do the same thing this will.

Question 6: Wait. So won’t I be able to get ransomware too, then?

I would have to assume the answer to that is… yes. However, it is important to note that Terry showed off Windows 10 Enterprise edition in yesterday’s demo. Why does that matter? Because there, you have the option to use Device Guard to lock down the device, on these PCs that will ship with OEM Windows. That is one step, for orgs willing to pay for Enterprise. I also assume that there will be the option to turn off the Win32 layer through configuration and GPO.

Question 7: So this is like Virtual PC on PowerPC Macs?

Not exactly. That’s a fine example of emulation, but that was Windows stacked on top of the Mac OS. This looks to be, as it should be, a more side-by-side emulation. Run a UWP app, and all of your resources are running on the ARM side natively. Run a legacy app, and all your resources are running on the x86 side. Again, the experience should be much like running 32-bit applications on 64-bit Windows, without the directory tricks used to do it there. That’s certainly what I saw in Terry’s demo. Importantly, this means a couple of things. First, you service the whole thing together. This isn’t a VM, and doesn’t require additional steps to service it. Second, where Terry mentions “CHPE = Compiled Hybrid Portable Executable” here, unless I’m misunderstanding, he’s saying that Windows 10 on ARM is basically running fat binaries. It’s two, two, two OS’s in one.

Question 8: Wait. What does that mean?

Well, if I’m understanding their direction correctly, the build includes resources for A64 (ARM64) and x86 in one binary, meaning that you only need to service one binary to service both… modes? of the OS. Notably, this also means some on-disk bloat. You’re going to need more space for this to work, as you’ve basically got two installs of the OS glued together. Significantly, this is also why you don’t have x64 support too. Because if my theory above holds, adding Win64 would… do amazing things to your remaining disk space.
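If you’re curious which architecture a given binary claims to target, the machine field of its PE header will tell you. A rough Python sketch (and note, as I understand it, CHPE binaries report themselves as plain x86 despite carrying precompiled native code, which is part of the trick):

    # Rough sketch: read the COFF machine field from a PE file's header
    # to see which architecture the binary claims to target.
    import struct

    MACHINES = {
        0x014C: "x86",
        0x01C4: "ARM (Thumb-2)",
        0x8664: "x64",
        0xAA64: "ARM64",
    }

    def pe_machine(path):
        with open(path, "rb") as f:
            if f.read(2) != b"MZ":              # DOS header magic
                raise ValueError("not a PE file")
            f.seek(0x3C)                        # e_lfanew: offset of PE header
            (pe_offset,) = struct.unpack("<I", f.read(4))
            f.seek(pe_offset)
            if f.read(4) != b"PE\0\0":          # PE signature
                raise ValueError("bad PE signature")
            (machine,) = struct.unpack("<H", f.read(2))  # first COFF field
            return MACHINES.get(machine, hex(machine))

    # Example; the path is just an illustration, any PE file will do.
    print(pe_machine(r"C:\Windows\System32\notepad.exe"))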

Question 9: Ah, so UWP is dead?

Heck no. If anything, as I said earlier, this helps UWP in the long run, by reestablishing what Windows is. UWP is still what developers must target if they care about selling anything new, designing for touch, or reaching the collection of devices that Microsoft is driving UWP forward on. I also can envision that this functionality only works when a device is Continuum’d. That is, when you’re docked and ready to work at your desk. This is all about legacy, and your desktop.

Question 10: Ah, so Intel processors are dead?

LOLNO. This is an ARM processor running x86 software. No x64 support. Performance may wind up being fair, but an ARM system will hardly be your destination if you want to do hardcore gaming, data work, development, run VMs… and then there’s the server side, where ARM still has a huge uphill battle ahead of it. This will fill a hole for consumers and low-mid tier knowledge workers. If you cared that the new MBP didn’t have more than 16GB of RAM, well… I digress.

Question 11: Ah, so Windows Mobile is dead?

No. At least not yet. Windows Mobile won’t include this layer, which will likely mean that it also won’t require the storage space. In the long run, a Windows-based ARM64 phone could indeed run Windows 10, and finally blur the line as to what is a Windows phone and what is a Windows PC – and also make Continuum incredibly useful.

 


28
Aug 16

It doesn’t have to be a crapfest

A bit ago, this blog post crossed my Twitter feed. I read it, and while the schadenfreude made me smirk for a minute, it eventually made me feel bad.

The blog post purports to describe how a shitty shutdown dialog came to be. But instead, it documents something I like to call “too many puppies” syndrome. If you are working on a high-visibility area of a product – like the Windows Shell, and Explorer in particular – everybody has a belief that their opinion is the right direction. It’s like dogs and a fire hydrant. My point really isn’t to be derisive here, but to point out that the failure of that project does not seem to be due to any other teams. Instead, it seems to have been due to some combination of unclear goals and a fair amount of the team he was on being lost in the wilderness.

I mentioned on Twitter that, if you are familiar with the organizational structure of Windows, that you can see the cut lines of those teams in the UI. A reply to that mentioned Conway’s law – which I was unfamiliar with, but basically states that as a general concept, a system designed by an organization will reflect the structure of that organization.

But not every project is doomed to live inside its own silo. In fact, some of my favorite projects that I worked on while I was at The Firm were ones that fought the silo, and the user won. Unfortunately, this was novel then, and still feels novel now.

During the development of Windows Server 2003, Bill Veghte, a relatively new VP on the product, led a series of reviews where he had program managers (PMs) across the product walk through their feature area/user scenario, to see how it worked, didn’t work, and how things could perhaps be improved. Owning the enterprise deployment experience for Windows at the time, I had the (mis?)fortune of walking Bill through the setup and configuration experience with a bunch of people from the Windows Server team.

When I had joined the Windows “Whistler” team just before Beta 2, the OS that became Windows XP (described by a teammate as a “lipstick on a chicken” release) was already solidifying, and while we had big dreams of future releases like “Blackcomb” (never happened), Whistler was limited largely by time to the goal of shipping the first NT-based OS to replace both the 9X/ME family for consumers and Windows 2000 in business.

Windows Server, on the other hand, was to ship later. (In reality, much, much later, on a branched source tree, due to the need to <ahem/> revisit XP a few times after we shipped it.) This meant that the Windows Server team could think a bit bigger about shipping the best product for their customers. These scenario reviews, which I really enjoyed attending at the time, were intended to shake out the rattles in the product and figure out how to make it better.

During my scenario review, we walked through the entire setup experience – from booting the CD to configuring the server. If you recall, this meant walking through some really ugly bits of Windows. Text-mode setup. F5 and F6 function keys to install a custom HAL or mass-storage controller drivers during text-mode setup. Formatting a disk in text-mode setup. GUI-mode setup. Fun, fun stuff.

Also, some forget, but this was the first time that Windows Server was likely to ship with different branding from the client OS. Yet the Windows client branding was… everywhere. Setup “billboards” touted OS features that were irrelevant on a server; so did wizards and help files. Setup even loaded drivers for PCMCIA cards and other peripherals that a server would never need or use in the real world, and the shutdown menu offered verbs that made no sense on a server, like standby or hibernate.

A small team of individuals on the server team owned the resulting output from these walkthroughs, which went far beyond setup, and resulted in a bunch of changes to how Windows Server was configured, managed, and more. In terms of my role, I wound up being their liaison for design change requests (DCRs) on the Windows setup team.

There were a bunch of things that were no-brainers – fixing Windows Setup to be branded with Windows Server branding, for example. And there were a ton of changes that, while good ideas, were just too invasive to change, given the timeframe that Windows Server was expected to ship in (and that it was still tethered to XP’s codebase at that time, IIRC). So lots of things were punted out to Blackcomb, etc.

One of my favorite topics of discussion, however, became the Start menu. While Windows XP shipped with a bunch of consumer items in the Start menu, almost everything it put there was… less than optimal on a server. IE, Outlook Express, and… Movie Maker? Heck, the last DCR I had to say no to for XP was a very major customer telling us they didn’t even want Movie Maker in Windows XP Pro! It had no place on servers – nor did Solitaire or the Windows XP tour.

So it became a small thing that David, my peer on the server team, and I tinkered with. I threw together a mockup and sent it to him. (It looked a lot like the finished product you see in this article.) No consumer gunk. But tools that a server administrator might use regularly. David ran this and a bunch of other ideas by some MVPs at an event on campus, and even received applause for the work.

As I recall, I introduced David to Raymond Chen, the guru of all things Windows shell, and Raymond and David wound up working together to resolve several requests that the Windows Server team had in the user interface realm. In the end, Windows Server 2003 (and Server SP1, which brought x64 support) wound up being really important releases to the company, and I think they reflected the beginning of a new maturity at Microsoft on building a server product that really felt… like a server.

The important thing to remember is that there wasn’t really any sort of vehicle to reflect cross-team collaboration within the company then. (I don’t know if there is today.) It generally wasn’t in your review goals (those all usually reflected features in your team’s immediate areas), and compensation surely didn’t reflect it. I sat down with David this week, having not talked for some time, and told him how most of my favorite memories of Microsoft were working on cross-team projects where I helped other teams deliver better experiences by refining where their product/feature crossed over into our area, and sometimes beyond.

I think that if you can look deeply in a product or service that you’re building, and see Conway’s law in action, you need to take a step back. Because you’re building a product for yourself, not for your customers. Building products and services that serve your entire customer base means always collaborating, and stretching the boundaries of what defines “your team”. I believe the project cited in the original blog post I referenced above failed both because there were too many cooks and because, it would seem, anyone with the power to control the conversation forgot what they were cooking.

 

 


27
Jun 16

Compute Stick PCs – Flash in the pan?

A few years ago, following the success of many other HDMI-connected computing devices, a new type of PC arrived – the “compute stick”. Also referred to sometimes as an HDMI PC or a stick PC, the device immediately made me scratch my head a bit.

If Windows 10 still featured a Media Center edition, I guess I could sort of see the point. But Windows, outside of Surface Hub (which seemingly runs a proprietary edition of Windows), no longer features a 10-foot UI in the box. Meaning, without third-party software and nerd-porn duct tape, it’s a computer with a TV as a display, and a very limited use case.

Unlike Continuum on Windows 10 Mobile, I’ve never had a licensing boot camp attendee ask me about compute sticks (and almost none ever asked us about Windows To Go, the mode of booting Windows Enterprise edition off of USB on a random PC).

The early sticks featured 2GB of RAM or less, really limiting their use case even further. With 4GB, more modern versions will run Windows 10 well, but to what end?

I can see some cases where compute sticks might make sense for point of service, but a NUC is likely to be more affordable, powerful, and expandable, and not suffer from heat exhaustion like a compute stick is likely to.

I’ve also heard it suggested that a compute stick is a good potential for the business traveler. But I don’t get that. Using a compute stick requires you to have a keyboard and pointing device with you, and find an AC power source behind a hotel TV or shared workspace. Now I don’t know about you, but while I used to travel with a keyboard to use with my iPad, I don’t anymore… and I never travel with a spare pointing device. And as to finding AC power behind a hotel TV? Shoot me now.

The stick PC has some use cases, sure. Home theater where the user is willing to assemble the UX they want. But that’s nerd porn, not a primary use case, and not a long-term use case (see Media Center edition).

You eventually reach a point where, if you want a PC while you’re on the go, you should haul a PC with you. Laptops, convertibles, and tablets are ridiculously small, and you don’t always have to tote peripherals with you to make them work.

In short, I can see a very limited segment of use cases where compute sticks make sense. (Frankly, it’s a longer list than Windows To Go.) But I think in most cases, upon closer inspection, a NUC (or larger PC), Windows 10 tablet or laptop, or <gasp/> a Windows 10 Mobile device running Continuum is likely to make more sense.

 


03
Feb 16

Surface Pro and iPad Pro – incomparable

0.12 of a pound less in weight. 0.6 inches more of display, measured diagonally.

That’s all that separates the iPad Pro from the Surface Pro (lightest model of each). Add in the fact that both feature the modifier “Pro” in their name, and that they look kind of similar, and it’s hard to not invite comparisons, right? (Of course, what tablets in 2016 don’t look like tablets?)

Over the past few weeks, several reports have suggested that perhaps Apple’s Tablet Grande and Microsoft’s collection of tablet and tablet-like devices may have affected each other’s holiday quarter sales. Given what I’ve said above, I’ve surely even suggested that I might cross-shop one with the other. But man, that would be a mistake.

I’m not going to throw any more numbers at you to try and explain why the iPad Pro and Surface devices aren’t competitors, and shouldn’t be cross-shopped. Okay, only a few more; but it’ll be a minute. Before I do, let’s take a step back and consider the two product lines we’re dealing with.

The iPad Pro is physically Apple’s largest iOS device, by far. But that’s just it. It runs iOS, not OS X. It does not include a keyboard of any kind. It does not include a stylus of any kind. It can’t be used with an external pointing device, or almost any other traditional PC peripheral. (There are a handful of exceptions.)

The Surface Pro 4 is Microsoft’s most recent tablet. It is considered by many pundits to be a “detachable” tablet, which it is – if you buy the keyboard, which is not included. (As an aside, inventing a category called detachables when the bulk of devices in the category feature removable, but completely optional, keyboards seems slightly sketchy to me.) Unlike the iPad Pro, the Surface Pro 4 does include the stylus for the device. You can also connect almost any traditional PC peripheral to a Surface Pro 4 (or Surface 3, or Surface Book).

Again, at this point, you might say, “See, look how much they have in common. 1) A tablet. 2) A standardized keyboard peripheral. 3) A stylus.”

Sure. That’s a few similarities, but certainly not enough to say they’re the same thing. A 120 volt light fixture for use in your home and a handheld flashlight also both offer a standard way to have a light source powered by electrical energy. But you wouldn’t jumble the two together as one category, as they aren’t interchangeable at all. You use them to perform completely different tasks.

The iPad Pro can’t run any legacy applications at all. None for Windows (of course), and none for OS X. Therein lies its Achilles’ heel: it’s great at running iOS apps that have been tuned for it. But if the application you want to run isn’t there, or lacks features found in the Windows or OS X desktop variant you’d normally use (glares at you, Microsoft Word), you’re up the creek. (Here’s where someone will helpfully point out VDI, which is a bogus solution for running legacy business-critical applications that you need with any regularity.)

The Surface Pro offers a contrast at this point. It can run universal Windows platform (UWP) applications, AKA Windows Store apps, AKA Modern apps, AKA Metro apps. (Visualize my hand getting slapped here by platform fans for belaboring the name shifts.) And while the Surface Pro may have an even more constrained selection of platform-optimized UWP apps to choose from, if the one you want isn’t available in the Windows Store, you’ve got over two decades worth of Win32 applications that you can turn to.

Anybody who tells you that either the iPad Pro or the Surface Pro are “no compromise” devices is either lying to you, or they just don’t know that they’re lying to you. They’re both great devices for what they try to be. But both come with compromises.

Several people have also said that the iPad Pro is a “companion device”. But it depends upon the use case as to whether that is true or not. If you’re a hard-core Windows power user, then yes, the iPad Pro must be a companion device. If you regularly need features only offered by Outlook, Excel, Access, or similar Win32 apps of old, then the iPad Pro is not the device for you. But if every app you need is available in the App Store, if you can live within the confines of the limited versions of Microsoft Office for Office 365 on the iPad Pro, or if your productivity tools are all Web-accessible, then the iPad Pro might not only be a good device for you, it might actually be the only device you need. It all comes down to your own requirements. Some PC-using readers at this point will helpfully chime in that the user I’ve identified above doesn’t exist. Not true – they’re just not that user.

If a friend or family member came to me and said, “I’m trying to decide which one to buy – an iPad Pro or Surface Pro.”, I’d step them through several questions:

  1. What do you want to do with it?
  2. How much will you type on it? Will you use it on your lap?
  3. How much will you draw on it? Is this the main thing you see yourself using it for?
  4. How important is running older applications to you?
  5. How important is battery life?
  6. Do you ever want to use it with a second monitor?
  7. Do you have old peripherals that you simply can’t live without? (And what are they?)
  8. Have you bought or ripped a lot of audio or video content in formats that Apple won’t let you easily use anymore? (And how important is that to you?)

These questions will each have a wide variety of answers – in particular question 1. (Question 2 is a trap, as the need to use the device as a true laptop will lead most away from either the iPad Pro or the Surface Pro.) But these questions can easily steer the conversation, and their decision, in the right direction.

I mentioned that I would throw a few more numbers at you:

  • US$1,028.99 and
  • US$1,067.00

These are the base prices for a Surface Pro 4 (Core m3) and iPad Pro, respectively, equipped with a stylus and keyboard. Just a few cups of Starbucks apart from each other. The Surface Pro 4 can go wildly north of this price, depending upon CPU options (iPad Pro offers none) or storage options (iPad Pro only offers one). The iPad Pro also offers cellular connectivity for an additional charge in the premium storage model (not available in the Surface Pro). My point is, at this base price, they’re close to each other, but that is mostly coincidence. It invites comparisons, but deciding between these devices purely on price is a fool’s errand.

The more you want the Surface Pro 4 (or a Surface Book) to act like a workstation PC, the more you will pay. But there is the rub; it can be a workstation too – the iPad Pro can’t ever be. Conversely, the iPad Pro can be a great tablet, where it offers few compromises as a tablet – you could read on it, it has a phenomenal stylus experience for artists, and it’s a great, big, blank canvas for whatever you want to run on it (if you can run it). But it will never run legacy software.

The iPad Pro may be your ideal device if:

  1. You want a tablet that puts power optimization ahead of everything else
  2. Every application you need is available in the App Store
  3. Those apps are available in an iPad Pro-optimized form
  4. The available version of each app has all of the features you need
  5. All of your media content is in Apple formats or available through applications blessed by Apple

The Surface Pro may be your ideal device if:

  1. You want a tablet that is a traditional Windows PC first and foremost
  2. Enough of the applications you want to run on it as a tablet are available in the Windows Store
  3. Those apps support features like Snap and resizing when running on the desktop
  4. You need to run more full-featured, older, or more power-hungry applications, or applications that cannot live within the sandboxed confines of an “app store” platform
  5. You have media content (or apps) that are in formats or categories that Apple will not bless, but that will run on Windows

From the introduction of both devices last year, many people have been comparing and contrasting these two “Pro” devices. I think that doing so is a disservice. In general, a consumer who cross-shops the two devices and buys the wrong one will wind up sorely disappointed. It’s much better to figure out what you really want to do with the device, and buy the right option that will meet your personal requirements.


22
Sep 15

You have the right… to reverse engineer

This NYTimes article about the VW diesel issue and the DMCA made me think about how, 10 years ago next month, the Digital Millennium Copyright Act (DMCA) almost kept Mark Russinovich from disclosing the Sony BMG Rootkit. While the DMCA provides exceptions for reporting security vulnerabilities, it does nothing to allow for reporting breaches of… integrity.

I believe that we need to consider an expansion of how researchers are permitted to, without question, reverse engineer certain systems. While entities need a level of protection in terms of their copyright and their ability to protect their IP, VW’s behavior highlights the risks to all of us when commercial entities can ship black-box code and ensure nobody can question it – technically or legally.

In October of 2005, Mark learned that putting a particular Sony BMG CD in a Windows computer would result in it installing a rootkit. Simplistically, a rootkit is a piece of software – usually installed by malicious individuals – that sits at a low level within the operating system and returns forged results when a piece of software at a higher level asks the operating system to perform an action. Rootkits are usually put in place to allow malware to hide. In this case, the rootkit was being put in place to prevent CDs from being copied. Basically, a lame attempt at digital rights management (DRM) gone too far.
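To make the hiding concrete, here’s a toy Python sketch (emphatically not real rootkit code) of both halves of the story: a hooked directory listing that silently filters out the attacker’s files, and the cross-view comparison, the approach detection tools like RootkitRevealer used, that exposes the lie. The Sony BMG rootkit cloaked anything whose name began with $sys$.

    # Toy illustration of rootkit-style cloaking and cross-view detection.
    # The "low-level view" here is just the real directory listing,
    # standing in for a raw on-disk scan.
    import os

    HIDDEN_PREFIX = "$sys$"   # the prefix the Sony BMG rootkit hid

    def hooked_listdir(path):
        """What a compromised high-level API would return: everything
        except the names the rootkit wants hidden."""
        return [n for n in os.listdir(path) if not n.startswith(HIDDEN_PREFIX)]

    def cross_view_diff(path):
        """Diff the high-level (hooked) view against the low-level view.
        Anything present below but missing above is being cloaked."""
        return sorted(set(os.listdir(path)) - set(hooked_listdir(path)))

    cloaked = cross_view_diff(".")
    if cloaked:
        print("Cloaked entries:", cloaked)
    else:
        print("Views agree; nothing hidden (by this technique, anyway)")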

In late October, Mark researched this, and prepped a blog post outlining what was going on. We talked at length, as he was concerned that his debugging and disclosure of the rootkit might violate the DMCA, a piece of legislation put in place to protect copyrights and prevent reverse engineering of DRM software, among other things. So in essence, to stop exactly what Mark had done. I read over the DMCA several times during the last week of October, and although I’m not a lawyer, I was pretty satisfied that Mark’s actions fit smack dab within the part of the DMCA that was placed there to enable security professionals to diagnose and report security holes. The rootkit that Sony BMG had used to “protect” their CD media had several issues in it, and was indeed creating security holes that were endangering the integrity of Windows systems where the software had unwittingly been installed.

Mark decided to go ahead and publish the blog post announcing the rootkit on October 31, 2005 – Halloween. Within 48 hours, Mark was being pulled in on television interviews and quoted in major press publications, and over the next several months he was repeatedly a headline on Slashdot, the open-source-focused news site – an interesting occurrence for someone who had spent almost his entire career in the Windows realm.

The Sony BMG disclosure was very important – but it almost never happened. Exceptions that allow reverse engineering are great. But security isn’t the only kind of integrity that researchers need to diagnose today. I don’t think we should tolerate laws that keep researchers from ensuring our systems are secure, and that they operate the way that we’ve been told they do.


18
Aug 15

Continuum vs. Continuity – Seven letters is all they have in common

It’s become apparent that there’s some confusion between Microsoft’s Continuum feature in Windows 10, and Apple’s Continuity feature in OS X. I’ve even heard technical people get them confused.

But to be honest, the letters comprising “Continu” are basically all they have in common. In addition to different (but confusingly similar) names, the two features are exclusive to their respective platforms, and they perform completely different tasks that are interesting to consider in light of how each company makes money.

Apple’s Continuity functionality, which arrived first, on OS X Yosemite late in 2014, allows you to hand off tasks between multiple Apple devices. Start a FaceTime call on your iPhone, finish it on your Mac. Start a Pages document on your Mac, finish it on your iPad. If they’re on the same Wi-Fi network, it “just works”. The Handoff feature that switches between the two devices works by showing an icon for the respective app you were using, which lets you begin using the app on the other device. Switching from iOS to OS X is easy. Going the other way is a pain in the butt, IMHO, largely because of how iOS presents the app icon on the lock screen.

Microsoft’s Continuum functionality, which arrived in one form with Windows 10 in July, and will arrive in a different (yet similar) form with Windows 10 Mobile later this year, lets the OS adapt to the use case of the device you’re on. On Windows 10 PC editions, you can switch Tablet Mode off and on, or if the hardware provides it, it can switch automatically if you allow it. Windows 10 in Tablet Mode is strikingly similar to, but different from, Windows 8.1. Tablet mode delivers a full screen Start screen, and full-screen applications by default. Turning tablet mode off results in a Start menu and windowed applications, much like Windows 7.

When Windows 10 Mobile arrives later this year, the included incarnation of Continuum will allow phones that support the feature to connect to external displays in a couple of ways. The user will see an experience that will look like Windows 10 with Tablet mode off, and windowed universal apps. While it won’t run legacy Windows applications, this means a Windows 10 Mobile device could act as a desktop PC for a user that can live within the constraints of the Universal application ecosystem.

Both of these pieces of functionality (I’m somewhat hesitant to call either of them “features”, but I digress) provide strategic value for Apple, and Microsoft, respectively. But the value that they provide is different, as I mentioned earlier.

Continuity is sold as a “convenience” feature. But it’s really a great vehicle for hardware lock-in and upsell. It only works with iOS and OS X devices, so it requires that you use Apple hardware and iCloud. In short: Continuity is intended to help sell you more Apple hardware. Shocker, I know.

Continuum, on the other hand, is designed to be more of a “flexibility” feature. It adds value to the device you’re on, even if that is the only Windows device you own. Yes, it’s designed to be a feature that could help sell PCs and phones too – but the value is delivered independently, on each device you own.

With Windows 8.x, your desktop PC had to have the tablet-based features of the OS, even if they worked against your workflow. Your tablet couldn’t adapt well if you plugged it into an external display and tried to use it as a desktop. Your phone was… well… a phone. Continuum is intended to help users make the most of any individual Windows device, however they use it. Want a phone or tablet to be a desktop and act like it? Sure. Want a desktop to deliver a desktop-like experience and a tablet to deliver a tablet-like experience? No problem. Like Continuity, Continuum is platform-specific, and features like Continuum for Windows 10 Mobile will require all-new hardware. I expect that this Fall’s hardware season will likely continue to bring many new convertibles that automatically switch, helping to make the most of the feature, and could help sell new hardware.

Software vendors built Continuity-like functionality before Apple did, and that’ll surely continue. We’ll see more and more device-to-device bridging in Android and Windows. However, Apple has an advantage here, with their premium consumer base, and owning their entire hardware and software stack.

People have asked me for years if I see Apple making features that look like Continuum. I don’t. At least not trying to make OS X into iOS. We may see Apple try and bridge the tablet and small laptop market here in a few weeks with an iOS device that can act like a laptop, but arguably that customer wouldn’t be a MacBook (Air) customer anyway. It’ll be interesting to see how the iPad evolves/collides into the low-end laptop market.

Hopefully, if you were confused about these two features, that helps clarify what they are – two completely different capabilities, designed to accomplish completely different things.


03
Jun 15

Windows 10 and free. Free answers to frequently asked questions.

I keep hearing the same questions over and over again about Windows 10 and the free* upgrade, so I have decided to put together a set of frequently asked questions about the Windows 10 promotion.

Who gets it?

Q: Is Windows 10 really free?

Yes. It is free. Completely free. But only if you meet the qualifications and take Microsoft up on the offer from a qualified PC before July 29th, 2016.

You must have Windows 7, 8, or 8.1 installed on your x86 or x64 system, and it cannot be an Enterprise edition of Windows (only Home, Pro/Professional, Ultimate, or similar). See the bottom of this page for a significant disclaimer.

Q: Can I get the free upgrade if I have some version of Windows RT?

No free upgrade for you. Microsoft has indicated there’s a little something coming in the pipeline for you at some point, but hasn’t indicated what that would be. It won’t be Windows 10, and it won’t be the full Windows 10 for smartphones and small tablets either. MHO: Expect something more akin to Windows Phone 7.8.

Q: Can I get it for free if I have Enterprise edition of Windows 7, 8, or 8.1?

No. Enterprise edition must be purchased through the Volume Licensing channel, as it always has had to be. Talk to the people in your organization who handle Windows volume licensing.

Q: Can I get it for free if I’m in the Windows Insider program?

No. There’s no magic program rewarding Windows Insiders with a completely free full product. You have to have upgraded the system from a valid license for 7, 8, or 8.1. (See this tweet from @GabeAul.)

Q: Can I get it for free if I have Windows XP or Windows Vista?

No. You’ll need to either buy a legal copy of Windows 7, 8, or 8.1, or just purchase Windows 10 when it becomes available at retail, supposedly in late August, 2015. Your install of Windows does not qualify for the offer.

Q: Can I get it for free if I pirated Windows 7, 8, or 8.1?

Not really, no. If it was “Non-Genuine” before your upgrade, or Windows 10 recognizes it as such, it will still be Non-Genuine after the fact. You may be upgraded, but expect to be nagged. Your OEM might also be able to help you get legit… Or you could always buy a copy.

Q: Can I perform a clean install of Windows 10?

Yes, but you’ll have to do it after you’ve upgraded from a qualified install of Windows 7, 8, or 8.1 first. Then you can perform clean installs on that device at any time. (See yet another tweet from @GabeAul.)

Q: Can I upgrade all of my PCs for free?

Yes, if they each have a qualifying OS version and edition installed. But installing on one device doesn’t give you rights to run Windows 10 on any other system, or move an OEM install to a virtual machine.

Q: Can I upgrade my phone?

This is all about Windows 10 for your x86 or x64 PC, not your Windows Phone. Microsoft will have more details about Windows for phones at some point later this year, when they talk about it being released. It won’t be available at the same time as Windows 10 for PCs and tablets.

 

What edition do I get?

Q: I have Media Center, K, N, Ultimate, or some other transient edition – what do I get?

Check out “What edition of Windows will I get as a part of this free upgrade?” on this page. If you have a K or N install, you will be upgraded to the parent edition for the K or N OS you are licensed for.

Q: When will I get the upgrade?

See “What happens when I reserve?” on this page. In general, once you reserve on that device, it’ll download automatically and you’ll be notified when it is ready to install, on or about July 29th, 2015.

 

What breaks if I upgrade?

Q: Can I still run Windows Media Center after I upgrade to Windows 10?

No. According to this page, if you upgrade a system that is running Media Center software to Windows 10, Media Center will be uninstalled. If you use/love Media Center on a given system, I would strongly advise not upgrading that system to Windows 10.

Mass hysteria

Q: Is this thing running in my system notification area malware?

You might have malware, but the little flag running over there isn’t it. It’s just Microsoft working to get every qualified Windows install that they can to Windows 10 within a year’s time. Enjoy your free lunch.

Q: How do I stop users in my organization from installing Windows 10 on systems I manage?

If it’s a domain-joined Windows Pro system, or a Windows Enterprise system, have no fear. They aren’t getting prompted.

Q: How do I stop users in my organization from installing Windows 10 on BYOD systems I don’t manage?

If it is a system running Windows Home (or similar, like “Windows 8.1” with no suffix), or a Windows Pro/Professional system that isn’t joined to the domain, and you don’t manage it in any way, you’re kind of up the creek on this one. This article provides info on KB3035583, which needs to be uninstalled to stop the promotion, and you’ll need to figure out a way to remove it on each of those systems.
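If you can push any kind of script to those machines, here’s one hedged sketch of automating the removal, assuming Python is present and the script runs elevated (note that Windows Update may simply re-offer the update later unless it is hidden as well):

    # Sketch: uninstall KB3035583 (the "Get Windows 10" promotion) via
    # wusa.exe. Assumes the script is running elevated; Windows Update
    # may re-offer the update later unless it is also hidden.
    import subprocess

    subprocess.run(
        ["wusa.exe", "/uninstall", "/kb:3035583", "/quiet", "/norestart"],
        check=True,  # raise if wusa reports failure
    )
    print("KB3035583 removal requested; a reboot may be required.")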

 

Q: Microsoft will charge me in a year for updates, won’t they?

No. They won’t. Microsoft has stated that they will not charge for “free, ongoing security updates for the supported lifetime of the device.” Microsoft may well charge for a future upgrade to some other version of the OS. But I don’t see them going back on this as stated.

 


22
May 15

Farewell, floppy diskette

I never would have imagined myself in an arm-wrestling match with the floppy disk drive. But sitting where I did in Windows setup, that’s exactly what happened. A few times.

When I started at Microsoft, a boot floppy was critical to setting up a new machine. Not by the time I was in setup. Since Remote Installation Services (RIS) could start with a completely blank machine, and you could now boot a system to WinPE using a CD, there were two good-sized nails in the floppy diskette’s coffin.

Windows XP was actually the first version of Windows that didn’t ship with boot floppies. It only shipped with a CD. While you could download a tool that would build boot floppies for you, most computers that XP happily ran on supported CD boot by that time. The writing was on the wall for the floppy diskette. In the months after XP released, Bill Gates made an appearance on the American television sitcom Frasier. Early in the episode, a caller asks about whether they need diskettes to install Windows XP. For those of us on the team, it was amusing. Unfortunately, the reality was that behind the scenes, there were some issues with customers whose systems didn’t boot from CD, or didn’t boot properly, anyway. We made it through most of those birthing pains, though.

It was both a bit amusing and a bit frustrating to watch OEMs during the early days of Windows XP; while customers often said, “I want a legacy-free system”, they didn’t know what that really meant. By “legacy free”, customers usually meant they wanted to abandon all of the legacy connectors (ports) and peripherals used on computers before USB had started to hit its stride with Windows 98.

While USB had replaced serial in terms of mice – which were at one time primarily serial – the serial port, parallel port, and floppy disk controller often came integrated together in the computer. We saw some OEMs not include a parallel port, and eventually not include a floppy diskette, but still include a serial port – at least inside – for when you needed to debug the computer. When a Windows machine has software problems, you often hook it up to a debugger, an application on another computer, where the developer can “step through” the programming code to figure out what is misbehaving. When Windows XP shipped, a serial cable connection was the primary way to debug. Often, to make the system seem more legacy free than it actually was, this serial port was tucked inside the computer’s case – which made consumers “think” it was legacy free when it technically wasn’t. PCs often needed BIOS updates, too – and even on PCs that shipped with Windows XP, you would still usually boot to an MS-DOS diskette in order to update the BIOS.

My arrival in the Windows division was timely; when I started, USB Flash Drives (UFDs) were just beginning to catch on, but had very little storage space, and the cheapest ones were slow and unreliable. 32MB and 64MB drives were around, but still not commonplace. In early 2002, the idea of USB booting an OS began circling around the Web, and I talked with a few developers within The Firm about it. Unfortunately, there wasn’t a good understanding of what would need to happen for it to work, nor was the UFD hardware really there yet. I tabled the idea for a year, but came back to it every once in a while, trying to research the missing parts.

As I tinkered with it, I found that while many computers supported boot from USB, they only supported USB floppy drives (a ramshackle device that had come about, and largely survived for another 5-10 years, because we were unable to make key changes to Windows that would have helped kill it). I started working with a couple of people around Microsoft to try and glue the pieces together to get WinPE booting from a UFD. I was able to find a PC that would try to boot from the disk, but it failed because the disk wasn’t prepared for boot as a hard disk normally would be. I worked with a developer from the Windows kernel team and one of our architects to get a disk formatted correctly. Windows didn’t like to format UFDs as bootable because they were removable drives; even Windows To Go in Windows 8.1 today boots from special UFDs which are exceptionally fast, and which actually lie to the operating system about being removable disks. Finally, I worked with another developer who knew the USB stack when we hit a few issues booting. By early 2003, we had a pretty reliable prototype that worked on my Motion Computing Tablet PC.
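For the curious, “prepared for boot as a hard disk normally would be” largely comes down to the disk carrying a valid MBR. A minimal sketch of the kind of check involved, assuming Python running elevated on Windows; the PhysicalDrive number is only an example, so pick yours carefully:

    # Sketch: check whether a disk's MBR looks bootable to a BIOS.
    # A BIOS wants the 0x55AA signature at the end of sector 0, and
    # typically one partition entry flagged active (0x80).
    def mbr_looks_bootable(device=r"\\.\PhysicalDrive1"):  # example device
        with open(device, "rb") as f:
            mbr = f.read(512)                  # sector 0
        if mbr[510:512] != b"\x55\xAA":
            return False                       # no boot signature at all
        # Four 16-byte partition entries start at offset 446; byte 0 of
        # each entry is the boot indicator (0x80 = active).
        return any(mbr[446 + i * 16] == 0x80 for i in range(4))

    print("bootable-looking MBR:", mbr_looks_bootable())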

Getting USB boot working with Windows was one of the most enjoyable features I ever worked on, although it wasn’t a formal project in my review goals (brilliant!). USB boot was even fun to talk about, amongst co-workers and Microsoft field employees. You could mention the idea to people and they just got it. We were finally killing the floppy diskette. This was going to be the new way to boot and repair a PC. Evangelists, OEM representatives, and UFD vendors came out of the woodwork to try and help us get the effort tested and working. One UFD manufacturer gave me a stash of 128MB and larger drives – very expensive at the time – to prepare and hand out to major PC OEMs. It gave us a way to test, and gave the UFD vendor some face time with the OEMs.

For a while, I had a shoebox full of UFDs in my office which were used for testing; teammates from the Windows team would often email or stop by asking to get a UFD prepped so they could boot from it. I helped field employees get it working so many times that for a while, my nickname from some in the Microsoft field was “thumbdrive”, one of the many terms used to refer to UFDs.

Though we weren’t able to get UFD booting locked in as an official feature until Windows Vista, OEMs used it before then, and it began to go mainstream. Today, you’d be hard pressed to find a modern PC that can’t boot from UFD, though the experience of getting there is a bit of a pain, since the PC boot experience, even with new EFI firmware, still (frankly) sucks.

Computers boot from their HDD almost all of the time. But when something goes wrong, or you want to reinstall, you have to boot from something else: a UFD, CD/DVD, a PXE server like RIS/WDS, or sometimes an external HDD. Telling your Windows computer what to boot from if something happens is a pain. You have to hit a certain key sequence that is often unique to each OEM. Then you often have to hit yet another key (like F12) to PXE boot. It’s a user experience only a geek could love. One of my ideas was to try and make it easier not only for Windows to update the BIOS itself, but for the user to say what they wanted to boot the PC from – before they shut it down, or by selecting from a pretty list of icons or a set of keys, like Macs can do. Unfortunately, this effort largely stalled out for over a decade, until Microsoft delivered a better recovery, boot, and firmware experience with their Surface tablets. Time will tell whether we’re headed towards a world where this isn’t such a nuisance anymore.

It’s actually somewhat amusing how much of my work revolved around hardware even though I worked in an area of Windows which only made software. But if there was one commonly requested design change request that I wish I could have accommodated but couldn’t ever get done, it was F6 from UFD. Let me explain.

When you install Windows, it attempts to use the drivers it ships with on the CD to begin copying Windows down onto the HDD, or to connect over the network to start setup through RIS.

This approach worked alright, but it had one little problem which became significant. Not long after Windows XP shipped, new categories of networking and storage devices began arriving on high-end computers and rapidly making their way downmarket; these all required new drivers in order for Windows to work. Unfortunately, none of these drivers were “in the box” (on the Windows CD) as we liked to say. While Windows Server often needed special drivers to install on some high-end storage controllers before, this was really a new problem for the Windows consumer client. All of a sudden we didn’t have drivers on the CD for the devices that were shipping on a rapidly increasing number of new PCs.

In other words, even with a new computer and a stock Windows XP CD in your hand, you might never get it working. You needed another computer and a floppy diskette to get the ball rolling.

Early on during Windows XP’s setup, it asks you to press the keyboard’s F6 function key if you have special drivers to install. If it can’t find the network and you’re installing from CD, you’ll be okay through setup – but then you have no way to add new drivers or connect to Windows Update. If you were installing through RIS and you had no appropriate network driver, setup would fail. Similarly, if you had no driver for the storage controller on your PC, it wouldn’t ever find a HDD where it could install Windows – so it would terminally fail too. It wasn’t pretty.

Here’s where it gets ugly. As I mentioned, we were entering an era where OEMs wanted to ship, and often were shipping, those legacy-free PCs. These computers often had no built-in floppy diskette – which was the only place we could look for F6 drivers at the time. As a result, not long after we shipped Windows XP, we got a series of design change requests (DCRs) from OEMs and large customers to make it so Windows setup could search any attached UFD for drivers as well. While this idea sounds easy, it isn’t. This meant having to add Windows USB code into the Windows kernel so it could search for the drives very early on, before Windows itself has actually loaded and started the normal USB stack. While we could consider doing this for a full release of Windows, it wasn’t something that we could easily do in a service pack – and all of this came to a head in 2002.

Dell was the first company to ever request that we add UFD F6 support. I worked with the kernel team, and we had to say no – given the complexity of the change, the risk of breaking a key part of Windows setup was too great for a service pack or a hotfix. Later, a very large bank requested it as well. We had to say no then, too. In a twist of fate, at Winternals I would later become friends with one of the people who had triggered that request, back when he was working on a project onsite at that bank.

Not adding UFD F6 support was, I believe, a mistake. I should have pushed harder, and we should have bitten the bullet and tested it. Because we didn’t, a weird little cottage industry of USB floppy diskette drives continued for probably a decade longer than it should have.

So it was, several years after I left, that the much-maligned Windows Vista brought both USB boot of WinPE and USB F6 support, so you could install the operating system on hardware that needed drivers newer than Windows XP without a floppy diskette drive to get through setup.

As I sit here writing this, it’s interesting to consider the death of CD/DVD media (“shiny media”, as I often call it) on mainstream computers today. When Apple dropped shiny media on the MacBook Air, people called them nuts – much as they did when Apple dropped the floppy diskette on the original iMac years before. As tablets and Ultrabooks have finally dropped shiny media drives, there’s an odd echo of the floppy drive from years ago. Where external floppy drives were needed for specific scenarios (recovery and deployment), external shiny media drives are still used today for movies, some storage and installation of legacy software. But in a few years, shiny media will be all but dead – replaced by ubiquitous high-speed wired and wireless networking and pervasive USB storage. Funny to see the circle completed.


12
Oct 14

It is past time to stop the rash of retail credit card “breaches”

When you go shopping at Home Depot or Lowe’s, there are often tall ladders, saws, key cutters, and forklifts around the shopping floor. As a general rule, most of these tools aren’t for your use at all. You’re supposed to call over an employee if you need any of these tools to be used. Why? Because of risk and liability, of course. You aren’t trained to use these tools, and the insurance that the company holds would never cover its liability if you were injured or died while operating them.

Over the past year, we have seen a colossal failure of American retail and restaurant establishments to adequately secure their point-of-sale (POS) systems. If you’ve somehow missed them all, Brian Krebs’ coverage serves as a good list of many of the major events.

As I’ve watched company after company fall prey to seemingly the same modus operandi as every company before, it has frustrated me more and more. When I wrote You have a management problem, my intention was to highlight that there seems to be a fundamental disconnect between how organizations assess risk and how they actually secure key applications (and systems). But I think it’s actually worse than that.

If you’re a board member or CEO of a US company, and the CIO and CSO of the organization you manage haven’t asked their staff the following question yet, there’s something fundamentally wrong.

That question every C-level in the US should be asking? “Given what happened at Target, Michaels, P.F. Chang’s, etc., what have we done to ensure that our POS systems are adequately defended from this sort of easy exploitation?”

This is the most important question that any CIO and CSO in this country should be asking this year. They should be asking it regularly, reviewing the threat models their staff create to answer it, and performing the work necessary to validate that they have adequately secured their POS infrastructure. This should not be a one-time thing. It should be how the organization regularly operates.

My worry is that within too many orgs, people are either a) not asking this question because they don’t know to ask it, b) dangerously assuming that they are secure, or c) so busy that nobody who knows better feels empowered to pull the emergency brake and bring the train to a standstill to truly examine the comprehensive security footing of their systems.

Don’t listen to people who just reply that the systems are secure because, “We’re PCI compliant.” They’re ducking the responsibility of securing these systems behind the often translucent facade of compliance.

Compliance and security can go hand in hand. But security is never achieved by stamping a system as “compliant”.

Security is achieved by understanding your entire security posture, through threat modeling. For any retailer, restaurateur, or hospitality organization in the US, this means you need to understand how you’re protecting the most valuable piece of information your customers will be sharing with you: their ridiculously insecure 16-digit, magnetically encoded credit card/debit card number. Not their name. Not their email address. Their card number.
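
To underline just how little protection that number carries on its own, consider the Luhn checksum – the only integrity check baked into a card number. Here’s a minimal illustrative sketch in Python (my own example, not anyone’s production code); note that there is no secret anywhere in it. Anyone who can read the digits can validate them and use them.

```python
def luhn_ok(pan: str) -> bool:
    """Return True if a card number passes the Luhn checksum.

    The checksum only guards against typos; it proves nothing about
    who is presenting the number. The PAN itself is the "secret",
    and it's printed right on the card.
    """
    digits = [int(c) for c in pan if c.isdigit()]
    total = sum(digits[-1::-2])  # every 2nd digit, starting from the right
    total += sum(sum(divmod(2 * d, 10)) for d in digits[-2::-2])  # doubled digits
    return total % 10 == 0

print(luhn_ok("4111111111111111"))  # a well-known test number -> True
```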

While it does take time to secure systems, and some of the exploits that took place over 2014 (such as Home Depot) may even have begun before Target discovered and publicized the attack on their systems, we are well past the point where any organization in the US should just be saying, “That was <insert already exploited retailer name>; we have a much more secure infrastructure.” If you’ve got a threat model that proves that, great. But what we’re seeing demonstrated time and again as these “breaches” are announced is that organizations that thought they were secure were not actually secure.

During 2002, when I was in the Windows organization, we had, as some say, a “come to Jesus” moment. I don’t mean that expression to offend anyone, but there are few expressions that adequately capture the fundamental shift that happened. We were all excitedly working on several upcoming versions of Windows, having just battened down, with XP SP1, some of the hatches that had popped open in XP’s original security perimeter.

But due to several major vulnerabilities and exploits in a row, we were ordered (by Bill) to stop engineering completely, and for two months, all we were allowed to work on were tasks related to the Secure Windows Initiative and making Windows more secure, from the bottom up, by threat modeling the entire attack surface of the operating system. It cost Microsoft an immense amount of money and time. But had we not done so, customers would have cost the company far more over time as they gave up on the operating system due to insecurity at the OS level. It was an exercise in investing in proactive security in order to offset future risk – whether to Microsoft, to our customers, or to our customers’ customers.

I realize that IT budgets are thin today. I realize that organizations face more pressure than ever to do more with less. But short of laws holding executives financially responsible for losses incurred on their watch, I’m not sure what will stop the ongoing saga of these largely inexcusable “breaches” we keep seeing. If your organization doesn’t have the resources to secure the technology you have, either hire the staff that can or stop using technology. I’m not kidding. Grab the knucklebusters and some carbonless paper and start taking credit cards like it’s the 1980s again.

The other day, someone on Twitter noted that the recent spate of attacks shouldn’t really be called “breaches”, but instead should be called skimming attacks. Most of these attacks have worked by using RAM scrapers. This approach, first really seen in 2009, hit the big time in 2013. A RAM scraper is a Windows executable (which, <ahem>, isn’t supposed to be there) that scans memory (RAM) on POS systems for track data as US customers’ credit cards are magnetically swiped. This laughably simple stunt is really the key to effectively all of the breaches (which I will, from here on out, refer to as skimming attacks). A piece of software, which shouldn’t ever be on those systems, let alone be able to run on those systems, is freely scanning memory for data which, arguably, should be safe there, even though it is not encrypted.
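
To make concrete just how simple that stunt is, here’s a purely illustrative Python sketch of the pattern-matching core – the same search a defender’s tooling would run over memory dumps to detect leakage. The regular expression approximates the ISO/IEC 7813 Track 2 layout; the function name and sample buffer are hypothetical, and the part that makes a real scraper malware (reading another process’s memory) is deliberately omitted.

```python
import re

# Rough ISO/IEC 7813 Track 2 layout: start sentinel ';', a 13-19 digit
# PAN, '=' separator, YYMM expiry, then service code and discretionary
# data up to the end sentinel '?'.
TRACK2 = re.compile(rb";(\d{13,19})=(\d{4})\d{0,20}\?")

def scan_buffer(buffer: bytes):
    """Yield (PAN, expiry) candidates found in a raw byte buffer."""
    for match in TRACK2.finditer(buffer):
        yield match.group(1).decode(), match.group(2).decode()

# Hypothetical demo: track data sitting unencrypted in memory.
sample = b"\x00junk;4111111111111111=25121010000000000000?junk\x00"
print(list(scan_buffer(sample)))  # [('4111111111111111', '2512')]
```

That’s the whole trick. If plaintext track data exists in RAM, even momentarily, a dozen lines of code can harvest it.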

But here we are. These RAM scrapers violate law #2 of the 10 Immutable Laws of Security; these POS systems are obviously not secured as well as Microsoft, the POS manufacturer, or the VAR that installed them would like them to be; and obviously everyone, including the retailer, assumed they were. These RAM scrapers are usually going to be custom-crafted enough to evade detection by (questionably useful) antivirus software. More importantly, many indications were that, in many cases, these systems were apparently certified as PCI-DSS compliant in the exact same scenario that they were later compromised in. This indicates a fundamental flaw in the compliance definition, tools, and/or auditor. It also indicates some fundamental holes in how these systems are presently defended against exploitation.

As someone who helped ship Windows XP (and contributed a tiny bit to Embedded, which was a sister team to ours), it makes me sad to see these skimming attacks happen. As someone who helped build two application whitelisting products, it makes me feel even worse, because… they didn’t need to happen.

Windows XP Embedded support ends in January of 2016. It’s not dead, and it can be secured properly (but organizations should absolutely be down the road of planning what they will replace XPE with). Both Windows and Linux, in embedded POS devices, suffer the same flaw: platform ubiquity. I can write a piece of malware that’ll run on my Windows desktop, or a Linux system, and it will run perfectly well on these POS systems (if they aren’t secured properly).

The bad guys always take advantage of the broadest, weakest link. It’s the reason why Adobe Flash, Adobe Acrobat, and Java are the points they go after on Windows and OS X. The OSs are hardened enough up the stack that these unmanageable runtimes become the hole that exploitation shellcode often pole-vaults through.

In many of these retail POS skimming attacks, remote maintenance software (used to access a Windows desktop remotely), often secured with a weak password, is the means being used to get code onto these systems. This scenario and exploit vector isn’t unique to retail, either. I guarantee you there are similar easy opportunities for exploitation in critical infrastructure, in the US and beyond.

There are so many levels of wrong here. To start with, these systems:

  1. Shouldn’t have remote access software on them
  2. Shouldn’t be able to run any arbitrary binary that is put on them

These systems shouldn’t have any remote access software on them at all. If they must have it, that software should require physical, not password-based, authentication. These systems should be sealed and single-purpose, with AppLocker or third-party software ensuring that only the Windows (or Linux, as appropriate) applications, drivers, and services explicitly authorized to run on them can do so. If organizations cannot invest in the technology to properly secure these systems, or do not have the skills to do so, they should either hire staff skilled in securing them, cease using PC-based technology and go back to legacy technology, or examine using managed iOS or Windows RT-based devices that can be more readily locked down to run only approved applications.
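
If you’ve never looked at whitelisting up close, the core concept is small enough to sketch. Below is a minimal, hypothetical Python illustration of hash-based allow-listing – the idea behind AppLocker-style rules. Real enforcement lives in the OS and uses signed publisher, path, and hash rules rather than a script, and the hash and names here are made up for illustration.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: SHA-256 hashes of the only binaries this
# single-purpose POS system is ever permitted to execute.
ALLOWED_SHA256 = {
    # hash of the (hypothetical) approved POS application
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_authorized(binary: Path) -> bool:
    """Default-deny: a binary runs only if its hash is explicitly approved.

    Anything not on the list – including a RAM scraper dropped through
    remote-access software – simply never executes.
    """
    digest = hashlib.sha256(binary.read_bytes()).hexdigest()
    return digest in ALLOWED_SHA256
```

The design choice that matters is the default: deny everything, then enumerate what’s allowed – the inverse of antivirus, which allows everything and tries to enumerate what’s bad.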