Aug 16

It doesn’t have to be a crapfest

A while ago, this blog post crossed my Twitter feed. I read it, and while the schadenfreude made me smirk for a minute, it eventually made me feel bad.

The blog post purports to describe how the Windows shutdown dialog became such a shitty shutdown dialog. But instead, it documents something I like to call “too many puppies” syndrome. If you are working on a high-visibility area of a product – like the Windows Shell, and Explorer in particular – everybody believes their opinion is the right one. It’s like dogs and a fire hydrant. My point really isn’t to be derisive here, but to point out that the failure of that project does not seem to be due to any other teams. Instead, it seems to have been due to some combination of unclear goals and a fair portion of the team he was on being lost in the wilderness.

I mentioned on Twitter that, if you are familiar with the organizational structure of Windows, you can see the cut lines of those teams in the UI. A reply to that mentioned Conway’s law – which I was unfamiliar with, but which basically states that a system designed by an organization will reflect the communication structure of that organization.

But not every project is doomed to live inside its own silo. In fact, some of my favorite projects that I worked on while I was at The Firm were ones that fought the silo, and the user won. Unfortunately, this was novel then, and still feels novel now.

During the development of Windows Server 2003, Bill Veghte, a relatively new VP on the product, led a series of reviews where he had program managers (PMs) across the product walk through their feature area/user scenario, to see how it worked, didn’t work, and how things could perhaps be improved. Owning the enterprise deployment experience for Windows at the time, I had the (mis?)fortune of walking Bill through the setup and configuration experience with a bunch of people from the Windows Server team.

When I joined the Windows “Whistler” team just before Beta 2, the OS that became Windows XP – described by a teammate as a “lipstick on a chicken” release – was already solidifying, and while we had big dreams of future releases like “Blackcomb” (which never happened), Whistler was limited largely by time to the goal of shipping the first NT-based OS to replace both ME and the 9x family for consumers, and Windows 2000 for businesses.

Windows Server, on the other hand, was to ship later. (In reality, much, much later, on a branched source tree, due to the need to <ahem/> revisit XP a few times after we shipped it.) This meant that the Windows Server team could think a bit bigger about shipping the best product for their customers. These scenario reviews, which I really enjoyed attending at the time, were intended to shake out the rattles in the product and figure out how to make it better.

During my scenario review, we walked through the entire setup experience – from booting the CD to configuring the server. If you recall, this meant walking through some really ugly bits of Windows. Text-mode setup. F5 and F6 function keys to install a custom HAL or mass-storage controller drivers during text-mode setup. Formatting a disk in text-mode setup. GUI-mode setup. Fun, fun stuff.

Also, some forget, but this was the first time that Windows Server was likely to ship with different branding from the client OS. Yet the Windows client branding was… everywhere. Setup “billboards” touted OS features that were irrelevant on a server; so did wizards and help files. Setup even loaded drivers for PCMCIA cards and other peripherals that a server would never need in the real world, and the shutdown menu offered verbs that made no sense on a server, like standby or hibernate.

A small team of individuals on the server team owned the resulting output from these walkthroughs, which went far beyond setup, and resulted in a bunch of changes to how Windows Server was configured, managed, and more. In terms of my role, I wound up being their liaison for design change requests (DCRs) on the Windows setup team.

There were a bunch of things that were no-brainers – fixing Windows Setup to be branded with Windows Server branding, for example. And there were a ton of changes that, while good ideas, were just too invasive to make given the timeframe that Windows Server was expected to ship in (and the fact that it was still tethered to XP’s codebase at that time, IIRC). So lots of things were punted out to Blackcomb, etc.

One of my favorite topics of discussion, however, became the Start menu. While Windows XP shipped with a bunch of consumer items in the Start menu, almost everything it put there was… less than optimal on a server. IE, Outlook Express, and… Movie Maker? Heck, the last DCR I had to say no to for XP was a very major customer telling us they didn’t even want Movie Maker in Windows XP Pro! It had no place on servers – nor did Solitaire or the Windows XP tour.

So it became a small thing that David, my peer on the server team, and I tinkered with. I threw together a mockup and sent it to him. (It looked a lot like the finished product you see in this article.) No consumer gunk. But tools that a server administrator might use regularly. David ran this and a bunch of other ideas by some MVPs at an event on campus, and even received applause for the work.

As I recall, I introduced David to Raymond Chen, the guru of all things Windows shell, and Raymond and David wound up working together to resolve several requests that the Windows Server team had in the user interface realm. In the end, Windows Server 2003 (and Server SP1, which brought x64 support) wound up being really important releases to the company, and I think they reflected the beginning of a new maturity at Microsoft on building a server product that really felt… like a server.

The important thing to remember is that there wasn’t really any sort of vehicle to reward cross-team collaboration within the company then. (I don’t know if there is today.) It generally wasn’t in your review goals (those all usually reflected features in your team’s immediate areas), and compensation surely didn’t reflect it. I sat down with David this week, having not talked for some time, and told him how most of my favorite memories of Microsoft were of working on cross-team projects where I helped other teams deliver better experiences by refining where their product/feature crossed over into our area, and sometimes beyond.

I think that if you can look deeply into a product or service that you’re building and see Conway’s law in action, you need to take a step back. Because you’re building a product for yourself, not for your customers. Building products and services that serve your entire customer base means always collaborating, and stretching the boundaries of what defines “your team”. I believe the project cited in the original blog post I referenced above failed both because there were too many cooks and because everyone with the power to control the conversation seemingly forgot what they were cooking.



Jun 16

Compute Stick PCs – Flash in the pan?

A few years ago, following the success of many other HDMI-connected computing devices, a new type of PC arrived – the “compute stick”. Also referred to sometimes as an HDMI PC or a stick PC, the device immediately made me scratch my head a bit.

If Windows 10 still featured a Media Center edition, I guess I could sort of see the point. But Windows, outside of Surface Hub (which seemingly runs a proprietary edition of Windows), no longer features a 10-foot UI in the box. Meaning, without third-party software and nerd-porn duct tape, it’s a computer with a TV as a display, and a very limited use case.

Unlike Continuum on Windows 10 Mobile, compute sticks have never come up from a licensing boot camp attendee (almost none ever asked us about Windows To Go either – the feature that boots Windows Enterprise edition off a USB drive on an arbitrary PC).

The early sticks featured 2GB of RAM or less, really limiting their use case even further. With 4GB, more modern versions will run Windows 10 well, but to what end?

I can see some cases where compute sticks might make sense for point of service, but a NUC is likely to be more affordable, powerful, and expandable, and not suffer from heat exhaustion like a compute stick is likely to.

I’ve also heard it suggested that a compute stick is a good option for the business traveler. But I don’t get that. Using a compute stick requires you to have a keyboard and pointing device with you, and to find an AC power source behind a hotel TV or in a shared workspace. Now I don’t know about you, but while I used to travel with a keyboard to use with my iPad, I don’t anymore… and I never travel with a spare pointing device. And as to finding AC power behind a hotel TV? Shoot me now.

The stick PC has some use cases, sure. Home theater where the user is willing to assemble the UX they want. But that’s nerd porn, not a primary use case, and not a long-term use case (see Media Center edition).

You eventually reach a point where, if you want a PC while you’re on the go, you should haul a PC with you. Laptops, convertibles, and tablets are ridiculously small, and you don’t always have to tote peripherals with you to make them work.

In short, I can see a very limited segment of use cases where compute sticks make sense. (Frankly, it’s a longer list than Windows To Go.) But I think in most cases, upon closer inspection, a NUC (or larger PC), Windows 10 tablet or laptop, or <gasp/> a Windows 10 Mobile device running Continuum is likely to make more sense.


Feb 16

Surface Pro and iPad Pro – incomparable

0.12 of a pound less in weight. 0.6 inches more of display, measured diagonally.

That’s all that separates the iPad Pro from the Surface Pro (lightest model of each). Add in the fact that both feature the modifier “Pro” in their name, and that they look kind of similar, and it’s hard not to invite comparisons, right? (Of course, what tablets in 2016 don’t look alike?)

Over the past few weeks, several reports have suggested that Apple’s Tablet Grande and Microsoft’s collection of tablet and tablet-like devices may each have eaten into the other’s holiday-quarter sales. Given what I’ve said above, I’ve surely even suggested that I might cross-shop one against the other. But man, that would be a mistake.

I’m not going to throw any more numbers at you to try and explain why the iPad Pro and Surface devices aren’t competitors, and shouldn’t be cross-shopped. Okay, only a few more; but it’ll be a minute. Before I do, let’s take a step back and consider the two product lines we’re dealing with.

The iPad Pro is physically Apple’s largest iOS device, by far. But that’s just it. It runs iOS, not OS X. It does not include a keyboard of any kind. It does not include a stylus of any kind. It can’t be used with an external pointing device, or almost any other traditional PC peripheral. (There are a handful of exceptions.)

The Surface Pro 4 is Microsoft’s most recent tablet. It is considered by many pundits to be a “detachable” tablet, which it is – if you buy the keyboard, which is not included. (As an aside, inventing a category called detachables when the bulk of devices in the category feature removable, but completely optional, keyboards seems slightly sketchy to me.) Unlike the iPad Pro, the Surface Pro 4 does include a stylus. You can also connect almost any traditional PC peripheral to a Surface Pro 4 (or Surface 3, or Surface Book).

Again, at this point, you might say, “See, look how much they have in common. 1) A tablet. 2) A standardized keyboard peripheral. 3) A Stylus.”

Sure. That’s a few similarities, but certainly not enough to say they’re the same thing. A 120 volt light fixture for use in your home and a handheld flashlight also both offer a standard way to have a light source powered by electrical energy. But you wouldn’t jumble the two together as one category, as they aren’t interchangeable at all. You use them to perform completely different tasks.

The iPad Pro can’t run any legacy applications at all. None for Windows (of course), and none for OS X. Therein lies its Achilles’ heel: it’s great at running iOS apps that have been tuned for it. But if the application you want to run isn’t there, or lacks features found in the Windows or OS X desktop variant you’d normally use (glares at you, Microsoft Word), you’re up the creek. (Here’s where someone will helpfully point out VDI, which is a bogus solution to running legacy business-critical applications that you need with any regularity.)

The Surface Pro offers a contrast at this point. It can run Universal Windows Platform (UWP) applications, AKA Windows Store apps, AKA Modern apps, AKA Metro apps. (Visualize my hand getting slapped here by platform fans for belaboring the name shifts.) And while the Surface Pro may have an even more constrained selection of platform-optimized UWP apps to choose from, if the one you want isn’t available in the Windows Store, you’ve got over two decades’ worth of Win32 applications that you can turn to.

Anybody who tells you that either the iPad Pro or the Surface Pro are “no compromise” devices is either lying to you, or they just don’t know that they’re lying to you. They’re both great devices for what they try to be. But both come with compromises.

Several people have also said that the iPad Pro is a “companion device”. But whether that is true depends on the use case. If you’re a hard-core Windows power user, then yes, the iPad Pro must be a companion device. If you regularly need features only offered by Outlook, Excel, Access, or similar Win32 apps of old, then the iPad Pro is not the device for you. But if every app you need is available in the App Store, you can live within the confines of the limited versions of Microsoft Office for Office 365 on the iPad Pro, or your productivity tools are all Web accessible, then the iPad Pro might not only be a good device for you – it might actually be the only device you need. It all comes down to your own requirements. Some PC-using readers at this point will helpfully chime in that the user I’ve identified above doesn’t exist. Not true – they’re just not that user.

If a friend or family member came to me and said, “I’m trying to decide which one to buy – an iPad Pro or Surface Pro.”, I’d step them through several questions:

  1. What do you want to do with it?
  2. How much will you type on it? Will you use it on your lap?
  3. How much will you draw on it? Is this the main thing you see yourself using it for?
  4. How important is running older applications to you?
  5. How important is battery life?
  6. Do you ever want to use it with a second monitor?
  7. Do you have old peripherals that you simply can’t live without? (And what are they?)
  8. Have you bought or ripped a lot of audio or video content in formats that Apple won’t let you easily use anymore? (And how important is that to you?)

These questions will each have a wide variety of answers – in particular question 1. (Question 2 is a trap, as the need to use the device as a true laptop will lead most away from both the iPad Pro and the Surface Pro.) But these questions can easily steer the conversation, and their decision, in the right direction.

I mentioned that I would throw a few more numbers at you:

  • US$1,028.99 and
  • US$1,067.00

These are the base prices for a Surface Pro 4 (Core m3) and iPad Pro, respectively, equipped with a stylus and keyboard. Just a few cups of Starbucks apart from each other. The Surface Pro 4 can go wildly north of this price, depending upon CPU options (the iPad Pro offers none) or storage options (the iPad Pro offers only one). The iPad Pro also offers cellular connectivity for an additional charge in the premium storage model (not available in the Surface Pro). My point is, at this base price, they’re close to each other – but that proximity is largely coincidence. It invites comparisons, but deciding between these devices purely on price is a fool’s errand.

The more you want the Surface Pro 4 (or a Surface Book) to act like a workstation PC, the more you will pay. But there’s the rub: it can be a workstation too – the iPad Pro can’t ever be. Conversely, the iPad Pro can be a great tablet, one with few compromises as a tablet – you can read on it, it has a phenomenal stylus experience for artists, and it’s a great, big, blank canvas for whatever you want to run on it (if you can run it). But it will never run legacy software.

The iPad Pro may be your ideal device if:

  1. You want a tablet that puts power optimization ahead of everything else
  2. Every application you need is available in the App Store
  3. They are available in an iPad Pro-optimized form
  4. The available version of the app has all of the features you need
  5. All of your media content is in Apple formats or available through applications blessed by Apple.

The Surface Pro may be your ideal device if:

  1. You want a tablet that is a traditional Windows PC first and foremost
  2. Enough of the applications you want to run on it as a tablet are available in the Windows Store
  3. They support features like Snap and resizing when the app is running on the desktop
  4. You need to run more full-featured, older, or more power hungry applications, or applications that cannot live within the sandboxed confines of an “app store” platform
  5. You have media content (or apps) that are in formats or categories that Apple will not bless, but will run on Windows.

Since the introduction of both devices last year, many people have been comparing and contrasting these two “Pro” devices. I think that doing so is a disservice. In general, a consumer who cross-shops the two devices and buys the wrong one will wind up sorely disappointed. It’s much better to figure out what you really want to do with the device, and buy the option that will meet your personal requirements.

Sep 15

You have the right… to reverse engineer

This NYTimes article about the VW diesel issue and the DMCA made me think about how, 10 years ago next month, the Digital Millennium Copyright Act (DMCA) almost kept Mark Russinovich from disclosing the Sony BMG Rootkit. While the DMCA provides exceptions for reporting security vulnerabilities, it does nothing to allow for reporting breaches of… integrity.

I believe that we need to consider an expansion of how researchers are permitted to, without question, reverse engineer certain systems. While entities need a level of protection in terms of their copyright and their ability to protect their IP, VW’s behavior highlights the risks to all of us when commercial entities can ship black-box code and ensure nobody can question it – technically or legally.

In October of 2005, Mark learned that putting a particular Sony BMG CD in a Windows computer would result in it installing a rootkit. Simplistically, a rootkit is a piece of software – usually installed by malicious individuals – that sits at a low level within the operating system and returns forged results when a piece of software at a higher level asks the operating system to perform an action. Rootkits are usually put in place to allow malware to hide. In this case, the rootkit was being put in place to prevent CDs from being copied. Basically, a lame attempt at digital rights management (DRM) gone too far.
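
To make that “forged results” idea concrete, here’s a toy Python sketch of the pattern – not the actual Sony BMG code, which hooked kernel-mode APIs, but the same trick in miniature: the “hooked” call has the same shape as the honest one, it just silently drops anything carrying the cloaking prefix (the real rootkit used “$sys$”).

    import os

    HIDE_PREFIX = "$sys$"  # the marker the Sony BMG rootkit used to cloak its files

    def honest_listdir(path="."):
        """What the OS would normally report."""
        return os.listdir(path)

    def hooked_listdir(path="."):
        """A rootkit-style 'hook': same API shape as the honest call, but any
        entry carrying the magic prefix is silently stripped from the results."""
        return [name for name in honest_listdir(path)
                if not name.startswith(HIDE_PREFIX)]

    if __name__ == "__main__":
        # Anything named $sys$<something> in the current directory shows up in the
        # honest listing but vanishes from the hooked one.
        print(sorted(set(honest_listdir()) - set(hooked_listdir())))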

In late October, Mark researched this, and prepped a blog post outlining what was going on. We talked at length, as he was concerned that his debugging and disclosure of the rootkit might violate the DMCA, a piece of legislation put in place to protect copyrights and prevent reverse engineering of DRM software, among other things. So in essence, to stop exactly what Mark had done. I read over the DMCA several times during the last week of October, and although I’m not a lawyer, I was pretty satisfied that Mark’s actions fit smack dab within the part of the DMCA that was placed there to enable security professionals to diagnose and report security holes. The rootkit that Sony BMG had used to “protect” their CD media had several issues in it, and was indeed creating security holes that were endangering the integrity of Windows systems where the software had unwittingly been installed.

Mark decided to go ahead and publish the blog post announcing the rootkit on October 31, 2005 – Halloween. Within 48 hours, Mark was being pulled in on television interviews and quoted in major press publications, and over the next several months he was repeatedly a headline on Slashdot, the open-source-focused news site – an interesting occurrence for someone who had spent almost his entire career in the Windows realm.

The Sony BMG disclosure was very important – but it almost never happened. Exceptions that allow reverse engineering are great. But security isn’t the only kind of integrity that researchers need to diagnose today. I don’t think we should tolerate laws that keep researchers from ensuring our systems are secure, and that they operate the way that we’ve been told they do.

Aug 15

Continuum vs. Continuity – Seven letters is all they have in common

It’s become apparent that there’s some confusion between Microsoft’s Continuum feature in Windows 10, and Apple’s Continuity feature in OS X. I’ve even heard technical people get them confused.

But to be honest, the letters comprising “Continu” are basically all they have in common. In addition to different (but confusingly similar) names, each feature is exclusive to its own platform, and the two perform completely different tasks – tasks that are interesting to consider in light of how each company makes money.

Apple’s Continuity functionality, which arrived first, with OS X Yosemite late in 2014, allows you to hand off tasks between multiple Apple devices. Start a FaceTime call on your iPhone, finish it on your Mac. Start a Pages document on your Mac, finish it on your iPad. If they’re on the same Wi-Fi network, it “just works”. The Handoff feature that switches between the two devices works by showing an icon for the app you were using, which lets you pick the app back up on the other device. Switching from iOS to OS X is easy. Going the other way is a pain in the butt, IMHO, largely because of how iOS presents the app icon on the lock screen.

Microsoft’s Continuum functionality, which arrived in one form with Windows 10 in July, and will arrive in a different (yet similar) form with Windows 10 Mobile later this year, lets the OS adapt to the use case of the device you’re on. On Windows 10 PC editions, you can switch Tablet Mode off and on, or if the hardware provides it, it can switch automatically if you allow it. Windows 10 in Tablet Mode is strikingly similar to, but different from, Windows 8.1. Tablet mode delivers a full screen Start screen, and full-screen applications by default. Turning tablet mode off results in a Start menu and windowed applications, much like Windows 7.

When Windows 10 Mobile arrives later this year, the included incarnation of Continuum will allow phones that support the feature to connect to external displays in a couple of ways. The user will see an experience that will look like Windows 10 with Tablet mode off, and windowed universal apps. While it won’t run legacy Windows applications, this means a Windows 10 Mobile device could act as a desktop PC for a user that can live within the constraints of the Universal application ecosystem.

Both of these pieces of functionality (I’m somewhat hesitant to call either of them “features”, but I digress) provide strategic value for Apple, and Microsoft, respectively. But the value that they provide is different, as I mentioned earlier.

Continuity is sold as a “convenience” feature. But it’s really a great vehicle for hardware lock-in and upsell. It only works with iOS and OS X devices, so it requires that you use Apple hardware and iCloud. In short: Continuity is intended to help sell you more Apple hardware. Shocker, I know.

Continuum, on the other hand, is designed to be more of a “flexibility” feature. It adds value to the device you’re on, even if that is the only Windows device you own. Yes, it’s designed to be a feature that could help sell PCs and phones too – but the value is delivered independently, on each device you own.

With Windows 8.x, your desktop PC had to have the tablet-based features of the OS, even if they worked against your workflow. Your tablet couldn’t adapt well if you plugged it into an external display and tried to use it as a desktop. Your phone was… well… a phone. Continuum is intended to help users make the most of any individual Windows device, however they use it. Want a phone or tablet to be a desktop and act like it? Sure. Want a desktop to deliver a desktop-like experience and a tablet to deliver a tablet-like experience? No problem. Like Continuity, Continuum is platform-specific, and features like Continuum for Windows 10 Mobile will require all-new hardware. I expect this fall’s hardware season to bring many new convertibles that switch automatically, making the most of the feature – and helping to sell new hardware.

Software vendors made Continuity-like functionality before Apple did, and that’ll surely continue. We’ll see more and more device-to-device bridging in Android and Windows. However, Apple has an advantage here, with their premium consumer base and their ownership of the entire hardware and software stack.

People have asked me for years if I see Apple making features that look like Continuum. I don’t. At least not trying to make OS X into iOS. We may see Apple try and bridge the tablet and small laptop market here in a few weeks with an iOS device that can act like a laptop, but arguably that customer wouldn’t be a MacBook (Air) customer anyway. It’ll be interesting to see how the iPad evolves/collides into the low-end laptop market.

Hopefully, if you were confused about these two features, that helps clarify what they are – and that they’re actually completely different things, designed to accomplish completely different goals.

Jun 15

Windows 10 and free. Free answers to frequently asked questions.

I keep hearing the same questions over and over again about Windows 10 and the free* upgrade, so I have decided to put together a set of frequently asked questions about the Windows 10 promotion.

Who gets it?

Q: Is Windows 10 really free?

Yes. It is free. Completely free. But only if you meet the qualifications and take Microsoft up on the offer from a qualified PC before July 29th, 2016.

You must have Windows 7, 8, or 8.1 installed on your x86 or x64 system, and it cannot be an Enterprise edition of Windows (only Home, Pro/Professional, Ultimate, or similar). See the bottom of this page for a significant disclaimer.

Q: Can I get the free upgrade if I have some version of Windows RT?

No free upgrade for you. Microsoft has indicated there’s a little something coming in the pipeline for you at some point, but hasn’t indicated what that would be. It won’t be Windows 10, and it won’t be the full Windows 10 for smartphones and small tablets either. MHO: Expect something more akin to Windows Phone 7.8.

Q: Can I get it for free if I have Enterprise edition of Windows 7, 8, or 8.1?

No. Enterprise edition must be purchased through the Volume Licensing channel, as it always has had to be. Talk to the people in your organization who handle Windows volume licensing.

Q: Can I get it for free if I’m in the Windows Insider program?

No. There’s no magic program rewarding Windows Insiders with a completely free full product. You have to have upgraded the system from a valid license for 7, 8, or 8.1. (See this tweet from @GabeAul.)

Q: Can I get it for free if I have Windows XP or Windows Vista?

No. You’ll need to either buy a legal copy of Windows 7, 8, or 8.1, or just purchase Windows 10 when it becomes available at retail, supposedly in late August, 2015. Your install of Windows does not qualify for the offer.

Q: Can I get it for free if I pirated Windows 7, 8, or 8.1?

Not really, no. If it was “Non-Genuine” before your upgrade, or Windows 10 recognizes it as such, it will still be Non-Genuine after the fact. You may be upgraded, but expect to be nagged. Your OEM might also be able to help you get legit… Or you could always buy a copy.

Q: Can I perform a clean install of Windows 10?

Yes, but you’ll have to do it after you’ve upgraded from a qualified install of Windows 7, 8, or 8.1 first. Then you can perform clean installs on that device at any time. (See yet another tweet from @GabeAul.)

Q: Can I upgrade all of my PCs for free?

Yes, if they each have a qualifying OS version and edition installed. But installing on one device doesn’t give you rights to run Windows 10 on any other system, or move an OEM install to a virtual machine.

Q: Can I upgrade my phone?

This is all about Windows 10 for your x86 or x64 PC, not your Windows Phone. Microsoft will have more details about Windows for phones at some point later this year, when they talk about it being released. It won’t be available at the same time as Windows 10 for PCs and tablets.
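
Pulling the answers above together, here’s a rough sketch of the qualification logic as I understand it – my own summary of the offer as described on Microsoft’s pages, not anything official:

    from datetime import date

    # My own reading of the offer's rules - not official logic from Microsoft.
    QUALIFYING_VERSIONS = {"7", "8", "8.1"}
    EXCLUDED_EDITIONS = {"Enterprise", "RT"}   # Enterprise goes through Volume Licensing; RT gets "something else"
    OFFER_ENDS = date(2016, 7, 29)

    def qualifies_for_free_upgrade(version, edition, genuine, today):
        """Windows 7/8/8.1, non-Enterprise, genuine, upgraded before the offer ends.
        (Pirated installs may upgrade, but stay Non-Genuine - so treat them as 'no'.)"""
        return (version in QUALIFYING_VERSIONS
                and edition not in EXCLUDED_EDITIONS
                and genuine
                and today <= OFFER_ENDS)

    print(qualifies_for_free_upgrade("8.1", "Pro", True, date(2015, 8, 1)))         # True
    print(qualifies_for_free_upgrade("8.1", "Enterprise", True, date(2015, 8, 1)))  # False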


What edition do I get?

Q: I have Media Center, K, N, Ultimate, or some other transient edition – what do I get?

Check out “What edition of Windows will I get as a part of this free upgrade?” on this page. If you have a K or N install, you will be upgraded to the parent edition for the K or N OS you are licensed for.

Q: When will I get the upgrade?

See “What happens when I reserve?” on this page. In general, once you reserve on that device, it’ll download automatically and you’ll be notified when it is ready to install, on or about July 29th, 2015.


What breaks if I upgrade?

Q: Can I still run Windows Media Center after I upgrade to Windows 10?

No. According to this page, if you upgrade a system that is running Media Center software to Windows 10, Media Center will be uninstalled. If you use/love Media Center on a given system, I would strongly advise not upgrading that system to Windows 10.

Mass hysteria

Q: Is this thing running in my system notification area malware?

You might have malware, but the little flag running over there isn’t it. It’s just Microsoft working to get every qualified Windows install that they can to Windows 10 within a year’s time. Enjoy your free lunch.

Q: How do I stop users in my organization from installing Windows 10 on systems I manage?

If it’s a domain-joined Windows Pro system, or a Windows Enterprise system, have no fear. They aren’t getting prompted.

Q: How do I stop users in my organization from installing Windows 10 on BYOD systems I don’t manage?

If it is a system running Windows Home (or similar, like “Windows 8.1” with no suffix), or a Windows Pro/Professional system that isn’t joined to the domain, and you don’t manage it in any way, you’re kind of up the creek on this one. This article provides info on KB3035583, which needs to be uninstalled to stop the promotion, and you’ll need to figure out a way to remove it on each of those systems.
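
For what it’s worth, here’s a rough sketch of what scripting that removal might look like (run elevated on each machine). The wusa command for uninstalling an update is standard; the two registry values are the ones I’ve seen documented for suppressing the upgrade offer, but treat them as an assumption and verify against Microsoft’s current guidance before deploying anything like this.

    import subprocess
    import winreg

    # Remove the "Get Windows 10" update (KB3035583) quietly, without forcing a reboot.
    subprocess.run(["wusa.exe", "/uninstall", "/kb:3035583", "/quiet", "/norestart"],
                   check=False)

    # Belt and suspenders: policy values commonly documented for suppressing the
    # upgrade offer. (My assumption - confirm before relying on them.)
    POLICY_VALUES = [
        (r"SOFTWARE\Policies\Microsoft\Windows\Gwx", "DisableGwx"),
        (r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate", "DisableOSUpgrade"),
    ]
    for path, name in POLICY_VALUES:
        key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, path, 0, winreg.KEY_SET_VALUE)
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, 1)
        winreg.CloseKey(key)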


Q: Microsoft will charge me in a year for updates, won’t they?

No. They won’t. Microsoft has stated that they will not charge for “free, ongoing security updates for the supported lifetime of the device.” Microsoft may well charge for a future upgrade to some other version of the OS. But I don’t see them going back on this as stated.


May 15

Farewell, floppy diskette

I never would have imagined myself in an arm-wrestling match with the floppy disk drive. But sitting where I did in Windows setup, that’s exactly what happened. A few times.

When I had started at Microsoft, a boot floppy was critical to setting up a new machine. Not by the time I was in setup. Since Remote Installation Services (RIS) could start with a completely blank machine, and you could now boot a system to WinPE using a CD, there were two good-sized nails in the floppy diskette’s coffin.

Windows XP was actually the first version of Windows that didn’t ship with boot floppies. It only shipped with a CD. While you could download a tool that would build boot floppies for you, most computers that XP happily ran on supported CD boot by that time. The writing was on the wall for the floppy diskette. In the months after XP released, Bill Gates made an appearance on the American television sitcom Frasier. Early in the episode, a caller asks whether they need diskettes to install Windows XP. For those of us on the team, it was amusing. Unfortunately, the reality was that behind the scenes, there were some issues with customers whose systems didn’t boot from CD, or didn’t boot properly, anyway. We made it through most of those birthing pains, though.

It was both a bit amusing and a bit frustrating to watch OEMs during the early days of Windows XP; while customers often said, “I want a legacy free system”, they didn’t know what that really meant. By “legacy free”, customers usually meant they wanted to abandon all of the legacy connectors (ports) and peripherals used on computers before USB had started to hit its stride with Windows 98.

While USB had replaced serial in terms of mice – which were at one time primarily serial – the serial port, parallel port, and floppy disk controller often came integrated together in the computer. We saw some OEMs not include a parallel port, and eventually not include a floppy diskette, but still include a serial port – at least inside – for when you needed to debug the computer. When a Windows machine has software problems, you often hook it up to a debugger, an application on another computer, where the developer can “step through” the programming code to figure out what is misbehaving. When Windows XP shipped, a serial cable connection was the primary way to debug. Often, to make the system seem more legacy free than it actually was, this serial port was tucked inside the computer’s case – which made consumers “think” it was legacy free when it technically wasn’t. PCs often needed BIOS updates, too – and even on PCs that shipped with Windows XP, you would still usually boot to an MS-DOS diskette in order to update the BIOS.

My arrival in the Windows division was timely; when I started, USB Flash Drives (UFDs) were just beginning to catch on, but had very little storage space, and the cheapest ones were slow and unreliable. 32MB and 64MB drives were around, but still not commonplace. In early 2002, the idea of USB booting an OS began circulating around the Web, and I talked with a few developers within The Firm about it. Unfortunately, there wasn’t a good understanding of what would need to happen for it to work, nor was the UFD hardware really there yet. I tabled the idea for a year, but came back to it every once in a while, trying to research the missing parts.

As I tinkered with it, I found that while many computers supported boot from USB, they only supported USB floppy drives (a ramshackle device that had come about, and largely survived for another 5-10 years, because we were unable to make key changes to Windows that would have helped kill it). I started working with a couple of people around Microsoft to try and glue the pieces together to get WinPE booting from a UFD. I was able to find a PC that would try to boot from the disk, but it failed because the disk wasn’t prepared for boot as a hard disk normally would be. I worked with a developer from the Windows kernel team and one of our architects to get a disk formatted correctly. Windows didn’t like to format UFDs as bootable because they were removable drives; even Windows To Go in Windows 8.1 today boots from special UFDs which are exceptionally fast, and actually lie to the operating system about being removable disks. Finally, I worked with another developer who knew the USB stack when we hit a few issues booting. By early 2003, we had a pretty reliable prototype that worked on my Motion Computing Tablet PC.
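
For the curious, the disk preparation itself is mundane by today’s standards: partition the stick the way a hard disk would be partitioned, mark the partition active, and format it. Here’s a hedged sketch of how I’d script that now with diskpart – today’s tooling, not the internal prototype we cobbled together in 2003 – and note that it wipes whichever disk number you point it at:

    import subprocess
    import tempfile

    def make_ufd_bootable(disk_number: int):
        """Prepare a USB flash drive the way a bootable hard disk would be prepared:
        clean it, create one active primary partition, and format it FAT32.
        WARNING: destroys everything on the selected disk. Run elevated."""
        script = "\n".join([
            f"select disk {disk_number}",
            "clean",
            "create partition primary",
            "active",                  # mark the partition active so the BIOS will boot it
            "format fs=fat32 quick",
            "assign",
            "exit",
        ])
        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
            f.write(script)
            script_path = f.name
        subprocess.run(["diskpart", "/s", script_path], check=True)

    # Double-check the disk number with diskpart's 'list disk' before running, e.g.:
    # make_ufd_bootable(2)

Copying WinPE onto the drive and pointing the boot code at it is a separate step, of course – the hard part back then was getting Windows to treat a removable disk as bootable at all.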

Getting USB boot working with Windows was one of the most enjoyable features I ever worked on, although it wasn’t a formal project in my review goals (brilliant!). USB boot was even fun to talk about, amongst co-workers and Microsoft field employees. You could mention the idea to people and they just got it. We were finally killing the floppy diskette. This was going to be the new way to boot and repair a PC. Evangelists, OEM representatives, and UFD vendors came out of the woodwork to try and help us get the effort tested and working. One UFD manufacturer gave me a stash of 128MB and larger drives – very expensive at the time – to prepare and hand out to major PC OEMs. It gave us a way to test, and gave the UFD vendor some face time with the OEMs.

For a while, I had a shoebox full of UFDs in my office which were used for testing; teammates from the Windows team would often email or stop by asking to get a UFD prepped so they could boot from it. I helped field employees get it working so many times that for a while, my nickname from some in the Microsoft field was “thumbdrive”, one of the many terms used to refer to UFDs.

Though we never were able to get UFD booting locked in as an official feature until Windows Vista, OEMs used it before then, and it began to go mainstream. Today, you’d be hard pressed to find a modern PC that can’t boot from UFD, though the experience of getting there is a bit of a pain, since the PC boot experience, even with new EFI firmware, still (frankly) sucks.

Computers boot from their HDD almost all the time. But when something goes wrong, or you want to reinstall, you have to boot from something else: a UFD, CD/DVD, a PXE server like RIS/WDS, or sometimes an external HDD. Telling your Windows computer what to boot from if something happens is a pain. You have to hit a certain key sequence that is often unique to each OEM. Then you often have to hit yet another key (like F12) to PXE boot. It’s a user experience only a geek could love. One of my ideas was to try and make it easier not only for Windows to update the BIOS itself, but for the user to more easily say what they wanted to boot the PC from (before shutting it down, or by selecting from a pretty list of icons or a set of keys – like Macs can do). Unfortunately, this effort largely stalled out for over a decade until Microsoft delivered a better recovery, boot, and firmware experience with their Surface tablets. Time will tell whether we’re headed towards a world where this isn’t such a nuisance anymore.

It’s actually somewhat amusing how much of my work revolved around hardware even though I worked in an area of Windows which only made software. But if there was one commonly requested design change request that I wish I could have accommodated but couldn’t ever get done, it was F6 from UFD. Let me explain.

When you install Windows, it attempts to use the drivers it ships with on the CD to begin copying Windows down onto the HDD, or to connect over the network to start setup through RIS.

This approach worked alright, but it had one little problem which became significant. Not long after Windows XP shipped, new categories of networking and storage devices began arriving on high-end computers and rapidly making their way downmarket; these all required new drivers in order for Windows to work. Unfortunately, none of these drivers were “in the box” (on the Windows CD) as we liked to say. While Windows Server often needed special drivers to install on some high-end storage controllers before, this was really a new problem for the Windows consumer client. All of a sudden we didn’t have drivers on the CD for the devices that were shipping on a rapidly increasing number of new PCs.

In other words, even with a new computer and a stock Windows XP CD in your hand, you might never get it working. You needed another computer and a floppy diskette to get the ball rolling.

Early on during Windows XP’s setup, it asks you to press the keyboard’s F6 function key if you have special drivers to install. If it can’t find the network and you’re installing from CD, you’ll be okay through setup – but then you have no way to add new drivers or connect to Windows Update. If you were installing through RIS and you had no appropriate network driver, setup would fail. Similarly, if you had no driver for the storage controller on your PC, it wouldn’t ever find a HDD where it could install Windows – so it would terminally fail too. It wasn’t pretty.

Here’s where it gets ugly. As I mentioned, we were entering an era where OEMs wanted to ship, and often were shipping, those legacy-free PCs. These computers often had no built-in floppy diskette drive – which was the only place we could look for F6 drivers at the time. As a result, not long after we shipped Windows XP, we got a series of design change requests (DCRs) from OEMs and large customers to make it so Windows setup could search any attached UFD for drivers as well. While this idea sounds easy, it isn’t. It meant having to add USB code into the Windows kernel so setup could search for the drives very early on, before Windows itself had actually loaded and started the normal USB stack. While we could consider doing this for a full release of Windows, it wasn’t something that we could easily do in a service pack – and all of this came to a head in 2002.

Dell was the first company to ever request that we add UFD F6 support. I worked with the kernel team, and we had to say no – the risk of breaking a key part of Windows setup was too great for a service pack or a hotfix, because of the complexity of the change, as I mentioned. Later, a very large bank requested it as well. We had to say no then as well. In a twist of fate, at Winternals I would later become friends with one of the people who had triggered that request, back when he was working on a project onsite at that bank.

Not adding UFD F6 support was, I believe, a mistake. I should have pushed harder, and we should have bitten the bullet in testing it. As a result of us not doing it, a weird little cottage industry of USB floppy diskette drives continued for probably a decade longer than it should have.

So it was, several years after I left, that the much-maligned Windows Vista brought both USB boot of WinPE and USB F6 support, so you could install the operating system on hardware that needed drivers newer than Windows XP’s, and not need a floppy diskette drive to get through setup.

As I sit here writing this, it’s interesting to consider the death of CD/DVD media (“shiny media”, as I often call it) on mainstream computers today. When Apple dropped shiny media on the MacBook Air, people called them nuts – much as they did when Apple dropped the floppy diskette on the original iMac years before. As tablets and Ultrabooks have finally dropped shiny media drives, there’s an odd echo of the floppy drive from years ago. Where external floppy drives were needed for specific scenarios (recovery and deployment), external shiny media drives are still used today for movies, some storage and installation of legacy software. But in a few years, shiny media will be all but dead – replaced by ubiquitous high-speed wired and wireless networking and pervasive USB storage. Funny to see the circle completed.

Oct 14

It is past time to stop the rash of retail credit card “breaches”

When you go shopping at Home Depot or Lowe’s, there are often tall ladders, saws, key cutters, and forklifts around the shopping floor. As a general rule, most of these tools aren’t for your use at all. You’re supposed to call over an employee if you need any of these tools to be used. Why? Because of risk and liability, of course. You aren’t trained to use these tools, and the insurance that the company holds would never cover its liability if you were injured or killed while operating them.

Over the past year, we have seen a colossal failure of American retail and restaurant establishments to adequately secure their point-of-sale (POS) systems. If you’ve somehow missed them all, Brian Krebs’ coverage serves as a good list of many of the major events.

As I’ve watched company after company fall prey to seemingly the same modus operandi as every company before, it has frustrated me more and more. When I wrote You have a management problem, my intention was to highlight the fact that there seems to be a fundamental disconnect in how organizations connect risk to the security of key applications (and systems). But I think it’s actually worse than that.

If you’re a board member or CEO of a company in the US, and the CIO and CSO of the organizations you manage haven’t asked their staff the following question yet, there’s something fundamentally wrong.

That question every C-level in the US should be asking? “What happened at Target, Michaels, P.F. Chang’s, etc.… and what have we done to ensure that our POS systems are adequately defended from this sort of easy exploitation?”

This is the most important question that any CIO and CSO in this country should be asking this year. They should be regularly asking this question, reviewing the threat models their staff create to answer it, and performing the work necessary to validate that they have adequately secured their POS infrastructure. This should not be a one-time thing. It should be how the organization regularly operates.

My worry is that within too many orgs, people are either a) not asking this question because they don’t know to ask it, b) dangerously assuming that they are secure, or c) so busy that nobody who knows better feels empowered to pull the emergency brake and bring the train to a standstill to truly examine the comprehensive security footing of their systems.

Don’t listen to people if they just reply by telling you that the systems are secure because, “We’re PCI compliant.” They’re ducking the responsibility of securing these systems through the often translucent facade of compliance.

Compliance and security can go hand in hand. But security is never achieved by stamping a system as “compliant”.

Security is achieved by understanding your entire security posture, through threat modeling. For any retailer, restaurateur, or hospitality organization in the US, this means you need to understand how you’re protecting the most valuable piece of information that your customers will be sharing with you, their ridiculously insecure 16-digit, magnetically encoded credit card/debit card number. Not their name. Not their email address. Their card number.

While it does take time to secure systems, and some of these exploits that have taken place over 2014 (such as Home Depot) may have even begun before Target discovered and publicized the attack on their systems, we are well past the point where any organization in the US should just be saying, “That was <insert already exploited retailer name>, we have a much more secure infrastructure.” If you’ve got a threat model that proves that, great. But what we’re seeing demonstrated time and again as these “breaches” are announced is that organizations that thought they were secure, were not actually secure.

During 2002, when I was in the Windows organization, we had, as some say, a “come to Jesus” moment. I don’t mean that expression to offend anyone. But there are few expressions that adequately capture the fundamental shift that happened. We were all excitedly working on several upcoming versions of Windows, having just sort of battened down some of the hatches that had popped open in XP’s original security perimeter, with XPSP1.

But due to several major vulnerabilities and exploits in a row, we were ordered (by Bill) to stop engineering completely, and for two months, all we were allowed to work on were tasks related to the Secure Windows Initiative and making Windows more secure, from the bottom up, by threat modeling the entire attack surface of the operating system. It cost Microsoft an immense amount of money and time. But had we not done so, customers would have cost the company far more over time as they gave up on the operating system due to insecurity at the OS level. It was an exercise in investing in proactive security in order to offset future risk – whether to Microsoft, to our customers, or to our customers’ customers.

I realize that IT budgets are thin today. I realize that organizations face more pressure to do more with less than ever before. But short of laws holding executives financially responsible for losses that are incurred under their watch, I’m not sure what will stop the ongoing saga of these largely inexcusable “breaches” we keep seeing. If your organization doesn’t have the resources to secure the technology you have, either hire the staff that can or stop using technology. I’m not kidding. Grab the knucklebusters and some carbonless paper and start taking credit cards like it’s the 1980’s again.

The other day, someone on Twitter noted that the recent spate of attacks shouldn’t really be called “breaches”, but instead should be called skimming attacks. Most of these attacks have worked by using RAM scrapers. This approach, first really seen in 2009, hit the big time in 2013. RAM scrapers work by way of a Windows executable (which, <ahem>, isn’t supposed to be there) that scans memory (RAM) on POS systems for the track data read from magnetically swiped US credit cards. This laughably simple stunt is really the key to effectively all of the breaches (which I will, from here on out, refer to as skimming attacks). A piece of software, which shouldn’t ever be on those systems, let alone be able to run on those systems, is freely scanning memory for data which, arguably, should be safe there, even though it is not encrypted.
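
To make “scanning memory for track data” concrete: track 2 data on a magnetic stripe has a rigid shape (a 13-19 digit card number, an “=” separator, a four-digit expiry, then service code and discretionary data), so a scraper – or, more usefully, an investigator combing through a memory dump – needs nothing more than a simple pattern match to spot it. A sketch of the defensive version, with a well-known test card number standing in for real data:

    import re

    # Rough track-2 shape: optional start sentinel, PAN, '=', YYMM expiry, service code,
    # discretionary digits, optional end sentinel.
    TRACK2_PATTERN = re.compile(rb";?\d{13,19}=\d{4}\d{3}\d*\??")

    def find_track2_candidates(blob: bytes):
        """Scan a memory dump (or any byte blob) for track-2-shaped data.
        Matches are only candidates - Luhn-check the card number before acting."""
        return [m.group(0) for m in TRACK2_PATTERN.finditer(blob)]

    # The sort of thing a RAM scraper hopes to find sitting in a POS process:
    sample = b"...junk...;4111111111111111=25121010000012300000?...more junk..."
    print(find_track2_candidates(sample))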

But here we are. With these RAM scrapers violating law #2 of the 10 Immutable Laws of Security, these POS systems are obviously not secured as well as Microsoft, the POS manufacturer, or the VAR that installed them would like them to be – and obviously everyone, including the retailer, assumed they were. These RAM scrapers are usually custom-crafted enough to evade detection by (questionably useful) antivirus software. More importantly, many indications are that these systems were certified as PCI-DSS compliant in the exact same configuration in which they were later compromised. This indicates either a fundamental flaw in the compliance definition, tools, and/or auditor. It also indicates some fundamental holes in how these systems are presently defended against exploitation.

As someone who helped ship Windows XP (and contributed a tiny bit to Embedded, which was a sister team to ours), it makes me sad to see these skimming attacks happen. As someone who helped build two application whitelisting products, it makes me feel even worse, because… they didn’t need to happen.

Windows XP Embedded leaves support in January of 2016. It’s not dead, and it can be secured properly (but organizations should absolutely be down the road of planning what they will replace XPE with). Both Windows and Linux, in embedded POS devices, suffer the same flaw: platform ubiquity. I can write a piece of malware that’ll run on my Windows desktop (or a Linux system), and it will run perfectly well on a POS device built on the same platform (if it isn’t secured properly).

The bad guys always take advantage of the broadest, weakest link. It’s the reason why Adobe Flash, Acrobat, and Java are the points they go after on Windows and OS X. The OSs are hardened enough up the stack that these unmanageable runtimes become the hole that exploitation shellcode often pole-vaults through.

In many of these retail POS skimming attacks, remote maintenance software (used to access a Windows desktop remotely), often secured with a poor password, is the means used to get code onto these systems. This scenario and exploit vector isn’t unique to retail, either. I guarantee you there are similar easy opportunities for exploit in critical infrastructure, in the US and beyond.

There are so many levels of wrong here. To start with, these systems:

  1. Shouldn’t have remote access software on them
  2. Shouldn’t have the ability to run any arbitrary binary that is put on them.

These systems shouldn’t have any remote access software on them at all. If they must have it, that software should implement physical, not password-based, authentication. These systems should be sealed, single purpose, and have AppLocker or third-party software to ensure that only the Windows (or Linux, as appropriate) applications, drivers, and services that are explicitly authorized to run on them can do so. If organizations cannot invest in the technology to properly secure these systems, or do not have the skills to do so, they should either hire staff skilled in securing them, cease using PC-based technology and go back to legacy technology, or examine using managed iOS or Windows RT-based devices that can be more readily locked down to run only approved applications.
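
AppLocker does this with publisher, path, and hash rules. Conceptually, the hash-rule variant boils down to something like the sketch below – a toy illustration of the default-deny, allow-list principle, not how AppLocker is actually implemented or configured (the hashes and file names are made up):

    import hashlib

    # Toy allow-list: SHA-256 hashes of the only binaries this POS image should ever run.
    # (Hypothetical placeholder values - a real list would come from your golden image.)
    APPROVED_HASHES = {
        "d2f0c0ffee...",    # hypothetical hash of pos-register.exe
        "9b41deadbeef...",  # hypothetical hash of receipt-printer-service.exe
    }

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def may_execute(path: str) -> bool:
        """Default deny: a binary runs only if its hash is on the approved list."""
        return sha256_of(path) in APPROVED_HASHES

A RAM scraper dropped via remote-access software fails that check by definition – which is the whole point.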

Aug 14

My path forward

Note: I’m not leaving Seattle, or leaving Directions on Microsoft. I just thought I would share the departure email I sent in 2004. Today, August 6, 2014 marks the tenth anniversary of the day I left Microsoft and Seattle to work at Winternals in Austin. For those who don’t know – earlier that day, Steve Ballmer had sent a company-wide memo entitled “Our path forward”, hence my tongue-in-cheek subject selection.

From: Wes Miller
Sent: Tuesday, July 06, 2004 2:32 PM
To: Wes Miller
Subject: My path forward

Seven years ago, when I moved up from San Jose to join Microsoft, I wondered if I was doing the right thing… Not that I was all that elated working where I was, but rather we all achieve a certain level of comfort in what we know, and we fear that which we don’t know. I look back on the last seven years and it’s been an amazing, fun, challenging, and sometimes stressful experience – experiences that I would never trade for anything.

At the same time, for family reasons and for personal reasons, I’ve had to do some soul searching that retraced the memories I have from, and steps I went through when I initially came to Microsoft, and I have accepted a position working for a small software company in Austin, TX. My last day at Microsoft will be Friday August 6, one month from today. The best way to reach me after that until my new address is set up is <redacted>. Between now and August 6th I will be doing my best to meet with any of you that need closure on deployment or LH VPC related issues before my departure. Please do let me know if you need something from me between now and then.

Many thanks to those of you who I have worked with over the years – take care of yourselves, and stay in touch.


Apr 14

Complex systems are complex (and fragile)

About every two months, a colleague and I travel to various cities in the US (and sometimes abroad) to teach Microsoft customers how to license their software effectively over a rather intense two-day course.

Almost none of these attendees want to game the system. Instead, most come (often repeatedly, sometimes with more people each time) to simply understand the ever-changing rules, how to apply them correctly, and how to (as I often hear it said) “do the right thing”.

Doing the right thing, whether we’re talking licensing, security, compliance, and beyond, often isn’t cheap. It takes planning, auditing, understanding the entire system, understanding an application lifecycle, and hiring competent developers and testers to help build and verify everything.

In the case of software licensing, we’ve generally found that there is no one single person who knows the breadth of a typical organization’s infrastructure. How can there possibly be? But the problem is that if you want to license effectively (or build systems that are secure, compliant, or reliable), an individual or group of individuals must understand the entire integrated application stack – or face the reality that there will be holes. And what about the technology itself, when issues like Heartbleed come along and expose fundamental flaws across the Internet?

The reality is that complex systems are complex. But it is because of this complexity that these systems must be planned, documented, and clearly understood at some level, or we’re kidding ourselves that we can secure, protect, defend (and properly pay for) these systems, and have them be available with any kind of reliability.

Two friends on Twitter had a dialog the other day about responsibility/culpability when open source components are included in an application/system. One commented, “I never understand why doing it right & not getting sued for doing it wrong aren’t a strong argument.”

I get what she means. But unfortunately, having been at a small ISV that wound up suing a much larger retail company for pirating our software, I’ve seen that “doing the right thing” in business sometimes comes down to “doing the cheap, quick, or lazy thing”. In our case, an underling at the retail company had told us they were pirating our software, and he wanted to rectify it. He wanted to do the right thing. Negotiations occurred to try and come to closure about the piracy, but when it came down to paying the bill for the software that had been used/was being used, a higher-up vetoed the payment due to us. Why? Simple risk management. Cheaper was believed to be better than the right thing. This tiny Texas software company couldn’t ever challenge them in court and win – or so they thought (for posterity: we could, and we did).

Unfortunately, we hear stories of this sort of thing all the time. It’s a game of chicken, and it isn’t unusual – it happens in software constantly.

I wish I could say that I’m shocked when I hear of companies taking shortcuts – improperly using open-source (or commercial) software outside the bounds of how it is licensed, deploying complex systems without understanding their security threat model, or continuing to run software after it has left support. But no. Not much really surprises me anymore.

What does concern me, though, is that the world assumed that OpenSSL was secure, and that it had been reviewed and audited by enough skilled eyes to avoid elementary bugs like the one that created Heartbleed. But no, that’s not the case. Like any complex system, there’s a certain point where countless people around the world just assumed that OpenSSL worked, accepted it, and deployed it; yet here it failed at a fundamental level for two years.

In a recent interview, the developer responsible for the flaw behind Heartbleed discussed the issue, stating, “But in this case, it was a simple programming error in a new feature, which unfortunately occurred in a security relevant area.”

I can’t tell you how troubling I find that statement. Long ago, Microsoft had a sea change with regard to how software was developed. Key components of this change involved:

  1. Developing threat models in order to be certain we understood the types and angles of approach for any threat vectors we could find
  2. Deeper security foundations across the OS and applications
  3. Finally, a much more comprehensive approach to testing (in large part to try and ensure that “simple programming errors in new features” wouldn’t blow the entire system apart).

No, even Microsoft’s system is not perfect, and flaws still happen, even with new operating systems. But as I noted, I find it remarkably troubling that a flaw as significant as Heartbleed can make it through development, peer review, any bounds-checking testing done in the OpenSSL development process, and into release (where it will generally be accepted as “known good” by the community at large – warranted or not) for two years. It’s also concerning that the statement included that the Heartbleed flaw “unfortunately occurred in a security relevant area”. As I said on Twitter – this is OpenSSL. The entire thing should be considered to be a security relevant area.
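
The class of bug itself is easy to illustrate. Heartbleed came down to trusting a length field supplied by the peer instead of checking it against the data actually received. Here’s a simplified Python analogue of the pattern – the real bug lived in OpenSSL’s C heartbeat handling, not in anything like this code, but the shape of the mistake is the same:

    def echo_heartbeat_buggy(payload: bytes, claimed_len: int, adjacent_memory: bytes) -> bytes:
        """Trusts the peer's claimed length. If it exceeds the payload, the reply
        spills whatever sits next to the payload back to the peer."""
        buffer = payload + adjacent_memory      # stand-in for neighboring process memory
        return buffer[:claimed_len]

    def echo_heartbeat_fixed(payload: bytes, claimed_len: int, adjacent_memory: bytes) -> bytes:
        """Bounds check first: never echo more than was actually sent."""
        if claimed_len > len(payload):
            return b""                          # silently discard the malformed request
        return payload[:claimed_len]

    secrets = b"PRIVATE-KEY-MATERIAL........."
    print(echo_heartbeat_buggy(b"hat", 20, secrets))   # leaks b'hatPRIVATE-KEY-MA...'
    print(echo_heartbeat_fixed(b"hat", 20, secrets))   # returns b''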

The biggest problem with this issue is that there should be ongoing threat modeling and bounds checking among users of OpenSSL (or any software – open or commercial), and in this case within the OpenSSL development community, to ensure that the software is actually secure. As with any complex system, there’s a uniform expectation that a project like this results in code that can be generally regarded as safe. But most companies simply assume a project as mature and ubiquitous as OpenSSL is safe, do little to no verification of the software, deploy it, and later hear through others about vulnerabilities in it.
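
Even the most minimal form of acceptance checking – confirming that the third-party bits you deploy are the bits the project actually released – is a step many shops skip. A sketch, with a placeholder URL and digest rather than real release values:

    import hashlib
    import urllib.request

    # Placeholders for illustration only - substitute the project's real release URL
    # and the digest it publishes (obtained out-of-band) before using this pattern.
    TARBALL_URL = "https://example.org/openssl-x.y.z.tar.gz"
    EXPECTED_SHA256 = "0" * 64

    def fetch_and_verify(url: str, expected_sha256: str) -> bytes:
        """Download a third-party component and refuse to use it unless its hash
        matches the digest published by the project."""
        data = urllib.request.urlopen(url).read()
        actual = hashlib.sha256(data).hexdigest()
        if actual != expected_sha256:
            raise ValueError(f"digest mismatch: expected {expected_sha256}, got {actual}")
        return data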

In the complex stacks of software today, most businesses aren’t qualified to, simply aren’t willing to, or aren’t aware of the need to perform acceptance checking on third-party software they’re using in their own systems (and likely don’t have developers on staff who are qualified to review software such as OpenSSL). As a result, a complex and fragile system becomes even more complex. And even more fragile. Even more dangerously, without any level of internal testing, these systems of internal and external components are assumed to be reliable, safe, and secure – until time (and usually a highly technical developer being compensated for finding vulnerabilities) shows that not to be the case, and then we find ourselves in goose-chase mode, as we are right now.