06
Aug 14

My path forward

Note: I’m not leaving Seattle, or leaving Directions on Microsoft. I just thought I would share the departure email I sent in 2004. Today, August 6, 2014, marks the tenth anniversary of the day I left Microsoft and Seattle to work at Winternals in Austin. For those who don’t know – earlier that day, Steve Ballmer had sent a company-wide memo entitled “Our path forward”, hence my tongue-in-cheek subject selection.

From: Wes Miller
Sent: Tuesday, July 06, 2004 2:32 PM
To: Wes Miller
Subject: My path forward

Seven years ago, when I moved up from San Jose to join Microsoft, I wondered if I was doing the right thing… Not that I was all that elated working where I was, but rather we all achieve a certain level of comfort in what we know, and we fear that which we don’t know. I look back on the last seven years and it’s been an amazing, fun, challenging, and sometimes stressful experience – experiences that I would never trade for anything.

At the same time, for family reasons and for personal reasons, I’ve had to do some soul searching that retraced the memories I have from, and steps I went through when I initially came to Microsoft, and I have accepted a position working for a small software company in Austin, TX. My last day at Microsoft will be Friday August 6, one month from today. The best way to reach me after that until my new address is set up is <redacted>. Between now and August 6th I will be doing my best to meet with any of you that need closure on deployment or LH VPC related issues before my departure. Please do let me know if you need something from me between now and then.

Many thanks to those of you who I have worked with over the years – take care of yourselves, and stay in touch.

Thanks,
Wes


13
Apr 14

Complex systems are complex (and fragile)

About every two months, a colleague and I travel to various cities in the US (and sometimes abroad) to teach Microsoft customers, over a rather intense two-day course, how to license their software effectively.

Almost none of these attendees want to game the system. Instead, most come (often repeatedly, sometimes with more people each time) to simply understand the ever-changing rules, how to apply them correctly, and how to (as I often hear it said) “do the right thing”.

Doing the right thing – whether we’re talking about licensing, security, compliance, or something else – often isn’t cheap. It takes planning, auditing, understanding the entire system, understanding an application lifecycle, and hiring competent developers and testers to help build and verify everything.

In the case of software licensing, we’ve generally found that there is no single person who knows the breadth of a typical organization’s infrastructure. How can there possibly be? But the problem is that if you want to license effectively (or build systems that are secure, compliant, or reliable), an individual or group of individuals must understand the entire integrated application stack – or face the reality that there will be holes. And what about the technology itself, when issues like Heartbleed come along and expose fundamental flaws across the Internet?

The reality is that complex systems are complex. But it is because of this complexity that these systems must be planned, documented, and clearly understood at some level, or we’re kidding ourselves that we can secure, protect, defend (and properly pay for) these systems, and have them be available with any kind of reliability.

Two friends on Twitter had a dialog the other day about responsibility/culpability when open source components are included in an application/system. One commented, “I never understand why doing it right & not getting sued for doing it wrong aren’t a strong argument.”

I get what she means. But having been at a small ISV that wound up suing a much larger retail company for pirating our software, I unfortunately know that “doing the right thing” in business sometimes comes down to “doing the cheap, quick, or lazy thing”. In our case, an underling at the retail company had told us they were pirating our software, and he wanted to rectify it. He wanted to do the right thing. Negotiations occurred to try and come to closure about the piracy, but when it came down to paying the bill for the software that had been used and was still being used, a higher-up vetoed the payment due to us. Why? Simple risk management. Cheaper was believed to be better than the right thing. Surely this tiny Texas software company could never challenge them in court and win (for posterity: we could, and we did).

Unfortunately, we hear stories of this sort of thing all the time. It’s a game of chicken, and it isn’t unusual – it happens throughout the software business.

I wish I could say I’m shocked when I hear of companies taking shortcuts – improperly using open-source (or commercial) software outside the bounds of how it is licensed, deploying complex systems without understanding their security threat model, or continuing to run software after it has left support. But no. Not much really surprises me anymore.

What does concern me, though, is that the world assumed that OpenSSL was secure, and that it had been reviewed and audited by enough skilled eyes to avoid elementary bugs like the one that created Heartbleed. But no, that’s not the case. As with any complex system, at a certain point countless people around the world just assumed that OpenSSL worked, accepted it, and deployed it; yet here it failed at a fundamental level for two years.

In a recent interview, the developer responsible for the flaw behind Heartbleed discussed the issue, stating, “But in this case, it was a simple programming error in a new feature, which unfortunately occurred in a security relevant area.”

I can’t tell you how troubling I find that statement. Long ago, Microsoft underwent a sea change with regard to how software was developed. Key components of this change involved:

  1. Developing threat models in order to be certain we understood the types and angles of approach for any threat vectors we could find
  2. Deeper security foundations across the OS and applications
  3. Finally, a much more comprehensive approach to testing (in large part to try and ensure that “simple programming errors in new features” wouldn’t blow the entire system apart).

No, even Microsoft’s system is not perfect, and flaws still happen, even with new operating systems. But as I noted, I find it remarkably troubling that a flaw as significant as Heartbleed could make it through development, peer review, and any bounds-checking testing done in the OpenSSL development process, and into release (where it will generally be accepted as “known good” by the community at large – warranted or not) for two years. It’s also concerning that the statement frames the Heartbleed flaw as having “unfortunately occurred in a security relevant area”. As I said on Twitter – this is OpenSSL. The entire thing should be considered a security-relevant area.
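For those who haven’t looked at the bug itself, the flaw at the heart of Heartbleed was a classic missing bounds check: the code trusted a length field supplied by the peer. The snippet below is an illustrative sketch in C, not the actual OpenSSL code – the function and variable names are hypothetical – but it shows the shape of the mistake, and how small the missing check really is.

    /*
     * Illustrative sketch only -- not the actual OpenSSL code. Function and
     * variable names are hypothetical. It shows the general shape of the
     * Heartbleed class of bug: trusting a length field supplied by the peer.
     */
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Build an echo reply from a received heartbeat-style record. */
    unsigned char *build_echo_reply(const unsigned char *record, size_t record_len)
    {
        if (record_len < 2)
            return NULL;

        /* The first two bytes carry the payload length the sender claims. */
        uint16_t claimed_len = (uint16_t)((record[0] << 8) | record[1]);

        /*
         * The flaw: nothing verifies that claimed_len actually fits inside
         * the bytes we received. The missing bounds check is one line:
         *
         *     if ((size_t)claimed_len + 2 > record_len)
         *         return NULL;
         *
         * Without it, a sender can claim up to 64KB and get adjacent process
         * memory (keys, cookies, whatever happens to be there) echoed back.
         */
        unsigned char *reply = malloc(claimed_len);
        if (reply == NULL)
            return NULL;
        memcpy(reply, record + 2, claimed_len);   /* may read past the buffer */
        return reply;
    }

One line of validation is the difference between a harmless echo and two years of silent memory disclosure – which is exactly the kind of thing systematic bounds-checking tests are supposed to catch.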

The biggest problem with this issue is that there should be ongoing threat modeling and bounds checking amongst users of OpenSSL (or any software, open or commercial) – and, in this case, within the OpenSSL development community – to ensure that the software is actually secure. As with any complex system, though, there’s a uniform expectation that this type of project results in code that can be generally regarded as safe. Most companies simply assume that a project as mature and ubiquitous as OpenSSL is safe, do little to no verification of the software, deploy it, and later hear through others about vulnerabilities in it.

In the complex stacks of software today, most businesses aren’t qualified to, simply aren’t willing to, or aren’t aware of the need to, perform acceptance checking on third-party software they’re using in their own systems (and likely don’t have developers on staff who are qualified to review software such as OpenSSL). As a result, a complex and fragile system becomes even more complex. And even more fragile. Even more dangerous, without any level of internal testing, these systems of internal and external components are assumed to be reliable, safe, and secure – until time (and usually a highly technical developer being compensated for finding vulnerabilities) shows that not to be the case, and then we find ourselves in goose-chase mode, as we are right now.


07
Apr 14

The end is near here!

Imagine I handed you a Twinkie (or your favorite shelf-stable food item), and asked you to hold on to it for almost 13 years, and then eat it.

Aw, c’mon. Why the revulsion?

It’s been hard for me to watch the excited countdown to the demise of Windows XP. Though I did help ship Windows Server 2003 as well, no other product (or service) I’ve ever worked on came close – by any stretch of the imagination – to becoming as popular, for as long, as Windows XP did.

Yet, here we are, reading articles discussing which country or company is now shelling out $M to get support coverage for Windows XP for the next 1, 2, or 3 years (getting financially more painful as the year count goes up). It’s important to note that this is no “get out of jail free” card. Nope. This is just life support for an OS that has terminal zero-day. These organizations still have to plan and execute a migration to a newer version of Windows that isn’t on borrowed time.

Why didn’t these governments and companies execute an XP evacuation plan? That’s a very good question. Putting blame aside for a second, there’s a bigger issue to consider.

Go back and think of that Twinkie. Contrary to popular opinion, Twinkies don’t last forever (most sources say it’s about 25 days). Regardless, you get the idea that for most normal things, even shelf-stable isn’t shelf-stable forever. Heck, even most MREs need to be stored at a reasonable temperature and will taste suboptimal after 5 or more years.

While I can perhaps excuse consumers who decide to hang on to an operating system past its expiration date, I have a harder time understanding how organizations and governments with any long-term focus sat by and let XP sour on them. It would be one thing if XP systems were all standalone and not connected to the Internet. Perhaps then we could turn a blind eye to it. But that’s not usually the case; XP systems in business environments, which lack most of the security protections delivered later in Windows Vista, 7, and 8.x, are largely defenseless, and will be standing there waiting to get pwned as the vulnerabilities stack up after tomorrow. In my mind, the most dangerous thing is security vendors claiming to be able to protect the OS after April 8. In most cases, that’s an all but impossible feat, and it instills a false sense of confidence in XP users and administrators.

The key concern I have is that people are looking at Windows XP as if software dying is a new thing, or something unusual. It isn’t. In fact, tomorrow the entire spectrum of Office 2003 software (the Office productivity suite, SharePoint, Exchange, and more) also leaves support and could see its own set of security compromises down the road. But as I said, this isn’t the first time software has entered an unsupportable realm, and it won’t be the last. It’s just a unique combination as we get the perfect storm of XP’s pervasiveness, the ubiquity of the Internet, and the increasing willingness of bad people to do bad things to computers for money. Windows Server 2003 (and 2003 R2) is next, coming up in July of 2015.

People across the board seem to have this odd belief that when they buy a perpetual license to software, it can be used forever (versus Office 365, which people more clearly understand as a subscription that expires if not paid in an ongoing manner). But no software, even if “perpetually licensed”, is actually perpetual. Like that Twinkie I’ve mentioned a few times, even good software goes bad. As an industry, we need to start getting customers throughout the world to understand that, and get more organizations to begin planning software deployments as an ongoing lifecycle, rather than a one-time expense that is ignored until it goes terminal.


12
Mar 14

The trouble with DaaS

I recently read a blog post entitled DaaS is a Non-Starter, discussing how Desktop as a Service (DaaS) is, as the title says, a non-starter. I’ll have to admit, I agree. I’m a bit of a naysayer about DaaS, just as I have long been about VDI itself.

In talking with a colleague the other day, as well as customers at a recent licensing boot camp, it sure seems like VDI, like “enterprise social”, is a burger with a whole lot of bun, and not as much meat as you might hope for (given your investment). The promise, as I understand it, is that by centralizing your desktops, you get better manageability. To a degree, I believe that to be true. To a huge degree, I don’t. It really comes down to how standardized you make your desktops, how centrally you manage user document storage, and how much sway your users have (are they admins, and can they install their own Win32 apps?).

With VDI, the problem is, well… money. First, you have server hardware and software costs; second, the appropriate storage and networking to actually execute a VDI implementation; and third, the money to hire people who can glue it all together into an end-user experience that isn’t horrible. It feels to me that a lot of businesses fall in love with VDI (true client OS-based VDI) without taking the complete cost into account.

With DaaS, you pay a certain amount per month, and your users can access a standardized desktop image hosted on a service provider’s server and infrastructure – which is created and managed by them. The OS here is actually usually Windows Server, not a Windows desktop OS – I’ll discuss that in a second. But as far as infrastructure, using DaaS from a service provider means you usually don’t have to invest the cash in corporate standard Windows desktops or laptops (or Windows Server hardware if you’re trying VDI on-premises), or the high-end networking and storage, or the people to glue that architecture together. Your users, in turn, get (theoretically) the benefits of VDI, regardless of what device they come at it with (a personally owned PC, tablet, whatever).

However, as with any *aaS, you’re then at the mercy of your DaaS purveyor. In turn, you’re also at the mercy of their licensing limitations where Windows is concerned. This is why most of them run Windows Server; it’s the only version of Windows that hosting providers can generally make available – Windows desktop OSs can’t be offered that way. You also have to live within the constraints of their DaaS implementation (hardware and software availability, infrastructure, performance, architecture, etc.). To date, most DaaS offerings I’ve seen have focused on “get up and running fast!”, not “we’ll work with you to make sure your business needs are solved!”.

Andre’s blog post, mentioned at the beginning of my post here, really hit the nail on the head. In particular, he mentioned good points about enterprise applications, access to files and folders the user needs, adequate bandwidth for real-world use, and DaaS vs. VDI.

To me, the main point here is that with DaaS, your service provider, not you, gets to call a lot of the shots, and not many providers consider the end-to-end user workflow necessary for your business.

Your users need to get tasks done, wherever they are. Fine. Can they get access to their applications that live on premises, through VDI in the cloud, from a tablet at the airport? How about their files? Does your DaaS require a secondary logon, or does it support SSO from their tablet or other non-company owned/managed device? How fat of a pipe is necessary for your users before they get frustrated? How close can your DaaS come to on-premises functionality – as if the user were sitting at an actual PC with an actual keyboard and mouse (or touch)?

On Twitter, I mentioned to Andre that Microsoft’s own entry into the DaaS space would surely change the game. I don’t know anything (officially or unofficially) here, but it has been long suspected that Microsoft has planned their own DaaS offering.

When you combine the technologies available in Windows Server 2012 R2, Windows Azure, and Office 365, the scenario for a Microsoft DaaS actually starts to become pretty amazing. There are implementation costs to get all of this deployed, mind you – including licensing and deployment/migration. That isn’t free. But it might be worth it if DaaS sounds compelling and I’m right about Microsoft’s approach.

Microsoft’s changes to Active Directory in Server 2012 R2 (AD FS, the Web Application Proxy [WAP]) mean that users can get to AD from wherever they are, and Office 365 and third party services (including a Microsoft DaaS) can have seamless SSO.

Workplace Join can provide that SSO experience, even from a Windows 7, iOS, or Samsung Knox device, and the business can control which assets and applications the user can connect to, even if they’re on the inside of the firewall and the user is not (through WAP, mentioned previously), or available through another third party.

Work Folders gives user devices synchronized access to files and folders that are stored on-premises in Windows file shares. This could conceptually be extended to work with a Microsoft (or third-party) DaaS as well, and I have to think OneDrive for Business could be made to work too, given the right VDI/DaaS model.

In a DaaS, applications the user needs could be provided through App-V, RemoteApp running from an on-premises Remote Desktop server (a bit of redundancy, I know), or again, published out through WAP so users could connect to them as if the DaaS servers were on-premises.

When you add in Office 365, it continues building out the solution, since users can again be authenticated using their AD credentials, and OneDrive for Business can provide synchronization to their work PCs and DaaS, or access on their personally owned device.

Performance is of course a key bottleneck here, assuming all of the above pieces are in place, and work as advertised (and beyond). Microsoft’s RemoteFX technology has been advancing in terms of offering a desktop-like experience regardless of the device (and is now supported by Microsoft’s recently acquired RDP clients for OS X, iOS, and Android). While Remote Desktop requires a relatively robust connection to the servers, it degrades relatively gracefully, and can be tuned down for connections with bandwidth/latency issues.

All in all, while I’m still a doubter about VDI, and I think there’s a lot of duct tape you’d need to put in place for a DaaS to be the practical solution to user productivity that many vendors are trying to sell it as, there is promise here, and given the right vendor, things could get interesting.


05
Mar 14

Considering CarPlay

Late last week, some buzz began building that Apple, alongside automaker partners, would formally reveal the first results of their “iOS in the Car” initiative. Much as the rumors suggested, the end result, now dubbed CarPlay, was demonstrated (or at least shown in a promo video) by initial partners Ferrari, Mercedes-Benz, and Volvo. If you only have time to watch one of them, watch the video of the Ferrari. Though it is an ad-hoc demo, the Ferrari video isn’t painfully overproduced as the Mercedes-Benz video unfortunately is, and isn’t just a concept video as the Volvo’s is.

The three that were shown are interesting for a variety of reasons (though it is also notable that all three are premium brands). The Ferrari and Volvo videos demonstrate touch-based navigation, and the Mercedes-Benz video uses what (I believe) is their knob-based COMAND system. While CarPlay is navigable using all of them, using the COMAND knob to control the iOS-based experience feels somewhat contrived or forced – like using an old iPod click wheel to navigate a modern iPhone. It just looks painful (to me that’s an M-B issue, not an Apple issue).

Outside of the initial three auto manufacturers, Apple has said that Honda, Hyundai, and Jaguar will also have models in 2014 with CarPlay functionality.

So what exactly is CarPlay?

As I initially looked at CarPlay, it looked like a distinct animal in the Apple ecosystem. But the more I thought about it, the more familiar it looked. Apple pushing their UX out into a new realm, on a device that they don’t own the final interface of… It’s sort of Apple TV, for the car. In fact, pondering what the infrastructure might look like, I kept getting flashbacks to Windows Media Center Extenders, which are remote thin clients that rendered a Windows Media Center UI over a wired or wireless connection.

Apple’s CarPlay involves a cable-based connection (this seems to be a requirement at this point; I’ll talk about it a bit later) which is used to remotely display several key functions of your compatible iPhone (5s, 5c, 5) on the head unit of your car. That is, the display is that of your auto head unit – but for CarPlay features, your iPhone looks to be what’s actually running the app, and the head unit is simply a dumb terminal rendering it. All data is transmitted through your phone, not some in-car LTE/4G connection, and all of the apps reside, and are updated, on your phone, not on the head unit. CarPlay seems to be navigable regardless of the type of touch support your screen has (if it has touch), but it also works with buttons, and again, with knob-based navigation like COMAND.

Apple seems to be requiring two key triggers for CarPlay – 1) a voice command button on the steering wheel, and 2) an entry point into CarPlay itself, generally a button on the head unit (quite easy to see if you watch the Ferrari video, labeled APPLE CARPLAY). Of course these touches are in addition to integrating the required Apple Lightning cable to tether it all together.

In short, Apple hasn’t done a complete end run around the OEM – the automaker can still have their own UI for their own in-car functions, and then Apple’s distinct CarPlay UI (very familiar to anyone who has used iOS 7) is there when you’re “in CarPlay”, if you will. It seems to me that CarPlay can best be thought of as a remote display for your iPhone, designed to fit the display of your car’s entertainment system. Some have said that “CarPlay systems” are running QNX – perhaps some are. The head unit manufacturer doesn’t really appear to be important here. The main point of all of this is that the OEM doesn’t appear to have to do massive work to make it functional; it primarily looks to be a matter of integrating the remote display functionality and the I/O to the phone. In fact, the UI of the Ferrari as demonstrated doesn’t look to be that different from head units in previous versions of the FF (from what I can see). Also, if you watch the Apple employee towards the end, you can see her press the FF “app”, exiting out to the FF’s own user interface, which is distinctly different from the CarPlay UI. The CarPlay UI, in contrast, is remarkably consistent across the three examples shown so far. While the automakers all have their own unique touches, and controls for the rest of the vehicle, the distinct things that the phone is, frankly, better at are done through the CarPlay UI.

The built-in iPhone apps supported with CarPlay at this point appear to be:

  • Phone
  • Messages
  • Maps
  • Music
  • Podcasts

The obvious scenarios here are making/receiving phone calls or sending/receiving SMS/iMessages with your phone’s native contact list, and navigation. Quick tasks. Not surfing or searching the Web while you’re driving. Yay! The Maps app has an interesting touch that the Apple employee chose to highlight in the Ferrari video, where maps you’ve been sent in messages are displayed in the list of potential destinations you can choose from. Obviously the CarPlay solution enables Apple’s turn-by-turn maps. If you’re an Apple Maps fan, that’s great news (I’m quite happy with them at this point, personally). If you like using Google Maps or another mapping/messaging or VOIP solution, it looks like you’re out of luck at this point.

In addition to touch, button, or knob-based navigation, Siri is omnipresent in CarPlay. The system can use voice as your primary input mechanism (triggered through a voice command button on the steering wheel), and Siri is also used for reading text messages out loud to you and responding to them. I use that Siri feature pretty often, myself.

The Music and Podcasts apps seem like obvious ones to make available, especially now that iTunes Radio is available (although most people either love or hate the Podcasts app). Just as importantly, Apple is making a handful of third-party applications available at this point. Notably:

  • Spotify
  • iHeartRadio
  • Stitcher

Though Apple’s CarPlay site does call out the Beats Music app as well, I noticed it was missing in the Ferrari demo.

Overall, I like Apple’s direction with this. Of course, as I said on Twitter, I’m so vested in the walled garden, I don’t necessarily care that it doesn’t integrate in with handsets from other platforms. That said, I do think most OEMs will be looking at alternatives and implementing one or more of them simultaneously (hopefully implementing all of them that they choose to in a somewhat consistent manner).

Personally, I see quite a few positives to CarPlay:

  • If you have an iPhone, it takes advantage of the device that is already your personal hub, instead of trying to reinvent it
  • It isolates the things the manufacturer may either be good at or may want to control from the CarPlay UX. In short, Apple gets their own UX, presented reliably
  • It uses your existing data connection, not yet another one for the car
  • It uses one cable connection. No WiFi or BLE connectivity, and charges while it works
  • I trust Apple to build a lower-distraction (Siri-centric) UI than most automakers
  • It can be updated by Apple, independent of the car head unit
  • Apple can push new apps to it independent of the manufacturer
  • Apple Maps may suck in some people’s perspective (not mine), but it isn’t nearly as bad as some in-dash nav systems (watch some of Brian’s car reviews if you don’t believe me), and doesn’t require shelling out for shiny-media based updates!

Of course, there are some criticisms I or others have already mentioned on Twitter or in reviews:

  • It requires, and uses, iOS 7. Don’t like the iOS 7 UI? You’re probably not going to be a fan
  • It requires a cable connection. Not WiFi or BLE. This is a good/bad thing. I think in time, we’ll see considerate design of integrated phone slots or the like – push the phone in, flat, to dock it. The cables look hacky, but likely enable the security, performance, low latency, and integrated charging that are a better experience overall (also discourages you from picking the phone up while driving)
  • Apple Maps. If you don’t like it, you don’t like it. I do, but lots of people still seem to like deriding it
  • It is yet another Apple walled garden (like Apple TV, or iOS as a whole). Apple controls the UI of CarPlay, how it works, and what apps and content are or are not available. Just like Apple TV is at present. The fact that it is not an open platform or open spec also bothers some.

Overall, I really am excited by what CarPlay represents. I’ve never seen an in-car entertainment system I really loved. While I don’t think I really love any of the three head units I’ve seen so far, I do relish the idea of being able to use the device I like to use already, and having an app experience I’m already familiar with. Now I just need to have it hit some lower-priced vehicles I actually want to buy.

Speaking of that; Apple has said that, beyond the makers above, the following manufacturers have also signed on to work with CarPlay:

BMW Group (which includes Mini and Rolls-Royce), Chevrolet, Ford, Kia, Land Rover, Mitsubishi, Nissan, Opel, PSA Peugeot Citroen, Subaru, Suzuki, and Toyota.

As a VW fan, I was disheartened to not see VW on the list. Frankly I wouldn’t be terribly surprised to see a higher-end VW marque opt into it before too long (Porsche, Audi, or Bentley seem like obvious ones to me – but we’ll see). Also absent? Tesla. But I wouldn’t be surprised to see that show up in time as well.

It’s an interesting start. I look forward to seeing how Google, Microsoft, and others continue to evolve their own automotive stories over the coming years – but I think one thing is for sure: the era of the phone as the hub of the car (and beyond) is just beginning.


17
Jan 14

Running Windows XP after April? A couple of suggestions for you

Yesterday on Twitter, I said the following:

Suggestion… If you have an XP system that you ABSOLUTELY must run after April, I’d remove all JREs, as well as Acrobat Reader and Flash.

This was inspired by an inquiry from a customer about Windows XP support that arrived earlier in the day.

As a result of that tweet, three things have happened.

  1. Many people replied “unplug it from the network!” 1
  2. Several people asked me why I suggested doing these steps.
  3. I’ve begun working on a more comprehensive set of recommendations, to be available shortly. 2

First off… Yes, it’d be ideal if we could just retire all of these XP systems on a dime. But that’s not going to happen. If it were easy (or free), businesses and consumers wouldn’t have waited until the last second to retire these systems. But there’s a reason why they haven’t. Medical/dental practices have practice management or other proprietary software that isn’t tested/supported on anything newer, custom point of sale software from vendors that disappeared, were acquired, or simply never brought that version of their software… There’s a multitude of reasons, and these systems aren’t all going to disappear or be shut off by April. It’s not going to happen. It’s unfortunate, but there are a lot of Windows XP systems that will be used for many years still, in many places – something we’d all rather not see happen. There’s no silver bullet for that. Hence, my off-the-cuff recommendations over Twitter.

Second, there’s a reason why I called out these three pieces of software. If you aren’t familiar with the history, I’d encourage you to go Bing (or Google, or…) the three following searches:

  1. zero day java vulnerability
  2. zero day Flash vulnerability
  3. zero day Acrobat vulnerability

Now if you looked carefully, each one of those, at least on Bing, returned well over 1M results, many (most?) of them from the last three years. In telling me that these XP systems should be disconnected from the Web, many people missed the point I was making.

PCs don’t get infected from the inside out. They get infected from the outside in. When Microsoft had the “Security Push” over ten years ago that forced us to reconsider how we designed, built and tested software, it involved stopping where we were and completely rethinking how Windows was built. Threat models replaced ridiculous statements like, “We have the very best xx encryption, so we’re ‘secure’”. While Windows XP may be more porous than Vista and later are (because in those releases the company was able to implement foundational security even more deeply, engineer protections deeply into IE, and implement primordial UAC), Windows XP SP2 and later are far less of a threat vector than XP SP1 and earlier were. So if you’re a bad guy and you want to get bad things to happen on a PC today, who do you go after? It isn’t Windows binaries themselves, or even IE. You go next for the application runtimes that are nearly as pervasive: Java, Flash, and Acrobat. Arguably, Acrobat may or may not be a runtime, depending on your POV. But the threat is still there, especially if you haven’t been maintaining these as they’ve been updated over the last few years.

As hard as Adobe and Oracle may try to keep these three patched, these three codebases have significant vulnerabilities that are found far too often. Those vulnerabilities, if not patched by vendors and updated by system owners incredibly quickly, become the primary vector of infecting both Windows and OS X systems by executing shellcode.

After April, Windows XP is expected to get no updates. Got that? NO UPDATES. NONE. Nada. Zippo. Zilch. So while you may still get antivirus updates from Microsoft and third parties, at that point you honestly have a rotting wooden boat. I say this in the nicest way possible. I was on the team that shipped Windows XP, and it saddens me to throw it under the bus, but I don’t think people get the threat here. Antivirus simply cannot protect you from every kind of attack. Windows XP and the versions of IE it runs (6-8) have still regularly received patches almost every month for the past several years. So Windows XP isn’t “war hardened”; it is brittle. And after April, you won’t even get those patches trying to spackle over newly found vulnerabilities in the OS and IE. Instead, these will become exploit vectors ready to be hit by shellcode coming in off of the Internet (or even the local network) and turned into opportunistic infections.

Disclaimer: This is absolutely NOT a guarantee that systems won’t get infected, and you should NOT remove these or any piece of Microsoft or third-party software if a business-critical application actually depends on them or if you do not understand the dependencies of the applications in use on a particular PC or set of PCs! 

So what is a business or consumer to do? Jettison, baby. Jettison. If you can’t retire the entire Windows XP system, retire every single piece of software on that system that you can, beginning with the three I mentioned above. Those are key connection points of any system to the Web/Internet. Remove them and there is a good likelihood of lessening the infection vector. Beyond that, my recommendation is to make jetsam of any software on those XP systems that you really don’t need. Think of this as not traveling to a country where a specific disease is breaking out until the threat has passed. In the same vein, I’d say blocking Web browsers and removing email clients come in a close second, since they’re such a great vector for social engineering-based infections today.
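If you’re not sure what’s actually on a given XP box, the first step is simply taking inventory. The sketch below is a rough, hypothetical example – the names, match strings, and approach are mine, not an official tool: a small C program that walks the Uninstall registry key and flags entries whose display names suggest Java, Flash, or Adobe Reader/Acrobat. It only lists candidates for review; it removes nothing, and per the disclaimer above, removal remains a human decision.

    /*
     * Rough, hypothetical inventory helper (not an official tool): list
     * installed software on a Windows XP machine whose display name suggests
     * Java, Flash, or Adobe Reader/Acrobat. It prints candidates for review
     * and removes nothing.
     */
    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *targets[] = { "Java", "Flash", "Adobe Reader", "Acrobat" };
        const char *uninstallPath =
            "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall";
        HKEY hUninstall;
        DWORD index = 0;

        if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, uninstallPath, 0, KEY_READ,
                          &hUninstall) != ERROR_SUCCESS) {
            fprintf(stderr, "Could not open the Uninstall key.\n");
            return 1;
        }

        for (;;) {
            char subKeyName[256];
            DWORD subKeyLen = sizeof(subKeyName);
            LONG rc = RegEnumKeyExA(hUninstall, index++, subKeyName, &subKeyLen,
                                    NULL, NULL, NULL, NULL);
            if (rc == ERROR_NO_MORE_ITEMS)
                break;
            if (rc != ERROR_SUCCESS)
                continue;

            HKEY hItem;
            if (RegOpenKeyExA(hUninstall, subKeyName, 0, KEY_READ, &hItem)
                    != ERROR_SUCCESS)
                continue;

            char displayName[512];
            DWORD size = sizeof(displayName) - 1;
            DWORD type = 0;
            if (RegQueryValueExA(hItem, "DisplayName", NULL, &type,
                                 (LPBYTE)displayName, &size) == ERROR_SUCCESS &&
                type == REG_SZ) {
                size_t i;
                displayName[size] = '\0';   /* ensure termination */
                for (i = 0; i < sizeof(targets) / sizeof(targets[0]); i++) {
                    if (strstr(displayName, targets[i]) != NULL) {
                        /* Flag for review; actual removal stays a human call. */
                        printf("Review for removal: %s\n", displayName);
                        break;
                    }
                }
            }
            RegCloseKey(hItem);
        }

        RegCloseKey(hUninstall);
        return 0;
    }

Something this simple at least tells you what you’re up against on each machine before you start pulling software off of it.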

Finally, as I mentioned earlier, I am working on an even more comprehensive set of recommendations, to be published in a report for work in our next issue, which should be live on the Web during the last week of January. My first recommendation would of course be, if at all possible, to retire your Windows XP systems as soon as possible. But I hope that this set of recommendations, while absolutely not a guarantee, can help some people as they move away, or finally consider how to move away, from Windows XP.

Footnotes

  1. Or unplug the power, or blow it up with explosives, or…
  2. These recommendations will be included in the next issue of Update.

05
Jan 14

Bimodal tablets (Windows and Android). Remember them when they’re gone. Again.

I hope these rumors are wrong, but for some odd reason, the Web is full of reports that this year’s CES will bring a glut of bimodal tablets: devices designed to run Windows 8.1, but also featuring an integrated instance of Android. But why?

For years, Microsoft and Intel were seemingly the best of partners. While Microsoft had fleeting dalliances with other processor architectures, they always came back to Intel. There were clear lines in the sand:

  1. Intel made processors
  2. Microsoft made software
  3. Their mutual partners (ODMs and OEMs) made complete systems.

When Microsoft announced the Surface tablets, they crossed a line. Their partners (Intel and the device manufacturers) were stuck in an odd place: continue partnering just with Microsoft (now a competitor to manufacturers, and a direct purveyor of consumer devices with ARM processors), or find alternative counterpoints to ensure that they weren’t stuck in the event that Microsoft harmed their market.

For device manufacturers, this has meant what we might have thought unthinkable 3 years ago, with key manufacturers (now believing that their former partner is now also a competitor) building Android and Chrome OS devices. For Intel, it has meant looking even more broadly at what other operating systems they should ensure compatibility with, and evangelization of (predominantly Android).

While the Windows Store has grown in terms of app count, there are still some holes, and there isn’t really a gravitational pull of apps leading users to the platform. Yet.

So some OEMs, and seemingly Intel, have collaborated on this effort to glue together Windows 8.1 and Android on a single device, with the hopes that the two OSs combined in some way equate to “consumer value”. However, there’s really no clear sign that the consumer benefits from this approach, and in fact they really lose, as they’ve now got a Windows device with precious storage space consumed by an Android install of dubious value. If the consumer really wanted an Android device, they’re in the opposite conundrum.

Really, the OEMs and Intel have to be going into this strategy without any concern for consumers. It’s just about moving devices, and trying to ensure an ecosystem is there when they can’t (or don’t want to) bet on one platform exclusively. The end result is a device that instead of doing task A well, or task B well, does a really middling job with both of them, and results in a device that the user regrets buying (or worse, regrets being given).

BIOS manufacturers and OEMs have gone down this road several times before, usually trying to put Linux either in firmware or on disk as a rapid-boot dual-use environment to “get online faster” or watch movies without waiting for Windows to boot/unhibernate. To my knowledge, these OEM-provided modes were rarely actually used on the devices that had them. Users hate rebooting; they get confused about where their Web bookmarks are (or aren’t) when they need them, and so on.

These kinds of approaches rarely solve problems for users; in fact, they usually create problems instead, and are a huge nightmare in terms of management. Non-technical users are generally horrible about maintaining one OS. Give them two on a single device? This will turn out quite well, don’t you think? In the end, these devices, unless executed flawlessly, are damaging to both the Windows and Android ecosystems, the OEMs, and Intel. Any bad experiences will likely result in returns, or exchanges for iPads.


20
Dec 13

Security and Usability – Yes, you read that right.

I want you to think for a second about the key you use most. Whether it’s for your house, your apartment, your car, or your office, just think about it for a moment.

Now, this key you’re thinking of is going to have a few basic properties. It consists of metal, has a blade extending out of it that has grooves along one or both sides, and either a single set of teeth cut into the bottom, or two sets of identical teeth cut into both the top and bottom.

If it is a car key, it might be slightly different; as car theft has increased, car keys have gotten more complex, so you might be thinking about a car key that is just a wireless fob that unlocks and/or starts the car based on proximity, or it might be an inner-cut key, as is common with many Asian and European cars today.

Aside from the description I just gave you, when was the last time you thought about that key? When did you actually last look at the ridges on it?

It’s been a while, hasn’t it? That’s because that key and the lock it works with provide the level of security you feel that you need to protect that place or car, yet it doesn’t get in your way, as long as the key and the lock are behaving properly.

Earlier this week, I was on a chat on Twitter, and we were discussing aspects of security as they relate to mobile devices. In particular, the question was asked, “Why do users elect to not put a pin/passcode/password on their mobile devices?” While I’ve mocked the idea of considering security and usability in the same sentence, let alone the same train of thought while developing technology, I was wrong. Yes, I said it. I was wrong. Truth be told, Apple’s Touch ID is what finally schooled me on it. Security and usability should be peers today.

When Apple shipped the iPhone 5s and added the Touch ID fingerprint sensor, it was derided by some as not secure enough, not well designed, not a 100% replacement for the passcode, or simply too easy to defeat. But Touch ID does what it needs to do. It works with the user’s existing passcode – which Apple wisely tries to coax users into setting up on iOS 7, regardless of whether they have a 5s or not – to make day-to-day use of the device easier while living with a modicum of security, and to secure the data, the device, and the credentials stored in it and in iCloud in a better way than most users had prior to their 5s.

That last part is important. When we shipped Windows XP, I like to think we tried to build security into it to begin with. But the reality is, security wasn’t pervasive. It took setting aside a lot of dedicated time (two solid months of security training, threat modeling, and standing down on new feature work) for the Windows Security Push. We had to completely shift our internal mindset to think about security from end to end. Unlike the way we had lived before, security wasn’t to be a checkbox, it wasn’t a developer saying, “I used the latest cryptographic APIs”, and it wasn’t something added on at the last minute.

Security is like yeast in bread. If you add it when you’re done, you simply don’t have bread – well, at least you don’t have leavened bread. So it took us shipping Windows XP SP2 – an OS update so big and so significant many people said it should have been called a new OS release – before we ever shipped a Windows release where security was baked in from the beginning of the project, across the entirety of the project.

When it comes to design, I’ve mentioned this video before, but I think two of Jonathan Ive’s quotes in it are really important to have in your mind here. Firstly:

“A lot of what we seem to be doing in a product like that (the iPhone) is getting design out of the way.”

and secondarily:

“It’s really important in a product to have a sense of the hierarchy of what’s important and what’s not important by removing those things that are all vying for your attention.”

I believe that this model of thought is critical to have in mind when considering usability, and in particular where security runs smack dab into usability (or more often, un-usability). I’ve said for a long time that solutions like two-factor security won’t take off until they’re approachable by, and effectively invisible to, normal people. Heck, too much of the world didn’t ever set their VCR clocks for the better part of a decade because it was too hard, and it was a pain in the ass to do it again every time the power went out. And you really don’t understand why they don’t set a good PIN, let alone a good passcode, on their phone?

What I’m about to say isn’t meant to imply that usability isn’t important to many companies, including Microsoft. But I believe many companies are run, and many software, hardware, or technology projects are started, run, and finished, with usability still treated as just a checkbox. As security is today at Microsoft, usability should be embraced, taught, and rewarded across the organization.

One can imagine an alternate universe where a software project the world uses was stopped in its tracks for months, redesigned, and updated around the world because a user interface element was so poorly designed for mortals that it led them to make bad security decisions. But this alternate universe is just that, an alternate universe. As you’re reading the above, it sounds wacky to you – but it shouldn’t! As technologists, it is our duty to build hardware, software, and devices where the experience, including the approach to security, works with the user, not against them. Any move that takes the status quo of “security that users self-select to opt into” and moves it forward a notch is a positive move. But any move here also has to just work. You can’t implement nerd porn like facial recognition if it doesn’t work all of the time or doesn’t provide an alternative for when it fails.

Projects that build innovative solutions where usability and security intersect should be rewarded by technologists. Sure, they should be critiqued and criticized, especially if designing in a usable approach really compromises the security fundamentals of the – ideally threat-modeled – implementation. But critics should also understand where their criticism falls down in light of the practical security choices most end users make in daily life.

Touch ID, with as much poking, prodding, questioning, and hacking as it received when it was announced, is a very good thing. It’s not perfect, and I’m sure it’ll get better in future iterations of the software and hardware, and perhaps as competitors come up with alternatives or better implementations, Apple will have to make it ever more reliable. But a solution that allows that bar to be moved forward, from a place where most users don’t elect to set a pin or passcode to a place where they do? That’s a net positive, in my book.

As Internet-borne exploits continue to grow in both intensity and severity, it is so critical that we all start taking the usability of security implementations by normal people seriously. If you make bad design decisions about the intersection where security and usability collide, your end users will find their own desire path through the mayhem, likely making the easiest, and not usually the best, security decisions.

 


04
Nov 13

Plan on profiting off of Windows XP holdouts? There’s no gold left in them thar hills.

A few times over the last year, I’ve had conversations with people about Windows XP holdouts – that is, about the belief that as Windows XP’s impending doom rapidly approaches next April, businesses and consumers holding out on Windows XP will readily flock to something new, such as (ideally for Microsoft) Windows 8.1, or Windows 7.

I’m not so sure.

To start, let’s consider why a business or consumer would still be running Windows XP today. Most likely, it’s a combination of all of the following:

  1. It’s paid for (the OS and hardware)
  2. It runs on the hardware they have
  3. Applications they have won’t run, or aren’t supported, on anything newer
  4. It requires no user retraining
  5. They don’t see a compelling reason to move beyond XP
  6. They don’t realize the risks of sticking with XP after next April.

You can split those reasons into two categories. Items 1-3 are largely due to financial impediment, while 4-6 are generally due to “static quo” – XP meets their business needs and 8.1 or 7 doesn’t provide the necessary pull to motivate them to move off of XP.

It’s not that 7, 8, or 8.1 did anything wrong, necessarily. I was there when XP shipped, and I’ll tell you I heard many business customers complain about numerous things. They hated the theme and felt it was toy-like. They wanted to be able to take off Movie Maker, Internet Mail & News, or other consumer niblets of the OS, but couldn’t. Frankly, some of them just felt it was a warmed-over Windows 2000 (obviously none of those customers had ever tried to undock a hibernated Windows 2000 laptop). For many customers, it took until XP had been effectively re-released as XP SP2 for them to really fall for it. When Vista shipped, reasons 4 and 5 above largely doomed it. Vista had a completely nebulous value proposition for most consumers and almost every business, leading to Windows XP becoming even more deeply ingrained in many businesses.

Many people describe Windows 7 as “a better Windows XP”, which I think is actually an insult to both operating systems. But frankly, unless a business understands item number 6 (which Microsoft just grabbed a drum and started beating really hard – albeit very, very late), the rest don’t matter.

I’ve talked to several businesses about Windows XP over the past several years. For better or worse, most of them are happy with the hardware and software investments they made over the last 12 years, and many don’t feel like spending money for new hardware (especially new touch-centric form factors with value that they don’t see clearly yet).

Even more important, though, is the number of times we have run into businesses – especially small businesses like dental, medical, or other independent practices – which during the past decade either bought commercial software packages or hired consultants to build them custom software. As a result, many of them hit item number 3 – “Applications they have won’t run, or aren’t supported, on anything newer”. I kid you not, there are a lot of small businesses with a lot of applications that honestly have no path forward. They cannot stay on XP – they cannot be secure. They cannot move off, as they either cannot find a replacement for one or more of their key applications, cannot move that key existing application, or in some cases simply cannot afford to move to a replacement (in case you haven’t noticed, we’re still not in a great economic climate). They are stuck between a rock and a hard place: move off of XP and throw away working systems your employees already know, for new systems with unknown features or functionality. To boot, any of these new solutions are primarily still targeting Windows 7 (the desktop), not the Windows 8+ “modern UI” – diminishing some of the key value in acquiring tablet or touch-centric devices running 8, if the system is, for the time being (and likely for the foreseeable future), stuck on the desktop. Since Windows 8+ doesn’t include Windows XP Mode, unless a customer has appropriate enterprise licensing with Microsoft, they can’t even run Windows XP in a VM on Windows 8+ (and I have a hard time believing that customers who spend that kind of money are the kind of customers who are holding out).

There are still many organizations that are using XP (and likely Office 2003) and appear to have no exit strategy or plan to leave XP behind. It appears a lot of organizations don’t realize (or don’t care) how porous Windows XP will become after it ceases being patched in April. It isn’t a war-hardened OS, as some customers believe. It’s the U.S.S. Constitution in an era of metal battleships. I hate to sound like a shill, but XP systems will be ripe for an ass-kicking beginning next spring, and they can, and will, be taken advantage of. I also don’t believe Microsoft will do any favors for businesses that stay on XP (and don’t pay the hefty costs for custom support agreements with a locked and loaded exit plan in place).

XP is dying. But I believe lots of organizations are simply unclear about what kind of threat it poses to them. As a result, they’re sitting on the investment they’ve already made.

I also think that a lot of organizations that are still sitting on XP today may even be aware of (some of the) potential risks that XP poses to their organization after April, but simply don’t have the budget to escape in time, even if they had the motivation (which lots of them don’t appear to have). Even if a company pulls the trigger today, if they have any significant number of XP systems and XP-dependent applications, they’ll be lucky to be off of XP by April.

There’s a belief that a lot of these customers had budget sitting there, had no app blockers, and might have even wanted to go to 7, 8 or just “something new”, but were just lackadaisical and for some reason will now get a fire lit under them, generating a windfall of sorts for Microsoft, PC OEMs, and partners over the next 5 months. Instead of that easy opportunity, I believe where you run across XP in the majority of organizations at this point, a better analogy is a set of four fully impacted wisdom teeth in a patient with no dental coverage.


30
Oct 13

Windows Server on ARM processors? I don’t think so.

It’s hard to believe that almost three years have passed since I wrote my first blog entry discussing Windows running on the ARM processor. Over that time, we’ve seen an increasing onslaught of client devices (tablets and phones) running on ARM, and we’ve watched Windows expand to several Windows RT-based devices, then retract to the point where the Surface RT and Surface 2 are the only ARM-based Windows tablets, and the impending Nokia 2520 is the only non-Microsoft (and the only non-Nvidia) Windows RT tablet – that is, for as long as Nokia isn’t a part of Microsoft.

Before I dive in to the topic of Windows on ARM servers, I think it is important to take a step back and assess Windows RT.

Windows RT 8.1 likely shows the way that Microsoft’s non-x64 tablets will go – with less and less emphasis on the desktop over time, specifically as we see more touch-friendly Office applications in the modern shell. In essence, the strength that Microsoft has been promoting Windows RT upon (Office! The desktop you know!) is also its Achilles heel, due to the bifurcated roles of the desktop and modern UIs. But that’s the client – where, if Microsoft succeeds, the desktop becomes less important over time, and the modern interface becomes more important. A completely different direction from servers.

Microsoft will surely tell you that Windows RT, like the Windows Store and Surface, is an investment in the long term. They aren’t short-term bets. That said, I think you’d have to really question anybody who tells you “Windows RT is doing really well.” Many partners kicking Windows RT’s tires ahead of launch bolted before the OS arrived, and every other ODM/OEM building or selling Windows RT devices has abandoned the platform in favor of low-cost Intel silicon instead. The Windows Store may be growing in some aspects, but until it is healthy and standing on its own, Windows RT plays second fiddle to Windows 8.x, where the desktop can be available to run “old software”, as uninspiring as that may be on a tablet.

For some odd reason, people are fascinated with the idea of ARM-based servers. I’ve wound up in several debates/discussions with people on Twitter about Windows on ARM servers. I hope it never happens, and I don’t believe it will. Moreover, if it does, I believe it will fail.

ARM is ideal for a client platform – especially a clean client platform with no legacy baggage (Android, iOS, etc). It is low-power and highly customizable silicon. Certainly, when you look at data centers, the first thing you’ll notice is the energy consumption. Sure, it’d be great if we could conceptually reduce that by using ARM. But I’m really not sure replacing systems running one instruction set with systems running another is really a) viable or b) the most cost-effective way to go about making the infrastructure more energy efficient.

Windows RT is, in effect, a power-optimized version of Windows 8, targeted at Nvidia and Qualcomm SoCs. It cannot run (most) troublesome desktop applications, and as a result doesn’t suffer from decades of Win32 development bad habits, with applications constantly pushing, pulling, polling and waiting… Instead, Windows RT is predominantly based around WinRT, a new, tightly marshaled API set intended to (in addition to favoring touch) minimize power consumption of non-foreground applications (you’ll note, the complete opposite of what servers do). Many people contemplating ARM-based Windows servers don’t seem to understand how horribly this model (WinRT) would translate to Windows Server.

I talked earlier this year about the fork in the road ahead of Windows Server and the Windows client. I feel that it is very important to understand this fork, as Windows Server and client are headed in totally different directions in terms of how you interact with them and how they fulfill their expected role:

  • Windows client shell is Start screen/modern/Explorer first. Focuses on low-power, foreground-led applications, ARM and x86/x64, predominantly emphasizing WinRT.
  • Windows Server shell is increasingly PowerShell first. Focuses on virtualization, storage, and networking, efficient use of background processes, x64 only, predominantly emphasizing .NET and ASP.NET.

For years, Microsoft fought Linux tooth and nail to be the OS of choice for hosters. There’s really not much money to be made at that low end when you’re fighting against free and can’t charge for client access licenses, which is where Microsoft loves to make its bread and butter. Microsoft offered low-end variants of Windows Server to try and break into this market – cheaper prices mixed with hamstrung feature capabilities, etc. In time the custom edition was dropped in favor of less restrictive licensing of the regular editions of Windows Server 2012. But this isn’t a licensing piece, so I digress.

It is my sincere hope that there are enough people left at Microsoft who can still remember the Itanium. We’ll never know how much money ($MM? $BB?) was wasted on trying to make Windows Server and a few other workloads successful on the Itanium processor. Microsoft spent considerable time and money getting Windows (initially client and server, eventually just server) and select server applications/workloads ported to Itanium. Not much in terms of software ever actually made it over. Now it is dead – like every other architecture Windows NT has been ported to other than x64 (technically a port, but quite different) and, for now, ARM.

With that in mind, I invite you to ponder what it would take to get a Windows Server ecosystem running on ARM processors, doing the things servers need to do. You’d need:

  1. 64-bit ARM processors from Nvidia or Qualcomm (SoCs already supported by Windows, but in 64-bit forms)
  2. Server hardware built around these processors – likely blade servers
  3. Server workloads for Windows built around these processors – likely IIS and a select other range of roles such as a Hadoop node, etc.
  4. .NET framework and other third-party/dev dependencies (many of these in place due to Windows RT, but are likely 32-bit, not 64-bit)
  5. Your code, running on ARM. Many things might just work, lots of things just won’t.

That’s just the technical side. It isn’t to say you couldn’t do it – or that part of it might not already be done within Microsoft – but otherwise it would be a fairly large amount of work with likely a very, very low payoff for Microsoft, which leads us, briefly, to the licensing side. You think ARM-based clients are scraping the bottom of the pricing barrel? I don’t think Microsoft could charge nearly the price they do for Windows Server 2012 R2 Standard on an ARM-based server and have it be commercially viable (when going up against free operating systems). Charge less than Windows Server on x64, and you’re cannibalizing your own platform – something Microsoft doesn’t like to do.

Of course, the biggest argument against Windows Server on ARM processors is this: www.windowsazure.com. Any role that you would likely find an ARM server well-suited for, Microsoft would be happy to sublet you time on Windows Azure to accomplish the same task. Web hosting, Web application, task node, Hadoop node, etc. Sure, it isn’t on-premises, but if your primary consideration is cost, using Azure instead of building out a new ARM-based data center is probably a more financially viable direction, and is what Microsoft would much rather see going forward. The energy efficiency is explicit – you likely pay fractions of what you might for the same fixed hardware workload on premises running on x64 Windows, and you pay “nothing” when the workload is off in Azure – you can also expand or contract your scale as you need to, without investing in more hardware (but you run the same code you would on-premises – not the same as ARM would need). Microsoft, being a Devices and Services company now, would much rather sell you a steady supply of Windows Azure-based services instead of Windows Server licenses that might never be updated again.

Certainly, anything is possible. We could see Windows Server on ARM processors. We could even see Microsoft-branded server hardware (please no, Microsoft). But I don’t believe Microsoft sees either of those as a path forward. For on-premises, the future of energy efficiency with Windows Server lies in virtualization and consolidation on top of Hyper-V and Windows Server 2012+. For off-premises, the future of energy efficiency with Windows Server appears to be Azure. I certainly don’t expect to see an ARM-based Windows Server anytime soon. If I do, I’d really like to see the economic model that justifies it, and what the OS would sell for.