28 Aug 16

It doesn’t have to be a crapfest

A bit ago, this blog post crossed my Twitter feed. I read it, and while the schadenfreude made me smirk for a minute, it eventually made me feel bad.

The blog post purports to describe how a shutdown dialog became a shitty shutdown dialog. But instead, it documents something I like to call “too many puppies” syndrome. If you are working on a high-visibility area of a product – like the Windows shell, and Explorer in particular – everybody believes their opinion is the right direction. It’s like dogs and a fire hydrant. My point really isn’t to be derisive here, but to point out that the failure of that project does not seem to be due to any other teams. Instead, it seems to have been due to some combination of unclear goals and a fair amount of the team he was on being lost in the wilderness.

I mentioned on Twitter that, if you are familiar with the organizational structure of Windows, you can see the cut lines of those teams in the UI. A reply mentioned Conway’s law – which I was unfamiliar with, but which basically states that a system designed by an organization will reflect the structure of that organization.

But not every project is doomed to live inside its own silo. In fact, some of my favorite projects that I worked on while I was at The Firm were ones that fought the silo, and the user won. Unfortunately, this was novel then, and still feels novel now.

During the development of Windows Server 2003, Bill Veghte, a relatively new VP on the product, led a series of reviews where he had program managers (PMs) across the product walk through their feature area/user scenario, to see how it worked, didn’t work, and how things could perhaps be improved. Owning the enterprise deployment experience for Windows at the time, I had the (mis?)fortune of walking Bill through the setup and configuration experience with a bunch of people from the Windows Server team.

When I joined the Windows “Whistler” team just before Beta 2, the OS that became Windows XP – described by a teammate as a “lipstick on a chicken” release – was already solidifying. While we had big dreams of future releases like “Blackcomb” (which never happened), Whistler was limited largely by time to the goal of shipping the first NT-based OS to replace both Windows ME and the 9x family for consumers, and Windows 2000 for businesses.

Windows Server, on the other hand, was to ship later. (In reality, much, much later, on a branched source tree, due to the need to <ahem/> revisit XP a few times after we shipped it.) This meant that the Windows Server team could think a bit bigger about shipping the best product for their customers. These scenario reviews, which I really enjoyed attending at the time, were intended to shake out the rattles in the product and figure out how to make it better.

During my scenario review, we walked through the entire setup experience – from booting the CD to configuring the server. If you recall, this meant walking through some really ugly bits of Windows. Text-mode setup. F5 and F6 function keys to install a custom HAL or mass-storage controller drivers during text-mode setup. Formatting a disk in text-mode setup. GUI-mode setup. Fun, fun stuff.

Also, some forget, but this was the first time that Windows Server was likely to ship with different branding from the client OS. Yet the Windows client branding was… everywhere. Setup “billboards” touted OS features that were irrelevant on a server, as did wizards and help files. Setup even loaded drivers for PCMCIA cards and other peripherals that a server would never need or use in the real world, and the shutdown menu offered verbs that made no sense on a server, like standby or hibernate.

A small team of individuals on the server team owned the resulting output from these walkthroughs, which went far beyond setup, and resulted in a bunch of changes to how Windows Server was configured, managed, and more. In terms of my role, I wound up being their liaison for design change requests (DCRs) on the Windows setup team.

There were a bunch of things that were no-brainers – fixing Windows Setup to be branded with Windows Server branding, for example. And there were a ton of changes that, while good ideas, were just too invasive to make, given the timeframe that Windows Server was expected to ship in (and that it was still tethered to XP’s codebase at that time, IIRC). So lots of things were punted out to Blackcomb, etc.

One of my favorite topics of discussion, however, became the Start menu. While Windows XP shipped with a bunch of consumer items in the Start menu, almost everything it put there was… less than optimal on a server. IE, Outlook Express, and… Movie Maker? Heck, the last DCR I had to say no to for XP was a major customer telling us they didn’t even want Movie Maker in Windows XP Pro! It had no place on servers – nor did Solitaire or the Windows XP tour.

So it became a small thing that David, my peer on the server team, and I tinkered with. I threw together a mockup and sent it to him. (It looked a lot like the finished product you see in this article.) No consumer gunk. But tools that a server administrator might use regularly. David ran this and a bunch of other ideas by some MVPs at an event on campus, and even received applause for the work.

As I recall, I introduced David to Raymond Chen, the guru of all things Windows shell, and Raymond and David wound up working together to resolve several requests that the Windows Server team had in the user interface realm. In the end, Windows Server 2003 (and Server SP1, which brought x64 support) wound up being really important releases to the company, and I think they reflected the beginning of a new maturity at Microsoft on building a server product that really felt… like a server.

The important thing to remember is that there wasn’t really any sort of vehicle to reflect cross-team collaboration within the company then. (I don’t know if there is today.) It generally wasn’t in your review goals (those all usually reflected features in your team’s immediate areas), and compensation surely didn’t reflect it. I sat down with David this week, having not talked for some time, and told him how most of my favorite memories of Microsoft were working on cross-team projects where I helped other teams deliver better experiences by refining where their product/feature crossed over into our area, and sometimes beyond.

I think that if you can look deeply into a product or service that you’re building and see Conway’s law in action, you need to take a step back. Because you’re building a product for yourself, not for your customers. Building products and services that serve your entire customer base means always collaborating, and stretching the boundaries of what defines “your team”. I believe the project cited in the blog post I referenced above failed both because there were too many cooks and because anyone with the power to control the conversation seems to have forgotten what they were cooking.

30 Oct 13

Windows Server on ARM processors? I don’t think so.

It’s hard to believe that almost three years have passed since I wrote my first blog entry discussing Windows running on the ARM processor. Over that time, we’ve seen an increasing onslaught of client devices (tablets and phones) running on ARM, and we’ve watched Windows expand to several Windows RT-based devices, then retract until the Surface RT and Surface 2 were the only ARM-based Windows tablets – and now the impending Nokia 2520 is the only non-Microsoft (and the only non-Nvidia) Windows RT tablet. That is, for as long as Nokia isn’t a part of Microsoft.

Before I dive into the topic of Windows on ARM servers, I think it is important to take a step back and assess Windows RT.

Windows RT 8.1 likely shows the way that Microsoft’s non-x64 tablets will go – with less and less emphasis on the desktop over time, specifically as we see more touch-friendly Office applications in the modern shell. In essence, the strength Microsoft has been promoting Windows RT on (Office! The desktop you know!) is also its Achilles’ heel, due to the bifurcated roles of the desktop and modern UIs. But that’s the client – where, if Microsoft succeeds, the desktop becomes less important over time and the modern interface becomes more important. A completely different direction from servers.

Microsoft will surely tell you that Windows RT, like the Windows Store and Surface, is a long-term investment, not a short-term bet. That said, I think you’d have to really question anybody who tells you “Windows RT is doing really well.” Many partners kicking Windows RT’s tires ahead of launch bolted before the OS arrived, and every other ODM/OEM building or selling Windows RT devices has abandoned the platform in favor of low-cost Intel silicon instead. The Windows Store may be growing in some aspects, but until it is healthy and standing on its own, Windows RT plays second fiddle to Windows 8.x, where the desktop is available to run “old software”, as uninspiring as that may be on a tablet.

For some odd reason, people are fascinated with the idea of ARM-based servers. I’ve wound up in several debates/discussions with people on Twitter about Windows on ARM servers. I hope it never happens, and I don’t believe it will. Moreover, if it does, I believe it will fail.

ARM is ideal for a client platform – especially a clean client platform with no legacy baggage (Android, iOS, etc.). It is low-power, highly customizable silicon. Certainly, when you look at data centers, the first thing you’ll notice is the energy consumption. Sure, it’d be great if we could reduce that by using ARM. But I’m really not sure replacing systems running one instruction set with systems running another is a) viable or b) the most cost-effective way to go about making the infrastructure more energy efficient.

Windows RT is, in effect, a power-optimized version of Windows 8, targeted to Nvidia and Qualcomm SoCs. It cannot run (most) troublesome desktop applications, and as a result doesn’t suffer from decades of Win32 development bad habits, with applications constantly pushing, pulling, polling and waiting… Instead, Windows RT is predominantly based around WinRT, a new, tightly marshaled API set intended to (in addition to favoring touch) minimize power consumption of non-foreground applications (you’ll note, the complete opposite of what servers do). Many people contemplating ARM-based Windows servers don’t seem to understand how horribly this model (WinRT) would translate to Windows Server.

I talked earlier this year about the fork in the road ahead of Windows Server and the Windows client. I feel that it is very important to understand this fork, as Windows Server and client are headed in totally different directions in terms of how you interact with them and how they fulfill their expected role:

  • Windows client shell is Start screen/modern/Explorer first. Focuses on low-power, foreground-led applications, ARM and x86/x64, predominantly emphasizing WinRT.
  • Windows Server shell is increasingly PowerShell first. Focuses on virtualization, storage, and networking, efficient use of background processes, x64 only, predominantly emphasizing .NET and ASP.NET.

For years, Microsoft fought Linux tooth and nail to be the OS of choice for hosters. There’s really not much money to be made at that low end when you’re fighting against free and can’t charge for client access licenses – Microsoft’s bread and butter. Microsoft offered low-end variants of Windows Server to try and break into this market: cheaper prices mixed with hamstrung feature capabilities, etc. In time, the custom edition was dropped in favor of less restrictive licensing of the regular editions of Windows Server 2012. But this isn’t a licensing piece, so I digress.

It is my sincere hope that there are enough people left at Microsoft who still remember the Itanium. We’ll never know how much money ($MM? $BB?) was wasted trying to make Windows Server and a few other workloads successful on the Itanium processor. Microsoft spent considerable time and money getting Windows (initially client and server, eventually just server) and select server applications/workloads ported to Itanium. Not much in terms of software ever actually made it over. Now it is dead – like every other architecture Windows NT has been ported to, other than x64 (technically a port, but quite different) and, for now, ARM.

That in mind, I invite you to ponder what it would take to get a Windows Server ecosystem running on ARM processors, doing the things servers need to do. You’d need:

  1. 64-bit ARM processors from Nvidia or Qualcomm (SoCs already supported by Windows, but in 64-bit forms)
  2. Server hardware built around these processors – likely blade servers
  3. Server workloads for Windows built around these processors – likely IIS and a select range of other roles, such as a Hadoop node, etc.
  4. .NET Framework and other third-party/dev dependencies (many of these are in place due to Windows RT, but are likely 32-bit, not 64-bit)
  5. Your code, running on ARM. Many things might just work, lots of things just won’t.

That’s just the technical side. It isn’t to say you couldn’t do it – or that part of it might not already be done within Microsoft – but otherwise it would be a fairly large amount of work with a very, very low payoff for Microsoft. Which leads us, briefly, to the licensing side. You think ARM-based clients are scraping the bottom of the pricing barrel? I don’t think Microsoft could charge nearly the price it does for Windows Server 2012 R2 Standard on an ARM-based server and have it be commercially viable (when going up against free operating systems). Charge less than Windows Server on x64, and you’re cannibalizing your own platform – something Microsoft doesn’t like to do.

Of course, the biggest argument against Windows Server on ARM processors is this: www.windowsazure.com. Any role that you would likely find an ARM server well suited for, Microsoft would be happy to sublet you time on Windows Azure to accomplish the same task. Web hosting, Web applications, task nodes, Hadoop nodes, etc. Sure, it isn’t on-premises, but if your primary consideration is cost, using Azure instead of building out a new ARM-based data center is probably a more financially viable direction, and is what Microsoft would much rather see going forward.

The energy efficiency is explicit – you likely pay a fraction of what you would for the same fixed hardware workload running on-premises on x64 Windows, and you pay “nothing” when the workload is off in Azure. You can also expand or contract your scale as needed, without investing in more hardware (and you run the same code you would on-premises – not the case with ARM). Microsoft, being a Devices and Services company now, would much rather sell you a steady supply of Windows Azure-based services than Windows Server licenses that might never be updated again.

Certainly, anything is possible. We could see Windows Server on ARM processors. We could even see Microsoft-branded server hardware (please no, Microsoft). But I don’t believe Microsoft sees either of those as a path forward. For on-premises, the future of energy efficiency with Windows Server lies in virtualization and consolidation on top of Hyper-V and Windows Server 2012+. For off-premises, the future of energy efficiency with Windows Server appears to be Azure. I certainly don’t expect to see an ARM-based Windows Server anytime soon. If I do, I’d really like to see the economic model that justifies it, and what the OS would sell for.