At the end of your life, you take nothing with you. You leave behind everything.
If you’ve spent your life taking, you leave behind a legacy of taking.
If you’ve spent your life giving, you leave behind a legacy of giving.
You decide. Every day.
During the last week, I have had an incredible number of conversations about Office 365 with press, customers, and peers. It's apparent that, as has happened with Microsoft many times before at v3.0, this third version of its hosted services is the one that could put some points on the board, if not take a lead in the game.
But one thing has been painfully clear to me for quite some time, and the last week only serves to reinforce it. As I've mentioned before, there's not only confusion about Microsoft's on-premises and hosted offerings, but confusion about what Office 365 even is. The definitions are squishy, and Microsoft isn't doing a great job of articulating what Office 365 brings to the table. Many assume that Office 365 is primarily about the Office client applications (when in fact only the premium business editions of Office 365 even include the desktop suite!). Many others assume that Office 365 is only hosted services and Web-based applications, along the lines of Google Apps for Business.
The truth is, there's a medley of Office 365 editions among the four Office 365 "families" (Small Business, Midsize Business, Enterprise/Academic/Government, and Home Premium). But one constant holds across them – Office 365 is about hosted services (Exchange Online/Lync Online/SharePoint Online for businesses, or Outlook.com/Skype/SkyDrive for consumers), and – predominantly – the Office desktop application suite.
I bring this up because many people point at native applications and Web applications and say that there is a chasm growing… an unending rift that threatens to tear apart the ecosystem. I disagree. I think it is quite the opposite. Web apps (“cloud apps” if you like) and native apps (“apps”) are colliding at high speed. Even today it isn’t really that easy to tell them apart, and it’s only going to get harder.
When Adobe announced their Cloud Connect service last week, some people said there wasn’t much “cloud” about it. In general, I agree. To that same end, one can point a finger at Office 365 and say, “that’s not cloud either” because to deliver the most full-featured experience, it relies upon a so-called “fat client” locally installed on each endpoint, even though for a business, a huge amount of the value, and a large amount of the cost, is coming from the cloud services that those apps connect to.
To me, this is much ado about nothing. While it's true that one can't call Office 365 (or Cloud Connect) a 100% cloud solution, at least in the case of Office, each version of Microsoft's hosted services has come closer than the one before to delivering much of the value of a cloud service, yet it continues to rely on these local bits rather than running the entire application through a Web browser. With Office, this is quite intentional. The day Office runs as well on the Web as it does on Windows is the day that Microsoft announces they're shutting down the Windows division.
But what's interesting is that as we discuss and debate whether Microsoft's and Adobe's offerings are indeed "cloudy enough" as they strive to deliver more thick apps as a service, Google is working on the opposite: applications that run in the browser but exploit more local resources. When we look at the high-speed collision of Android into ChromeOS, as well as Microsoft's convergence of Web development into the WinRT application framework, this all begins to make sense as a goal.
In 1995, as the Web was dawning, it wasn’t about applications. It was about sites. It gradually became about applications and APIs – about getting things done, with the Web, not our new local networks, as the sole communication medium. Conversely, even the iPhone began with a very finite suite of actions that a user could perform. One screen of apps that Apple provided, and extensibility only by pinning Websites to the Home screen. Nothing that actually exploited the native power and functionality of the phone to help users complete tasks more readily. Apple eventually provided the full SDK that enabled native, local applications, which would still often connect out to the Internet to perform their role – when the Internet was available.
Windows has largely always been about “fat client” applications, even going so far as to have the now quite old – but once new and novel – Remote Desktop Protocol to enable fat clients to become light-ish weight, as long as a network connection back to the server (or eventually desktop) running the application was available.
I bring these examples up because the idea of “cloud applications” or cloud services is, as I noted, becoming squishy and hard to explicitly define, though I have to personally consider whether I really care that deeply about when applications are or are not cloudy (or are partly cloudy?).
Users buy (or use) applications because they have a specific task they need to complete. Users don’t care what framework the application is written in, what languages were used, what operating system any back-end of the application is running on, or what Web server it is connecting to.
What users do care about is completing the task that led them to that application in the first place. Importantly, they need productivity wherever it can be available. With applications that are cloud-only, when you have a slow or nonexistent Internet connection, you are… dead. You have no productivity. Flying on a plane but editing a Word document? You need a fat client. Whether it's Google Apps for Business running on a Chromebook (with caching), QuickOffice on an iPad, or Office 2013 Pro Plus running on a Windows 7 laptop, without some local logic and file caching, you're SOL at 39,000 feet without an Internet connection.
Conversely, if you are solely using Microsoft Office (or Pages), and you're editing that important doc at an airport that happens to have WiFi before a flight that does not, you might be SOL if you don't sync the document to the Web and then accidentally leave your laptop on board the flight afterwards, never to be seen again. Once upon a time, productivity meant storing files locally only, or hand-pushing files to the Web. Both Office 2013 and Apple's iWork (through iCloud) now offer great synchronization.
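That offline-first behavior can be sketched in a few lines. This is a toy model, not any vendor's actual sync implementation: edits always land in a local store, and a queue of pending changes is flushed to the cloud whenever connectivity returns.

```python
class DocumentStore:
    """Toy offline-first document store: local edits always succeed,
    and changes made offline sync to the cloud once reconnected."""

    def __init__(self):
        self.local = {}      # doc name -> latest contents (always available)
        self.cloud = {}      # simulated cloud-side copy
        self.pending = []    # names of docs edited while offline
        self.online = False

    def edit(self, name, contents):
        """Edits succeed regardless of connectivity."""
        self.local[name] = contents
        if self.online:
            self.cloud[name] = contents
        else:
            self.pending.append(name)

    def set_online(self, online):
        """On reconnect, flush everything edited while offline."""
        self.online = online
        if online:
            for name in self.pending:
                self.cloud[name] = self.local[name]
            self.pending.clear()
```

At 39,000 feet, `edit()` keeps working against the local store; when the plane lands and `set_online(True)` fires, the cloud copy catches up, so losing the laptop later costs you nothing.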
The point is that there is value to having a thicker client:
But there is value to taking advantage of the Web:
But I believe that the merit of both mean that the future is in applications that are both local and cloudy – across the board. Many people are bullish that Chromebooks are the future. Many people think Chromebooks are bull. I think the truth is somewhere in the middle. As desktop productivity evolves, it will have deeper and deeper tentacles out to the Web – for storage and backup, for extensibility, and more. Conversely, as purely Web-based productivity evolves, expect the opposite. It will continue to have greater local storage and more ability to exploit local device capabilities, as we’re seeing Chrome and ChromeOS do.
Office 365 isn’t a cloud-only service in most tiers. Nor do I ever really expect it to be. Frankly, though, Google Apps isn’t really a cloud-only service today – and I don’t expect it to go any direction except towards a more offline capable story as well. Web apps and native apps aren’t a binary switch. We won’t have one or the other in the future. Before too long, most Web apps will have a local component, and most local applications will have a Web component. The best part is that when we reach this point, “cloud” will mean even less than it means today.
A few weeks ago I wrote about gestures on the Mac vs. Windows 8. By and large, I've shifted to using my Mac with most apps in full-screen, really making the most of the gestures included in OS X 10.8. It isn't always easy, as certain apps (looking at you, Word 2011) don't optimally use full-screen. Word has Focus mode (its own full-screen model) and now supports OS X's full-screen mode – but not together. That means if you shift to Focus mode, gestures don't work as well as they could, since Word is still on the desktop. More importantly, when working on a project, I often need two or more windows open at once. For this, full-screen doesn't work, but something like Windows 7 Snap is ideal.
I’ve found quite a few tools over the past few weeks that have made working on the Mac an enjoyable experience. Some of these (Pages, and Office for Mac 2011) I’ve owned for a while. But most are things I’ve purchased since I bought my 13″ Retina MBP. In alphabetical order, here’s the list:
Yesterday I read Paul Miller’s piece on The Verge, I’m still here: back online after a year without the internet.
Having decided not nearly as long ago (4 days) to take a break from Twitter and Facebook, I found the piece timely.
I recently decided to take a bit of a timeout from Twitter – and even more so from Facebook – because I felt that the energy I put into them, and the negative energy I received back, was more expensive than any reward I got in return. Paul obviously did better than I at staying away.
For me, the value of most social networks is <meh>. LinkedIn serves as little more than an on-line resume for people who want to know who I am. Facebook has turned into a drivel-fest. It once served as a photo-sharing hub between me and my friends, but I recently realized I'm not that interested in a lot of what the people I follow post on Facebook: "<LIKE> if you think this is a funny meme!" and political opinions that collide with my own. So even in just taking a few days off from actively posting to Facebook, I've realized that I don't miss it that much. My plan is for Facebook to become what LinkedIn already is to me. I won't destroy my profile, but I'm effectively done contributing to the network.
Twitter, however? Twitter is a different beast. I’m going to open up a bit here… For much of my life, even though I’m a pretty extroverted, gregarious, chatterbox of a person, I’ve been kind of lonely. I don’t have a ton of people I’d describe as true friends, but I do enjoy talking to people – in person, on the phone, or over the Internet. It provides me with a sense of connectedness, and of belonging. I really enjoy the connections I make, and conversations I have, on Twitter.
Many of us seek connectedness throughout our world. It helps us to feel that we belong, and helps us build our own community structure around ourselves that can help us feel stronger about our own emotional well-being. I feel, though, that many of the things we hear from others can make us feel that these connections are either unhealthy, or may portray us as “weak” in their eyes. I think this perception is more dangerous than the desire most of us have to find connections over the Internet.
The world today is comprised of fractured communities. Few families stay in one place for generations any longer. People move when and where their school, career, and professional opportunities require them to, and as a result, we often feel disconnected from the world immediately around us. As a result, many of us (myself included) reach out to the Internet to make us feel like we belong; LinkedIn to stay connected to former co-workers, Facebook to stay connected to friends and family we may no longer be geographically close to. Twitter has become my primary community building and knowledge-sharing tool. I have met so many interesting people who have taught me so much, and often inspired me so much over the last nearly 5 years (I joined Twitter on May 31, 2008), and have met several people who I would consider friends, at least as we define “digital friends” that know you pretty well, even without having met you in person. I do grow tired of Twitter becoming an echo chamber for news, politics, and technology rumors (especially incorrect information in all three categories); for this reason, I’m looking to change how I use Twitter a bit again, as I don’t want to let it consume more time than it really warrants – but I have yet to decide how that plays out. We shall see.
Twitter and other social networks are really just digital communities – networks of like-minded people looking to connect with each other – an Internet of people. As the Internet started by connecting together multiple private networks of computers to create a giant public network for the benefit of all connected to it, the Internet is changing how we communicate, collaborate, and build and maintain communities. Relationships that we would have felt 10 years ago required you to have worked closely with someone for years can exist solely in the digital realm now. As the Internet interconnected disparate systems around the world for their mutual benefit, it does the same for individuals. I believe it is important to not diminish the role that digital networks can play in our own well-being, and not allow ourselves to feel shamed about the fact that social networks can help us feel that we belong, and make us feel connected. The importance in balancing it lies in finding, building, and nurturing our offline relationships with family and friends as well as online. The risk to well-being comes when you aren’t keeping up the offline relationships – or fail to deal with things that are going awry – by burying yourself in online communities, games, etc. Like many things in life, moderation is the key.
A friend recently posted a link to this blog. It’s an interesting read about where you should focus when building your app; should you have one app for each platform, or an API that goes as high up as possible into each platform?
In particular, he quotes the expression, “the API is the asset, the UI is simply throwaway”.
I get the point he's trying to make. Platforms come and go – but an API should be designed to be durable. I kind of agree, and I kind of don't. Let me explain.
When a developer builds an API, it generally exposes rough verbs that relate to user tasks. When a designer or developer builds an application, it should be entirely defined by the tasks that a user needs to complete, and ideally, take advantage of distinct benefits of each platform where the investment to comply with those hooks increases the ease of use of the application.
In a nutshell, you are designing an API to expose a service, and an application to deliver an experience. The goal of a good development team should be to take the API as high up the stack as the application will allow – without exposing the user to the flow of the API directly. Think of an old recliner with the padding crushed down over time. You feel every nuance of the springs or metal bars holding it together. A good application design provides the padding to shield the end user from that pain, without overstuffing it. You want to invest enough in the UI to deliver an experience representative of (your application + that platform). Perhaps the expression quoted isn’t intended to be so harsh towards the UI as to make it seem like a wood veneer appliqué, but that’s how I read it. It’s true – you want to make as much of your code as portable as possible (the API), but invest where you need to in order to provide the best experience (the UI).
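The layering above can be sketched concretely. All of the names below (NotesAPI, DesktopUI, MobileUI) are invented for illustration: the API exposes portable task verbs, while each thin UI layer is the "padding" that adapts the same service to its platform's conventions.

```python
class NotesAPI:
    """The portable asset: task verbs, no platform assumptions."""

    def __init__(self):
        self._notes = []

    def add_note(self, text):
        self._notes.append(text)

    def list_notes(self):
        return list(self._notes)


class DesktopUI:
    """Platform-specific padding over the API's 'springs'."""

    def __init__(self, api):
        self.api = api

    def render(self):
        # A desktop app might show a bulleted list; sketched here as text.
        return "\n".join(f"* {n}" for n in self.api.list_notes())


class MobileUI:
    """A second thin UI over the same portable API."""

    def __init__(self, api):
        self.api = api

    def render(self):
        # A phone app might truncate titles for a narrow screen.
        return "\n".join(n[:20] for n in self.api.list_notes())
```

The point of the split: when a platform dies, `DesktopUI` or `MobileUI` is the throwaway part, while `NotesAPI` and its task verbs carry forward unchanged.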
The goal of the API is to provide structure, the goal of the user interface is to provide the abstraction between your API and the user experience your application seeks to deliver for that platform. Peanut butter and chocolate.
“It’s increasingly likely that a small group of well-financed people are going to be able to really bring this country to its knees.”
I couldn’t agree more, which is why we shouldn’t let them be re-elected. Anyone willing to grab a pitchfork and stab the rule of law in the name of fear doesn’t deserve to hold office in this country.
(Linked from USA Today)
Stop buying more stuff, believing that buying more stuff will make you happy.
Happy Earth Day.
Alright – shake out the giggles from the title, and let me show you why I said that.
Until recently I had been using Windows 8 every day – and recently switched to a Mac (running 10.8 Mountain Lion) as my primary computing device. The more I have used Mountain Lion – especially with apps in full-screen mode – the more certain things felt subtly similar to Windows 8.
I believe that Mountain Lion is yet another step in Apple’s gradual (some might say slow) rhythm to converge the iOS and OS X platforms, as iOS devices become more capable and OS X becomes more touch friendly, but Apple is doing it in a very cautious way – slowly building a visual and functional perimeter around Mac applications to make them behave much more like iOS applications. I have a thesis around that, which I’ll try to discuss in another post soon. But the main point is that Apple and Microsoft are both shooting for relatively common goals – immersive applications available from an application marketplace that they control for their platforms – with an increasing emphasis on touch – or at least on gestures. I’m not going to say who cloned whom, as many of these are simply examples of multiple discovery, where Apple and Microsoft, largely now chasing common goals, implement similar features in order to achieve them. Let’s take a look at a few similarities.
From the first time you sign on to Windows 8 or Mountain Lion, the similarities begin. On Windows 8, it tries the hard sell to get you to use a Microsoft Account for your identity – not linking it to a local account as you can do with an Active Directory account, but making your Microsoft Account a local account, and enabling you to synchronize settings (but currently not applications and the Start screen) between two or more computers.
Apple, on the other hand, doesn't embed iCloud quite as in-your-face, and doesn't use it to synchronize most settings (or Dock items – unlike its predecessor, MobileMe), but does embed it all over the operating system with several built-in features (such as Safari tab syncing across OS X and iOS), Photo Stream, Notes, and Reminders, with applications also able to hook in on their own for storage. Unlike SkyDrive, iCloud (like the file system on iOS) is opaque and not user-navigable – only exposed through applications and operating system features that elect to hook into iCloud. Speaking of hooking into iCloud, some apps like TextEdit ask if you want to save new or existing documents locally or in iCloud (with a dialog that is, honestly, un-Apple-like).
Both Windows 8 and Mountain Lion provide a "heads-up" approach to launching applications. With Windows 8, this is the Start screen. With OS X, it is Launchpad, first introduced with OS X Lion in 2011. Windows 8's Start screen (love it or hate it) is a full-screen (usually multi-screen, continuously scrolling) launcher. This launcher can feature notifications and additional information from the applications themselves. Applications can be grouped, and "tiles" can be resized, but not combined into collapsible folders, and are somewhat fussy about placement. Windows does provide interactivity through the Start screen, in the form of Live tiles. See the Weather app below for an example of a Live tile, and Productivity as an example of a group. To my point about fussiness – note the Remote Desktop tile, and the two to its left. Remote Desktop cannot currently be placed underneath CalcTrek in that column – the Start screen always wants columns of a set width (one wide column or two double-width columns), not a single-width column.
Since OS X Lion (10.7, almost two years ago), Apple has included Launchpad, which is a feature that presents a (drum-roll, please) full-screen (usually multi-screen, individually paged, as in iOS) application launcher. Unlike the Start screen, Launchpad does not feature any sort of status for applications. They are a static “sea of icons” as Microsoft likes to say about iOS. Instead, notifications now use the Apple Notification Center, which is integrated into the shell. Launchpad application icons don’t ever have notification “badges”, say for reminders or new mail. Instead, notifications are available for applications that are in the OS X Dock or in Notification Center. One or more application icons in Launchpad can be grouped together into a folder, which can be named – just as in iOS. Here is Launchpad:
Intriguingly, OS X Mountain Lion added a much-needed feature to Launchpad (one Windows 8 has featured from the first day the public saw it): type to search the list of applications. Here is Windows 8 app search, and here is the same feature in OS X.
File under "obvious comparison point". Beginning with OS X Lion in 2011, the Mac App Store offered a limited selection of applications for free download or purchase. In Lion, these were effectively just Mac apps whose developers were willing to forgo 30% of their sales revenue to be in the store (they didn't have to live within tight constraints). In Mountain Lion, apps were forced to live within the confines of a sandbox, much like applications on iOS – where the damage one app can do to others, the operating system, or user data is limited. Windows Store applications (WinRT applications) by definition must live within a very strict sandbox – in many ways stricter than the rules required beginning with Mountain Lion.
The Windows Store follows the same design paradigms as other Windows 8 applications. In general, the design of the Windows Store and the App Store on OS X are remarkably similar. A significant difference is that Windows Store applications can be – at the developer’s discretion – provided as trials. No such feature is explicitly available in the App Store, though some developers do achieve a similar goal by providing a free simplified or limited version of the application that is unlocked through an in-app purchase.
Windows Store applications, by definition, are immersive. The full-screen user interface is designed to remove window chrome and let the application itself shine through. Windows Store applications must be either full-screen, snapped, or backgrounded. The next release of Windows is expected to add more window modes for Windows Store applications, but will still not add (back) overlapping windows – in other words, it will still be more like Windows 1.0's tiled windows than the overlapping windows that Windows 2.0 introduced.
Here is an example of a Windows Store application, the immersive mode of Internet Explorer – which is only capable of being run full-screen or snapped with another app, not in a standalone window:
Here is an example of a full-screen application on OS X Mountain Lion. Note that not all applications can run full-screen; however, every application that supports full-screen can also run windowed. Here is an example of Pages running full-screen on Mountain Lion:
Here is Pages with that same document in a window. The full-screen models of both Mountain Lion and Windows 8 feature hidden menus. The Windows 8 App bar as implemented for Windows Store applications is hidden off the screen to the top or bottom of the application, and can be implemented in wildly varying implementations by developers. The menus for full-screen applications in Mountain Lion are effectively the same Apple Menu-based menu that would normally appear when it was running not in full-screen. The main difference is that the Apple Menu in non Full-screen mode is detached – like Mac applications have always been. In full-screen mode, the menu behaves much more like a Windows application, stuck to the application running full-screen. The menu is hidden until the cursor is hovered over an area a few pixels tall across the top of the screen. Similarly, the Dock is always hidden when applications are running full-screen, until the cursor hovers over a similar bar of space across the bottom of the screen.
What is kind of fascinating to consider here is that Internet Explorer 10 in Windows 8 is, in many ways, mirroring the functionality provided by a Lion/Mountain Lion full-screen application. It is one binary, with two modes – Windowed Win32, and full-screen immersive – just as Pages is displaying in the images shown and linked earlier.
In “desktop mode”, both Windows 8 and OS X Mountain Lion focus more on gestures than previous releases of both. With a touch-screen or trackpad, Windows 8 is very usable (I believe more usable than it is with a mouse), once you have mastered the gestures included. Both have aspects of the shell and many applications that recognize now common gestures such as pull to refresh, pinch to zoom, and rotation with two fingers.
Windows 8 provides a single, single-finger swipe in from the left to switch applications one at a time; the gesture can be expanded to show a selection of recently run applications, including the desktop. Though I find Windows 8's app-switching gesture limited, it works, and could be expanded in the future to be more powerful. Here you can see Windows 8's application switcher.
I have used gestures in iOS for the iPad since they first arrived in a preview form that required you to enable them through Xcode. The funny thing about these gestures is, while they aren’t necessary to use on the iPad, they are pretty easy to learn, and can make navigating around the OS much easier. When I started using my rMBP with its built-in trackpad and a Magic Trackpad at my desk, I quickly realized that knowing those gestures immediately translated to OS X. While you don’t need to know them there either, they make getting around much easier. Key gestures are common between iOS on the iPad and on OS X:
Here is OS X’s Mission Control feature, exposing two full-screen applications (iTunes and Pages) and three applications on the desktop (Reminders, Safari, and Mail):
The most fascinating thing here is that, while Windows 8 has been maligned for its forced duality of immersive-land and the legacy desktop, the Mac is actually doing the same thing – it just isn't forcing applications to be full-screen (yet). Legacy applications run on the desktop, and new applications written to the latest APIs run full-screen and support gestures. Quick – was that sentence about Windows 8, or Mountain Lion? It applies equally to both!
I think it’s very interesting to take a step back and see where Apple has very gradually moved forward over the last several instances of OS X, towards a more touch and immersive model, where Microsoft took the plunge with both feet, focusing first on touch, while leaving the Win32 desktop in place – but seemingly as a second-class citizen in priority to WinRT and Windows Store applications.
The next several years will be quite interesting to watch, as I think Apple and Microsoft will wind up at a similar place – just taking very different steps, and very different timeframes, to get there.
I’ve been pondering the existence of devices like the Asus PadFone and PadFone 2 recently.
Not really convertible devices, not really hybrid devices, they’re an electronic centaur. Like an Amphicar or a Taylor Aerocar, the PadFone devices compromise their ability to be one good device by instead being two less than great devices.
I haven’t found a good description of devices like the PadFone – I refer to them as “form integrated”. One device is a dumb terminal and relies on the brain of the other.
While a novel approach, the reality is that form integrated devices are a bit nonsensical. Imagine a phone that integrates with a tablet, or a tablet that integrates into a larger display. To really work well, the devices must be acquired together, and if one breaks, it kills the other (lose your Fone from the PadFone, and you’ve got a PadBrick).
You also wind up with devices where the phone must be overpowered in order to drive the tablet (wasting battery) or a weak phone that results in a gutless tablet when docked.
Rather than this “host/parasite” model of the form integrated approach, I would personally much rather see a smart pairing of devices. Pairing of my iPhone, iPad, and Mac, or pairing of a Windows Phone, Windows 8 tablet, and a Windows 8 desktop.
What do I mean by smart pairing? I sit down at my desktop, and it sees my phone automatically over Bluetooth or the like. No docking, no need to even remove it from my pocket. Pair it once, and see all the content on it. Search for “Rob”, and see email that isn’t even on the desktop. Search for “Windows Blue”, and it opens documents that are on the iPhone.
The Documents directory on my desktop should be browsable from my phone, too (when on the same network or if I elect to link them over the Internet).
Content, even if it is stored in application silos, as Windows Store applications and iOS/OS X applications do, should be available from any device.
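As a thought experiment, the kind of paired search described above might look like this. Everything here is hypothetical, not any shipping product's API: a query fans out to every paired device, and each hit reports which device it lives on, so mail that exists only on the phone still turns up on the desktop.

```python
class Device:
    """Toy model of one device in a smart pairing."""

    def __init__(self, name):
        self.name = name
        self.items = []  # (title, body) pairs stored on this device

    def store(self, title, body):
        self.items.append((title, body))

    def search(self, term):
        """Case-insensitive match over titles and bodies on this device."""
        term = term.lower()
        return [(self.name, title) for title, body in self.items
                if term in title.lower() or term in body.lower()]


def paired_search(devices, term):
    """Fan a query out to every paired device and merge the results;
    each hit is tagged with the device it actually lives on."""
    results = []
    for device in devices:
        results.extend(device.search(term))
    return results
```

Searching for "Rob" from the desktop then surfaces a message stored only on the phone – no docking, no host/parasite relationship, just two peers answering the same query.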
I think it would also be ideal if applications could keep context wherever I go. Apple’s iCloud implementation begins to do this. You can take a document in Pages across the Mac, iPad, and iPhone, and access the document wherever you are. Where Asus is creating a hardware-based pairing between devices, Apple is creating a software-based pairing, through iCloud. It is still early, and rough, but I personally like that approach better.
My belief is that people don’t want to dock devices and have one device be the brain of another. They don’t want to overpay for a pair of devices that aren’t particularly good at either role and instead will pay a premium for two great devices, especially if they integrate together seamlessly and automatically.
Much as I believe the future of automotive electronics is in "smartphone software integrated" head units rather than overly-complex integrated computing built into the car, the future of ubiquitous computing lies in a fabric of smart devices that work together, with the smartphone most likely being the key "brain" among them. Not with its CPU driving everything else, but instead with its storage being pervasively available wherever you are, without needing to be docked or plugged in.
Just under one year from now, on April 8, 2014, Windows XP leaves Extended Support.
There are three key questions I’ve been asked a lot during the past week, related to this milestone:
All important questions.
The first question can be exceedingly complex to answer. But for all intents and purposes, the end of Extended Support means that you will receive absolutely no updates – including security updates – after that date. While there are some paid support options for Windows XP after 4/8/2014, as we understand it they will be very tightly time-limited, very expensive, and implemented with a contractual, date-driven expectation that the organization will retire its remaining Windows XP desktops. There's no "get out of jail" card, let alone a "get out of jail free" card. If you have Windows XP desktops today, you have work to do, and it will cost you money to migrate away.
If you want to look for yourself, you can check Microsoft's downloads site – Windows XP still receives patches for both Windows itself and Internet Explorer (generally versions 6, 7, and 8 all get patched) every month. From April 2012 to April 2013, every month saw security updates to either Windows XP or IE on it – and 8 of the 13 months saw both. Many of these are not pretty vulnerabilities, and if left unpatched, they could leave targeted organizations exceedingly vulnerable after that date.
This leads us to the second question. In a game of chicken, will Microsoft turn and offer support after 4/8/2014?
Why are you asking? Seriously. Why? I was on the team that shipped Windows XP. I wish that, like a work of art, Windows XP could be timeless and run forever. But it can't (honestly, that theme is starting to get rather long in the tooth too). It's a piece of machinery – and machinery needs maintenance (and after a time, it usually needs replacement). Windows 2000 received its last patch the month before it left Extended Support. So, while 4/8/2014 is technically a Patch Tuesday, and Microsoft might give you one last free cup of joe, I'd put a good wager down that if you want patches after that day, you'd better plan your migration, get on the phone to Microsoft relatively soon, get a paid support contract in place, and be prepared to pay for the privilege of support while you migrate away.
Companies that are running Windows XP today – especially in any sort of mission critical or infrastructure scenario – especially if connected to the Internet, need to have a migration plan away to a supported operating system.
At a security startup I used to work at (not that long ago), it shocked me how many of our prospects had Windows 2000, Windows NT, or even older versions of NT or 9x in production (and often connected to networks or the Internet). Even more terrifying, many of these were mission-critical systems.
And this segues us to the third question. What happens to systems running after 4/8/2014? You can quote Clint Eastwood's "Dirty Harry" character: "Do I feel lucky? Well, do you?" It's not a good bet to make. Again, we've seen some nasty bugs patched in IE 6, 7, and 8, and in Windows XP itself, over the last year. While one would hope an OS 12 years on would be battle-hardened to the point of being bulletproof, that is not the case. Windows XP isn't bulletproof. It's weary. It's ready to be retired. Organizations with critical infrastructure roles still running Windows XP will have giant targets on them after next April, and no way to defend those systems.
A common thread I’ve also seen is a belief that a wave of Windows XP migrations over the next 12 months will mean anything, economically. It really isn’t likely to. While we will likely see a good chunk of organizations move away from Windows XP over the next year, doing so may mean finding budget to replace 5+ year old PCs, and patch, update, or purchase replacement Windows, Java, and Web applications that can run on newer operating systems. Most of the easy lifting has already been done. The last customers remaining are likely extremely hard, extremely “financially challenged”, or both. It may be unfortunate, but this time next year (and likely the year after that, and years after that), there will still be Windows XP systems out there, some of them running in highly critical infrastructure. Dangerous, but unfortunately, likely to be the case.