Sep 15

You have the right… to reverse engineer

This NYTimes article about the VW diesel issue and the DMCA made me think about how, 10 years ago next month, the Digital Millennium Copyright Act (DMCA) almost kept Mark Russinovich from disclosing the Sony BMG Rootkit. While the DMCA provides exceptions for reporting security vulnerabilities, it does nothing to allow for reporting breaches of… integrity.

I believe that we need to consider expanding how researchers are permitted, without question, to reverse engineer certain systems. While entities need a level of protection in terms of their copyright and their ability to protect their IP, VW’s behavior highlights the risks to all of us when commercial entities can ship black box code and ensure nobody can question it – technically or legally.

In October of 2005, Mark learned that putting a particular Sony BMG CD in a Windows computer would result in it installing a rootkit. Simplistically, a rootkit is a piece of software – usually installed by malicious individuals – that sits at a low level within the operating system and returns forged results when a piece of software at a higher level asks the operating system to perform an action. Rootkits are usually put in place to allow malware to hide. In this case, the rootkit was being put in place to prevent CDs from being copied. Basically, a lame attempt at digital rights management (DRM) gone too far.
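To make the “forged results” idea concrete, here is a minimal, illustrative sketch in C of the filtering technique. The function names here are hypothetical, and the real Sony BMG rootkit hooked kernel-mode system call tables rather than a user-mode API (the hooking mechanism itself is omitted); what it cloaked, as Mark documented, was anything whose name began with the prefix $sys$.

```c
/* Illustrative sketch only: how a rootkit returns forged results by
   intercepting an API that higher-level software relies on. The real
   Sony BMG rootkit hooked kernel-mode system call tables; the hooking
   mechanism is omitted and HookedFindNextFileW is a hypothetical name. */
#include <windows.h>
#include <wchar.h>

/* Pointer to the genuine API, captured by the (omitted) hooking code. */
static BOOL (WINAPI *RealFindNextFileW)(HANDLE, LPWIN32_FIND_DATAW) = FindNextFileW;

/* What callers unknowingly invoke instead of the real API. */
BOOL WINAPI HookedFindNextFileW(HANDLE hFind, LPWIN32_FIND_DATAW data)
{
    for (;;) {
        if (!RealFindNextFileW(hFind, data))
            return FALSE;                  /* no more directory entries */
        /* Silently skip any entry carrying the cloaking prefix, so it never
           shows up in Explorer, dir listings, or most malware scanners. */
        if (wcsncmp(data->cFileName, L"$sys$", 5) != 0)
            return TRUE;                   /* pass ordinary entries through */
    }
}
```

Anything that relies on that API, from Explorer to a backup tool to an antivirus scanner, simply never sees the cloaked files, which is exactly why rootkits are so attractive to malware authors.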

In late October, Mark researched this, and prepped a blog post outlining what was going on. We talked at length, as he was concerned that his debugging and disclosure of the rootkit might violate the DMCA, a piece of legislation put in place to protect copyrights and prevent reverse engineering of DRM software, among other things. So in essence, to stop exactly what Mark had done. I read over the DMCA several times during the last week of October, and although I’m not a lawyer, I was pretty satisfied that Mark’s actions fit smack dab within the part of the DMCA that was placed there to enable security professionals to diagnose and report security holes. The rootkit that Sony BMG had used to “protect” their CD media had several issues in it, and was indeed creating security holes that were endangering the integrity of Windows systems where the software had unwittingly been installed.

Mark decided to go ahead and publish the blog post announcing the rootkit on October 31, 2005 – Halloween. Within 48 hours, Mark was being pulled in on television interviews, quoted in major press publications, and repeatedly a headline on Slashdot, the open-source-focused news site, over the next several months – an interesting occurrence for someone who had spent almost his entire career in the Windows realm.

The Sony BMG disclosure was very important – but it almost never happened. Exceptions that allow reverse engineering are great. But security isn’t the only kind of integrity that researchers need to diagnose today. I don’t think we should tolerate laws that keep researchers from ensuring our systems are secure, and that they operate the way that we’ve been told they do.

Aug 15

Continuum vs. Continuity – Seven letters is all they have in common

It’s become apparent that there’s some confusion between Microsoft’s Continuum feature in Windows 10, and Apple’s Continuity feature in OS X. I’ve even heard technical people get them confused.

But to be honest, the letters comprising “Continu” are basically all they have in common. In addition to different (but confusingly similar) names, each feature is exclusive to its respective platform, and the two perform completely different tasks that are interesting to consider in light of how each company makes money.

Apple’s Continuity functionality, which arrived first, on OS X Yosemite late in 2014, allows you to hand off tasks between multiple Apple devices. Start a FaceTime call on your iPhone, finish it on your Mac. Start a Pages document on your Mac, finish it on your iPad. If they’re on the same Wi-Fi network, it “just works”. The Handoff feature that switches between the two devices works by showing an icon for the respective app you were using, which lets you begin using the app on the other device. Switching from iOS to OS X is easy. Going the other way is a pain in the butt, IMHO, largely because of how iOS presents the app icon on the iOS lock screen.

Microsoft’s Continuum functionality, which arrived in one form with Windows 10 in July, and will arrive in a different (yet similar) form with Windows 10 Mobile later this year, lets the OS adapt to the use case of the device you’re on. On Windows 10 PC editions, you can switch Tablet Mode off and on, or if the hardware provides it, it can switch automatically if you allow it. Windows 10 in Tablet Mode is strikingly similar to, but different from, Windows 8.1. Tablet Mode delivers a full-screen Start screen and full-screen applications by default. Turning Tablet Mode off results in a Start menu and windowed applications, much like Windows 7.

When Windows 10 Mobile arrives later this year, the included incarnation of Continuum will allow phones that support the feature to connect to external displays in a couple of ways. The user will see an experience that will look like Windows 10 with Tablet Mode off, and windowed universal apps. While it won’t run legacy Windows applications, this means a Windows 10 Mobile device could act as a desktop PC for a user that can live within the constraints of the Universal application ecosystem.

Both of these pieces of functionality (I’m somewhat hesitant to call either of them “features”, but I digress) provide strategic value for Apple, and Microsoft, respectively. But the value that they provide is different, as I mentioned earlier.

Continuity is sold as a “convenience” feature. But it’s really a great vehicle for hardware lock-in and upsell. It only works with iOS and OS X devices, so it requires that you use Apple hardware and iCloud. In short: Continuity is intended to help sell you more Apple hardware. Shocker, I know.

Continuum, on the other hand, is designed to be more of a “flexibility” feature. It adds value to the device you’re on, even if that is the only Windows device you own. Yes, it’s designed to be a feature that could help sell PCs and phones too – but the value is delivered independently, on each device you own.

With Windows 8.x, your desktop PC had to have the tablet-based features of the OS, even if they worked against your workflow. Your tablet couldn’t adapt well if you plugged it into an external display and tried to use it as a desktop. Your phone was… well… a phone. Continuum is intended to help users make the most of any individual Windows device, however they use it. Want a phone or tablet to be a desktop and act like it? Sure. Want a desktop to deliver a desktop-like experience and a tablet to deliver a tablet-like experience? No problem. Like Continuity, Continuum is platform-specific, and features like Continuum for Windows 10 Mobile will require all-new hardware. I expect this Fall’s hardware season to continue to bring many new convertibles that switch automatically, making the most of the feature and potentially helping to sell new hardware.

Software vendors offered Continuity-like functionality before Apple did, and that’ll surely continue. We’ll see more and more device-to-device bridging in Android and Windows. However, Apple has an advantage here, with its premium consumer base and its ownership of the entire hardware and software stack.

People have asked me for years if I see Apple making features that look like Continuum. I don’t. At least not by trying to make OS X into iOS. We may see Apple try to bridge the tablet and small laptop market here in a few weeks with an iOS device that can act like a laptop, but arguably that customer wouldn’t be a MacBook (Air) customer anyway. It’ll be interesting to see how the iPad evolves into (or collides with) the low-end laptop market.

Hopefully, if you were confused about these two features, that helps clarify what they are – and that they’re actually completely different things, designed to accomplish completely different goals.

Jun 15

Windows 10 and free. Free answers to frequently asked questions.

I keep hearing the same questions over and over again about Windows 10 and the free* upgrade, so I have decided to put together a set of frequently asked questions about the Windows 10 promotion.

Who gets it?

Q: Is Windows 10 really free?

Yes. It is free. Completely free. But only if you meet the qualifications and take Microsoft up on the offer from a qualified PC before July 29th, 2016.

You must have Windows 7, 8, or 8.1 installed on your x86 or x64 system, and it cannot be an Enterprise edition of Windows (only Home, Pro/Professional, Ultimate, or similar). See the bottom of this page for a significant disclaimer.

Q: Can I get the free upgrade if I have some version of Windows RT?

No free upgrade for you. Microsoft has indicated there’s a little something coming in the pipeline for you at some point, but hasn’t indicated what that will be. It won’t be Windows 10, and it won’t be the full Windows 10 for smartphones and small tablets either. MHO: Expect something more akin to Windows Phone 7.8.

Q: Can I get it for free if I have an Enterprise edition of Windows 7, 8, or 8.1?

No. Enterprise edition must be purchased through the Volume Licensing channel, as has always been the case. Talk to the people in your organization who handle Windows volume licensing.

Q: Can I get it for free if I’m in the Windows Insider program?

No. There’s no magic program rewarding Windows Insiders with a completely free full product. You have to have upgraded the system from a valid license for 7, 8, or 8.1. (See this tweet from @GabeAul.)

Q: Can I get it for free if I have Windows XP or Windows Vista?

No. You’ll need to either buy a legal copy of Windows 7, 8, or 8.1, or just purchase Windows 10 when it becomes available at retail, supposedly in late August, 2015. Your install of Windows does not qualify for the offer.

Q: Can I get it for free if I pirated Windows 7, 8, or 8.1?

Not really, no. If it was “Non-Genuine” before your upgrade, or Windows 10 recognizes it as such, it will still be Non-Genuine after the fact. You may be upgraded, but expect to be nagged. Your OEM might also be able to help you get legit… Or you could always buy a copy.

Q: Can I perform a clean install of Windows 10?

Yes, but you’ll have to do it after you’ve upgraded from a qualified install of Windows 7, 8, or 8.1 first. Then you can perform clean installs on that device at any time. (See yet another tweet from @GabeAul.)

Q: Can I upgrade all of my PCs for free?

Yes, if they each have a qualifying OS version and edition installed. But installing on one device doesn’t give you rights to run Windows 10 on any other system, or move an OEM install to a virtual machine.

Q: Can I upgrade my phone?

This is all about Windows 10 for your x86 or x64 PC, not your Windows Phone. Microsoft will have more details about Windows for phones at some point later this year, when they talk about it being released. It won’t be available at the same time as Windows 10 for PCs and tablets.


What edition do I get?

Q: I have Media Center, K, N, Ultimate, or some other transient edition – what do I get?

Check out “What edition of Windows will I get as a part of this free upgrade?” on this page. If you have a K or N install, you will be upgraded to the parent edition for the K or N OS you are licensed for.

Q: When will I get the upgrade?

See “What happens when I reserve?” on this page. In general, once you reserve on that device, it’ll download automatically and you’ll be notified when it is ready to install, on or about July 29th, 2015.


What breaks if I upgrade?

Q: Can I still run Windows Media Center after I upgrade to Windows 10?

No. According to this page, if you upgrade a system that is running Media Center software to Windows 10, Media Center will be uninstalled. If you use and love Media Center on a given system, I would strongly advise against upgrading that system to Windows 10.

Mass hysteria

Q: Is this thing running in my system notification area malware?

You might have malware, but the little flag running over there isn’t it. It’s just Microsoft working to get every qualified Windows install that they can to Windows 10 within a year’s time. Enjoy your free lunch.

Q: How do I stop users in my organization from installing Windows 10 on systems I manage?

If it’s a domain-joined Windows Pro system, or a Windows Enterprise system, have no fear. They aren’t getting prompted.

Q: How do I stop users in my organization from installing Windows 10 on BYOD systems I don’t manage?

If it is a system running Windows Home (or similar, like “Windows 8.1” with no suffix), or a Windows Pro/Professional system that isn’t joined to the domain, and you don’t manage it in any way, you’re kind of up the creek on this one. This article provides info on KB3035583, which needs to be uninstalled to stop the promotion, and you’ll need to figure out a way to remove it on each of those systems.


Q: Microsoft will charge me in a year for updates, won’t they?

No. They won’t. Microsoft has stated that they will not charge for “free, ongoing security updates for the supported lifetime of the device.” Microsoft may well charge for a future upgrade to some other version of the OS. But I don’t see them going back on this as stated.


May 15

Farewell, floppy diskette

I never would have imagined myself in an arm-wrestling match with the floppy disk drive. But sitting where I did in Windows setup, that’s exactly what happened. A few times.

When I started at Microsoft, a boot floppy was critical to setting up a new machine. Not by the time I was in setup. Since Remote Installation Services (RIS) could start with a completely blank machine, and you could now boot a system to WinPE using a CD, there were two good-sized nails in the floppy diskette’s coffin.

Windows XP was actually the first version of Windows that didn’t ship with boot floppies. It only shipped with a CD. While you could download a tool that would build boot floppies for you, most computers that XP happily ran on supported CD boot by that time. The writing was on the wall for the floppy diskette. In the months after XP released, Bill Gates made an appearance on the American television sitcom Frasier. Early in the episode, a caller asks whether they need diskettes to install Windows XP. For those of us on the team, it was amusing. Unfortunately, the reality was that behind the scenes, there were some issues with customers whose systems didn’t boot from CD, or didn’t boot properly, anyway. We made it through most of those birthing pains, though.

It was both a bit amusing and a bit frustrating to watch OEMs during the early days of Windows XP; while customers often said, “I want a legacy-free system”, they didn’t know what that really meant. By “legacy-free”, customers usually meant they wanted to abandon all of the legacy connectors (ports) and peripherals used on computers before USB had started to hit its stride with Windows 98.

While USB had replaced serial in terms of mice – which were at one time primarily serial – the serial port, parallel port, and floppy disk controller often came integrated together in the computer. We saw some OEMs not include a parallel port, and eventually not include a floppy drive, but still include a serial port – at least inside – for when you needed to debug the computer. When a Windows machine has software problems, you often hook it up to a debugger, an application on another computer, where the developer can “step through” the programming code to figure out what is misbehaving. When Windows XP shipped, a serial cable connection was the primary way to debug. Often, to make the system seem more legacy-free than it actually was, this serial port was tucked inside the computer’s case – which made consumers “think” it was legacy-free when it technically wasn’t. PCs often needed BIOS updates, too – and even on PCs that shipped with Windows XP, you would still usually boot to an MS-DOS diskette in order to update the BIOS.

My arrival in the Windows division was timely; when I started, USB Flash Drives (UFDs) were just beginning to catch on, but had very little storage space, and the cheapest ones were slow and unreliable. 32MB and 64MB drives were around, but still not commonplace. In early 2002, the idea of USB booting an OS began circulating around the Web, and I talked with a few developers within The Firm about it. Unfortunately, there wasn’t a good understanding of what would need to happen for it to work, nor was the UFD hardware really there yet. I tabled the idea for a year, but came back to it every once in a while, trying to research the missing parts.

As I tinkered with it, I found that while many computers supported boot from USB, they only supported USB floppy drives (a ramshackle device that had come about, and largely survived for another 5-10 years, because we were unable to make key changes to Windows that would have helped kill it). I started working with a couple of people around Microsoft to try and glue the pieces together to get WinPE booting from a UFD. I was able to find a PC that would try to boot from the disk, but it failed because the disk wasn’t prepared for boot as a hard disk normally would be. I worked with a developer from the Windows kernel team and one of our architects to get a disk formatted correctly. Windows didn’t like to format UFDs as bootable because they were removable drives; even Windows To Go in Windows 8.1 today boots from special UFDs which are exceptionally fast, and actually lie to the operating system about being removable disks. Finally, I worked with another developer who knew the USB stack when we hit a few issues booting. By early 2003, we had a pretty reliable prototype that worked on my Motion Computing Tablet PC.

Getting USB boot working with Windows was one of the most enjoyable features I ever worked on, although it wasn’t a formal project in my review goals (brilliant!). USB boot was even fun to talk about, amongst co-workers and Microsoft field employees. You could mention the idea to people and they just got it. We were finally killing the floppy diskette. This was going to be the new way to boot and repair a PC. Evangelists, OEM representatives, and UFD vendors came out of the woodwork to try and help us get the effort tested and working. One UFD manufacturer gave me a stash of 128MB and larger drives – very expensive at the time – to prepare and hand out to major PC OEMs. It gave us a way to test, and gave the UFD vendor some face time with the OEMs.

For a while, I had a shoebox full of UFDs in my office which were used for testing; teammates from the Windows team would often email or stop by asking to get a UFD prepped so they could boot from it. I helped field employees get it working so many times that for a while, my nickname from some in the Microsoft field was “thumbdrive”, one of the many terms used to refer to UFDs.

Though we never were able to get UFD booting locked in as an official feature until Windows Vista, OEMs used it before then, and it began to go mainstream. Today, you’d be hard pressed to find a modern PC that can’t boot from UFD, though the experience of getting there is a bit of a pain, since the PC boot experience, even with new EFI firmware, still (frankly) sucks.

Computers boot from their HDD almost all the time. But when something goes wrong, or you want to reinstall, you have to boot from something else: a UFD, CD/DVD, a PXE server like RIS/WDS, or sometimes an external HDD. Telling your Windows computer what to boot from if something happens is a pain. You have to hit a certain key sequence that is often unique to each OEM. Then you often have to hit yet another key (like F12) to PXE boot. It’s a user experience only a geek could love. One of my ideas was to try and make it easier not only for Windows to update the BIOS itself, but for the user to more easily say what they wanted to boot the PC from (before they shut it down, or by selecting from a pretty list of icons or a set of keys – like Macs can do). Unfortunately, this effort largely stalled out for over a decade until Microsoft delivered a better recovery, boot, and firmware experience with their Surface tablets. Time will tell whether we’re headed towards a world where this isn’t such a nuisance anymore.

It’s actually somewhat amusing how much of my work revolved around hardware even though I worked in an area of Windows which only made software. But if there was one commonly requested design change that I wish I could have accommodated but couldn’t ever get done, it was F6 from UFD. Let me explain.

When you install Windows, it attempts to use the drivers it ships with on the CD to begin copying Windows down onto the HDD, or to connect over the network to start setup through RIS.

This approach worked alright, but it had one little problem which became significant. Not long after Windows XP shipped, new categories of networking and storage devices began arriving on high-end computers and rapidly making their way downmarket; these all required new drivers in order for Windows to work. Unfortunately, none of these drivers were “in the box” (on the Windows CD) as we liked to say. While Windows Server often needed special drivers to install on some high-end storage controllers before, this was really a new problem for the Windows consumer client. All of a sudden we didn’t have drivers on the CD for the devices that were shipping on a rapidly increasing number of new PCs.

In other words, even with a new computer and a stock Windows XP CD in your hand, you might never get it working. You needed another computer and a floppy diskette to get the ball rolling.

Early on during Windows XP’s setup, it asks you to press the keyboard’s F6 function key if you have special drivers to install. If it can’t find the network and you’re installing from CD, you’ll be okay through setup – but then you have no way to add new drivers or connect to Windows Update. If you were installing through RIS and you had no appropriate network driver, setup would fail. Similarly, if you had no driver for the storage controller on your PC, it wouldn’t ever find a HDD where it could install Windows – so it would terminally fail too. It wasn’t pretty.

Here’s where it gets ugly. As I mentioned, we were entering an era where OEMs wanted to ship, and often were shipping, those legacy-free PCs. These computers often had no built-in floppy drive – which was the only place we could look for F6 drivers at the time. As a result, not long after we shipped Windows XP, we got a series of design change requests (DCRs) from OEMs and large customers to make it so Windows setup could search any attached UFD for drivers as well. While this idea sounds easy, it isn’t. This meant having to add Windows USB code into the Windows kernel so it could search for the drives very early on, before Windows itself had actually loaded and started the normal USB stack. While we could consider doing this for a full release of Windows, it wasn’t something that we could easily do in a service pack – and all of this came to a head in 2002.

Dell was the first company to ever request that we add UFD F6 support. I worked with the kernel team, and we had to say no – the risk of breaking a key part of Windows setup was too great for a service pack or a hotfix, because of the complexity of the change, as I mentioned. Later, a very large bank requested it as well. We had to say no then as well. In a twist of fate, at Winternals I would later become friends with one of the people who had triggered that request, back when he was working on a project onsite at that bank.

Not adding UFD F6 support was, I believe, a mistake. I should have pushed harder, and we should have bitten the bullet in testing it. As a result of us not doing it, a weird little cottage industry of USB floppy diskette drives continued for probably a decade longer than it should have.

So it was, several years after I left, that the much-maligned Windows Vista brought both USB boot of WinPE and USB F6 support, so you could install the operating system on hardware whose drivers were newer than Windows XP, and not need a floppy diskette drive to get through setup.

As I sit here writing this, it’s interesting to consider the death of CD/DVD media (“shiny media”, as I often call it) on mainstream computers today. When Apple dropped shiny media on the MacBook Air, people called them nuts – much as they did when Apple dropped the floppy diskette on the original iMac years before. As tablets and Ultrabooks have finally dropped shiny media drives, there’s an odd echo of the floppy drive from years ago. Where external floppy drives were needed for specific scenarios (recovery and deployment), external shiny media drives are still used today for movies, some storage, and installation of legacy software. But in a few years, shiny media will be all but dead – replaced by ubiquitous high-speed wired and wireless networking and pervasive USB storage. Funny to see the circle completed.

Oct 14

It is past time to stop the rash of retail credit card “breaches”

When you go shopping at Home Depot or Lowe’s, there are often tall ladders, saws, key cutters, and forklifts around the shopping floor. As a general rule, most of these tools aren’t for your use at all. You’re supposed to call over an employee if you need any of these tools to be used. Why? Because of risk and liability, of course. You aren’t trained to use these tools, and the insurance that the company holds would never cover its liability if you were injured or died while operating them.

Over the past year, we have seen a colossal failure of American retail and restaurant establishments to adequately secure their point-of-sale (POS) systems. If you’ve somehow missed them all, Brian Krebs’ coverage serves as a good list of many of the major events.

As I’ve watched company after company fall prey to seemingly the same modus operandi as every company before, it has frustrated me more and more. When I wrote You have a management problem, my intention was to highlight that there seems to be a fundamental disconnect in how organizations tie risk to the security of key applications (and systems). But I think it’s actually worse than that.

If you’re a board member or CEO of a company in the US, and the CIO and CSO of the organizations you manage haven’t asked their staff the following question yet, there’s something fundamentally wrong.

That question every C-level in the US should be asking? “Given what happened at Target, Michaels, P.F. Chang’s, and the others, what have we done to ensure that our POS systems are adequately defended from this sort of easy exploitation?”

This is the most important question that any CIO and CSO in this country should be asking this year. They should be regularly asking this question, reviewing the threat models their staff create to answer it, and performing the work necessary to validate that they have adequately secured their POS infrastructure. This should not be a one-time thing. It should be how the organization regularly operates.

My worry is that within too many orgs, people are either a) not asking this question because they don’t know to ask it, b) dangerously assuming that they are secure, or c) so busy that nobody who knows better feels empowered to pull the emergency brake, bring the train to a standstill, and truly examine the comprehensive security footing of their systems.

Don’t listen to people if they just reply by telling you that the systems are secure because, “We’re PCI compliant.” They’re ducking the responsibility of securing these systems through the often translucent facade of compliance.

Compliance and security can go hand in hand. But security is never achieved by stamping a system as “compliant”.

Security is achieved by understanding your entire security posture, through threat modeling. For any retailer, restaurateur, or hospitality organization in the US, this means you need to understand how you’re protecting the most valuable piece of information that your customers will be sharing with you, their ridiculously insecure 16-digit, magnetically encoded credit card/debit card number. Not their name. Not their email address. Their card number.

While it does take time to secure systems, and some of these exploits that have taken place over 2014 (such as Home Depot) may have even begun before Target discovered and publicized the attack on their systems, we are well past the point where any organization in the US should just be saying, “That was <insert already exploited retailer name>, we have a much more secure infrastructure.” If you’ve got a threat model that proves that, great. But what we’re seeing demonstrated time and again as these “breaches” are announced is that organizations that thought they were secure, were not actually secure.

During 2002, when I was in the Windows organization, we had, as some say, a “come to Jesus” moment. I don’t mean that expression to offend anyone. But there are few expressions that can adequately convey the fundamental shift that happened. We were all excitedly working on several upcoming versions of Windows, having just sort of battened down, with XP SP1, some of the hatches that had popped open in XP’s original security perimeter.

But due to several major vulnerabilities and exploits in a row, we were ordered (by Bill) to stop engineering completely, and for two months, all we were allowed to work on were tasks related to the Secure Windows Initiative and making Windows more secure, from the bottom up, by threat modeling the entire attack surface of the operating system. It cost Microsoft an immense amount of money and time. But had we not done so, customers would have cost the company far more over time as they gave up on the operating system due to insecurity at the OS level. It was an exercise in investing in proactive security in order to offset future risk – whether to Microsoft, to our customers, or to our customers’ customers.

I realize that IT budgets are thin today. I realize that organizations face more pressure to do more with less than ever before. But short of laws holding executives financially responsible for losses that are incurred under their watch, I’m not sure what will stop the ongoing saga of these largely inexcusable “breaches” we keep seeing. If your organization doesn’t have the resources to secure the technology you have, either hire the staff that can or stop using technology. I’m not kidding. Grab the knucklebusters and some carbonless paper and start taking credit cards like it’s the 1980s again.

The other day, someone on Twitter noted that the recent spate of attacks shouldn’t really be called “breaches”, but instead should be called skimming attacks. Most of these attacks have worked by using RAM scrapers. This approach, first really seen in 2009, hit the big time in 2013. A RAM scraper is a Windows executable (which, <ahem>, isn’t supposed to be there) that scans memory (RAM) on POS systems for track data read off of magnetically swiped US credit cards. This laughably simple stunt is really the key to effectively all of the breaches (which I will, from here on out, refer to as skimming attacks). A piece of software, which shouldn’t ever be on those systems, let alone be able to run on those systems, is freely scanning memory for data which, arguably, should be safe there, even though it is not encrypted.
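As a rough illustration of just how simple that scanning is (and of what defensive memory-scanning and DLP tools hunt for as well), here is a toy sketch in C that looks for the shape of unencrypted Track 2 data as defined by ISO/IEC 7813: a ‘;’ sentinel, a 13 to 19 digit account number, then an ‘=’ separator. The function is hypothetical and deliberately minimal, not anyone’s actual code.

```c
/* Toy matcher for the "shape" of unencrypted Track 2 data in a buffer:
   ';' sentinel, 13-19 digit primary account number, '=' separator.
   Illustrative only; shown to make the point that plaintext card data
   sitting in memory is trivially easy to find. */
#include <ctype.h>
#include <stddef.h>

int looks_like_track2(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i + 15 <= len; i++) {
        if (buf[i] != ';')
            continue;                       /* look for the start sentinel */
        size_t digits = 0;
        size_t j = i + 1;
        while (j < len && isdigit(buf[j]) && digits < 19) {
            digits++;
            j++;
        }
        /* A plausible PAN is 13-19 digits followed by the field separator. */
        if (digits >= 13 && j < len && buf[j] == '=')
            return 1;
    }
    return 0;
}
```

If that data were encrypted end to end at the point of swipe, a pattern this simple would find nothing worth stealing.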

But here we are. These RAM scrapers violate law #2 of the 10 Immutable Laws of Security; these POS systems are obviously not secured as well as Microsoft, the POS manufacturer, or the VAR that installed them would like them to be, and obviously everyone, including the retailer, assumed they were. These RAM scrapers are usually custom-crafted enough to evade detection by (questionably useful) antivirus software. More importantly, many indications are that, in many cases, these systems were certified as PCI-DSS compliant in the exact same scenario that they were later compromised in. This indicates either a fundamental flaw in the compliance definition, tools, and/or auditor. It also indicates some fundamental holes in how these systems are presently defended against exploitation.

As someone who helped ship Windows XP (and contributed a tiny bit to Embedded, which was a sister team to ours), it makes me sad to see these skimming attacks happen. As someone who helped build two application whitelisting products, it makes me feel even worse, because… they didn’t need to happen.

Windows XP Embedded leaves support in January of 2016. It’s not dead, and it can be secured properly (but organizations should absolutely be down the road of planning what they will replace XPE with). Both Windows and Linux, in embedded POS devices, suffer the same flaw: platform ubiquity. I can write a piece of malware that’ll run on my Windows desktop, or a Linux system, and it will run perfectly well on these POS systems (if they aren’t secured properly).

The bad guys always take advantage of the broadest, weakest link. It’s the reason why Adobe Flash, Acrobat, and Java are the points they go after on Windows and OS X. The OSs are hardened enough up the stack that these unmanageable runtimes become the hole that exploitation shellcode often pole-vaults through.

In many of these retail POS skimming attacks, remote maintenance software (used to access a Windows desktop remotely), often secured with a poor password, is the means being used to get code onto these systems. This scenario and exploit vector isn’t unique to retail, either. I guarantee you there are similar easy opportunities for exploit in critical infrastructure, in the US and beyond.

There are so many levels of wrong here. To start with, these systems:

  1. Shouldn’t have remote access software on them
  2. Shouldn’t have the ability to run every arbitrary binary that is put on them

These systems shouldn’t have any remote access software on them at all. If they must have it, that software should implement physical, not password-based, authentication. These systems should be sealed, single-purpose, and have AppLocker or third-party software to ensure that only the Windows (or Linux, as appropriate) applications, drivers, and services that are explicitly authorized to run on them can do so. If organizations cannot invest in the technology to properly secure these systems, or do not have the skills to do so, they should either hire staff skilled in securing them, cease using PC-based technology and go back to legacy technology, or examine using managed iOS or Windows RT-based devices that can be more readily locked down to run only approved applications.

Aug 14

My path forward

Note: I’m not leaving Seattle, or leaving Directions on Microsoft. I just thought I would share the departure email I sent in 2004. Today, August 6, 2014, marks the tenth anniversary of the day I left Microsoft and Seattle to work at Winternals in Austin. For those who don’t know – earlier that day, Steve Ballmer had sent a company-wide memo entitled “Our path forward”, hence my tongue-in-cheek subject selection.

From: Wes Miller
Sent: Tuesday, July 06, 2004 2:32 PM
To: Wes Miller
Subject: My path forward

Seven years ago, when I moved up from San Jose to join Microsoft, I wondered if I was doing the right thing… Not that I was all that elated working where I was, but rather we all achieve a certain level of comfort in what we know, and we fear that which we don’t know. I look back on the last seven years and it’s been an amazing, fun, challenging, and sometimes stressful experience – experiences that I would never trade for anything.

At the same time, for family reasons and for personal reasons, I’ve had to do some soul searching that retraced the memories I have from, and steps I went through when I initially came to Microsoft, and I have accepted a position working for a small software company in Austin, TX. My last day at Microsoft will be Friday August 6, one month from today. The best way to reach me after that until my new address is set up is <redacted>. Between now and August 6th I will be doing my best to meet with any of you that need closure on deployment or LH VPC related issues before my departure. Please do let me know if you need something from me between now and then.

Many thanks to those of you who I have worked with over the years – take care of yourselves, and stay in touch.


Apr 14

Complex systems are complex (and fragile)

About every two months, a colleague and I travel to various cities in the US (and sometimes abroad) to teach Microsoft customers how to license their software effectively over a rather intense two-day course.

Almost none of these attendees want to game the system. Instead, most come (often repeatedly, sometimes with more people each time) to simply understand the ever-changing rules, how to apply them correctly, and how to (as I often hear it said) “do the right thing”.

Doing the right thing, whether we’re talking licensing, security, compliance, or something else, often isn’t cheap. It takes planning, auditing, understanding the entire system, understanding an application lifecycle, and hiring competent developers and testers to help build and verify everything.

In the case of software licensing, we’ve generally found that there is no one single person who knows the breadth of a typical organization’s infrastructure. How can there possibly be? But the problem is that if you want to license effectively (or build systems that are secure, compliant, or reliable), an individual or group of individuals must understand the entire integrated application stack – or face the reality that there will be holes. And what about the technology itself, when issues like Heartbleed come along and expose fundamental flaws across the Internet?

The reality is that complex systems are complex. But it is because of this complexity that these systems must be planned, documented, and clearly understood at some level, or we’re kidding ourselves that we can secure, protect, defend (and properly pay for) these systems, and have them be available with any kind of reliability.

Two friends on Twitter had a dialog the other day about responsibility/culpability when open source components are included in an application/system. One commented, “I never understand why doing it right & not getting sued for doing it wrong aren’t a strong argument.”

I get what she means. But unfortunately, having been at a small ISV that wound up suing a much larger retail company because they were pirating our software, I’ve seen that “doing the right thing” in business sometimes comes down to “doing the cheap, quick, or lazy thing”. In our case, an underling at the retail company had told us they were pirating our software, and he wanted to rectify it. He wanted to do the right thing. Negotiations occurred to try and come to closure about the piracy, but when it came down to paying the bill for the software that had been used and was being used, a higher-up vetoed the payment due to us. Why? Simple risk management. Cheaper was believed to be better than the right thing. This tiny Texas software company couldn’t ever challenge them in court and win (for posterity: we could, and we did).

Unfortunately, we hear stories of this sort of thing all the time. It’s a game of chicken, and it isn’t unusual; it happens in software constantly.

I wish I could say that I’m shocked when I hear of companies taking shortcuts – improperly using open-source (or commercial) software outside the bounds of how it is licensed, deploying complex systems without understanding their security threat model, or continuing to run software after it has left support. But no. Not much surprises me anymore.

What does concern me, though, is that the world assumed that OpenSSL was secure, and that it had been reviewed and audited by enough skilled eyes to avoid elementary bugs like the one that created Heartbleed. But no, that’s not the case. Like any complex system, there’s a certain point where countless people around the world just assumed that OpenSSL worked, accepted it, and deployed it; yet here it failed at a fundamental level for two years.

In a recent interview, the developer responsible for the flaw behind Heartbleed discussed the issue, stating, “But in this case, it was a simple programming error in a new feature, which unfortunately occurred in a security relevant area.”

I can’t tell you how troubling I find that statement. Long ago, Microsoft had a sea change with regard to how software was developed. Key components of this change involved:

  1. Developing threat models in order to be certain we understood the types and angles of approach for any threat vectors we could find
  2. Deeper security foundations across the OS and applications
  3. Finally, a much more comprehensive approach to testing (in large part to try and ensure that “simple programming errors in new features” wouldn’t blow the entire system apart)

No, even Microsoft’s system is not perfect, and flaws still happen, even with new operating systems. But as I noted, I find it remarkably troubling that a flaw as significant as Heartbleed could make it through development, peer review, and any bounds-checking testing done in the OpenSSL development process, then into release (where it will generally be accepted as “known good” by the community at large – warranted or not), and survive for two years. It’s also concerning that the statement describes the Heartbleed flaw as having “unfortunately occurred in a security relevant area”. As I said on Twitter – this is OpenSSL. The entire thing should be considered a security relevant area.
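To make the class of bug concrete, here is a simplified sketch in C, not the actual OpenSSL source, of a heartbeat-style handler that trusts a peer-supplied length field, next to the one-line bounds check that would have prevented it. The function names are hypothetical.

```c
/* Simplified sketch of the class of bug behind Heartbleed (not the actual
   OpenSSL code). A length field supplied by the peer is trusted and used to
   copy memory back, so a tiny request can read out whatever happens to sit
   next to the real payload in the process heap. */
#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Broken: echoes back 'claimed_len' bytes without checking it against the
   number of bytes the peer actually sent. */
size_t heartbeat_reply_broken(uint8_t *out, const uint8_t *payload,
                              size_t claimed_len, size_t actual_len)
{
    (void)actual_len;                      /* the missing bounds check */
    memcpy(out, payload, claimed_len);     /* reads past the real payload */
    return claimed_len;
}

/* Fixed: refuse any request whose claimed length exceeds what was received. */
size_t heartbeat_reply_fixed(uint8_t *out, const uint8_t *payload,
                             size_t claimed_len, size_t actual_len)
{
    if (claimed_len > actual_len)
        return 0;                          /* drop the malformed request */
    memcpy(out, payload, claimed_len);
    return claimed_len;
}
```

That missing comparison is essentially the entire flaw, which is what makes its two-year run through review and release so troubling.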

The biggest problem with this issue is that there should be ongoing threat modeling and bounds checking among users of OpenSSL (or any software, open or commercial), and in this case within the OpenSSL development community, to ensure that the software is actually secure. But as with any complex system, there’s a uniform expectation that this type of project results in code that can be generally regarded as safe. Most companies simply assume that a project as mature and ubiquitous as OpenSSL is safe, do little to no verification of the software, deploy it, and later hear through others about vulnerabilities in it.

In the complex stacks of software today, most businesses aren’t qualified to, simply aren’t willing to, or aren’t aware of the need to perform acceptance checking on third-party software they’re using in their own systems (and likely don’t have developers on staff who are qualified to review software such as OpenSSL). As a result, a complex and fragile system becomes even more complex. And even more fragile. Even more dangerous, without any level of internal testing, these systems of internal and external components are assumed to be reliable, safe, and secure – until time (and usually a highly technical developer being compensated for finding vulnerabilities) shows that not to be the case, and then we find ourselves in goose-chase mode, as we are right now.

Apr 14

The end is near here!

Imagine I handed you a Twinkie (or your favorite shelf-stable food item), and asked you to hold on to it for almost 13 years, and then eat it.

Aw, c’mon. Why the revulsion?

It’s been hard for me to watch the excited countdown to the demise of Windows XP. Though I did help ship Windows Server 2003 as well, no one product (or service) that I’ve ever worked on became so popular, for so long – by any stretch of the imagination – as Windows XP did.

Yet, here we are, reading articles discussing which countries and companies are now shelling out millions of dollars to get support coverage for Windows XP for the next 1, 2, or 3 years (getting financially more painful as the year count goes up). It’s important to note that this is no “get out of jail free” card. Nope. This is just life support for an OS that has terminal zero-day. These organizations still have to plan and execute a migration to a newer version of Windows that isn’t on borrowed time.

Why didn’t these governments and companies execute an XP evacuation plan? That’s a very good question. Putting the question of blame aside for a second, there’s a bigger issue to consider.

Go back and think of that Twinkie. Contrary to popular opinion, Twinkies don’t last forever (most sources say it’s about 25 days). Regardless, you get the idea that for most normal things, even shelf-stable isn’t shelf-stable forever. Heck, even most MREs need to be stored at a reasonable temperature and will taste suboptimal after 5 or more years.

While I can perhaps excuse consumers who decide to hang on to an operating system past its expiration date, I have a harder time understanding how organizations and governments with any long-term focus sat by and let XP sour on them. It would be one thing if XP systems were all standalone and not connected to the Internet. Perhaps then we could turn a blind eye to it. But that’s not usually the case; XP systems in business environments, which lack most of the security protections delivered later for Windows Vista, 7, and 8.x, are largely defenseless, and will be standing there waiting to get pwned as the vulnerabilities stack up after tomorrow. In my mind, the most dangerous thing is security vendors claiming to be able to protect the OS after April 8. In most cases, that’s an all but impossible feat, and it instills a false sense of confidence in XP users and administrators.

The key concern I have is that people are looking at Windows XP as if software dying is a new thing, or something unusual. It isn’t. In fact, tomorrow, the entire spectrum of Office 2003 software (the Office productivity suite, SharePoint, Exchange, and more) also leaves support and could have its own set of security compromises down the road. But as I said, this isn’t the first time software has entered an unsupportable realm, and it won’t be the last. It’s just a unique combination as we get the perfect storm of XP’s pervasiveness, the ubiquity of the Internet, and the increasing willingness of bad people to do bad things to computers for money. Windows Server 2003 (and 2003 R2) are next, coming up in July of 2015.

People across the board seem to have this odd belief that when they buy a perpetual license to software, it can be used forever (versus Office 365, which people more clearly understand as a subscription that expires if not paid in an ongoing manner). But no software, even if “perpetually licensed”, is actually perpetual. Like that Twinkie I’ve mentioned a few times, even good software goes bad. As an industry, we need to start getting customers throughout the world to understand that, and get more organizations to begin planning software deployments as an ongoing lifecycle, rather than a one-time expense that is ignored until it goes terminal.

Mar 14

The trouble with DaaS

I recently read a blog post entitled DaaS is a Non-Starter, discussing how Desktop as a Service (DaaS) is, as the title says, a non-starter. I’ll have to admit, I agree. I’m a bit of a naysayer about DaaS, just as I have long been about VDI itself.

In talking with a colleague the other day, as well as customers at a recent licensing boot camp, it sure seems like VDI, like “enterprise social”, is a burger with a whole lot of bun, and not as much meat as you might hope for (given your investment). The promise, as I understand it, is that by centralizing your desktops, you get better manageability. To a degree, I believe that to be true. To a huge degree, I don’t. It really comes down to how standardized you make your desktops, how centrally you manage user document storage, and how much sway your users have (are they admins, or can they install their own Win32 apps?).

With VDI, the problem is, well… money. First, you have server hardware and software costs; second, you need the appropriate storage and networking to actually execute a VDI implementation; and third, you have to spend the money to hire people who can glue it all together into an end-user experience that isn’t horrible. It feels to me that a lot of businesses fall in love with VDI (true client OS-based VDI) without taking the complete cost into account.

With DaaS, you pay a certain amount per month, and your users can access a standardized desktop image hosted on a service provider’s servers and infrastructure, an image created and managed by the provider. The OS here is actually usually Windows Server, not a Windows desktop OS – I’ll discuss that in a second. But as far as infrastructure goes, using DaaS from a service provider means you usually don’t have to invest the cash in corporate-standard Windows desktops or laptops (or Windows Server hardware if you’re trying VDI on-premises), or the high-end networking and storage, or the people to glue that architecture together. Your users, in turn, get (theoretically) the benefits of VDI, regardless of what device they come at it with (a personally owned PC, tablet, whatever).

However, as with any *aaS, you’re then at the mercy of your DaaS purveyor. In turn, you’re also at the mercy of their licensing limitations as they regard Windows. This is why most of them run Windows Server; it’s the only version of Windows that can generally be made available by hosting providers, and Windows desktop OSs can’t be. You also have to live within the constraints of their DaaS implementation (HW/SW availability, infrastructure, performance, architecture, etc.). To date, most DaaS offerings I’ve seen have focused on “get up and running fast!”, not “we’ll work with you to make sure your business needs are solved!”.

Andre’s blog post, mentioned at the beginning of my post here, really hit the nail on the head. In particular, he mentioned good points about enterprise applications, access to files and folders the user needs, adequate bandwidth for real-world use, and DaaS vs. VDI.

To me, the main point is that with DaaS, your service provider, not you, gets to call a lot of the shots, and not many of them consider the end-to-end user workflow necessary for your business.

Your users need to get tasks done, wherever they are. Fine. Can they get access to their applications that live on-premises, through VDI in the cloud, from a tablet at the airport? How about their files? Does your DaaS require a secondary logon, or does it support SSO from their tablet or other non-company-owned/managed device? How fat of a pipe is necessary before your users get frustrated? How close can your DaaS come to on-premises functionality (as if the user were sitting at an actual PC with an actual keyboard and mouse, or touch)?

On Twitter, I mentioned to Andre that Microsoft’s own entry into the DaaS space would surely change the game. I don’t know anything (officially or unofficially) here, but it has been long suspected that Microsoft has planned their own DaaS offering.

When you combine the technologies available in Windows Server 2012 R2, Windows Azure, and Office 365, the scenario for a Microsoft DaaS actually starts to become pretty amazing. There are implementation costs to get all of this deployed, mind you – including licensing and deployment/migration. That isn’t free. But it might be worth it if DaaS sounds compelling and I’m right about Microsoft’s approach.

Microsoft’s changes to Active Directory in Server 2012 R2 (AD FS, the Web Application Proxy [WAP]) mean that users can get to AD from wherever they are, and Office 365 and third party services (including a Microsoft DaaS) can have seamless SSO.

Workplace Join can provide that SSO experience, even from a Windows 7, iOS, or Samsung Knox device, and the business can control which assets and applications the user can connect to, even if they’re on the inside of the firewall and the user is not (through WAP, mentioned previously), or available through another third party.

Work Folders enables synchronized access, from user devices, to files and folders that are stored on-premises in Windows file shares. This could conceptually be extended to work with a Microsoft (or third-party) DaaS as well, and I have to think OneDrive for Business could be made to work too, given the right VDI/DaaS model.

In a DaaS, applications the user needs could be provided through App-V, RemoteApp running from an on-premises Remote Desktop server (a bit of redundancy, I know), or again, published out through WAP so users could connect to them as if the DaaS servers were on-premises.

When you add in Office 365, it continues building out the solution, since users can again be authenticated using their AD credentials, and OneDrive for Business can provide synchronization to their work PCs and DaaS, or access on their personally owned device.

Performance is of course a key bottleneck here, assuming all of the above pieces are in place, and work as advertised (and beyond). Microsoft’s RemoteFX technology has been advancing in terms of offering a desktop-like experience regardless of the device (and is now supported by Microsoft’s recently acquired RDP clients for OS X, iOS, and Android). While Remote Desktop requires a relatively robust connection to the servers, it degrades gracefully, and can be tuned down for connections with bandwidth or latency issues.

All in all, while I’m still a doubter about VDI, and I think there’s a lot of duct tape you’d need to put in place for a DaaS to be the practical solution to user productivity that many vendors are trying to sell it as, there is promise here, and given the right vendor, things could get interesting.

Mar 14

Considering CarPlay

Late last week, some buzz began building that Apple, alongside automaker partners, would formally reveal the first results of their “iOS in the Car” initiative. Much as rumors had suspected, the end result, now dubbed CarPlay, was demonstrated (or at least shown in a promo video) by initial partners Ferrari, Mercedes-Benz, and Volvo. If you only have time to watch one of them, watch the video of the Ferrari. Though it is an ad-hoc demo, the Ferrari video isn’t painfully overproduced as the Mercedes-Benz video unfortunately is, and isn’t just a concept video as the Volvo video is.

The three that were shown are interesting for a variety of reasons (though it is also notable that all three are premium brands). The Ferrari and Volvo videos demonstrate touch-based navigation, and the Mercedes-Benz video uses what (I believe) is their knob-based COMAND system. While CarPlay is navigable using all of them, using the COMAND knob to control the iOS-based experience feels somewhat contrived or forced, like using an old iPod click wheel to navigate a modern iPhone. It just looks painful (to me that’s an M-B issue, not an Apple issue).

Outside of the initial three auto manufacturers, Apple has said that Honda, Hyundai, and Jaguar will also have models in 2014 with CarPlay functionality.

So what exactly is CarPlay?

When I first looked at CarPlay, it seemed like a distinct animal in the Apple ecosystem. But the more I thought about it, the more familiar it looked. Apple pushing their UX out into a new realm, on a device where they don’t own the final interface… It’s sort of Apple TV, for the car. In fact, pondering what the infrastructure might look like, I kept getting flashbacks to Windows Media Center Extenders, which were remote thin clients that rendered a Windows Media Center UI over a wired or wireless connection.

Apple’s CarPlay involves a cable-based connection (this seems to be a requirement at this point; I’ll talk about it a bit later) which is used to remotely display several key functions of your compatible iPhone (5s, 5c, 5) on the head unit of your car. That is, the display is that of your auto head unit – but for CarPlay features, your iPhone looks to be what’s actually running the app, and the head unit is simply a dumb terminal rendering it. All data is transmitted through your phone, not some in-car LTE/4G connection, and all of the apps reside, and are updated, on your phone, not on the head unit. CarPlay seems to be navigable regardless of the type of touch support your screen has (if it has touch), but it also works with buttons, and again, works with knob-based navigation like COMAND.

Apple seems to be requiring two key triggers for CarPlay – 1) a voice command button on the steering wheel, and 2) an entry point into CarPlay itself, generally a button on the head unit (quite easy to see if you watch the Ferrari video, labeled APPLE CARPLAY). Of course these touches are in addition to integrating the required Apple Lightning cable to tether it all together.

In short, Apple hasn’t done a complete end run around the OEM – the automaker can still have their own UI for their own in-car functions, and then Apple’s distinct CarPlay UI (very familiar to anyone who has used iOS 7) is there when you’re “in CarPlay”, if you will. It seems to me that CarPlay can best be thought of as a remote display for your iPhone, designed to fit the display of your car’s entertainment system. Some have said that “CarPlay systems” are running QNX – perhaps some are. The head unit manufacturer doesn’t really appear to be important here. The main point is that the OEM doesn’t appear to have to do massive work to make it functional; it really looks to primarily be a matter of integrating the remote display functionality and the I/O to the phone. In fact, the UI of the Ferrari as demonstrated doesn’t look to be that different from head units in previous versions of the FF (from what I can see). Also, if you watch the Apple employee towards the end, you can see her press the FF “app”, exiting out to the FF’s own user interface, which is distinctly different from the CarPlay UI. The CarPlay UI, in contrast, is remarkably consistent across the three examples shown so far. While the automakers all have their own unique touches, and controls for the rest of the vehicle, the distinct things that the phone is, frankly, better at are done through the CarPlay UI.

The built-in iPhone apps supported with CarPlay at this point appear to be:

  • Phone
  • Messages
  • Maps
  • Music
  • Podcasts

The obvious scenarios here are making/receiving phone calls or sending/receiving SMS/iMessages with your phone’s native contact list, and navigation. Quick tasks. Not surfing or searching the Web while you’re driving. Yay! The Maps app has an interesting touch that the Apple employee chose to highlight in the Ferrari video, where maps you’ve been sent in messages are displayed in the list of potential destinations you can choose from. Obviously the CarPlay solution enables Apple’s turn-by-turn maps. If you’re an Apple Maps fan, that’s great news (I’m quite happy with them, personally). If you like using Google Maps or another mapping/messaging or VOIP solution, it looks like you’re out of luck at this point.

In addition to touch, button, or knob-based navigation, Siri is omnipresent in CarPlay: the system can use voice as your primary input mechanism (triggered through a voice command button on the steering wheel), and Siri reads text messages out loud to you and lets you respond to them. I use that Siri feature pretty often, myself.

The Music and Podcasts apps seem like obvious ones to make available, especially now that iTunes Radio is available (although most people either love or hate the Podcasts app). Just as importantly, Apple is making a handful of third-party applications available at this point. Notably:

  • Spotify
  • iHeartRadio
  • Stitcher

Though Apple’s CarPlay site does call out the Beats Music app as well, I noticed it was missing in the Ferrari demo.

Overall, I like Apple’s direction with this. Of course, as I said on Twitter, I’m so invested in the walled garden, I don’t necessarily care that it doesn’t integrate with handsets from other platforms. That said, I do think most OEMs will be looking at alternatives and implementing one or more of them simultaneously (hopefully implementing whichever ones they choose in a somewhat consistent manner).

Personally, I see quite a few positives to CarPlay:

  • If you have an iPhone, it takes advantage of the device that is already your personal hub, instead of trying to reinvent it
  • It separates the things the manufacturer may either be good at or may want to control from the CarPlay UX. In short, Apple gets their own UX, presented reliably
  • It uses your existing data connection, not yet another one for the car
  • It uses one cable connection, with no WiFi or BLE connectivity, and charges the phone while it works
  • I trust Apple to build a lower-distraction (Siri-centric) UI than most automakers
  • It can be updated by Apple, independent of the car head unit
  • Apple can push new apps to it independent of the manufacturer
  • Apple Maps may suck in some people’s perspective (not mine), but it isn’t nearly as bad as some in-dash nav systems (watch some of Brian’s car reviews if you don’t believe me), and doesn’t require shelling out for shiny-media based updates!

Of course, there are some criticisms I or others have already mentioned on Twitter or in reviews:

  • It requires, and uses, iOS 7. Don’t like the iOS 7 UI? You’re probably not going to be a fan
  • It requires a cable connection. Not WiFi or BLE. This is a good/bad thing. I think in time, we’ll see considerate design of integrated phone slots or the like – push the phone in, flat, to dock it. The cables look hacky, but likely enable the security, performance, low latency, and integrated charging that are a better experience overall (also discourages you from picking the phone up while driving)
  • Apple Maps. If you don’t like it, you don’t like it. I do, but lots of people still seem to like deriding it
  • It is yet another Apple walled garden (like Apple TV, or iOS as a whole). Apple controls the UI of CarPlay, how it works, and what apps and content are or are not available. Just like Apple TV is at present. The fact that it is not an open platform or open spec also bothers some.

Overall, I really am excited by what CarPlay represents. I’ve never seen an in-car entertainment system I really loved. While I don’t think I really love any of the three head units I’ve seen so far, I do relish the idea of being able to use the device I like to use already, and having an app experience I’m already familiar with. Now I just need to have it hit some lower-priced vehicles I actually want to buy.

Speaking of that; Apple has said that, beyond the makers above, the following manufacturers have also signed on to work with CarPlay:

BMW Group (which includes Mini and Rolls-Royce), Chevrolet, Ford, Kia, Land Rover, Mitsubishi, Nissan, Opel, PSA Peugeot Citroen, Subaru, Suzuki, and Toyota.

As a VW fan, I was disheartened to not see VW on the list. Frankly I wouldn’t be terribly surprised to see a higher-end VW marque opt into it before too long (Porsche, Audi, or Bentley seem like obvious ones to me – but we’ll see). Also absent? Tesla. But I wouldn’t be surprised to see that show up in time as well.

It’s an interesting start. I look forward to seeing how Google, Microsoft, and others continue to evolve their own automotive stories over the coming years – but I think one thing is for sure: the era of the phone as the hub of the car (and beyond) is just beginning.