25 Jul 14

You have a management problem.

I have three questions for you to start off this post. I don’t care if you’re “in the security field” or not. In fact, I’m more interested in your answers if you aren’t tasked with security, privacy, compliance, or risk management as a part of your defined work role.

The questions:

  1. If I asked you to show me threat models for your major line of business applications, could you?
  2. If I asked you to define the risks (all of them) within your business, could you?
  3. If I asked you to make a decision about what kind of risks are acceptable for your business to ignore, could you?

In most businesses, the answer to all three is probably no, especially the further you get away from your security or IT teams. Unfortunately, I also believe the answer is pretty firmly no as you roll up the management chain of your organization into the C-suite.

Unless your organization consists of just you or a handful of users, nobody in your organization understands all of the systems and applications in use across the org. That’s a huge potential problem.

The other day I was talking with three of our customers, and the conversation started around software licensing, then spun into software asset management, auditing, and finally to penetration testing and social engineering.

At first glance, that conversation thread may seem diverse and disconnected. But those topics are deeply intertwined. Every one of them involves risk. Countering risk, in turn, requires adequate management.

By management, I mean two things:

  1. Management of all the components involved (people, process, and technology – to borrow a line from a friend)
  2. Involvement of management, from your CEO or top-level leadership on down.

You certainly can’t expect your C-level executives to intimately know every application or piece of technology within the organization. That’s probably not tractable. What is crucial is that there is accountability down the chain, and trust up the chain. If an employee responsible for security or compliance says there’s a problem that needs to be addressed immediately, they need to be trusted. They shouldn’t run their concern up the flagpole only to have it waved off by someone who is incapable of adequately assessing the technical or legal (or both) implications of hedging on it, and who cannot truthfully attest to the financial risk of fixing the issue versus doing nothing.

  • If you hire a security team and you don’t listen to them, what’s the point of hiring them? Just run naked through the woods.
  • If you hire a compliance team (or auditor) and don’t listen to them, what’s the point of hiring them? Just be willing to bring in an outside rubber-stamp auditor, and do the bare minimum.
  • If you have a team that is responsible for software asset management, and you don’t empower them to adequately (preemptively) assess your licensing posture, what’s the point of hiring them? Just wait and see if you get audited by a vendor or two, and accept the financial pit.

If you’re not going to empower and listen to the people in your organization with risk management skills, don’t hire them. If you’re going to hire them, listen to them, and work preemptively to manage risk. If you’re going to truly mitigate risk across your business, be willing to preemptively invest in people, processes, and technology (not bureaucracy!) to discover and address risk before it becomes damage.

So much of the bullshit we see happening – unaddressed security vulnerabilities, breaches (often stemming from those vulns), social engineering and (spear)phishing, and just plain bad software asset management – has everything to do with professionals who want to do the right thing not being empowered to truly find, manage, and address risk throughout the enterprise, and with a lack of risk education up and down the org. Organizations shouldn’t play chicken with risk, happy to save a fraction of the cost up front; that cost can become exponentially larger if the risk is ignored.


13 Apr 14

Complex systems are complex (and fragile)

About every two months, a colleague and I travel to various cities in the US (and sometimes abroad) to teach Microsoft customers how to license their software effectively over a rather intense two-day course.

Almost none of these attendees want to game the system. Instead, most come (often repeatedly, sometimes with more people each time) to simply understand the ever-changing rules, how to apply them correctly, and how to (as I often hear it said) “do the right thing”.

Doing the right thing – whether we’re talking licensing, security, compliance, or beyond – often isn’t cheap. It takes planning, auditing, understanding the entire system and the application lifecycle, and hiring competent developers and testers to help build and verify everything.

In the case of software licensing, we’ve generally found that no single person knows the breadth of a typical organization’s infrastructure. How could there be? But if you want to license effectively (or build systems that are secure, compliant, or reliable), an individual or group of individuals must understand the entire integrated application stack – or face the reality that there will be holes. And what about the technology itself, when issues like Heartbleed come along and expose fundamental flaws across the Internet?

The reality is that complex systems are complex. But it is because of this complexity that these systems must be planned, documented, and clearly understood at some level, or we’re kidding ourselves that we can secure, protect, defend (and properly pay for) these systems, and have them be available with any kind of reliability.

Two friends on Twitter had a dialog the other day about responsibility/culpability when open source components are included in an application/system. One commented, “I never understand why doing it right & not getting sued for doing it wrong aren’t a strong argument.”

I get what she means. But having been at a small ISV that wound up suing a much larger retail company because they were pirating our software, I’ve learned that “doing the right thing” in business sometimes loses out to “doing the cheap, quick, or lazy thing”. In our case, an underling at the retail company had told us they were pirating our software, and he wanted to rectify it. He wanted to do the right thing. Negotiations occurred to try and come to closure about the piracy, but when it came down to paying the bill for the software that had been, and was still being, used, a higher-up vetoed the payment due to us. Why? Simple risk management. Cheaper was believed to be better than right. Surely this tiny Texas software company couldn’t challenge them in court and win (for posterity: we could, and we did).

Unfortunately, we hear stories of this sort all the time. It’s a game of chicken, and it’s played in software constantly.

I wish I could say I was shocked when I hear of companies taking shortcuts – improperly using open-source (or commercial) software outside the bounds of how it is licensed, deploying complex systems without understanding their security threat model, or continuing to run software after it has left support. But no. Not much really surprises me anymore.

What does concern me, though, is that the world assumed OpenSSL was secure – that it had been reviewed and audited by enough skilled eyes to avoid elementary bugs like the one that created Heartbleed. But no, that’s not the case. Like any complex system, there came a point where countless people around the world simply assumed that OpenSSL worked, accepted it, and deployed it; yet here it failed at a fundamental level, for two years.

In a recent interview, the developer responsible for the flaw behind Heartbleed discussed the issue, stating, “But in this case, it was a simple programming error in a new feature, which unfortunately occurred in a security relevant area.”

I can’t tell you how troubling I find that statement. Long ago, Microsoft underwent a sea change with regard to how software was developed. Key components of this change involved:

  1. Developing threat models in order to be certain we understood the types and angles of approach for any threat vectors we could find
  2. Deeper security foundations across the OS and applications
  3. Finally, a much more comprehensive approach to testing (in large part to try and ensure that “simple programming errors in new features” wouldn’t blow the entire system apart).

No, even Microsoft’s system is not perfect, and flaws still happen, even with new operating systems. But as I noted, I find it remarkably troubling that a flaw as significant as Heartbleed could make it through development, peer review, and any bounds-checking testing done in the OpenSSL development process, and into release (where it will generally be accepted as “known good” by the community at large – warranted or not) for two years. It’s also concerning that the statement says the Heartbleed flaw “unfortunately occurred in a security relevant area”. As I said on Twitter – this is OpenSSL. The entire thing should be considered a security relevant area.
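
To make the failure mode concrete, here’s a minimal sketch – my own simplification in C, not OpenSSL’s actual code – of the class of bug behind Heartbleed: a handler that trusts a peer-supplied length field instead of checking it against the number of bytes that actually arrived.

```c
#include <stdint.h>
#include <string.h>

/* Simplified heartbeat-style handler illustrating the Heartbleed pattern.
 * 'msg' is the record received from the peer; 'msg_len' is how many bytes
 * actually arrived. The first two bytes of the record claim the payload's
 * length; the payload follows. 'reply' must hold at least 3 + 65535 bytes. */
void handle_heartbeat(const uint8_t *msg, size_t msg_len, uint8_t *reply)
{
    if (msg_len < 2)
        return;                              /* no room for a length header */

    uint16_t claimed = (uint16_t)((msg[0] << 8) | msg[1]); /* peer-controlled */

    /* The Heartbleed-class bug was, in effect, doing this unconditionally:
     *
     *     memcpy(reply + 3, msg + 2, claimed);   // reads past the record!
     *
     * echoing 'claimed' bytes even when far fewer were received, thereby
     * leaking adjacent heap memory (keys, cookies, whatever happens to be
     * there). */

    /* The fix: silently drop the record if the claimed length exceeds the
     * payload that actually arrived. */
    if ((size_t)claimed > msg_len - 2)
        return;

    reply[0] = 0x02;                         /* response type byte */
    reply[1] = msg[0];                       /* echo the length field... */
    reply[2] = msg[1];
    memcpy(reply + 3, msg + 2, claimed);     /* ...now provably in bounds */
}
```

The two-line check near the end is the entire difference between a working feature and two years of silent memory leakage.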

The biggest problem with this issue is that there should be ongoing threat modeling and bounds checking among the users of OpenSSL (or any software – open or commercial) and, in this case, within the OpenSSL development community, to ensure that the software is actually secure. But as with any complex system, there’s a uniform expectation that a project like this produces code that can be generally regarded as safe. So most companies simply assume a project as mature and ubiquitous as OpenSSL is safe, do little to no verification of the software, deploy it, and later hear through others about vulnerabilities in it.

In the complex stacks of software today, most businesses aren’t qualified to, simply aren’t willing to, or aren’t aware of the need to perform acceptance checking on third-party software they’re using in their own systems (and likely don’t have developers on staff who are qualified to review software such as OpenSSL). As a result, a complex and fragile system becomes even more complex. And even more fragile. Even more dangerous: without any level of internal testing, these systems of internal and external components are assumed to be reliable, safe, and secure – until time (and usually a highly technical developer being compensated for finding vulnerabilities) shows that not to be the case, and then we find ourselves in goose-chase mode, as we are right now.
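
And for what it’s worth, the acceptance checking I’m arguing for doesn’t have to be exotic. Here’s a sketch of a crude check against the hypothetical handler above – feed it a record that claims far more payload than was actually sent, and verify that nothing gets echoed back:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Assumes the handle_heartbeat() sketch above is in scope. A 3-byte
 * record that claims a 65535-byte payload: a correct handler must
 * drop it and leave the reply buffer untouched. */
int main(void)
{
    static uint8_t reply[3 + 65535];
    uint8_t lying_record[3] = { 0xFF, 0xFF, 0x41 }; /* claims 65535, sends 1 */

    memset(reply, 0xAA, sizeof(reply));             /* sentinel pattern */
    handle_heartbeat(lying_record, sizeof(lying_record), reply);

    /* Had the handler trusted the claimed length, the sentinel would be
     * overwritten with whatever memory followed the 3-byte record. */
    for (size_t i = 0; i < sizeof(reply); i++)
        assert(reply[i] == 0xAA);

    puts("malformed heartbeat correctly dropped");
    return 0;
}
```

A test this simple obviously doesn’t replace a real review, but it’s the level of verification a deploying organization could do in an afternoon – and it catches exactly the failure that sat in OpenSSL for two years.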


06 Mar 13

Windows desktop apps through an iPad? You fell victim to one of the classic blunders!

I ran across a piece yesterday discussing one hospital’s lack of success with iPads and BYOD. My curiosity piqued, I examined the piece looking for where the project failed. Interestingly, but not surprisingly, it seemed that it fell apart not on the iPad, and not with their legacy application, but in the symphony (or more realistically the cacophony) of the two together. I can’t be certain that the hospital’s solution is using Virtual Desktop Infrastructure (VDI) or Remote Desktop (RD, formerly Terminal Services) to run a legacy Windows “desktop” application remotely, but it sure sounds like it.

I’ve mentioned before that I believe legacy applications – applications designed for large displays, a keyboard, and a mouse, running on Windows 7/Windows Server 2008 R2 and earlier – are doomed to fail in the touch-centric world of Windows 8 and Windows RT. iPads are no better. In fact, they’re worse. You have no option for a mouse on an iPad, and no vendor-provided keyboard solution (versus the Surface’s two keyboard options, which are, take them or leave them, keyboards – complete with trackpads). Add in the licensing and technical complexity of using VDI, and you have a recipe for disappointment.

If you don’t have the time or the funds to redesign your Windows application, but VDI or RD makes sense for you, use Windows clients, Surfaces, dumb terminals with keyboards and mice – even Chromebooks, as a follower on Twitter suggested. All possibly valid options. But don’t use an iPad. Putting an iPad (or a keyboardless Surface or other Windows or Android tablet) between your users and a legacy Windows desktop application is a sure-fire recipe for user frustration and disappointment. Either build secure, small-screen, touch-savvy native or Web applications designed for the tasks your users need to complete, ready to run on tablets and smartphones, or stick with legacy Windows applications – but don’t try to duct-tape the two worlds together as the primary application environment you provide to your users, if all they have are touch tablets.


01 Nov 12

Windows RT, Sideloading, and Office. Oh my.

When you start working with Microsoft licensing – well, to be fair, almost anyone’s enterprise licensing – it can be mind-numbing. Truth be told, when I stepped up to pinch hit for my colleague to cover the immense changes to SQL Server 2012 licensing, I developed a migraine with vertigo – something that hadn’t occurred for several years. While it could have been coincidence, we’ve taken liberties with it at work and turned it into a running joke for our boot camps: enterprise licensing can give you migraines.

In junior high school, we had a science experiment using perspective-flipping glasses (kind of like these). The lore goes that if you wear this kind of glasses day in and day out for 3-5 days, your mind will actually adjust and flip the image right side up (take them off and it’ll take a while to reverse again). I could barely walk, and felt like I was going to hurl, when I tried the glasses.

But licensing? I’ve been wearing those glasses for around six months, and you know what? My vision is stabilizing, and I can honestly almost walk straight. So while some people new to (Microsoft) licensing may look at certain things that Microsoft does and say, “WTH?”, I say, “It makes perfect sense – squint and turn your head upside down for a second”.

Two recent decisions from Microsoft fall in this same category:

  1. Office Home and Student in Windows RT not including commercial use rights.
  2. Windows RT requiring a… bit of work to enable sideloading of applications.

Now, these don’t necessarily have anything to do with each other, except they do. Follow along.

When you have a business model – whether it’s working or not, you like the line for revenue to go up (and operating expenses to ideally go down) – even if it’s just a little bit. Microsoft is fastidious about this. Keep earnings up, and don’t drop the income ball.

So why is Office Home & Student (H&S) free for non-commercial users? Easy. Windows RT (and largely Windows 8) is all about consumers. Look no further than the marketing materials. Windows RT and Windows 8 are intended to bring Windows, touch, and power efficiency to a new world of devices (and ideally, stave off some – or much – of the appeal of the iPad by doing so). Some businesses may move to Windows 8 in short order, but most won’t. They’ll stick with Windows 7 until they see how, and where, they want to deploy Windows 8. In the meantime, the users within these businesses will buy iPads, Android tablets (somebody does, right?), and Windows RT tablets for home use, and wind up bringing them into the office. All three platforms bring legal landmines for Microsoft and other enterprise software vendors. But this isn’t the place for me to dive into that – we offer a whole two-day course that covers many of those issues. 🙂 So Windows RT includes Office as, really, a loss leader. It’s a prize at the bottom of the Windows RT box. I don’t mean to denigrate either product by saying that – but the goal is very clearly a better-together strategy, even though Windows RT includes only a few of the Office apps, with limited functionality when compared to Office on x86/x64.

By offering H&S as free on Windows RT, Microsoft can make that platform more appealing to consumers. By not including commercial use rights, Microsoft can ensure that (back to two paragraphs ago) it doesn’t harm their enterprise sales/Office 365 subscriptions/Software Assurance revenue as it does so. All of those are non-small numbers for Office.

Mary Jo Foley walked through how businesses can obtain commercial use rights; in a nutshell, you buy Office 2013 for a user’s Windows 7/8 PC, and they get commercial use rights on Windows RT (turning the RT device into a companion device by definition). Now, that means that for the business, Office on Windows RT isn’t free, but it also isn’t full price. In many ways, businesses get to take advantage of the multiple-device license rights that Office has had for some time (install on your primary and a secondary device) – it’s just that the license is applied to the Windows RT device rather than to installed bits, as users would have done historically. So that’s Office. What’s the deal with sideloading?

Matt’s lengthy walkthrough demonstrates the technical hurdles of sideloading apps (putting apps on Windows 8 or Windows RT without going through the Windows Store), but there’s a licensing angle here too – and in many ways it’s the same one I just demonstrated, if you put your glasses on and turn your head over.

Why is sideloading so complicated? Because there are three competing forces at play (in no particular order):

  1. Microsoft’s desire to keep the WinRT platform and Windows Store secure – sideloading gates what can/cannot run on these devices.
  2. Microsoft’s desire to keep the Windows Store as the preferred means of obtaining apps written for WinRT – retaining the 30% (or 20%) of revenue from app sales.
  3. Microsoft’s desire to (hum along if you know the tune) maintain Windows enterprise licensing sales – Enterprise includes sideloading. It’s a paid option on other editions.

By requiring a key for other editions, requiring payment for that key, and requiring a minimum number of those licenses, Microsoft discourages tinkerers or hackers from casually bypassing or pirating those keys to avoid the Windows Store, and discourages commercial distribution of apps that wouldn’t meet Store guidelines – something sideloaded apps could otherwise enable (see 1 and 2).

By not requiring any special keys or costs in Windows 8 Enterprise, Microsoft rewards those customers who have invested in SA (or Intune) and incentivizes customers on the fence about Windows client SA (or Intune) to take one of those avenues (see 3).

Like Active Directory membership was in the beginning (guilty!), sideloading is important, but I think it may have been overblown in terms of either importance or complexity. The more I look at it, the more I realize that there are very few apps that will really require sideloading. Most commercial apps should be distributed through the Windows Store, either for sale (sharing revenue with Microsoft) or for free with a subscription (which, unlike with Apple, for now at least, does not require revenue sharing with Microsoft). Instead, only enterprises building in-house Windows 8 and Windows RT line-of-business (LOB) apps will really need sideloading – at least as Microsoft would like it to exist.

As more of these enterprises begin building LOB apps, the need for sideloading may become more important. But I don’t anticipate most enterprises starting their own Windows 8 development lifecycle in short order (<6-12 months). They’re still trying to get their arms around the platform as a whole. That, combined with the lack of guidance on building LOB apps that align with the design principles Microsoft has been evangelizing for the last year, will continue to take some time for them to digest. Not that some companies won’t build their own WinRT LOB apps – some already are, and those may well require sideloading. For customers with SA – who likely align reasonably well with those who have the time and energy to build apps for Windows 8 and Windows RT – the licensing “bumps” put in front of sideloading are likely a non-event. For consumers or hobbyists? Sideloading is a non-starter. Exactly as it was likely intended.



17 Oct 12

VDI? OMG.

For two days last week, I was at the annual Chicago installment of our Microsoft Licensing Boot Camp. I’ve been to several of our camps to help present a couple of the topics. I’ve also noticed something unusual (and somewhat frightening) occurring.

What I’ve seen is the growth of – or at least growing interest in – Virtual Desktop Infrastructure (VDI). In VDI, the desktop operating system that a user interacts with is virtualized (and often remotely located), rather than running locally on a desktop PC or even a laptop. The theory is that by virtualizing, you can centralize deployment, management, and servicing, spin VMs up or down as you need them, and sometimes use layering technologies to make this management more efficient. In an environment where you task users with buying/bringing their own work PC, VDI also gives you a way to secure the user’s work environment by providing a common image to all users, delivered through RDP.

I say theory because, barring dramatic improvements in how Windows handles state separation (user/app/OS), layering technologies are fraught with peril. Perhaps some of Citrix’s offerings, or those of other companies I haven’t seen, have unwound the Windows state problem and really enabled efficient virtualization that isn’t just N VMs for N users. Having never seen otherwise, though, I’m inclined to believe that VDI – and virtualization as a whole – saves you money on hardware but does not save you nearly as much in terms of deployment, management, and servicing as you might think. With client VDI in particular, you had 8 physical systems arrayed horizontally – now you have 8 virtual systems stacked vertically. Hope you’ve chosen a good hypervisor and a clustered server to run it on, so those virtual desktops have high availability.

VDI has this certain ring to it. If you’ve been in IT, you know the sound. It’s the sound of a technology your CIO asked you to investigate because he heard from another CIO on the golf course, “Wait. You haven’t deployed VDI yet?” Yes, it’s a bright shiny object (BSO) with untold perils if you don’t license it properly.

In NYC, when we asked who was looking at doing VDI, two – maybe three – people raised their hands. In Chicago, it was easily 85% of the room either looking at it or doing it now. In NYC, an attendee quietly asked me, “Why would someone ever do VDI instead of Remote Desktop?” A logical question, given Remote Desktop’s easily understood – and enforced – licensing, highly scalable architecture (far more users in far less space, RAM, and processor utilization), fault tolerance, etc. I quietly replied, “I have no idea.” In Chicago, when we had wrapped, an attendee walked up and asked me basically the same thing. He wanted me to help him understand why people are so in love with VDI. I told him, much as in NYC, “I don’t understand it either.”

VDI isn’t cheap. It’s definitely not free. While you can theoretically remove Windows desktops as the client endpoint and use an RDP dumb terminal (or an iPad), you face licensing complexities as a direct result of doing so.

Microsoft is a better chess player than you are when it comes to licensing. Depending on what you access a Windows VDI system from (a user-owned Windows laptop over RDP, for example), you may, or may not, have properly licensed the client system to connect at all. There’s no magical licensing enforcement to prevent you from doing the wrong thing – only the potential penalty of an audit for not having done it correctly. What I’m saying is, there are some huge licensing qualifications that you have to work through in order to implement VDI with the Windows desktop, and not understanding them before you ever look into implementing VDI is kind of like asking Felix Baumgartner to jump from his capsule without ever doing any sort of testing. You could very easily wind up hurting yourself.

As to using an iPad as a VDI client, I’m really confused as to who (if anyone, actually) does this. Accessing Win32 applications from an iPad is akin to torture. It’s a sub-10″ screen, touch only, with no mouse and a soft keyboard. What kind of tasks are you asking users to perform with this? Either move the task to a proper task-optimized Web app or iOS app, or give them a proper desktop or laptop system on which to perform their task. I may well dive into this topic in a future post. Sure to generate some conversation.

Are you using VDI? Do you understand the licensing of Windows, Office, and every other software component you’re using? Do you disagree with me that VDI is just a BSO (and believe that you’re saving tons of money with it)? Let me know what you think.


09 Sep 12

Windows to Go where exactly?

Recently, I’ve seen a lot of excitement around Windows to Go, a new feature available in Windows 8. Windows to Go (WTG) enables Windows 8 (Enterprise) to boot from a USB Flash Drive (UFD).

Fundamentally, WTG includes three technical features:

  1. Windows support for USB boot (including USB 3.0)
  2. Support for installing and running Windows from a removable USB hard drive (yes, this is a different line item than 1)
  3. Support for handling “surprise removal” of Windows without hanging or crashing.

That last one is a rather nifty trick – since UFDs can be yanked from a system much more readily than internal drives, Windows has to handle the scenario of its main boot drive being pulled – something it has never handled gracefully before. Historically, you unplug Windows’ boot drive (the one it’s actually running from, not the one it booted from, which is called the system drive – yes, really) and Windows crashes immediately. Windows Embedded has supported a few tricks here, but packaged versions of Windows never supported it, nor did they support booting and running from USB storage – which Windows XP Embedded (and WinPE) have done since they both added it a few years after Windows XP released to manufacturing.

The other thing that WTG adds that Windows never had before is a license to boot Windows this way. You see, there are few bonds stronger than the one Microsoft maintains between a Windows license and the PC it’s glued to.

From the first time I saw WTG, I knew where it would end up, licensing-wise, in Microsoft’s product stack. It would land in Software Assurance (SA) – the featureset only available to enterprise customers paying annually for “subscriptions” to Windows. This means that as much fun as it could be for geeks, it is a feature unavailable to them unless they work in an organization that has SA on Windows. WTG is also available as a part of Windows Intune or a Virtual Desktop Access (VDA) subscription – but again, not available to organizations who only run the license of Windows that comes with their new PCs, and not available to consumers at all.

I’ve had many people comment on what a great solution WTG is – that it solves many problems. Frankly, I’m not sure. To me, WTG is effectively Virtual Desktop Infrastructure (VDI) where you take your desktop with you. It could prove useful in organizations where shift changes have multiple users using the same PC, or where users simply have a collection of shared PCs to use. But really, all WTG does is enable users to roam their entire Windows PC state with them wherever they go. While the innate SkyDrive integration in Windows 8 and Office 2013, and SkyDrive Pro integration in SharePoint 2013, could enable seamless synchronization from WTG “PCs” to a central location, that means WTG needs to have Internet connectivity on a regular basis – or a user who loses their WTG key (as I lost mine – thankfully with no key data on it) loses their entire unsynchronized workload with it.

WTG does not perform any magic to keep Windows up to date; it requires patching, and can get out of date – or compromised – just as easily as an installation of Windows on a normal hard disk can.

I’m in an unusual position to be considering WTG’s viability. In 2001, while working as a setup Program Manager in Windows, I started looking into what it would take to boot Windows PE (aka “WinPE”, our ultralight version of Windows, used during Windows setup since Vista) off of USB Flash Drives. With the help of two resourceful developers and an architect, we had a prototype running during 2002, and we worked hand in hand – 10 years ago – to get OEMs to build UFD boot support into their PCs. We talked about booting Windows itself from UFDs, and another project looked at storing user profiles themselves on UFDs. While Windows Embedded did add the code to boot Windows from USB to their codebase, Windows itself (outside of WinPE) didn’t until Windows 8 added it for WTG. While I even have a patent that aligns with the idea of booting an entire PC from USB, I still remain unconvinced that WTG makes sense for most scenarios. It makes sense where you need VDI but don’t have reliable access to a network (but that conversely creates issues where these Windows images can’t be patched or managed, and user documents can be lost, as mentioned).

Some have suggested it’s a suitable alternative to handing out tablets – I have to disagree, since you still need to provide hardware for WTG to boot from. It’s like handing a student an eSATA drive and telling them to have a great school year. The advantage of WTG is that it can boot Windows on – theoretically – any system the user has access to. The problems are rather significant, in my opinion (stated in the order users could hit them):

  1. PCs do not have a friendly boot structure. Even though we started the effort to boot Windows from USB 10 years ago, Windows PCs are no easier to boot from UFD than they were then. I would have loved it if we could have gotten OEMs to standardize on, say, holding down Windows Key+U to boot from UFD. But no, that never happened. We’re still barely moving past BIOS and into EFI – which was already underway, to a degree, back then as well.
  2. Organizations have spent so much time securing both their boot process (BIOS passwords, anyone?) and USB connectivity (USB storage being a principal threat vector for PC infection) that, ironically, those passwords in particular will have to be thrown out. Users will need, for the near term, to be able to futz with BIOS settings – the last thing a non-technical user could, or should, be doing.
  3. Compatibility. WTG will work on PCs that have USB 2, but not terribly well. When we first tested UFD boot with OEMs, we found some took forever to boot (as they had questionable USB implementations in the BIOS itself), some crashed, and some wound up with race conditions that caused issues once Windows was up and running. I don’t believe all of this has been rectified, since Windows still doesn’t get booted over USB on PCs regularly. WTG works best with USB 3 – which most PCs don’t have, and even those that do, like my Lenovo, often sport USB 3 on select ports, but not all – making it a confusing proposition for a user to actually try and work with. WTG is also the same as Windows 8, and has the same hardware requirements. It’s not as if you can take a bunch of early-era Windows XP systems out of service and have them serve as WTG hosts. Finally, WTG isn’t magic – it still requires drivers to work with the hardware it is running from. If you can’t get the wired or wireless network up and running – still the most frequently unavailable devices I run into, personally – what hope do you have of downloading drivers for them, or for any other device not included in your WTG image?
  4. Servicing. To that same end – if your WTG image isn’t connected regularly, yet is connecting to PCs regularly so you can add or remove files from it, you’ve created a new threat vector for the company network: a “loosely managed” device that can become patient zero when it does finally connect to the network.
  5. Data loss. There are two aspects to this. One, most UFDs that exist today do not support two partitions. Only two drives are tested and supported with WTG – because they show up as “fixed”, not removable, USB 3 drives, they work great with WTG and – crucially – support BitLocker drive encryption. Nobody on this planet should EVER use WTG without BitLocker or a third-party disk encryption tool. NOBODY. In a time when we constantly hear about this laptop or that laptop being stolen or lost without drive encryption, the idea that a form of Windows that’s even easier to lose can be deployed without BitLocker terrifies me. Two, as mentioned earlier, if these Windows instances don’t have reliable network connectivity and the WTG device is lost – as tiny flash drives can be – the WTG device owner’s hard work is lost forever. A roaming user profile can’t save you if you never log on and sync it. :-/
Short story long, I think WTG is interesting technology, as it was when we started playing with the primordial ooze of booting Windows from USB 10 years ago. But I’m still not convinced that the problems it can create are outweighed by the small list of benefits it could bring. Disagree? Have specific scenarios where you think WTG makes sense, or where you think WTG solves problems for IT in a way that Windows on an HDD doesn’t? Let me know in the comments!