24
Dec 14

Mobile devices or cloud as a solution to the enterprise security pandemic? Half right.

This is a response to Steven Sinofsky’s blog post, “Why Sony’s Breach Matters”. While I agree with parts of his thesis – the parts about layers of complexity leaving us where we are, and about secure, legacy-free mobile OSes helping alleviate this on the client side – I’m not sure I agree with his points about the cloud being a path forward, at least in any near term, or to the degree of precision he alludes to.

The bad news is that the Sony breach is not unique. Not by a long shot. It’s not the limit; it’s really the beginning. It’s the shot across the bow that lets companies see one example of just how bad this can get. Of course, they should have been paying attention to Target, Home Depot, Michaels, and more by this point already.

Instead, the Sony breach is emblematic of the security breaking point that has become increasingly visible over the last two years. It would be the limit if the industry turned a corner tomorrow and treated security as its first objective. But it won’t. I believe what I’ve said before – the poor security practices demonstrated by Sony aren’t unique. They’re typical of how too many organizations treat security. Instead of trying to secure systems, they grease the skids just well enough to meet their compliance bar, turning a blind eye to security that’s just “too hard”.

While the FBI has been making the Sony attack sound rather unique, the only unique aspect of this one, IMHO, is the scale of success it appears to have achieved. This same attack could be replayed pretty easily: a dab of social engineering, a selection of well-chosen exploits (they’re not that hard to get), and Windows’ own management infrastructure – which appears to be what was used to distribute the malware inside Sony.

I don’t necessarily see cloud computing yet as the holy grail that you do. Mobile? Perhaps.

The personal examples you discussed were all interesting, but they were indicative of a duct-tape approach, similar to what we had to do with some things in Windows XP during the security push that led up to XP SP2, after XP SP1 failed to fill the holes in the hull of the ship. A lot of really key efforts, like running as non-admin, just couldn’t be made to work with XP in a short timeframe – they had to be pushed to Vista (where they honestly still hurt users) or Windows 7, where the effort could be taken to really make them work for users from the ground up. But again, much of this was building foundations around the Win32 legacy, which was getting a bit sickly in a world with ubiquitous networking and everyone running as admin.

I completely agree as well that we’re long past adding speed bumps. In almost every breach I can recall over the past year, it is immediately apparent that management complexity, as an attack vector, played a significant part.

If you can’t manage it, you can’t secure it. No matter how many compliance regs the government or your industry throws at you. It’s quite the Gordian knot. Fun stuff.

I think we also completely agree that the surface area exposed by today’s systems is to blame for where we are today. See my recent Twitter posts. As I mentioned, “systems inherently grow to become so complex nobody understands them” – whether you’re talking about programmers, PMs, sysadmins, or compliance auditors.

I’m inclined to agree with your point about social engineering and the vulnerabilities of layer 8, and yet we also live in a world where most adults know not to stick a fork into an AC outlet. (Children are another matter.)

Technology needs to be more resilient to user error and malicious exploitation until we can actually solve the dancing pigs problem where it begins. Mobile solves part of that problem.

When Microsoft was building UAC during the Longhorn-to-Vista effort, Mark Russinovich and I were both frustrated that Microsoft wasn’t doing anything with Vista to really nail security down, so we built a whitelisting app at Winternals to do this for Windows moving forward. (Unfortunately, Protection Manager was crushed for parts after our acquisition, and AppLocker was/is too cumbersome to accomplish this for Win32.) Outside of the long shot of ditching the Intel processor architecture completely, whitelisting is the only thing that can save Win32 from the security mayhem it is experiencing at the moment.

I do agree that moving to hosted IaaS really does nothing for an organization’s security, except perhaps reduce costs in a way that on-premises hosting can’t.

But if there’s one statement in particular in your post that I would call out as something I heartily disagree with, it’s this part:

“Everyone has moved up the stack and as a result the surface area dramatically reduced and complexity removed. It is also a reality that the cloud companies are going to be security first in terms of everything they do and in their ability to hire and maintain the most sophisticated cyber security groups. With these companies, security is an existential quality of the whole company and that is felt by every single person in the entire company.”

This is a wonderful goal, and it’ll be great for startups that have no legacy codebase (and that don’t bring in hundreds of open-source or shared libraries that nobody on their dev team understands down to the bottom of the stack). But most existing companies can’t do what they should do: cut back the overgrowth in their systems.

I believe pretty firmly that what I’ve seen in the industry in the decade since I left Microsoft is also, unfortunately, the norm – that management, as demonstrated by Sony’s leadership in that interview, will all too often let costs win over security.

For organizations that can redesign for a PaaS world, the promise offered by Azure was indeed what you’ve suggested – that designing new services and new applications for a Web-first world can lead to much better-designed, more refined, manageable, and securable applications and systems overall. But the problem is that this model only works well for new applications – not applications that stack refinement over legacy goo that nobody understands. So really, clean-room apps only.

The slow uptake of Azure’s PaaS offerings unfortunately demonstrates that this is the exception, and an ideal – not something we can expect to become the norm in the near future.

Also, while Web developers may not be integrating random bits of executable code into their applications, the amount of code reuse across the Web threatens to recreate the same problem, although the security perimeter is winnowed down to the browser and the PII shared within it. Web developers can and do grab shared .js libraries off the Web in a heartbeat.

Do they understand the perimeter of these files? Absolutely not. No way.

Are the risks here as big as those posed by an unsecured Win32 perimeter? Absolutely not – but I wouldn’t trivialize them either.

There are no more OS hooks, but I’m terrified about how JS is evolving to mimic many of the worst behaviors that Win32 picked up over the years. The surface has changed, as you said – but the risks – loss of personal information, loss of data, phishing, DDoS – are strikingly similar, especially as we move to a “thicker”, more app-centric Web.
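
One small mitigation here (a hedged sketch of the general idea, not anything from Steven’s post or from a specific toolchain): rather than trusting whatever a shared .js file happens to contain today, a build step can pin each third-party script to a known hash and refuse to ship if the file changes underneath you. The file name and hash below are hypothetical.

```python
# Hypothetical build-time check: pin vendored third-party .js files to known
# SHA-256 hashes so an upstream or CDN change can't slip into the app unnoticed.
import hashlib
import sys
from pathlib import Path

PINNED = {
    # path relative to the project root -> expected SHA-256 (hypothetical values)
    "vendor/analytics.js": "0c7e1a52b0a1f6d8e9b3c4d5a6f7081920a1b2c3d4e5f60718293a4b5c6d7e8f",
}

def verify_pins(root: Path) -> bool:
    ok = True
    for rel_path, expected in PINNED.items():
        actual = hashlib.sha256((root / rel_path).read_bytes()).hexdigest()
        if actual != expected:
            print(f"{rel_path}: hash mismatch; refusing to build")
            ok = False
    return ok

if __name__ == "__main__":
    sys.exit(0 if verify_pins(Path(".")) else 1)
```

It doesn’t make the library understood, but it at least makes change in that code deliberate rather than silent.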

Overall, I think we are in for some changes, and I agree with what I believe you’ve said both in your blog and on Twitter: modern mobile OSes with a security perimeter designed in are the only safe path forward. The path towards a secure Web application perimeter seems less clear, far less immediate, and perhaps less explicit than your post alluded to.

There is much that organizations can learn from the Sony breach.

But will they?


12
Oct 14

It is past time to stop the rash of retail credit card “breaches”

When you go shopping at Home Depot or Lowe’s, there are often tall ladders, saws, key cutters, and forklifts around the shopping floor. As a general rule, most of these tools aren’t for your use at all. You’re supposed to call over an employee if you need any of them used. Why? Because of risk and liability, of course. You aren’t trained to use these tools, and the company’s insurance would never cover its liability if you were injured or died while operating them.

Over the past year, we have seen a colossal failure of American retail and restaurant establishments to adequately secure their point-of-sale (POS) systems. If you’ve somehow missed them all, Brian Krebs’ coverage serves as a good list of many of the major events.

As I’ve watched company after company fall prey to seemingly the same modus operandi as every company before it, it has frustrated me more and more. When I wrote “You have a management problem”, my intention was to highlight that there seems to be a fundamental disconnect in how organizations connect risk to the security of key applications (and systems). But I think it’s actually worse than that.

If you’re a board member or CEO of a company in the US, and the CIO and CSO of the organization you manage haven’t asked their staff the following question yet, there’s something fundamentally wrong.

That question every C-level in the US should be asking? “What happened at Target, Michaels, P.F. Chang’s, etc… what have we done to ensure that our POS systems are adequately defended from this sort of easy exploitation?”

This is the most important question that any CIO or CSO in this country should be asking this year. They should be regularly asking this question, reviewing the threat models their staff create to answer it, and performing the work necessary to validate that they have adequately secured their POS infrastructure. This should not be a one-time thing. It should be how the organization regularly operates.

My worry is that within too many orgs, people are either a) not asking this question because they don’t know to ask it, b) dangerously assuming that they are secure, or c) so busy that nobody who knows better feels empowered to pull the emergency brake, bring the train to a standstill, and truly examine the comprehensive security footing of their systems.

Don’t listen to people if they just reply by telling you that the systems are secure because, “We’re PCI compliant.” They’re ducking the responsibility of securing these systems through the often translucent facade of compliance.

Compliance and security can go hand in hand. But security is never achieved by stamping a system as “compliant”.

Security is achieved by understanding your entire security posture, through threat modeling. For any retailer, restaurateur, or hospitality organization in the US, this means you need to understand how you’re protecting the most valuable piece of information your customers will be sharing with you: their ridiculously insecure, 16-digit, magnetically encoded credit or debit card number. Not their name. Not their email address. Their card number.

While it does take time to secure systems, and some of the exploits that took place over 2014 (such as Home Depot’s) may even have begun before Target discovered and publicized the attack on its systems, we are well past the point where any organization in the US should just be saying, “That was <insert already exploited retailer name>; we have a much more secure infrastructure.” If you’ve got a threat model that proves that, great. But what we’re seeing demonstrated time and again as these “breaches” are announced is that organizations that thought they were secure were not actually secure.

During 2002, when I was in the Windows organization, we had, as some say, a “come to Jesus” moment. I don’t mean that expression to offend anyone, but there are few expressions that can adequately convey the fundamental shift that happened. We were all excitedly working on several upcoming versions of Windows, having just sort of battened down, with XP SP1, some of the hatches that had popped open in XP’s original security perimeter.

But due to several major vulnerabilities and exploits in a row, we were ordered (by Bill) to stop engineering completely, and for two months, all we were allowed to work on were tasks related to the Secure Windows Initiative and making Windows more secure, from the bottom up, by threat modeling the entire attack surface of the operating system. It cost Microsoft an immense amount of money and time. But had we not done so, customers would have cost the company far more over time as they gave up on the operating system due to insecurity at the OS level. It was an exercise in investing in proactive security in order to offset future risk – whether to Microsoft, to our customers, or to our customers’ customers.

I realize that IT budgets are thin today. I realize that organizations face more pressure than ever to do more with less. But short of laws holding executives financially responsible for losses incurred on their watch, I’m not sure what will stop the ongoing saga of these largely inexcusable “breaches” we keep seeing. If your organization doesn’t have the resources to secure the technology you have, either hire staff who can or stop using the technology. I’m not kidding. Grab the knucklebusters and some carbonless paper and start taking credit cards like it’s the 1980s again.

The other day, someone on Twitter noted that the recent spate of attacks shouldn’t really be called “breaches”, but should instead be called skimming attacks. Most of these attacks have worked by using RAM scrapers. This approach, first really seen in 2009, hit the big time in 2013. A RAM scraper is a Windows executable (which, <ahem>, isn’t supposed to be there) that scans memory (RAM) on POS systems for track data as it is read off of magnetically swiped US credit cards. This laughably simple stunt is really the key to effectively all of the breaches (which I will refer to as skimming attacks from here on out). A piece of software, which shouldn’t ever be on those systems, let alone be able to run on them, is freely scanning memory for data which, arguably, should be safe there, even though it is not encrypted.
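
To make the mechanics concrete – and this is a hedged illustration of the data format, not code from any actual scraper – unencrypted Track 2 data has a very recognizable shape: a primary account number, a ‘=’ separator, an expiration date, a service code, and sentinels at each end. A defender’s detection rule hunting for card data in a memory dump might look something like this (the PAN in the sample is the well-known 4111… test number, not a real card):

```python
# Sketch of a detection rule for unencrypted Track 2 data in a memory buffer.
# Layout: ';' PAN '=' YYMM service-code discretionary-data '?'
import re

TRACK2 = re.compile(rb";(\d{12,19})=(\d{4})(\d{3})\d*\?")

def luhn_ok(pan: bytes) -> bool:
    """Standard Luhn check, to weed out digit runs that merely look like a PAN."""
    digits = [int(c) for c in pan.decode()][::-1]
    total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                for i, d in enumerate(digits))
    return total % 10 == 0

def find_track2(buffer: bytes):
    """Scan a chunk of memory (here, just a bytes buffer) for Track 2 data."""
    for m in TRACK2.finditer(buffer):
        if luhn_ok(m.group(1)):
            yield m.group(0)

sample = b"noise;4111111111111111=2512101000000000?more noise"
print(list(find_track2(sample)))  # the swipe data is right there in the clear
```

That a few lines of pattern matching are all it takes is exactly why “laughably simple” is the right description.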

But here we are. With these RAM scrapers violating law #2 of the 10 Immutable Laws of Security, these POS systems are obviously not secured as well as Microsoft, the POS manufacturer, or the VAR that installed them would like them to be – and obviously everyone, including the retailer, assumed they were. Most likely, these RAM scrapers are custom-crafted enough to evade detection by (questionably useful) antivirus software. More importantly, many indications are that in many cases these systems were apparently certified as PCI-DSS compliant in the exact same configuration in which they were later compromised. This indicates a fundamental flaw in the compliance definition, tools, and/or auditors. It also indicates some fundamental holes in how these systems are presently defended against exploitation.

As someone who helped ship Windows XP (and contributed a tiny bit to Embedded, which was a sister team to ours), I’m saddened to see these skimming attacks happen. As someone who helped build two application whitelisting products, I feel even worse, because… they didn’t need to happen.

Windows XP Embedded leaves support in January of 2016. It’s not dead, and it can be secured properly (but organizations should absolutely be well down the road of planning what they will replace XPE with). Both Windows and Linux, in embedded POS devices, suffer the same flaw: platform ubiquity. A piece of malware written to run on my Windows desktop (or on a Linux system) will run perfectly well on the corresponding POS systems if they aren’t secured properly.

The bad guys always take advantage of the broadest, weakest link. It’s the reason why Adobe Flash, Acrobat, and Java are the points they go after on Windows and OS X. The OSes are hardened enough up the stack that these unmanageable runtimes become the hole that exploit shellcode often pole-vaults through.

In many of these retail POS skimming attacks, remote maintenance software (used to access a Windows desktop remotely), often secured with a poor password, is the means used to get code onto these systems. This scenario and exploit vector aren’t unique to retail, either. I guarantee you there are similarly easy opportunities for exploitation in critical infrastructure, in the US and beyond.

There are so many levels of wrong here. To start with, these systems:

  1. Shouldn’t have remote access software on them
  2. Shouldn’t have the ability to run every arbitrary binary that is put on them

These systems shouldn’t have any remote access software on them at all; if it truly must be present, it should require physical, not password-based, authentication. These systems should be sealed, single-purpose, and running AppLocker or third-party software to ensure that only the Windows (or Linux, as appropriate) applications, drivers, and services explicitly authorized to run on them can do so. If organizations cannot invest in the technology to properly secure these systems, or do not have the skills to do so, they should either hire staff skilled in securing them, stop using PC-based technology and go back to legacy technology, or examine using managed iOS or Windows RT-based devices that can be more readily locked down to run only approved applications.
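
To illustrate what “only explicitly authorized applications can run” boils down to – and this is only a conceptual Python sketch of the allow/deny decision, not how AppLocker or any shipping product is implemented – whitelisting on a sealed, single-purpose terminal is default deny: hash whatever asks to execute, and refuse anything that isn’t on the list built when the image was created. The hash value and file name below are hypothetical.

```python
# Conceptual sketch of hash-based application whitelisting for a sealed,
# single-purpose POS terminal. Real products hook process creation in the OS;
# this only demonstrates the allow/deny decision itself.
import hashlib
from pathlib import Path

# The only binaries this terminal is ever allowed to run (hypothetical hash).
APPROVED = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # pos.exe
}

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def may_execute(path: Path) -> bool:
    """Default deny: only binaries whose hash is on the approved list may start."""
    return sha256(path) in APPROVED
```

A RAM scraper dropped onto a terminal through remote maintenance software simply never starts under that model, whether or not any antivirus product has a signature for it.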


23
Apr 13

On peanut butter and chocolate and APIs…

A friend recently posted a link to this blog post. It’s an interesting read about where you should focus when building your app: should you have one app for each platform, or an API that goes as high up the stack as possible on each platform?

In particular, he quotes the expression, “the API is the asset, the UI is simply throwaway”.

I get the point he’s trying to make. Platforms come and go – but an API should be designed to be durable. I kind of agree, and I kind of don’t. Let me explain.

When a developer builds an API, it generally exposes rough verbs that relate to user tasks. When a designer or developer builds an application, it should be entirely defined by the tasks the user needs to complete and, ideally, should take advantage of each platform’s distinct benefits wherever the investment in those platform hooks increases the application’s ease of use.

In a nutshell, you are designing an API to expose a service, and an application to deliver an experience. The goal of a good development team should be to take the API as high up the stack as the application will allow – without exposing the user to the flow of the API directly. Think of an old recliner with the padding crushed down over time. You feel every nuance of the springs or metal bars holding it together. A good application design provides the padding to shield the end user from that pain, without overstuffing it. You want to invest enough in the UI to deliver an experience representative of (your application + that platform). Perhaps the expression quoted isn’t intended to be so harsh towards the UI as to make it seem like a wood veneer appliqué, but that’s how I read it. It’s true – you want to make as much of your code as portable as possible (the API), but invest where you need to in order to provide the best experience (the UI).
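
To put that in code – a deliberately trivial, hypothetical Python sketch, not anyone’s real product – the API exposes the durable verbs, and each platform’s UI is a thin, replaceable layer of padding over them:

```python
# Hypothetical sketch: the API is the durable asset; each UI is a thin adapter.
from dataclasses import dataclass

# --- The API: portable verbs, no knowledge of any platform's presentation ---
@dataclass
class Order:
    item: str
    quantity: int

def place_order(item: str, quantity: int) -> Order:
    """Durable service verb; the same code backs every platform."""
    if quantity < 1:
        raise ValueError("quantity must be positive")
    return Order(item, quantity)

# --- The UI: per-platform padding over the springs ---
def confirm_order_touch(order: Order) -> str:
    """What a touch-first client might show: terse, glanceable."""
    return f"{order.quantity} x {order.item} ordered"

def confirm_order_desktop(order: Order) -> str:
    """What a keyboard-and-mouse client might show: denser, undo affordance."""
    return f"Order placed: {order.quantity} unit(s) of {order.item} (Ctrl+Z to undo)"
```

Throw away either confirm_* function and the verb underneath doesn’t change; push presentation decisions down into place_order and you’ve crushed the padding into the springs.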

The goal of the API is to provide structure; the goal of the user interface is to provide the abstraction between your API and the user experience your application seeks to deliver for that platform. Peanut butter and chocolate.


06
Mar 13

Windows desktop apps through an iPad? You fell victim to one of the classic blunders!

I ran across a piece yesterday discussing one hospital’s lack of success with iPads and BYOD. My curiosity piqued, I examined the piece looking for where the project failed. Interestingly, but not surprisingly, it seemed that it fell apart not on the iPad, and not with their legacy application, but in the symphony (or more realistically the cacophony) of the two together. I can’t be certain that the hospital’s solution is using Virtual Desktop Infrastructure (VDI) or Remote Desktop (RD, formerly Terminal Services) to run a legacy Windows “desktop” application remotely, but it sure sounds like it.

I’ve mentioned before that I believe legacy applications – applications designed for large displays, a keyboard, and a mouse, running on Windows 7/Windows Server 2008 R2 and earlier – are doomed to fail in the touch-centric world of Windows 8 and Windows RT. iPads are no better. In fact, they’re worse. You have no option for a mouse on an iPad, and no vendor-provided keyboard solution (versus the Surface’s two keyboard options, which are, take them or leave them, keyboards – complete with trackpads). Add in the licensing and technical complexity of using VDI, and you have a recipe for disappointment.

If you don’t have the time or the funds to redesign your Windows application, but VDI or RD makes sense for you, use Windows clients, Surfaces, dumb terminals with keyboards and mice – even Chromebooks, as a follower on Twitter suggested. All possibly valid options. But don’t use an iPad. Putting an iPad (or a keyboardless Surface or other Windows or Android tablet) between your users and a legacy Windows desktop application is a sure-fire recipe for user frustration and disappointment. Either build secure, small-screen, touch-savvy native or Web applications designed for the tasks your users need to complete, ready to run on tablets and smartphones, or stick with legacy Windows applications – don’t try to duct-tape the two worlds together as the primary application environment you provide to your users, if all they have are touch tablets.


27
Nov 09

iPhone Security

I like opening with that subject – because it’s two words that Apple seems to never want to see next to each other.

On Slashdot today, an article covered my friends from F-Secure discussing the barriers that are precluding the antivirus industry from making inroads in protecting iPhones from malware.

Indeed, they are correct: you cannot build A/V into the iPhone platform – the API is explicitly designed to forbid it. However, I have to offer a counterpoint. As I mentioned in a tweet several days ago:

The constraints keeping security s/w from diving deeper into the iPhone platform are the same ones precluding any need for them.

Yes, you read that right. I’m saying that the iPhone doesn’t need antivirus. Instead, Apple’s bigger problem is the lack of a mature platform management solution for the iPhone. Let me show you why.

When I went to Winternals, we rapidly discovered a giant chasm in security as Mark and I discussed how UAC (LUA at the time) would fall far short of creating a security boundary for Windows Vista (and continues to do so for Windows 7). The chasm is the latency between these steps:

  1. Exploit is identified
  2. Malware is authored and released
  3. Malware spreads
  4. Malware is identified
  5. Malware can be contained

You see, the flaw is that step 4 has to exist at all.

The fundamental flaw is blacklisting. Instead of fighting the good (but intractable) fight of trying to identify all of the bad code, whitelisting relies on the premise that only known-good, known-trusted code should be able to start at all.

At Winternals, we created Protection Manager to respond to this hole in the security market. The key goals of the product were to let only known, trusted code run, and to optionally run it with least privilege. In 2006, Microsoft acquired Winternals and, regrettably, discontinued the Protection Manager product. While Windows 7 features AppLocker, which theoretically applies whitelisting to Software Restriction Policies, I believe AppLocker has some fundamental shortcomings that I’ll discuss in a future post. Some aspects of Protection Manager carried forward, though – most notably the premise that a Digital Signature (code signing) is the best way of authenticating that code:

  1. Is from a trusted source, and
  2. Has not been tampered with since publication

After Winternals, I worked on whitelisting again at CoreTrace, where the Bouncer product evolved to also recognize the importance of Digital Signatures as one of the sources of Trusted Change: only known, trusted code is allowed to execute in the first place, and only code with specific properties is allowed to add new code to the whitelist.
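
To make the Trusted Change idea concrete – a minimal, hedged Python sketch of the concept only, not Bouncer’s or Protection Manager’s actual implementation; the signature verification itself is assumed to be done by the platform (e.g., Authenticode), and only its result is passed in here:

```python
# Minimal sketch of "Trusted Change": the whitelist is only ever extended by
# code whose verified publisher the organization already trusts. Verifying the
# signature is assumed to happen elsewhere; only its result arrives here.
import hashlib
from pathlib import Path
from typing import Optional

TRUSTED_PUBLISHERS = {"Example Software, Inc."}   # hypothetical publisher names

def admit_to_whitelist(whitelist: set, path: Path,
                       verified_publisher: Optional[str]) -> bool:
    """Add a binary's hash to the whitelist only if it carries a verified
    signature from a publisher the organization has chosen to trust."""
    if verified_publisher not in TRUSTED_PUBLISHERS:
        return False   # unsigned or untrusted code can never approve itself
    whitelist.add(hashlib.sha256(path.read_bytes()).hexdigest())
    return True
```

Everything else is default deny; updates flow only through that one gate.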

Today, you hear mention all over the Internet of the rickrolling iPhone worm. Many have mimicked the code created on a whim by Ashley Towns, the worm’s creator. But the fundamental issue here isn’t the iPhone’s susceptibility to malware. Nope. Not at all.

You see, all existing worms that have compromised the iPhone rely on the phone being jailbroken and then having SSH installed, with the default root password left unchanged. Both qualify as best-of-breed “worst practices” from a security perspective.

In fact, those of us who haven’t jailbroken our iPhones (I’m not arguing the ethics of that – that’s a separate conversation for another time) were not, and are not, susceptible at all. Why? Because the iPhone infrastructure as defined by Apple utilizes whitelisting. Only applications from software vendors that Apple has authorized – signed by those vendors and countersigned by Apple – are ever pushed through the App Store to be downloaded or purchased. Similar, but less restrictive, constraints exist for Apple’s Enterprise program for application publishing.

To date, I have not seen any published malware that runs on an iPhone that has not been jailbroken or otherwise forced to run unsigned code (see Law #1 of the 10 Immutable Laws of Security). Any hack that ever does so will rely on somehow compromising the signature infrastructure Apple uses for application publishing on the iPhone.

You may recall my original point – that the problem is the lack of enterprise management software for the iPhone itself. At CoreTrace, we were approached by an organization we were already working with that was coming to grips with its growing number of Macs – and, even more concerning, the number of “rogue” iPhones (phones brought in by employees and connected to the local wireless network and/or Exchange Server without IT ownership at any level).

The more we dug in and researched – including the limited analysis of the iPhone API that was necessary, and two fun but largely circular conversations with Apple in Cupertino – the more we realized that they weren’t asking for, nor could we deliver (at least on non-jailbroken hardware), any form of “Bouncer for iPhone”.

The problem the iPhone poses to an enterprise admin isn’t security. As an organization, you don’t need to control what is running on your iPhones from a “bad code” perspective; rather, the iPhone needs hardcore, Apple-provided (and Apple-secured) management in order to control how “renegade” the devices themselves are. That means the ability to:

  1. Prevent connectivity of jailbroken hardware to an organization (Exchange, wireless, Bluetooth, or other)
  2. Prevent jailbreaking of connected hardware (or sever connectivity at a hardware level when it occurs)
  3. Explicitly control which Apple or Enterprise published applications can be downloaded or run on connected iPhones (don’t allow games, allow only these 10 applications, etc)
  4. Explicitly control the iPhone’s software image, configuration, and settings (much as Group Policy can do with Microsoft Windows systems) – NOT trying to reverse engineer how images get pushed out in a decentralized way via iTunes itself
  5. Explicitly control how applications can access any PII on the device or in documents (GPS location, email addresses, address book or call history info, etc)
  6. Explicitly control document DRM on the platform as IRM/RMS can do for Microsoft Office and Windows

Today (even following those conversations with Apple), KACE is the only vendor I’m aware of that performs any aspect of this kind of work, beyond Apple’s own weak Configuration Utility. KACE’s offering is very comprehensive – but both approaches suffer from the fact that they are after-the-fact management solutions, not built into the hardware and software of the iPhone itself.

From the time I was at Microsoft, I kept hearing more and more “security experts” talk about the impending doomsday coming for handhelds. It still hasn’t really come. I believe that, through its native use of whitelisting, Apple has fended this threat off for the foreseeable future for the iPhone platform. The biggest problem facing the iPhone isn’t “potential attackers” – there will be plenty of those, but their chance of success is very low.

Instead, it is the iPhone’s impending success eating into the enterprise market from the bottom up that is the problem. The lack of an enterprise management solution built into the deepest aspects of the system will not preclude the iPhone from building up a rogue enterprise following. But it will leave a bad taste in the mouths of the IT admins fighting the good fight to keep their organizations secure, and it will potentially introduce some bad compliance-related headaches in organizations already struggling to achieve or retain compliance, due to the lack of DRM and platform control over the device itself and any information on it.

Apple itself needs to come to terms with the fact that the iPhone (and the Mac platform, frankly) need proper security and policy management at the lowest levels, or else de-emphasize their viability as enterprise platforms on both counts.

Sorry for the length of this post, but this topic has been burning in me for a bit – I needed to get it all down for the record.