22
Sep 15

You have the right… to reverse engineer

This NYTimes article about the VW diesel issue and the DMCA made me think about how, 10 years ago next month, the Digital Millennium Copyright Act (DMCA) almost kept Mark Russinovich from disclosing the Sony BMG Rootkit. While the DMCA provides exceptions for reporting security vulnerabilities, it does nothing to allow for reporting breaches of… integrity.

I believe that we need to consider an expansion of how researchers are permitted to, without question, reverse engineer certain systems. While entities need a level of protection in terms of their copyright and their ability to protect their IP, VW’s behavior highlights the risks to all of us when commercial entities can ship black-box code and ensure nobody can question it – technically or legally.

In October of 2005, Mark learned that putting a particular Sony BMG CD in a Windows computer would result in it installing a rootkit. Simplistically, a rootkit is a piece of software – usually installed by malicious individuals – that sits at a low level within the operating system and returns forged results when a piece of software at a higher level asks the operating system to perform an action. Rootkits are usually put in place to allow malware to hide. In this case, the rootkit was being put in place to prevent CDs from being copied. Basically, a lame attempt at digital rights management (DRM) gone too far.
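To make “forged results” concrete, here’s a toy sketch in Python – purely illustrative, since the actual rootkit hooked Windows kernel APIs rather than anything this polite – of a hooked directory listing that silently drops anything carrying the rootkit’s magic prefix (the Sony BMG software cloaked files whose names began with $sys$):

```python
import os

HIDE_PREFIX = "$sys$"  # the prefix the Sony BMG rootkit used to cloak its files

_real_listdir = os.listdir  # keep a reference to the genuine call


def hooked_listdir(path="."):
    """A forged directory listing: the real results, minus anything the rootkit wants hidden."""
    return [name for name in _real_listdir(path) if not name.startswith(HIDE_PREFIX)]


# Anything that asks the "operating system" for a listing through the hook
# never sees the cloaked files – the very property malware wants.
os.listdir = hooked_listdir
```

Any other malware that named its files with the same prefix got cloaked for free – which is exactly what made the rootkit a security hole and not just bad DRM.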

In late October, Mark researched this, and prepped a blog post outlining what was going on. We talked at length, as he was concerned that his debugging and disclosure of the rootkit might violate the DMCA, a piece of legislation put in place to protect copyrights and prevent reverse engineering of DRM software, among other things. So in essence, to stop exactly what Mark had done. I read over the DMCA several times during the last week of October, and although I’m not a lawyer, I was pretty satisfied that Mark’s actions fit smack dab within the part of the DMCA that was placed there to enable security professionals to diagnose and report security holes. The rootkit that Sony BMG had used to “protect” their CD media had several issues in it, and was indeed creating security holes that were endangering the integrity of Windows systems where the software had unwittingly been installed.

Mark decided to go ahead and publish the blog post announcing the rootkit on October 31, 2005 – Halloween. Within 48 hours, Mark was being pulled in on television interviews, quoted in major press publications, and repeatedly a headline on Slashdot, the open-source-focused news site, over the next several months – an interesting occurrence for someone who had spent almost his entire career in the Windows realm.

The Sony BMG disclosure was very important – but it almost never happened. Exceptions that allow reverse engineering are great. But security isn’t the only kind of integrity that researchers need to diagnose today. I don’t think we should tolerate laws that keep researchers from ensuring our systems are secure, and that they operate the way that we’ve been told they do.


07
Sep 15

How I learned to stop worrying and love the cloud

For years, companies have regularly asked me for my opinion on using cloud-based services. For the longest time, my response was along the lines of, “You should investigate what types of services might fit best for your business,” followed by a selection of caveats reminding them about privacy, risk, and compliance, since their information will be stored off-premises.

But I’ve decided to change my tune.

Beginning now, I’m going to simply start telling them to use cloud where it makes sense, but use the same procedures for privacy, risk, and compliance that they use on-premises.

See what I did there?

The problem is that we’ve treated hosted services (now rebranded as “cloud”) as something distinctly different from the way we do things on-premises. But… is it really? Should it be?

It’s hard to find a company today that doesn’t do some form of outsourcing. You’re trusting people who don’t work “for” you with some of your company’s key secrets. Every company I can think of does it. If you don’t want to trust a contract-based employee with your secrets, you don’t give them access, right? Deny them access to your network, key servers, or file shares (or SharePoint servers<ahem/>). Protect documents with things like Azure Rights Management. Encrypt data that needs to be protected.
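On that last point, the bar for “encrypt data that needs to be protected” is lower than many orgs think. As a minimal sketch (using the third-party Python cryptography package; file names here are illustrative, not a prescription), encrypting a document before it ever leaves your premises looks like this:

```python
from cryptography.fernet import Fernet

# Generate a symmetric key once and store it somewhere you control
# (an on-premises key vault, an HSM – anywhere but next to the data).
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt the document locally, before it is synced or uploaded anywhere.
with open("quarterly-forecast.xlsx", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("quarterly-forecast.xlsx.enc", "wb") as f:
    f.write(ciphertext)

# Only holders of the key – not the hosting provider – can recover the plaintext.
plaintext = fernet.decrypt(ciphertext)
```

Services like Azure Rights Management wrap the same idea in key management and policy, which is where most of the real work (and the real risk) lives.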

These are all things that you should have been doing anyway, even before you might have had any of your data or operations off-premises. If you had contract/contingent staff, those systems should have been properly secured in order to prevent <ahem/> an overzealous admin (see link above) from liberating information that they shouldn’t really have access to. Microsoft and Amazon (and others, to a lesser extent at this point) have been putting a lot of effort into securing your data while it lives within their clouds, and that’s going to continue over the next 2-5 years, to the point where, honestly, with a little investment in tech and process – and likely a handful of new subscription services that you won’t be able to leave – you’ll be able to secure data better than you can in your infrastructure today.

Yeah. I said it.

A lot of orgs talk big about how awesome their on-premises infrastructure is, and how uncompromisingly secure it is. And that’s nice. Some of them are right. Many of them aren’t. In the end, in addition to systems and employees you can name, you’re probably relying on a human element of contractors, vendors, part-time employees, “air-gapped” systems that really aren’t, sketchy apps that should have been retired years ago, and security software that promised the world, but that can’t really even secure a Tupperware container. We assume that cloud is something distinctly different from on-premises outsourcing of labor. But it isn’t really that different. The only difference is that today, unsecured (or unsecurable) data may have to leave your premises. That will improve over time, if you work at it. The perimeter, as it has for smartphones since 2007, will shift to let you secure data flow between systems you own, and on systems you own – whether those live on physical hardware in your datacenter, or in AWS or Azure. But it means recognizing this perimeter shift – and working to reinforce that new perimeter in terms of security and auditing.

Today, we tend to fear cloud because it is foreign. It’s not what we’re all used to. Yet. Within the next 10 years, that will change. It probably already has changed within the periphery (aka the rogue edges) of your organization today. Current technology lets users deploy “personal cloud” tools – business intelligence, synchronization, desktop access, and more – without letting you have veto power, unless you own and audit the entirety of your network (and any telecom access), and have admin access to all PCs. And you don’t.

The future involves IT being proactive about providing cloud access ahead of rogue users: deciding where to be more liberal about access to tools than orgs are used to, and being able to secure perimeters that you may not even be aware of. Otherwise, you get to be dragged along on the choose-your-own-adventure that your employees decide on for you.


24
Dec 14

Mobile devices or cloud as a solution to the enterprise security pandemic? Half right.

This is a response to Steven Sinofsky’s blog post, “Why Sony’s Breach Matters”. While I agree with parts of his thesis – the parts about layers of complexity leaving us where we are, and secured, legacy-free mobile OS’s helping alleviate this on the client side, I’m not sure I agree with his points about the cloud being a path forward – at least in any near term, or to the degree of precision he alludes to.

The bad news is that the Sony breach is not unique. Not by a long shot. It’s not the limit. It’s really the beginning. It’s the shot across the bow for companies that will let them see one example of just how bad this can get. Of course, they should’ve been paying attention to Target, Home Depot, Michaels, and more by this point already.

Instead, the Sony breach is emblematic of the security breaking point that has become increasingly visible over the last 2 years. It would be the limit if the industry turned a corner tomorrow and treated security like its first objective. But it won’t. I believe what I’ve said before – the poor security practices demonstrated by Sony aren’t unique. They’re typical of how too many organizations treat security. Instead of trying to secure systems, they grease the skids just well enough to meet their compliance bar, turning a blind eye to security that’s just “too hard”.


While the FBI has been making the Sony attack sound rather unique, the only unique aspect of this one, IMHO, is the scale of success it appears to have achieved. This same attack could be replayed pretty easily: a dab of social engineering, a selection of well-chosen exploits (they’re not that hard to get), and – as appears to have happened here – Windows’ own management infrastructure to distribute it.


I don’t necessarily see cloud computing yet as the holy grail that you do. Mobile? Perhaps.


The personal examples you discussed were all interesting, but indeed were indicative of more of a duct-tape approach, similar to what we had to do with some things in Windows XP during the security push that led up to XPSP2, after XPSP1 failed to fill the holes in the hull of the ship. A lot of really key efforts, like running as non-admin, just couldn’t be made to work with XP in a short timeframe – they had to be pushed to Vista (where they honestly still hurt users) or Windows 7, where the effort could be taken to really make them work for users from the ground up. But again, much of this was building foundations around the Win32 legacy, which was getting a bit sickly in a world with ubiquitous networking and everyone running as admin.


I completely agree as well that we’re long past adding speed bumps. It is immediately apparent, based upon almost every breach I can recall over the past year, that management complexity as a security vector played a significant part in each of them.

If you can’t manage it, you can’t secure it. No matter how many compliance regs the government or your industry throws at you. It’s quite the Gordian knot. Fun stuff.


I think we also completely agree that the surface area exposed by today’s systems is to blame for where we are today. See my recent Twitter posts. As I mentioned, “systems inherently grow to become so complex nobody understands them” – whether you’re talking about programmers, PMs, sysadmins, or compliance auditors.


I’m inclined to agree with your point about social and the vulnerabilities of layer 8, and yet we also do live in a world where most adults know not to stick a fork into an AC outlet. (Children are another matter.)

Technology needs to be more resilient to user-error or malignant exploitation, until we can actually solve the dancing pigs problem where it begins. Mobile solves part of that problem.


When Microsoft was building UAC during Longhorn -> Vista, Mark Russinovich and I were both frustrated that Microsoft wasn’t really doing anything with Vista to nail security down, and so we built a whitelisting app at Winternals to do this for Windows moving forward. (Unfortunately, Protection Manager was crushed for parts after our acquisition, and AppLocker was/is too cumbersome to accomplish this for Win32.) Outside of the long shot of ditching the Intel processor architecture completely, whitelisting is the only thing that can save Win32 from the security mayhem it is experiencing at the moment.


I do agree that moving to hosted IaaS really does nothing for an organization, except perhaps drive them to reduce costs in a way that on-premises hosting can’t.

But I guess if there was one statement in particular that I would call out in your blog as something I heartily disagree with, it’s this part:


“Everyone has moved up the stack and as a result the surface area dramatically reduced and complexity removed. It is also a reality that the cloud companies are going to be security first in terms of everything they do and in their ability to hire and maintain the most sophisticated cyber security groups. With these companies, security is an existential quality of the whole company and that is felt by every single person in the entire company.”


This is a wonderful goal, and it’ll be great for startups that have no legacy codebase (and don’t bring in hundreds of open-source or shared libraries that none of their dev team understands down to the bottom of the stack). But most existing companies can’t do what they should and cut back the overgrowth in their systems.

I believe pretty firmly that what I’ve seen in the industry over the last decade since I left Microsoft is also, unfortunately, the norm – that management, as demonstrated by Sony’s leadership in that interview, will all too often let costs win over security.


For organizations that can redesign for a PaaS world, the promise offered by Azure was indeed what you’ve suggested – that designing new services and new applications for a Web-first world can lead to much more well-designed, refined, manageable, and securable applications and systems overall. But the problem is that that model only works well for new applications – not applications that stack refinement over legacy goo that nobody understands. So really, clean room apps only.

The slow uptake of Azure’s PaaS offerings unfortunately demonstrates that this is the exception, and an ideal, not necessarily anything that we can expect to see become the norm in the near future.


Also, while Web developers may not be integrating random bits of executable code into their applications, the amount of code reuse across the Web threatens to do the same, although the security perimeter is winnowed down to the browser and PII shared within it. Web developers can and do grab shared .js libraries off the Web in a heartbeat.

Do they understand the perimeter of these files? Absolutely not. No way.

Are the risks here as big as those posed by an unsecured Win32 perimeter? Absolutely not – but I wouldn’t trivialize them either.

There are no more OS hooks, but I’m terrified about how JS is evolving to mimic many of the worst behaviors that Win32 picked up over the years. The surface has changed, as you said – but the risks – loss of personal information, loss of data, phishing, DDoS – are so strikingly similar, especially as we move to a “thicker”, more app-centric Web.


Overall, I think we are in for some changes, and I agree with what I believe you’ve said both in your blog and on Twitter, that modern mobile OS’s with a perimeter designed in them are the only safe path forward. The path towards a secure Web application perimeter seems less clear, far less immediate, and perhaps less explicit than your post seemed to allude to.


There is much that organizations can learn from the Sony breach.


But will they?



15
Dec 14

Who shot Sony?

I’m curious about the identity of the group that broke into Sony, apparently caused massive damage, and compromised a considerable amount of information that belongs to the company.

For some reason, journalists aren’t focusing on this, however. Probably because it doesn’t generate the clicks and ad views that publishing embarrassing emails, salary disclosures, and documented poor security practices do. Instead, they’re primarily focusing on revealing Sony’s confidential information, conveniently provided in multiple, semi-regular doc dumps by the party behind the breach.

Sony’s lawyers recently sent several publications a cease & desist letter, to get reporters to stop publishing the leaked information, since Sony “does not consent to your possession, review, copying, dissemination, publication, uploading, downloading or making any use” of the documents. There’s been quite a stir that in doing this, Sony is likely invoking the Streisand effect, and it will probably not only backfire, but result in more, not less, coverage of the information.

In information available long before the breach, Sony’s executive director of information security was quoted as saying, “it’s a valid business decision to accept the risk” of a security breach. “I will not invest $10 million to avoid a possible $1 million loss.” Given that sort of security posture, it’s not surprising that even though he was able to talk an auditor out of dinging them for SOX compliance, Sony organizations have faced not one, but two rather devastating hacks in recent years.

So it would seem that Sony’s management is likely to blame for leaving doors open by reinforcing poor security practices and actually fighting back against well-intentioned compliance efforts (thus reinforcing what I’ve long said, “Compliance and security can go hand in hand. But security is never achieved by stamping a system as ‘compliant’.”)

It’s also obvious that the group that hacked into Sony (perhaps with the assistance of current or former employees), compromised confidential information, and destroyed systems deserves a huge amount of blame in terms of the negative effects Sony is currently experiencing. Again, if Sony had proper security in place (and execs more interested in security than rubber-stamping systems), perhaps these people wouldn’t have stood a chance. In terms of media coverage, this is what I’d like to know more about. Who actually broke in?

However, years from now, when people are looking back at the broad damage caused by the breach and the leaked information, I believe it’ll be important to really note who caused the most damage to Sony over the long run. Yes, the people who broke in started it all. But the harm being done by journalists taking advantage of the document dumps is, and will continue to be, a significant source of damage to Sony. For myself, from now on, I’m only linking to and reposting articles that appear to use information that has not been sourced from the breach.

I’m no longer feeding the clickbait machine that enthusiastically awaits the next doc drop of Sony confidential information, like a vulture ready to pick at them while they’re weak, and expose the inner dysfunction of an organization (not something unique to Sony – every org has some level of dysfunction).

On Twitter this morning, I pondered whether the NYT would be so enthusiastic and supportive about the journalistic value of confidential info that was regularly being pushed out by hackers if they themselves had been breached, and it was their secrets, their dysfunction, their personal information, their source lists that were being taken advantage of to generate ad views.

For some reason, I have to think the answer is no. So why are journalists so enthusiastic about kicking Sony while they’re down after a breach?


12
Oct 14

It is past time to stop the rash of retail credit card “breaches”

When you go shopping at Home Depot or Lowe’s, there are often tall ladders, saws, key cutters, and forklifts around the shopping floor. As a general rule, most of these tools aren’t for your use at all. You’re supposed to call over an employee if you need any of these tools to be used. Why? Because of risk and liability, of course. You aren’t trained to use these tools, and the insurance that the company holds would never cover their liability if you were injured or died while operating these tools.

Over the past year, we have seen a colossal failure of American retail and restaurant establishments to adequately secure their point-of-sale (POS) systems. If you’ve somehow missed them all, Brian Krebs’ coverage serves as a good list of many of the major events.

As I’ve watched company after company fall prey to seemingly the same modus operandi as every company before, it has frustrated me more and more. When I wrote You have a management problem, my intention was to highlight the fact that there seems to be a fundamental disconnect in how organizations connect risk to the security of key applications (and systems). But I think it’s actually worse than that.

If you’re a board member or CEO of a company in the US, and the CIO and CSO of the organizations you manage haven’t asked their staff the following question yet, there’s something fundamentally wrong.

That question every C-level in the US should be asking? “What happened at Target, Michael’s, P.F. Chang’s, etc… what have we done to ensure that our POS systems are adequately defended from this sort of easy exploitation?”

This is the most important question that any CIO and CSO in this country should be asking this year. They should be regularly asking this question, reviewing the threat models from within their organization created by staff to answer it, and performing the work necessary to validate they have adequately secured their POS infrastructure. This should not be a one time thing. It should be how the organization regularly operates.

My worry is that within too many orgs, people are either a) not asking this question because they don’t know to ask it, b) dangerously assuming that they are secure, or c) so busy that nobody who knows better feels empowered to pull the emergency brake and bring the train to a standstill to truly examine the comprehensive security footing of their systems.

Don’t listen to people if they just reply by telling you that the systems are secure because, “We’re PCI compliant.” They’re ducking the responsibility of securing these systems through the often translucent facade of compliance.

Compliance and security can go hand in hand. But security is never achieved by stamping a system as “compliant”.

Security is achieved by understanding your entire security posture, through threat modeling. For any retailer, restaurateur, or hospitality organization in the US, this means you need to understand how you’re protecting the most valuable piece of information that your customers will be sharing with you, their ridiculously insecure 16-digit, magnetically encoded credit card/debit card number. Not their name. Not their email address. Their card number.

While it does take time to secure systems, and some of these exploits that have taken place over 2014 (such as Home Depot) may have even begun before Target discovered and publicized the attack on their systems, we are well past the point where any organization in the US should just be saying, “That was <insert already exploited retailer name>, we have a much more secure infrastructure.” If you’ve got a threat model that proves that, great. But what we’re seeing demonstrated time and again as these “breaches” are announced is that organizations that thought they were secure, were not actually secure.

During 2002, when I was in the Windows organization, we had, as some say, a “come to Jesus” moment. I don’t mean that expression to offend anyone. But there are few expressions that can adequately capture the fundamental shift that happened. We were all excitedly working on several upcoming versions of Windows, having just sort of battened down some of the hatches that had popped open in XP’s original security perimeter, with XPSP1.

But due to several major vulnerabilities and exploits in a row, we were ordered (by Bill) to stop engineering completely, and for two months, all we were allowed to work on were tasks related to the Secure Windows Initiative and making Windows more secure, from the bottom up, by threat modeling the entire attack surface of the operating system. It cost Microsoft an immense amount of money and time. But had we not done so, customers would have cost the company far more over time as they gave up on the operating system due to insecurity at the OS level. It was an exercise in investing in proactive security in order to offset future risk – whether to Microsoft, to our customers, or to our customers’ customers.

I realize that IT budgets are thin today. I realize that organizations face more pressure to do more with less than ever before. But short of laws holding executives financially responsible for losses incurred on their watch, I’m not sure what will stop the ongoing saga of these largely inexcusable “breaches” we keep seeing. If your organization doesn’t have the resources to secure the technology you have, either hire the staff that can or stop using technology. I’m not kidding. Grab the knucklebusters and some carbonless paper and start taking credit cards like it’s the 1980s again.

The other day, someone on Twitter noted that the recent spate of attacks shouldn’t really be called “breaches”, but instead should be called skimming attacks. Most of these attacks have worked by using RAM scrapers. This approach, first really seen in 2009, hit the big time in 2013. RAM scrapers work through the use of a Windows executable (which, <ahem>, isn’t supposed to be there) that scans memory (RAM) on POS systems for track data from US cards as it is read off of magnetically swiped credit cards. This laughably simple stunt is really the key to effectively all of the breaches (which I will from here on out refer to as skimming attacks). A piece of software, which shouldn’t ever be on those systems, let alone be able to run on those systems, is freely scanning memory for data which, arguably, should be safe there, even though it is not encrypted.
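To make the “laughably simple” part concrete, here’s a toy Python sketch – written from the publicly described behavior of these scrapers, not taken from any actual malware – of how easily a process that is allowed to run and read memory can pull plausible Track 2 records out of a raw buffer:

```python
import re

# Simplified Track 2 layout: ';' + PAN (13-19 digits) + '=' + expiry/service/discretionary digits + '?'
TRACK2 = re.compile(rb";(\d{13,19})=\d{7,}\?")


def luhn_ok(pan: bytes) -> bool:
    """Standard Luhn check, used to weed out random digit runs that merely look like a card number."""
    total = 0
    for i, ch in enumerate(reversed(pan.decode())):
        digit = int(ch)
        if i % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0


def find_track2(buffer: bytes):
    """Yield (offset, PAN) for every plausible Track 2 record found in a raw memory buffer."""
    for match in TRACK2.finditer(buffer):
        if luhn_ok(match.group(1)):
            yield match.start(), match.group(1)


# Demo against a fake buffer containing the well-known 4111... Visa test number.
dump = b"\x00junk;4111111111111111=2512101000000000?junk\x00"
for offset, pan in find_track2(dump):
    print(f"track-like data at offset {offset}, PAN ending in {pan[-4:].decode()}")
```

The point isn’t the regex. The point is that once an unauthorized executable is allowed to run and read that memory, unencrypted track data is already as good as gone.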

But here we are. With these RAM scrapers violating law #2 of the 10 Immutable Laws of Security, these POS systems are obviously not secured as well as Microsoft, the POS manufacturer, or the VAR that installed them would like them to be – and obviously not as well as everyone, including the retailer, assumed they were. Most likely, these RAM scrapers are usually custom crafted enough to evade detection by (questionably useful) antivirus software. More importantly, many indications were that in many cases, these systems were apparently certified as PCI-DSS compliant in the exact same scenario in which they were later compromised. This indicates either a fundamental flaw in the compliance definition, tools, and/or auditor. It also indicates some fundamental holes in how these systems are presently defended against exploitation.

As someone who helped ship Windows XP (and contributed a tiny bit to Embedded, which was a sister team to ours), it makes me sad to see these skimming attacks happen. As someone who helped build two application whitelisting products, it makes me feel even worse, because… they didn’t need to happen.

Windows XP Embedded exits support in January of 2016. It’s not dead, and can be secured properly (but organizations should absolutely be down the road of planning what they will replace XPE with). Both Windows and Linux, in embedded POS devices, suffer the same flaw: platform ubiquity. I can write a piece of malware that’ll run on my Windows desktop (or on a Linux system), and it will run perfectly well on a POS device built on the same platform (if it isn’t secured properly).

The bad guys always take advantage of the broadest, weakest link. It’s the reason why Adobe Flash, Acrobat, and Java are the points they go after on Windows and OS X. The OSs are hardened enough up the stack that these unmanageable runtimes become the hole that exploitation shellcode often pole-vaults through.

In many of these retail POS skimming attacks, remote maintenance software (to access a Windows desktop remotely), often secured with a poor password, is the means being used to get code onto these systems. This scenario and exploit vector isn’t unique to retail, either. I guarantee you there are similar easy opportunities for exploit in critical infrastructure, in the US and beyond.

There are so many levels of wrong here. To start with, these systems:

  1. Shouldn’t have remote access software on them.
  2. Shouldn’t be able to run any arbitrary binary that is put on them.

These systems shouldn’t have any remote access software on them at all. If they must have it, that software should implement physical, not password-based, authentication. These systems should be sealed, single purpose, and have AppLocker or third-party software to ensure that only the Windows (or Linux, as appropriate) applications, drivers, and services that are explicitly authorized to run on them can do so. If organizations cannot invest in the technology to properly secure these systems, or do not have the skills to do so, they should either hire staff skilled in securing them, cease using PC-based technology and start using legacy technology, or examine using managed iOS or Windows RT-based devices that can be more readily locked down to run only approved applications.
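The core of application whitelisting is conceptually tiny – default-deny, with an explicit allow list. Here’s a minimal Python sketch of the idea (the hash value is a placeholder; AppLocker, and products like the one we built, express the same concept with publisher, path, and hash rules plus enforcement far deeper in the OS):

```python
import hashlib
from pathlib import Path

# Hypothetical allow list: SHA-256 hashes of the only binaries this terminal should ever run
# (the POS application, its updater – and nothing else).
ALLOWED_SHA256 = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder hash
}


def sha256_of(path: Path) -> str:
    """Hash a binary in chunks so large executables don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def may_execute(path: Path) -> bool:
    """Default-deny: a binary runs only if its hash is explicitly on the allow list."""
    return sha256_of(path) in ALLOWED_SHA256
```

A RAM scraper dropped through a remote-access session fails that check by definition – it was never put on the list.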


25
Jul 14

You have a management problem.

I have three questions for you to start off this post. I don’t care if you’re “in the security field” or not. In fact, I’m more interested in your answers if you aren’t tasked with security, privacy, compliance, or risk management as a part of your defined work role.

The questions:

  1. If I asked you to show me threat models for your major line of business applications, could you?
  2. If I asked you to define the risks (all of them) within your business, could you?
  3. If I asked you to make a decision about what kind of risks are acceptable for your business to ignore, could you?

In most businesses, the answer to all three is probably no, especially the further you get away from your security or IT teams. Unfortunately, I also believe the answer is pretty firmly no as you roll up the management chain of your organization into the C-suite.

Unless your organization consists of just you or a handful of users, nobody in your organization understands all of the systems and applications in use across the org. That’s a huge potential problem.

The other day I was talking with three of our customers, and the conversation started around software licensing, then spun into software asset management, auditing, and finally to penetration testing and social engineering.

At first glance, that conversation thread may seem diverse and disconnected. But they are so intertwined. Every one of those topics involves risk. Countering risk, in turn, requires adequate management.

By management, I mean two things:

  1. Management of all the components involved (people, process, and technology – to borrow a line from a friend)
  2. Involvement of management. From your CEO or top-level leadership, down.

You certainly can’t expect your C-level executives to intimately know every application or piece of technology within the organization. That’s probably not tractable. What is crucial is that there is accountability down the chain, and trust up the chain. If an employee responsible for security or compliance says there’s a problem that needs to be immediately addressed, they need to be trusted. They can’t run their concern up the flagpole only to have it dismissed by someone who is incapable of adequately assessing the technical or legal (or both) implications of hedging on addressing it, and who cannot truthfully attest to the financial risk of fixing the issue or doing nothing.

  • If you hire a security team and you don’t listen to them, what’s the point of hiring them? Just run naked through the woods.
  • If you hire a compliance team (or auditor) and don’t listen to them, what’s the point of hiring them? Just be willing to bring in an outside rubber-stamp auditor, and do the bare minimum.
  • If you have a team that is responsible for software asset management, and you don’t empower them to adequately (preemptively) assess your licensing posture, what’s the point of hiring them? Just wait and see if you get audited by a vendor or two, and accept the financial pit.

If you’re not going to empower and listen to people in your organization with risk management skills, don’t hire them. If you’re going to hire them, listen to them, and work preemptively to manage risk. If you’re going to try and truly mitigate risk across your business, be willing to preemptively invest in people, processes, and technology (not bureaucracy!) to discover and address risk before it becomes damage.

So much of the bullshit that we see happening in terms of unaddressed security vulnerabilities, breaches (often related to vulns), social engineering and (spear)phishing, and just plain bad software asset management has everything to do with professionals who want to do the right thing not being empowered to truly find, manage, and address risk throughout the enterprise, and a lack of risk education up and down the org. Organizations shouldn’t play chicken with risk and be happy with saving a fraction of money up front. The cost can well become exponentially larger if the risk is ignored.


17
Jun 14

Is the Web really free?

When was the last time you paid to read a piece of content on the Web?

Most likely, it’s been a while. The users of the Web have become used to the idea that Web content is (more or less) free. And outside of sites that put paywalls up, that indeed appears to be the case.

But is the Web really free?

I’ve had lots of conversations lately about personal privacy, cookies, tracking, and “getting scroogled”. Some with technical colleagues, some with non-technical friends. The common thread is that most people (that world full of normal people, not the world that many of my technical readers likely live in) have no idea what sort of information they give up when they use the Web. They have no idea what kind of personal information they’re sharing when they click <accept> on that new mobile app that wants to upload their (Exif geo-encoded) photos, that wants to track their position, or wants to harmlessly upload their phone’s address book to help “make their app experience better”.
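If “Exif geo-encoded” sounds abstract: the GPS coordinates a phone stamps into each photo can be pulled out with a few lines of code. Here’s a rough sketch using the Python Pillow imaging library (the file name is hypothetical):

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

GPSINFO_TAG = 34853  # the standard Exif tag ID for the GPS information block


def gps_from_photo(path):
    """Return the GPS tags (latitude, longitude, altitude, timestamp...) embedded in a photo, if any."""
    exif = Image.open(path)._getexif() or {}
    gps_ifd = exif.get(GPSINFO_TAG)
    if not gps_ifd:
        return None
    # Map numeric tag IDs to readable names like GPSLatitude / GPSLongitude.
    return {GPSTAGS.get(tag_id, tag_id): value for tag_id, value in gps_ifd.items()}


# e.g. gps_from_photo("IMG_1234.jpg") can return coordinates precise enough
# to identify the house the photo was taken in.
```

That’s what every app you grant photo access to can read, before it ever asks you anything else.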

My day job involves understanding technology at a pretty deep level and being pretty familiar with licensing terms, and previous lives have left me deeply immersed in the worlds of both privacy and security. As a result, it terrifies me to see the crap that typical users will click past in a licensing agreement to get to the dancing pigs. But Pavlov proved this all long ago, and the dancing pigs problem has highlighted this for years, to no avail. Click-through software licenses exist primarily as a legal CYA, and terms of service agreements full of legalese gibberish could just as well say that people have to eat a sock if they agree to the terms – they’ll still agree to them (because they won’t read them).

On Twitter, the account for Reputation.com posted the following:

A few days later, they posted this:

I responded to the first post with the statement that accurate search results have intrinsic value to users, but most users can’t actually quantify a loss of privacy. What did I mean by that? I mean that most normal people will tell you they value their privacy if you ask them, but if you take away the free niblets all over the Web that they get for giving up their privacy little by little, they’ll actually renege on how important privacy really is.

Imagine the response if you told a friend, family member, or colleague that you had a report/blog/study you were working on, and asked them, “Hey, I’m going to shoulder-surf you for a day and write down which Websites you visit, how often and how long you visit them, and who you send email to, okay?” In most cases, they’d tell you no, or tell you that you’re being weird.

Then ask them how much you’d need to pay them in order for them to let you shoulder-surf. Now they’ll be creeped out.

Finally, tell them you installed software on their computer last week, so you’ve already got the data you need – is it okay if you use that for your report? Now they’re probably going to completely overreact, and maybe even get angry (so tell them you were kidding).

More than two years ago, I discussed why do-not-track would stall out and die, and in fact, it has. This was completely predictable, and I would have been completely shocked if this hadn’t happened. It’s because there is one thing that makes the Web work at all: the cycle of micropayments of personally identifiable information (PII) that, in appropriate quantities, allows advertisers (and advertising companies) to tune their advertising. In short, everything you do is up for grabs on the Web to help profile you (and ideally, sell you something). Some might argue that you searching for “schnauzer sweaters” isn’t PII. The NSA would beg to differ. Metadata is just as valuable as the data itself, if not more so, for uniquely identifying an individual.

When Facebook tweaked privacy settings to begin “liberating” personal information, it was all about tuning advertising. When we search using Google (or Bing, or Yahoo), we’re explicitly profiling ourselves for advertisers. The free Web as we know it is sort of a mirage. The content appears free, but isn’t. Back in the late 1990s, the idea of micropayments was thrown about, and has in my opinion come and gone. But it is far from dead. It just never arrived in the form that people expected. Early on, the idea was that individuals might pay a dollar here for a news story, a few dollars there for a video, a penny to send an email, etc. Personally, I never saw that idea actually taking off, primarily because the e-payment infrastructure wasn’t really there, and partially because, well, consumers are cheap and won’t pay for almost anything.

In 1997, Nathan Myhrvold, Microsoft’s CTO, had a different take. Nathan said, “Nobody gets a vig on content on the Internet today… The question is whether this will remain true.”

Indeed, putting aside his patent endeavors, Nathan’s reading of the tea leaves at that time was very telling. My contention is that while users indeed won’t pay cash (payments or micropayments) for the activities they perform on the Web, they’re more than willing to pay for their use of the Web with picopayments of personal information.

If you were to ask a non-technical user how much they would expect to be paid for an advertiser to know their home address, how many children they have, or what the ages of their children are, or that they suffer from psoriasis, most people would be pretty uncomfortable (even discounting the psoriasis). People like to assume, incorrectly, that their privacy is theirs, and the little lock icon on their browser protects all of the niblets of data that matter. While it conceptually does protect most of the really high financial value parts of an individual’s life (your bank account, your credit card numbers, and social security numbers), it doesn’t stop the numerous entities across the Web from profiling you. Countless crumbs you leave around the Web do allow you to be identified, and though they may not expose your personal, financial privacy, they do expose your personal privacy for advertisers to peruse. It’s easy enough for Facebook (through the ubiquitous Like button) or Google (through search, Analytics, and AdSense) to know your gender, age, marital/parental status, any medical or social issues you’re having, what political party you favor, and what you were looking at on that one site that you almost placed an order on, but wound up abandoning.

If you could truly visualize all of the personal attributes you’ve silently shared with the various ad players through your use of the Web, you’d probably be quite uncomfortable with the resulting diagram. Luckily for advertisers, you can’t see it, and you can’t really undo it even if you could understand it all. Sure, there are ways to obfuscate it, or you could stay off the Web entirely. For most people, that’s not a tradeoff they’re willing to make.

The problem here is that human beings, as a general rule, stink at assessing intangible risk, and even when it is demonstrated to us in no uncertain terms, we do little to rectify it. Free search engines that value your privacy exist. Why don’t people switch? Conditioning to Google and the expected search result quality, and sheer laziness (most likely some combination of the two). Why didn’t people flock from Facebook to Diaspora or other alternatives when Facebook screwed with privacy options? Laziness, convenience, and most likely, the presence of a perceived valuable network of connections.

It’s one thing to look over a cliff and sense danger. But as the dancing pigs phenomenon (or the behavior of most adolescents/young adults, and some adults on Facebook) demonstrates, a little lost privacy here and a little lost privacy there is like the metaphoric frog in a pot. Over time it may not feel like it’s gotten warmer to you. But little by little, we’ve all sold our privacy away to keep the Web “free”.


17
Jan 14

Running Windows XP after April? A couple of suggestions for you

Yesterday on Twitter, I said the following:

Suggestion… If you have an XP system that you ABSOLUTELY must run after April, I’d remove all JREs, as well as Acrobat Reader and Flash.

This was inspired by an inquiry from a customer about Windows XP support that arrived earlier in the day.

As a result of that tweet, three things have happened.

  1. Many people replied “unplug it from the network!” 1
  2. Several people asked me why I suggested doing these steps.
  3. I’ve begun working on a more comprehensive set of recommendations, to be available shortly. 2

First off… Yes, it’d be ideal if we could just retire all of these XP systems on a dime. But that’s not going to happen. If it was easy (or free), businesses and consumers wouldn’t have waited until the last second to retire these systems. But there’s a reason why they haven’t. Medical/dental practices have practice management or other proprietary software that isn’t tested/supported on anything newer, custom point of sale software from vendors that disappeared, were acquired, or simply never brought that version of their software… There’s a multitude of reasons, and these systems aren’t all going to disappear or be shut off by April. It’s not going to happen. It’s unfortunate, but there are a lot of Windows XP systems that will be used for many years still, in many places, in ways we’d all rather not see. There’s no silver bullet for that. Hence, my off-the-cuff recommendations over Twitter.

Second, there’s a reason why I called out these three pieces of software. If you aren’t familiar with the history, I’d encourage you to go Bing (or Google, or…) the three following searches:

  1. zero day java vulnerability
  2. zero day Flash vulnerability
  3. zero day Acrobat vulnerability

Now if you looked carefully, each one of those, at least on Bing, returned well over 1M results, many (most?) of them from the last three years. In telling me that these XP systems should be disconnected from the Web, many people missed the point I was making.

PCs don’t get infected from the inside out. They get infected from the outside in. When Microsoft had the “Security Push” over ten years ago that forced us to reconsider how we designed, built and tested software, it involved stopping where we were, and completely rethinking how Windows was built. Threat models replaced ridiculous statements like, “We have the very best xx encryption, so we’re ‘secure’”. While Windows XP may be more porous than Vista and later are (because the company was able to implement foundational security even more deeply, and engineer protections deeply into IE, for example, as well as implement primordial UAC), Windows XPSP2 and later are far less of a threat vector than XPSP1 and earlier were. So if you’re a bad guy and you want to get bad things to happen on a PC today, who do you go after? It isn’t Windows binaries themselves, or even IE. You go next for the application runtimes that are nearly as pervasive: Java, Flash, and Acrobat. Arguably, Acrobat may or may not be a runtime, depending on your POV. But the threat is still there, especially if you haven’t been maintaining these as they’ve been updated over the last few years.

As hard as Adobe and Oracle may try to keep these three patched, these three codebases have significant vulnerabilities that are found far too often. Those vulnerabilities, if not patched by vendors and updated by system owners incredibly quickly, become the primary vector of infecting both Windows and OS X systems by executing shellcode.

After April, Windows XP is expected to get no updates. Got that? NO UPDATES. NONE. Nada. Zippo. Zilch. You may still get antivirus updates from Microsoft and third parties, but at that point you honestly have a rotting wooden boat. I say this in the nicest way possible. I was on the team shipping Windows XP, and it saddens me to throw it under the bus, but I don’t think people get the threat here. Antivirus simply cannot protect you from every kind of attack. Windows XP and the versions of IE it runs (6-8) have still regularly received patches almost every month for the past several years. So Windows XP isn’t “war hardened”; it is brittle. After April, you won’t even get those patches trying to spackle over newly found vulnerabilities in the OS and IE. Instead, these will become exploit vectors ready to be hit by shellcode coming in off of the Internet (or even the local network) and turned into opportunistic infections.

Disclaimer: This is absolutely NOT a guarantee that systems won’t get infected, and you should NOT remove these or any piece of Microsoft or third-party software if a business-critical application actually depends on them or if you do not understand the dependencies of the applications in use on a particular PC or set of PCs! 

So what is a business or consumer to do? Jettison, baby. Jettison. If you can’t retire the entire Windows XP system, retire every single piece of software on that system that you can, beginning with the three I mentioned above. Those are key connection points of any system to the Web/Internet. Remove them and there is a good likelihood of lessening the infection vector. This is a recommendation to make jetsam of any software on those XP systems that you really don’t need. Think of it as not traveling to a country where a specific disease is breaking out until the threat has passed. In the same vein, I’d say blocking Web browsers and removing email clients come in a close second, since they’re such a great vector for social engineering-based infections today.
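If you’re not sure what’s actually installed on a given XP box, inventory it before you start jettisoning. Here’s a rough sketch using Python’s standard winreg module (assuming a Python build old enough to still run on XP; the watch list is just the suspects named above):

```python
import winreg

UNINSTALL_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"
WATCH_LIST = ("java", "flash", "acrobat", "reader")  # the usual suspects


def installed_software():
    """Yield the DisplayName of every entry under the per-machine Uninstall key."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL_KEY) as root:
        subkey_count = winreg.QueryInfoKey(root)[0]
        for i in range(subkey_count):
            try:
                with winreg.OpenKey(root, winreg.EnumKey(root, i)) as sub:
                    name, _ = winreg.QueryValueEx(sub, "DisplayName")
                    yield name
            except OSError:
                continue  # entry without a DisplayName; skip it


for name in installed_software():
    if any(word in name.lower() for word in WATCH_LIST):
        print("Consider removing:", name)
```

It won’t catch everything (per-user installs live under HKEY_CURRENT_USER), but it’s a faster starting point than clicking through Add or Remove Programs on every terminal.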

Finally, as I mentioned earlier, I am working on an even more comprehensive set of recommendations, to come in a report to be published for work in our next issue, which should be live on the Web during the last week of January. My first recommendation would of course be to, if at all possible, retire your Windows XP systems as soon as possible. But I hope that this set of recommendations, while absolutely not a guarantee, can help some people as they move away, or finally consider how to move away, from Windows XP.

Footnotes

  1. Or unplug the power, or blow it up with explosives, or…
  2. These recommendations will be included in the next issue of Update.

20
Dec 13

Security and Usability – Yes, you read that right.

I want you to think for a second about the key you use most. Whether it’s for your house, your apartment, your car, or your office, just think about it for a moment.

Now, this key you’re thinking of is going to have a few basic properties. It consists of metal, has a blade extending out of it that has grooves along one or both sides, and either a single set of teeth cut into the bottom, or two sets of identical teeth cut into both the top and bottom.

If it is a car key, it might be slightly different; as car theft has increased, car keys have gotten more complex, so you might be thinking about a car key that is just a wireless fob that unlocks and/or starts the car based on proximity, or it might be an inner-cut key as is common with many Asian and European cars today.

Aside from the description I just gave you, when was the last time you thought about that key? When did you actually last look at the ridges on it?

It’s been a while, hasn’t it? That’s because that key and the lock it works with provide the level of security you feel that you need to protect that place or car, yet they don’t get in your way, as long as the key and the lock are behaving properly.

Earlier this week, I was on a chat on Twitter, and we were discussing aspects of security as they relate to mobile devices. In particular, the question was asked, “Why do users elect to not put a pin/passcode/password on their mobile devices?” While I’ve mocked the idea of considering security and usability in the same sentence, let alone the same train of thought while developing technology, I was wrong. Yes, I said it. I was wrong. Truth be told, Apple’s Touch ID is what finally schooled me on it. Security and usability should be peers today.

When Apple shipped the iPhone 5s and added the Touch ID fingerprint sensor, it was derided by some as not secure enough, not well designed, not a 100% replacement for the passcode, or simply too easy to defeat. But Touch ID does what it needs to do. It works with the user’s existing passcode – which Apple wisely tries to coax users into setting up on iOS 7, regardless of whether they have a 5s or not – to make day-to-day use of the device easier while providing a modicum of security, and a better approach to securing the data, the device, and the credentials stored in it and in iCloud than most users had prior to their 5s.

That last part is important. When we shipped Windows XP, I like to think we tried to build security into it to begin with. But the reality is, security wasn’t pervasive. It took setting aside a lot of dedicated time (two solid months of security training, threat modeling, and standing down on new feature work) for the Windows Security Push. We had to completely shift our internal mindset to think about security from end to end. Unlike the way we had lived before, security wasn’t to be a checkbox, it wasn’t a developer saying, “I used the latest cryptographic APIs”, and it wasn’t something added on at the last minute.

Security is like yeast in bread. If you add it when you’re done, you simply don’t have bread – well, at least you don’t have leavened bread. So it took us shipping Windows XP SP2 – an OS update so big and so significant many people said it should have been called a new OS release – before we ever shipped a Windows release where security was baked in from the beginning of the project, across the entirety of the project.

When it comes to design, I’ve mentioned this video before, but I think two of Jonathan Ive’s quotes in it are really important to have in your mind here. Firstly:

“A lot of what we seem to be doing in a product like that (the iPhone) is getting design out of the way.”

and secondly:

“It’s really important in a product to have a sense of the hierarchy of what’s important and what’s not important by removing those things that are all vying for your attention.”

I believe that this model of thought is critical to have in mind when considering usability, and in particular where security runs smack dab into usability (or more often, un-usability). I’ve said for a long time that solutions like two-factor security won’t take off until they’re approachable by, and effectively invisible to, normal people. Heck, too much of the world didn’t ever set their VCR clocks for the better part of a decade because it was too hard, and it was a pain in the ass to do it again every time the power went out. You really don’t understand why they don’t set a good pin, let alone a good passcode, on their phone?

What I’m about to say isn’t meant to imply that usability isn’t important to many companies, including Microsoft, but I believe many companies are run, and many software, hardware, or technology projects are started, run, and finished, with usability still treated as just a checkbox. As security is today at Microsoft, usability should be embraced, taught, and rewarded across the organization.

One can imagine an alternate universe where a software project the world uses was stopped in its tracks for months, redesigned, and updated around the world because a user interface element was so poorly designed for mortals that they made a bad security decision. But this alternate universe is just that, an alternate universe. As you’re reading the above, it sounds wacky to you – but it shouldn’t! As technologists, it is our duty to build hardware, software, and devices where the experience, including the approach to security, works with the user, not against them. Any move that takes the status quo of “security that users self-select to opt into” and moves it forward a notch is a positive move. But any move here also has to just work. You can’t implement nerd porn like facial recognition if it doesn’t work all of the time or doesn’t provide an alternative for when it fails.

Projects that build innovative solutions where usability and security intersect should be rewarded by technologists. Sure, they should be critiqued and criticized, especially if designing in a usable approach really compromises the security fundamentals of the – ideally threat-modeled – implementation. But critics should also understand where their criticism falls down in light of the practical security choices most end users make in daily life.

Touch ID, with as much poking, prodding, questioning, and hacking as it received when it was announced, is a very good thing. It’s not perfect, and I’m sure it’ll get better in future iterations of the software and hardware, and perhaps as competitors come up with alternatives or better implementations, Apple will have to make it ever more reliable. But a solution that allows that bar to be moved forward, from a place where most users don’t elect to set a pin or passcode to a place where they do? That’s a net positive, in my book.

As Internet-borne exploits continue to grow in both intensity and severity, it is so critical that we all start taking the usability of security implementations by normal people seriously. If you make bad design decisions about the intersection where security and usability collide, your end users will find their own desire path through the mayhem, likely making the easiest, and not usually the best, security decisions.



11
Sep 13

Remember the Clipper chip?

I happened to bring up the Clipper chip in a conversation with a colleague today, where we were discussing the latest NSA-related news, communication privacy (and of course the Apple iPhone 5s).

Looking back at it now, it’s fascinating how much advice the past gives us today. I encourage you to read the words of Whitfield Diffie in his testimony to the US House of Representatives on May 11, 1993:

“I submit to you that the most valuable secret in the world is the secret of democracy; that technology and policy should go hand in hand in guarding that secret; that it must be protected by security in depth.”

Whitfield Diffie, House testimony, May 11, 1993