25 Jul 14

You have a management problem.

I have three questions for you to start off this post. I don’t care if you’re “in the security field” or not. In fact, I’m more interested in your answers if you aren’t tasked with security, privacy, compliance, or risk management as a part of your defined work role.

The questions:

  1. If I asked you to show me threat models for your major line of business applications, could you?
  2. If I asked you to define the risks (all of them) within your business, could you?
  3. If I asked you to make a decision about what kind of risks are acceptable for your business to ignore, could you?

In most businesses, the answer to all three is probably no, especially the further you get away from your security or IT teams. Unfortunately, I also believe the answer is pretty firmly no as you roll up the management chain of your organization into the C-suite.

Unless your organization consists of just you or a handful of people, nobody in your organization understands all of the systems and applications in use across the org. That’s a huge potential problem.

The other day I was talking with three of our customers, and the conversation started around software licensing, then spun into software asset management, auditing, and finally to penetration testing and social engineering.

At first glance, that conversation thread may seem diverse and disconnected. But those topics are deeply intertwined. Every one of them involves risk. Countering risk, in turn, requires adequate management.

By management, I mean two things:

  1. Management of all the components involved (people, process, and technology – to borrow a line from a friend)
  2. Involvement of management, from your CEO or top-level leadership on down.

You certainly can’t expect your C-level executives to intimately know every application or piece of technology within the organization. That’s probably not tractable. What is crucial is that there is accountability down the chain, and trust up the chain. If an employee responsible for security or compliance says there’s a problem that needs to be immediately addressed, they need to be trusted. They can’t run their concern up the flagpole only to have it waved off by someone who is incapable of adequately assessing the technical or legal (or both) implications of the issue, and who cannot truthfully weigh the financial risk of fixing it against the risk of doing nothing.

  • If you hire a security team and you don’t listen to them, what’s the point of hiring them? Just run naked through the woods.
  • If you hire a compliance team (or auditor) and don’t listen to them, what’s the point of hiring them? Just be willing to bring in an outside rubber-stamp auditor, and do the bare minimum.
  • If you have a team that is responsible for software asset management, and you don’t empower them to adequately (preemptively) assess your licensing posture, what’s the point of hiring them? Just wait and see if you get audited by a vendor or two, and accept the financial pit.

If you’re not going to empower and listen to people in your organization with risk management skills, don’t hire them. If you’re going to hire them, listen to them, and work preemptively to manage risk. If you’re going to try and truly mitigate risk across your business, be willing to preemptively invest in people, processes, and technology (not bureaucracy!) to discover and address risk before it becomes damage.

So much of the bullshit that we see happening in terms of unaddressed security vulnerabilities, breaches (often related to those vulns), social engineering and (spear)phishing, and just plain bad software asset management comes down to two things: professionals who want to do the right thing not being empowered to truly find, manage, and address risk throughout the enterprise, and a lack of risk education up and down the org. Organizations shouldn’t play chicken with risk just to save a fraction of the cost up front; that cost can easily become exponentially larger if the risk is ignored.


17 Jun 14

Is the Web really free?

When was the last time you paid to read a piece of content on the Web?

Most likely, it’s been a while. The users of the Web have become used to the idea that Web content is (more or less) free. And outside of sites that put paywalls up, that indeed appears to be the case.

But is the Web really free?

I’ve had lots of conversations lately about personal privacy, cookies, tracking, and “getting scroogled”. Some with technical colleagues, some with non-technical friends. The common thread is that most people (that world full of normal people, not the world that many of my technical readers likely live in) have no idea what sort of information they give up when they use the Web. They have no idea what kind of personal information they’re sharing when they click <accept> on that new mobile app that wants to upload their (Exif geo-tagged) photos, track their position, or “harmlessly” upload their phone’s address book to help “make their app experience better”.
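To make that concrete: a photo taken with location services enabled quietly carries GPS coordinates in its Exif metadata, and any app granted access to your photos can read them. Here’s a minimal sketch in Python, assuming the Pillow imaging library is installed; “photo.jpg” is a hypothetical geotagged snapshot:

    from PIL import Image
    from PIL.ExifTags import TAGS, GPSTAGS

    def gps_from_photo(path):
        # _getexif() returns the raw EXIF tag dictionary on JPEGs
        # (or None when the image carries no metadata at all).
        exif = Image.open(path)._getexif() or {}
        named = {TAGS.get(tag, tag): value for tag, value in exif.items()}
        gps = named.get("GPSInfo") or {}
        # Translate numeric GPS tag IDs into readable names such as
        # GPSLatitude, GPSLongitude, and GPSTimeStamp.
        return {GPSTAGS.get(tag, tag): value for tag, value in gps.items()}

    print(gps_from_photo("photo.jpg"))  # hypothetical geotagged snapshot

Run against a typical smartphone photo, that can return latitude, longitude, altitude, and a timestamp: a precise record of where and when you were standing when you tapped the shutter.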

My day job involves me understanding technology at a pretty deep level, being pretty familiar with licensing terms, and previous lives have made me deeply immersed in the world of both privacy and security. As a result, it terrifies me to see the crap that typical users will click past in a licensing agreement to get to the dancing pigs. But Pavlov proved this all long ago, and the dancing pigs problem has highlighted this for years, to no avail. Click-through software licenses exist primarily as a legal CYA, and terms of service agreements full of legalese gibberish could just as well say that people have to eat a sock if they agree to the terms – they’ll still agree to them (because they won’t read them).

On Twitter, the account for Reputation.com posted the following:

[embedded tweet]

A few days later, they posted this:

[embedded tweet]

I responded to the first post with the statement that accurate search results have intrinsic value to users, but most users can’t actually quantify a loss of privacy. What did I mean by that? I mean that most normal people will tell you they value their privacy if you ask them, but if you take away the free niblets all over the Web that they get in exchange for giving up their privacy little by little, they’ll quickly reveal how little that privacy is really worth to them.

Imagine the response if you told a friend, family member, or colleague that you had a report/blog/study you were working on, and asked them, “Hey, I’m going to shoulder-surf you for a day and write down which Websites you visit, how often and how long you visit them, and who you send email to, okay?” In most cases, they’d tell you no, or tell you that you’re being weird.

Then ask them how much you’d need to pay them in order for them to let you shoulder-surf. Now they’ll be creeped out.

Finally, tell them you installed software on their computer last week, so you’ve already got the data you need, and ask if it’s okay to use that for your report. Now they’ll probably completely overreact, and maybe even get angry (so tell them you were kidding).

More than two years ago, I discussed why do-not-track would stall out and die, and in fact, it has. This was completely predictable, and I would have been completely shocked if it hadn’t happened. That’s because there is one thing that makes the Web work at all: the cycle of micropayments of personally identifiable information (PII) that, in appropriate quantities, allows advertisers (and advertising companies) to tune their advertising. In short, everything you do on the Web is up for grabs to help profile you (and ideally, sell you something). Some might argue that you searching for “schnauzer sweaters” isn’t PII. The NSA would beg to differ. Metadata is just as valuable as the data itself, if not more so, for uniquely identifying an individual.
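To see how those micropayments of PII actually change hands, consider the humble tracking pixel. Below is a minimal sketch (Python with Flask; the domain “tracker.example”, the /pixel endpoint, and the “tid” cookie are all hypothetical) of how one third-party image, embedded across thousands of sites via <img src="https://tracker.example/pixel?page=...">, lets a single server assign you an ID and log every page you read:

    import uuid
    from flask import Flask, request, make_response

    app = Flask(__name__)
    visits = {}  # tracking ID -> list of (page, referer) observations

    @app.route("/pixel")
    def pixel():
        # Reuse the visitor's existing cookie if present; otherwise mint a new ID.
        tid = request.cookies.get("tid") or str(uuid.uuid4())
        visits.setdefault(tid, []).append(
            (request.args.get("page"), request.headers.get("Referer"))
        )
        # Serve a 1x1 transparent GIF so the embedded image is invisible.
        gif = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
               b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00\x00"
               b"\x02\x02D\x01\x00;")
        resp = make_response(gif)
        resp.headers["Content-Type"] = "image/gif"
        # Re-setting the cookie means the same ID follows the visitor to every
        # site that embeds this pixel - that is the cross-site profile.
        resp.set_cookie("tid", tid, max_age=60 * 60 * 24 * 365)
        return resp

Multiply that one invisible image by every ad network, analytics script, and social widget on the pages you visit, and the profile practically writes itself.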

When Facebook tweaked privacy settings to begin “liberating” personal information, it was all about tuning advertising. When we search using Google (or Bing, or Yahoo), we’re explicitly profiling ourselves for advertisers. The free Web as we know it is something of a mirage: the content appears free, but isn’t. Back in the late 1990s, the idea of micropayments was thrown about, and in my opinion it came and went, but it is far from dead. It just never arrived in the form that people expected. Early on, the idea was that individuals might pay a dollar here for a news story, a few dollars there for a video, a penny to send an email, and so on. Personally, I never saw that idea taking off, primarily because the e-payment infrastructure wasn’t really there, and partially because, well, consumers are cheap and won’t pay for almost anything.

In 1997, Nathan Myhrvold, Microsoft’s CTO, had a different take. Nathan said, “Nobody gets a vig on content on the Internet today… The question is whether this will remain true.”

Indeed, putting aside his patent endeavors, Nathan’s reading of the tea leaves at that time was very telling. My contention is that while users indeed won’t pay cash (payments or micropayments) for the activities they perform on the Web, they’re more than willing to pay for their use of the Web with picopayments of personal information.

If you were to ask non-technical users how much they would expect to be paid for an advertiser to know their home address, how many children they have, the ages of those children, or the fact that they suffer from psoriasis, most would be pretty uncomfortable (even discounting the psoriasis). People like to assume, incorrectly, that their privacy is theirs, and that the little lock icon in their browser protects all of the niblets of data that matter. It does conceptually protect most of the really high-value financial parts of an individual’s life (your bank account, your credit card numbers, your Social Security number), but it doesn’t stop the numerous entities across the Web from profiling you.

The countless crumbs you leave around the Web allow you to be identified; they may not expose your financial details, but they lay your personal life open for advertisers to peruse. It’s easy enough for Facebook (through the ubiquitous Like button) or Google (through search, Analytics, and AdSense) to know your gender, age, marital/parental status, any medical or social issues you’re having, which political party you favor, and what you were looking at on that one site where you almost placed an order, but wound up abandoning the cart.

If you could truly visualize all of the personal attributes you’ve silently shared with the various ad players through your use of the Web, you’d probably be quite uncomfortable with the resulting diagram. Luckily for advertisers, you can’t see it, and you can’t really undo it even if you could understand it all. Sure, there are ways to obfuscate it, or you could stay off the Web entirely. For most people, that’s not a tradeoff they’re willing to make.

The problem here is that human beings, as a general rule, stink at assessing intangible risk, and even when it is demonstrated to us in no uncertain terms, we do little to rectify it. Free search engines that value your privacy exist. Why don’t people switch? Conditioning to Google and the expected search result quality, and sheer laziness (most likely some combination of the two). Why didn’t people flock from Facebook to Diaspora or other alternatives when Facebook screwed with privacy options? Laziness, convenience, and most likely, the presence of a perceived valuable network of connections.

It’s one thing to look over a cliff and sense danger. But as the dancing pigs phenomenon (or the behavior of most adolescents/young adults, and some adults on Facebook) demonstrates, a little lost privacy here and a little lost privacy there is like the metaphoric frog in a pot. Over time it may not feel like it’s gotten warmer to you. But little by little, we’ve all sold our privacy away to keep the Web “free”.