19
Oct 14

On the Design of Toasterfridges

On my flight today, I rewatched the documentary Objectified. I’ve seen it a few times before, but it has been several years. While I don’t jibe with 100% of the sentiment of the documentary, it made me think a bit about design, as I was headed to Dallas. In particular, it made me consider Apple, Microsoft, and Google, and their dramatically different approaches to design – which are in fact a reflection of the end goal of each of the companies.

One of my favorite moments in the piece is Jony Ive’s section, early on. I’ve mentioned this one before. If you haven’t read that earlier blog post, you might want to before you read on.

Let’s pause for a moment and consider Apple, Microsoft, and Google. What does each make?

  • Apple – Makes hardware.
  • Microsoft – Makes software.
  • Google – Makes information from data.

Where does each one make the brunt of its money?

  • Apple – Consumer hardware and content.
  • Microsoft – Enterprise software licensing.
  • Google – Advertising.

What does each one want more of from the user?

  • Apple – Buy more of their devices and more content.
  • Microsoft – Use their software, everywhere.
  • Google – Share more of your information.

You can also argue that Apple makes software, Microsoft makes hardware, and Google makes both. Some of you will surely do so. But at the end of the day, software is a hobby for Apple to sell more hardware and content (witness the price of their OS and productivity apps), hardware is a hobby for Microsoft to try and sell more software and content, and hardware and software are both hobbies for Google to try and get you more firmly entrenched into their data ecosystem.

Some people were apparently quite sad that Apple didn’t introduce a ~12” so-called “iPad Pro” at their recent October event. People expecting such a device were hoping for a removable keyboard, perhaps like Microsoft’s Surface (ARM) and Surface Pro (Intel) devices. The hope was that such a device would be the best of both worlds… a large professional-grade tablet (because those are selling well) and a laptop of sorts, and that it would feature side-by-side application windows, as have been available on Windows nearly forever, and on many Android devices for some time. In many senses, it would be Apple’s own version of the Surface Pro 3 with Windows 8.1 on it. Reporters have insisted, and keep insisting, that Apple’s future will be based upon making a Surface clone of sorts. I’m not so sure.

I have a request for you. Either to yourself, in the comments below, or on Twitter, consider the following. When was the last time (since the era of Steve Jobs’ return) that you saw Apple hardware lean away, in order to let the software compromise it? Certainly, the hardware may defer to the software, as Ive says earlier about the screen and touch on the iPhone; but the role of the hardware is omnipresent – even if you don’t notice it.

I’ve often wondered what Microsoft’s tablets would look like today if Microsoft didn’t own Office as well as Windows; if they weren’t so interested in preserving the role of both at the same time. Could the device have been a pure tablet that deferred to touch, and didn’t try so hard to be a laptop? Could it have done better in such a scenario?

Much has been said about the “lapability” of the Surface family of devices. I really couldn’t disagree more.

More than one person I know has used either a cardboard platform or other… <ahem> surface as a flattop for their Surface to rest upon while sitting on their lap. I’ve seen innumerable reporters contort themselves while sitting in chairs at conferences to balance the device between the ultra-thin keyboards and the kickstand. A colleague recently stopped using his Surface Pro 2 because he was tired of the posture required to use the device while it was on his lap. It may be an acceptable tablet, especially in Surface Pro 3 guise – but I don’t agree that it’s a very good “laptop”.

The younger people that follow me on Twitter or read this blog may not get all of these examples, but hopefully will get several. Consider all of the following devices (that actually existed).

  • TV/VCR combination
  • TV/DVD combination
  • Stand mixers with pasta-making attachments
  • Smart televisions
  • Swiss Army Knife

Each of these devices has something in common. Absent a better name to apply to it, I will call that property toasterfridgality. Sure. Toasterfridge was a slam that Tim Cook came up with to describe Microsoft’s Surface devices. But regardless of the semi-derogatory term, the point is, I believe, valid.

Each of the devices above compromises the integrity with which it performs one or more roles in order to try and perform two or more roles. The same is true of Microsoft’s Surface and Surface Pro line.

For Microsoft, it was imperative that the Surface and Surface Pro devices, while tablets first and foremost (witness the fact that they are sold sans keyboard), be able to run Office and the rest of Win32 that couldn’t be ported in time for Windows 8 – even if it meant a sacrifice of software usability in order to do so. Microsoft’s fixation on selling the devices not as tablets but as laptop replacements (even though they come with no keyboard) leads to a real incongruity. There’s the device Microsoft made, the device consumers want, and the way Microsoft is trying to sell it. Even taking price out of the equation, is it any wonder that Surface sales struggled until Surface Pro 3?

Lenovo more harmoniously balances their toasterfridgality. Their design always seems to focus first on the device being a laptop – then on how to incorporate touch. (And on some models, “tabletude”.) Take, for example, the Lenovo ThinkPad Yoga or Lenovo ThinkPad Helix. These devices are laptops, with a comprehensive hinge that enables them to have some role as a tablet while not completely sacrificing… well… lapability. In short, the focus is on the hinge, not on the keyboard.

To view the other end of the toasterfridge spectrum, check out the Asus Padfone X, a device that tries to be your tablet by glomming on a smartphone. I’m a pretty strong believer that the idea of “cartridge” style computing isn’t the future, as I’ve also said before. Building devices that integrate with each other to transmogrify into a new role sounds good. But it’s horrible. It results in a device that performs two or more roles, but isn’t particularly good at any of them. It’s a DVD/VCR combo all over again. The phone breaks, and now you don’t have either device anymore. If there were such a model that converted your phone into a desktop, one can only imagine how awesome it would be reporting to work on Monday, having lost your “work brain” by dropping your phone into the river.

I invite you to reconsider the task I asked of you earlier, to tell me where Apple’s hardware defers to the software. Admittedly, one can make the case that Apple is constantly deferring the software to the hardware; just try and find an actual fan of iTunes or the Podcasts app, or witness Apple’s recent software quality issues (a problem not unique to Apple). But software itself isn’t their highest priority; it’s the marriage of that software and the hardware (sometimes compromising them both a bit). Look at the iPhone 6 Plus and the iPad Air 2. Look how Apple moved – or completely removed – switchgear on them to align with both use cases (big phones are held differently) and evolving priorities (switches break, and the role of the side switch in iOS devices is now made completely redundant by software).

Sidebar: Many people, including me, have complained that iOS devices start at 16GB of storage now. This is ridiculous. With the bloat of iOS, requirements for upgrading, and any sort of content acquisition by their users, these devices will be junk before the end of CY2016. Apple, of course, has made cohesive design, not upgradeability, paramount in their iOS devices. This has earned them plenty of low scores for repairability and consumer serviceability/upgradeability in reviews. I think it is irresponsible of Apple, given that they have no upgradeability story, to sell these devices with 16GB. The minimum on any new iOS device should be 32GB. The lack of upgradeability or of the ability to add peripherals is often touted by those dissing Apple as a limitation of the platform. It’s true. These are limitations. But these limitations, and a tight, cohesive hardware design, are what let these devices have value 4 years after you buy them. I recently got $100 credit from AT&T for my daughter’s iPhone 4 (from June 2010). A device that I had used for two years, she had used for two more, and it still worked. It was just gasping for air under the weight of iOS 6, let alone iOS 7 (and the iPhone 4 can’t run 8). There is a reason why these devices aren’t upgradeable. Adding upgradeability means building the device with serviceability in mind, and compromising the integrity of the whole device just to make it expandable. I have no issue with Apple making devices user non-serviceable for their lifespan, as I believe it tends to result in devices that actually last longer rather than falling apart when screws unwind and battery or memory doors stop staying seated.

I’ve had several friends mention a few recent tablets and the fact that the USB ports on those devices are very prone to failure. This isn’t news to me. In 2002, when I was working to make Windows boot from USB, I had a Motion Computing M1200 tablet. Due to constant insertion and removal of UFDs for testing and creation, both of the USB ports on the tablet had come unseated from the motherboard and were useless. Motion wanted over $700 to repair a year-old (admittedly somewhat abused) tablet. With <ahem> persuasion from an executive at Microsoft, Motion agreed to repair it for me for free. But this forever highlighted to me that more ports aren’t necessarily a good thing. The more things you add, the more complex the design becomes, and the more likely it becomes that one of these overwrought features, added to please a product manager with a list of competitive boxes to check, will lead to a disappointed customer, product support issues and costs, or both. USB was never originally designed to have plugs inserted and removed willy-nilly (as Lightning and the now-dead Apple 30-pin connector were), and I don’t think most boards are manufactured to have devices inserted and removed as often (and perhaps as haphazardly) as they are on modern PC tablets.

Every day, we use things made of components. These aren’t experiences, and they aren’t really even designed (at least not with any kind of cohesive aesthetic). Consider the last time you used a Windows-based ATM or point-of-sale/point-of-service device. It may not seem fair that I’m glomming Windows into this, but Windows XP Embedded helped democratize embedded devices, and allowed cheap devices to handle cash and digital currency, rent DVDs on demand, and power heretofore unimaginable self-service soda fountains.

But there’s a distinct feel of toasterfridge every time I use one of these devices. You feel the sharp edges where the subcomponents it is made of come together (but don’t align) – where the designer compromised the design of the whole in order to accommodate the needs of the subcomponents.

The least favorite device I use with any regularity is the Windows-based ATM at my credit union. It has all of the following components:

  • A display screen (which at least supports touch)
  • An input slot for your ATM/credit/debit card
  • A numeric keypad
  • An input slot for one or more checks or cash
  • An output slot for cash
  • An output slot for receipts.

As you use this device, there are a handful of pain points that will start to drive you crazy if you actually consider the way you use it. When I say left or right, I mean in relation to the display.

  • The input slot for your card is on the right side.
  • The input slot for checks is on the left side.
  • The receipt printer is on the right side.
  • The output slots for cash are both below.

Arguably, there is no need for a keypad given that there is a touchscreen; but users with low vision would probably disagree with that. Besides that, my credit union has not completely replaced the role of the keypad with the touchscreen. Entering PINs, for example, still requires the keypad.

So to deposit a check, you first put in your card (right), enter your PIN (below), specify your transaction type (on-screen), and deposit a stack of checks (no envelope, which is nice) on the left. Then you wait, get your receipt (top right), and get your card (next down on the right). My favorite part is that the ATM starts beeping at you to retrieve your card before it has released it.

This may all seem like a pedantic rant. But my primary point is that every day, we use devices that prioritize the business needs, requirements, or limitations of their creator or assembler, rather than their end user.

Some say that good design begins with the idea of creating experiences rather than products. I am inclined to agree with this ideology, one that I’ve also evangelized before. But to me, the most important role in designing a product is to pick the thing that your product will do best, and do that thing. If it can easily adapt to take on another role without compromising the first role? Then do that too. If adding the new features means compromising the product? Then it is probably time to make an additional product. I must admit – people who clamor for an Apple iPad Pro that would be a bit of (big) tablet and (small) notebook confuse me a bit. I have a 2013 iPad Retina Mini and a 2013 Retina MacBook Pro. Each device serves a specific purpose and does it exceptionally well.

I write for a living. I can never envision doing that just on an iPad, let alone my Mini (or even without the much larger Acer display that my rMBP connects to). In the same vein, I can’t really visualize myself lying down, turning on some music, and reading an eBook on my Mac. Yes. I had to pay twice to get these two different experiences. But if the alternative is getting a device that compromises both experiences just to save a bit of money? I don’t get that.


12
Oct 14

It is past time to stop the rash of retail credit card “breaches”

When you go shopping at Home Depot or Lowe’s, there are often tall ladders, saws, key cutters, and forklifts around the shopping floor. As a general rule, most of these tools aren’t for your use at all. You’re supposed to call over an employee if you need any of these tools to be used. Why? Because of risk and liability, of course. You aren’t trained to use these tools, and the insurance that the company holds would never cover their liability if you were injured or died while operating them.

Over the past year, we have seen a colossal failure of American retail and restaurant establishments to adequately secure their point-of-sale (POS) systems. If you’ve somehow missed them all, Brian Krebs’ coverage serves as a good list of many of the major events.

As I’ve watched company after company fall prey to seemingly the same modus operandi as every company before, it has frustrated me more and more. When I wrote You have a management problem, my intention was to highlight the fact that there seems to be a fundamental disconnect between the risks these organizations face and the strategies used to secure their key applications (and systems). But I think it’s actually worse than that.

If you’re a board member or CEO of a company in the US, and the CIO and CSO of the organizations you manage haven’t asked their staff the following question yet, there’s something fundamentally wrong.

That question every C-level in the US should be asking? “What happened at Target, Michael’s, P.F. Chang’s, etc… what have we done to ensure that our POS systems are adequately defended from this sort of easy exploitation?”

This is the most important question that any CIO and CSO in this country should be asking this year. They should be regularly asking this question, reviewing the threat models their staff create to answer it, and performing the work necessary to validate that they have adequately secured their POS infrastructure. This should not be a one-time thing. It should be how the organization regularly operates.

My worry is that within too many orgs, people are either a) not asking this question because they don’t know to ask it, b) dangerously assuming that they are secure, or c) so busy that nobody who knows better feels empowered to pull the emergency brake and bring the train to a standstill to truly examine the comprehensive security footing of their systems.

Don’t listen to people if they just reply by telling you that the systems are secure because, “We’re PCI compliant.” They’re ducking the responsibility of securing these systems through the often translucent facade of compliance.

Compliance and security can go hand in hand. But security is never achieved by stamping a system as “compliant”.

Security is achieved by understanding your entire security posture, through threat modeling. For any retailer, restaurateur, or hospitality organization in the US, this means you need to understand how you’re protecting the most valuable piece of information that your customers will be sharing with you, their ridiculously insecure 16-digit, magnetically encoded credit card/debit card number. Not their name. Not their email address. Their card number.

While it does take time to secure systems, and some of these exploits that have taken place over 2014 (such as Home Depot) may have even begun before Target discovered and publicized the attack on their systems, we are well past the point where any organization in the US should just be saying, “That was <insert already exploited retailer name>, we have a much more secure infrastructure.” If you’ve got a threat model that proves that, great. But what we’re seeing demonstrated time and again as these “breaches” are announced is that organizations that thought they were secure, were not actually secure.

During 2002, when I was in the Windows organization, we had, as some say, a “come to Jesus” moment. I don’t mean that expression to offend anyone. But there are few expressions that can adequately capture the fundamental shift that happened. We were all excitedly working on several upcoming versions of Windows, having just sort of battened down, with XP SP1, some of the hatches that had popped open in XP’s original security perimeter.

But due to several major vulnerabilities and exploits in a row, we were ordered (by Bill) to stop engineering completely, and for two months, all we were allowed to work on were tasks related to the Secure Windows Initiative and making Windows more secure, from the bottom up, by threat modeling the entire attack surface of the operating system. It cost Microsoft an immense amount of money and time. But had we not done so, customers would have cost the company far more over time as they gave up on the operating system due to insecurity at the OS level. It was an exercise in investing in proactive security in order to offset future risk – whether to Microsoft, to our customers, or to our customers’ customers.

I realize that IT budgets are thin today. I realize that organizations face more pressure to do more with less than ever before. But short of laws holding executives financially responsible for losses that are incurred under their watch, I’m not sure what will stop the ongoing saga of these largely inexcusable “breaches” we keep seeing. If your organization doesn’t have the resources to secure the technology you have, either hire the staff that can or stop using technology. I’m not kidding. Grab the knucklebusters and some carbonless paper and start taking credit cards like it’s the 1980s again.

The other day, someone on Twitter noted that the recent spate of attacks shouldn’t really be called “breaches”, but instead should be called skimming attacks. Most of these attacks have worked by using RAM scrapers. This approach, first really seen in 2009, hit the big time in 2013. RAM scrapers work through the use of a Windows executable (which, <ahem>, isn’t supposed to be there) that scans memory (RAM) on POS systems for track data as magnetically swiped US credit cards are read. This laughably simple stunt is really the key to effectively all of the breaches (which I will from here on out refer to as skimming attacks). A piece of software, which shouldn’t ever be on those systems, let alone be able to run on those systems, is freely scanning memory for data which, arguably, should be safe there, even though it is not encrypted.
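To make concrete just how simple this stunt is, here is a minimal Python sketch of the kind of pattern match involved – the same check a defender could run against a memory dump of a POS process to verify that unencrypted track data never shows up there. The Track 2 layout comes from ISO/IEC 7813; the memdump.bin file name is purely hypothetical.

```python
import re

# ISO/IEC 7813 Track 2 layout: ';' + PAN (up to 19 digits) + '=' +
# expiry (YYMM) + service code + discretionary data + '?'
TRACK2 = re.compile(rb";(\d{12,19})=(\d{4})(\d{3})(\d*)\?")

def find_track_data(blob: bytes):
    """Return (PAN, expiry) pairs found in a raw memory capture.

    RAM scrapers do essentially this against POS process memory;
    a defender can run the same check against a memory dump to
    confirm that unencrypted track data never appears there.
    """
    hits = []
    for match in TRACK2.finditer(blob):
        pan, expiry = match.group(1).decode(), match.group(2).decode()
        hits.append((pan, expiry))
    return hits

if __name__ == "__main__":
    # 'memdump.bin' is a hypothetical capture of POS process memory.
    with open("memdump.bin", "rb") as f:
        for pan, expiry in find_track_data(f.read()):
            print(f"Unencrypted track data found: PAN ending {pan[-4:]}, exp {expiry}")
```

That a dozen lines of pattern matching is the heart of these attacks is exactly the point: the hard part was never the scraping, it was getting the executable onto the system in the first place.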

But here we are, with these RAM scrapers violating law #2 of the 10 Immutable Laws of Security. These POS systems are obviously not secured as well as Microsoft, the POS manufacturer, or the VAR that installed them would like them to be – and obviously everyone, including the retailer, assumed they were. Most likely, these RAM scrapers are custom-crafted enough to evade detection by (questionably useful) antivirus software. More importantly, many indications were that in many cases, these systems were apparently certified as PCI-DSS compliant in the exact same scenario that they were later compromised in. This indicates either a fundamental flaw in the compliance definition, the tools, and/or the auditor. It also indicates some fundamental holes in how these systems are presently defended against exploitation.

As someone who helped ship Windows XP (and contributed a tiny bit to Embedded, which was a sister team to ours), it makes me sad to see these skimming attacks happen. As someone who helped build two application whitelisting products, it makes me feel even worse, because… they didn’t need to happen.

Windows XP Embedded exits support in January of 2016. It’s not dead, and it can be secured properly (but organizations should absolutely be down the road of planning what they will replace XPE with). Both Windows and Linux, in embedded POS devices, suffer the same flaw: platform ubiquity. I can write a piece of malware that’ll run on my Windows desktop (or on a Linux system), and it will run perfectly well on POS devices built on the same platform (if they aren’t secured properly).

The bad guys always take advantage of the broadest, weakest link. It’s the reason why Adobe Flash, Acrobat, and Java are the points they go after on Windows and OS X. The OSs are hardened enough up the stack that these unmanageable runtimes become the hole that exploitation shellcode often pole-vaults through.

In many of these retail POS skimming attacks, remote maintenance software (used to access a Windows desktop remotely), often secured with a poor password, is the means being used to get code onto these systems. This scenario and exploit vector isn’t unique to retail, either. I guarantee you there are similar easy opportunities for exploitation in critical infrastructure, in the US and beyond.

There are so many levels of wrong here. To start with, these systems:

  1. Shouldn’t have remote access software on them.
  2. Shouldn’t have the ability to run any arbitrary binary that is put on them.

These systems shouldn’t have any remote access software on them at all. If they must, this software should implement physical, not password-based, authentication. These systems should be sealed, single purpose, and have AppLocker or third-party software to ensure that only the Windows (or Linux, as appropriate) applications, drivers, and services that are explicitly authorized to run on them can do so. If organizations cannot invest in the technology to properly secure these systems, or do not have the skills to do so, they should either hire staff skilled in securing them, cease using PC-based technology and start using legacy technology, or examine using managed iOS or Windows RT-based devices that can be more readily locked down to run only approved applications.
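For what it’s worth, the core idea behind AppLocker (or any application whitelisting product) is not complicated. What follows is not AppLocker itself – that gets configured through Group Policy – just a minimal Python sketch of the concept, with a placeholder hash and hypothetical file names: keep an allow-list of known-good binaries, and refuse to run anything that isn’t on it.

```python
import hashlib
from pathlib import Path

# Allow-list: SHA-256 hashes of the only binaries this sealed,
# single-purpose POS image is ever permitted to execute.
# (Placeholder value – a real list would be generated from the gold image.)
ALLOWED_HASHES = {
    "<sha-256 of pos_register.exe>",
}

def is_execution_allowed(binary: Path) -> bool:
    """Hash the binary and check it against the allow-list.

    This is the concept that AppLocker (or a third-party whitelisting
    agent) enforces at the OS level: unknown code simply does not run.
    """
    digest = hashlib.sha256(binary.read_bytes()).hexdigest()
    return digest in ALLOWED_HASHES

if __name__ == "__main__":
    candidate = Path("ram_scraper.exe")  # hypothetical dropped binary
    if candidate.exists() and not is_execution_allowed(candidate):
        print(f"Blocked: {candidate} is not on the allow-list.")
```

A RAM scraper dropped through a weak remote-access password simply never runs under this model, which is why whitelisting, not antivirus, is the control that would have mattered in most of these skimming attacks.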


07
Sep 14

On the death of files and folders

As I write this, I’m on a plane at 30,000+ feet, headed to Chicago. Seatmates include a couple from Toronto headed home from a cruise to Alaska. The husband and I talk technology a bit, and he mentions that his wife particularly enjoys sending letters as they travel. He and I both smile as we consider the novelty in 2014 of taking a piece of paper, writing thoughts to friends and family, and putting it in an envelope to travel around the world to be warmly received by the recipient.

Both Windows and Mac computers today are centered around the classic files and folders nomenclature we’ve all worked with for decades. From the beginning of the computer, mankind has struggled to insert metaphors from the physical world into our digital environments. The desktop, the briefcase, files that look like paper, folders that look like hanging file folders. Even today as the use of removable media decreases, we hang on to the floppy diskette icon, a symbol that means nothing to pre-teens of today, to command an application to “write” data to physical storage.

Why?

It’s time to stop using metaphors from the physical world – or at least to stop sending “files” to collaborators in order to have them receive work we deign to share with them.

Writing this post involves me eating a bit of crow – but only a bit. Prior to me leaving Microsoft in 2004, I had a rather… heated… conversation with a member of the WinFS team about a topic that is remarkably close to this. WinFS was an attempt to take files as we knew them and treat them as “objects”. In short, WinFS would take the legacy .ppt files as you knew them, and deserialize (decompose) them into a giant central data store within Windows based upon SQL Server, allowing you to search, organize, and move them in an easier manner. But a fundamental question I could never get answered by that team (the core of my heated conversation) was how that data would be shared with people external to your computer. WinFS would always have to serialize the data back out into a .ppt file (or some other “container”) in order to be sent to someone else. The WinFS team sought to convert everything on your system into a URL, as well – so you would have navigated the local file system almost as if your local machine was a Web server rather than using the local file and folder hierarchy that we had all become used to since the earliest versions of Windows or the Mac.

So as I look back on WinFS, some of the ideas were right, but in classic Microsoft form, at best it may have been a bit of premature innovation, and at worst it may have been nerd porn relatively disconnected from actual user scenarios and use cases.

From the dawn of the iPhone, power users have complained that iOS lacked something as simple as a file explorer/file picker. This wasn’t an error on Apple’s part; a significant percentage of Apple’s ease of use (largely aped by Android, and by Windows, at least with WinRT and Windows Phone applications) comes from abstracting away the legacy file and folder bird’s nest of Windows, the Mac, etc.

As we enter the fall cavalcade of consumer devices ahead of the holiday, one truth appears plainly clear: standalone “cloud storage” as we know it is largely headed for the economic off-ramp. The three main platform players have now put cloud storage in as a platform pillar, not an opportunity to be filled by partners. Apple (iCloud Drive), Google (Google Drive), and Microsoft (OneDrive and OneDrive for Business – their consumer and business offerings, respectively) have all placed storage firmly into their respective platforms. Lock-in now isn’t just a part of the device or the OS, it’s about where your files live, as that can help create a platform network effect (AT&T Friends and Family, but in the cloud). I know for me, my entire family is iOS based. I can send a link to iCloud Drive files to any member of my family and know they can see the photo I took or the words I wrote.

But that’s just it. Regardless of how my file is stored in Apple’s, Google’s, or Microsoft’s hosted storage, I share it through a link. Every “document” envelope as we knew it in the past is now a URL, with applications on each device capable of opening their file content.

Moreover, today’s worker generally wants their work:

  1. Saved automatically
  2. Backed up to the cloud automatically (within reason, and protected accordingly)
  3. Versioned and revertible
  4. Accessible anywhere
  5. Coauthoring capable (work with one or more colleagues concurrently without needing to save and exchange a “file”)
As these sorts of features become ubiquitous across productivity tools, the line between a “file” and a “URL” becomes increasingly blurred, and the more that happens, the more our computers start acting just like the WinFS team wanted them to over a decade ago.

If you look at the typical user’s desktop, it’s a dumping ground of documents. It’s a mess. So are their favorites/bookmarks, music, videos, and any other “file type” they have.

On the Mac, iTunes (music metadata), iPhoto (face, EXIF, and date info), and now the Finder itself (properties and now tags) are a complete mess of metadata. A colleague in the Longhorn Client Product Management Group was responsible for owning the photo experience for WinFS. Even then I think I crushed his spirit by pointing out what a pain in the ass it was going to be for users to enter in all of the metadata for photos as they returned from trips, in order to make the photos anything more than a digital shoebox that sits under the bed.

I’m going to tell all the nerds in the world a secret. Ready? Users don’t screw around entering metadata. So anything you build that is metadata-centric that doesn’t populate the metadata for the user is… largely unused.

I mention this because, as we move towards vendor-centered repositories of our documents, it becomes an opportunity for vendors to do much of what WinFS wanted to do, and help users catalog and organize their data; but it has to be done almost automatically for them. I’m somewhat excited about Microsoft’s Delve (née Oslo), primarily because if it is done right (and if/when Google offers a similar feature), users will be able to discover content across the enterprise that can help them with their job. The written word will in so many ways become a properly archived, searchable, and collaboration-ready tool for businesses (and users themselves, ideally).

Part of the direction I think we need to see is tools that become better about organizing and cataloging our information as we create it, and about keeping track of the lineage of the written word and digital information. Create a file using a given template? That should be easily visible. Take a trip with family members? Photos should be easily stitched together into a searchable family album.

Power users, of course, want to feel a sense of control over the files and folders on their computing devices (some of them even enjoy filling in metadata fields). These are the same users who complained loudly that iOS didn’t have a Finder or traditional file picker, and who persuaded Microsoft to add a file explorer of sorts to Windows Phone, as Windows 8 and Microsoft’s OneDrive and OneDrive for Business services began blurring out the legacy Windows File Explorer. There’s a good likelihood that next year’s release of Windows 9 could see the legacy Win32 desktop disappear on touch-centric Windows devices (much like Windows Phone 8.x, where Win32 still technically exists, but is kept out of view). I firmly expect this move will (to say it gently) irk Windows power users. These are the same type of users who freaked out when Apple removed the save functionality from Pages/Numbers/Keynote. Yet that approach is now commonplace for the productivity suites of all of the “big 3” productivity players (Microsoft, Google, and Apple), where real-time coauthoring requires an abstraction of the traditional “Save” verb we have all been used to since the 1980s. For Windows to succeed as a novice-approachable touch environment, as iOS is, it means jettisoning a visible Win32 desktop and the File Explorer. With this, OneDrive and the simplified file pickers in Windows become the centerpiece of how users will interact with local files.

I’m not saying that files and folders will disappear tomorrow, or that they’ll ever really disappear entirely. But increasingly, especially in collaboration-based use cases, the file and folder metaphors will fall by the wayside, replaced by Web-based experiences and the use of URLs, with dedicated platform-specific local, mobile, or online apps interacting with them.


06
Aug 14

My path forward

Note: I’m not leaving Seattle, or leaving Directions on Microsoft. I just thought I would share the departure email I sent in 2004. Today, August 6, 2014 marks the tenth anniversary of the day I left Microsoft and Seattle to work at Winternals in Austin. For those who don’t know – earlier that day, Steve Ballmer had sent a company-wide memo entitled “Our path forward”, hence my tongue-in-cheek subject selection.

From: Wes Miller
Sent: Tuesday, July 06, 2004 2:32 PM
To: Wes Miller
Subject: My path forward

Seven years ago, when I moved up from San Jose to join Microsoft, I wondered if I was doing the right thing… Not that I was all that elated working where I was, but rather we all achieve a certain level of comfort in what we know, and we fear that which we don’t know. I look back on the last seven years and it’s been an amazing, fun, challenging, and sometimes stressful experience – experiences that I would never trade for anything.

At the same time, for family reasons and for personal reasons, I’ve had to do some soul searching that retraced the memories I have from, and steps I went through when I initially came to Microsoft, and I have accepted a position working for a small software company in Austin, TX. My last day at Microsoft will be Friday August 6, one month from today. The best way to reach me after that until my new address is set up is <redacted>. Between now and August 6th I will be doing my best to meet with any of you that need closure on deployment or LH VPC related issues before my departure. Please do let me know if you need something from me between now and then.

Many thanks to those of you who I have worked with over the years – take care of yourselves, and stay in touch.

Thanks,
Wes


25
Jul 14

You have a management problem.

I have three questions for you to start off this post. I don’t care if you’re “in the security field” or not. In fact, I’m more interested in your answers if you aren’t tasked with security, privacy, compliance, or risk management as a part of your defined work role.

The questions:

  1. If I asked you to show me threat models for your major line of business applications, could you?
  2. If I asked you to define the risks (all of them) within your business, could you?
  3. If I asked you to make a decision about what kind of risks are acceptable for your business to ignore, could you?

In most businesses, the answer to all three is probably no, especially the further you get away from your security or IT teams. Unfortunately, I also believe the answer is pretty firmly no as you roll up the management chain of your organization into the C-suite.

Unless your organization consists of just you or a handful of users, nobody in your organization understands all of the systems and applications in use across the org. That’s a huge potential problem.

The other day I was talking with three of our customers, and the conversation started around software licensing, then spun into software asset management, auditing, and finally to penetration testing and social engineering.

At first glance, that conversation thread may seem diverse and disconnected. But they are so intertwined. Every one of those topics involves risk. Countering risk, in turn, requires adequate management.

By management, I mean two things:

  1. Management of all the components involved (people, process, and technology – to borrow a line from a friend)
  2. Involvement of management. From your CEO or top-level leadership, down.

You certainly can’t expect your C-level executives to intimately know every application or piece of technology within the organization. That’s probably not tractable. What is crucial is that there is accountability down the chain, and trust up the chain. If an employee responsible for security or compliance says there’s a problem that needs to be immediately addressed, they need to be trusted. They can’t run their concern up the flagpole only to have it waved off by someone who is incapable of adequately assessing the technical or legal (or both) implications of hedging on addressing it, and who cannot truthfully attest to the financial risk of fixing the issue or doing nothing.

  • If you hire a security team and you don’t listen to them, what’s the point of hiring them? Just run naked through the woods.
  • If you hire a compliance team (or auditor) and don’t listen to them, what’s the point of hiring them? Just be willing to bring in an outside rubber-stamp auditor, and do the bare minimum.
  • If you have a team that is responsible for software asset management, and you don’t empower them to adequately (preemptively) assess your licensing posture, what’s the point of hiring them? Just wait and see if you get audited by a vendor or two, and accept the financial pit.

If you’re not going to empower and listen to the people in your organization who have risk management skills, don’t hire them. If you’re going to hire them, listen to them, and work preemptively to manage risk. If you’re going to try and truly mitigate risk across your business, be willing to preemptively invest in people, processes, and technology (not bureaucracy!) to discover and address risk before it becomes damage.

So much of the bullshit that we see happening in terms of unaddressed security vulnerabilities, breaches (often related to vulns), social engineering and (spear)phishing, and just plain bad software asset management has everything to do with professionals who want to do the right thing not being empowered to truly find, manage, and address risk throughout the enterprise, and with a lack of risk education up and down the org. Organizations shouldn’t play chicken with risk and be happy with saving a fraction of money up front. That cost can easily become exponentially larger if the risk is ignored.


17
Jun 14

Is the Web really free?

When was the last time you paid to read a piece of content on the Web?

Most likely, it’s been a while. The users of the Web have become used to the idea that Web content is (more or less) free. And outside of sites that put paywalls up, that indeed appears to be the case.

But is the Web really free?

I’ve had lots of conversations lately about personal privacy, cookies, tracking, and “getting scroogled”. Some with technical colleagues, some with non-technical friends. The common thread is that most people (that world full of normal people, not the world that many of my technical readers likely live in) have no idea what sort of information they give up when they use the Web. They have no idea what kind of personal information they’re sharing when they click <accept> on that new mobile app that wants to upload their (Exif geo-encoded) photos, that wants to track their position, or wants to harmlessly upload their phone’s address book to help “make their app experience better”.
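As an aside, it’s worth seeing just how little effort it takes to read that location data back out of a shared photo. Here is a minimal sketch, assuming the Pillow imaging library is installed and a hypothetical vacation.jpg taken with a phone’s camera:

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_from_photo(path: str) -> dict:
    """Pull the GPS block out of a JPEG's Exif data, if present."""
    exif = Image.open(path)._getexif() or {}
    gps_ifd = exif.get(34853, {})  # 34853 is the GPSInfo Exif tag
    return {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

if __name__ == "__main__":
    # 'vacation.jpg' is a hypothetical photo straight off a phone camera.
    info = gps_from_photo("vacation.jpg")
    if info:
        print("This photo quietly says where you were:", info)
    else:
        print("No GPS data embedded in this photo.")
```

A few lines like these are all it takes for any app you grant photo access to know the latitude and longitude of your house, your kids’ school, or wherever else you pressed the shutter.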

My day job involves me understanding technology at a pretty deep level, being pretty familiar with licensing terms, and previous lives have made me deeply immersed in the worlds of both privacy and security. As a result, it terrifies me to see the crap that typical users will click past in a licensing agreement to get to the dancing pigs. But Pavlov proved this all long ago, and the dancing pigs problem has highlighted it for years, to no avail. Click-through software licenses exist primarily as a legal CYA, and terms of service agreements full of legalese gibberish could just as well say that people have to eat a sock if they agree to the terms – they’ll still agree to them (because they won’t read them).

On Twitter, the account for Reputation.com posted the following:

A few days later, they posted this:

I responded to the first post with the statement that accurate search results have intrinsic value to users, but most users can’t actually quantify a loss of privacy. What did I mean by that? I mean that most normal people will tell you they value their privacy if you ask them, but if you take away the free niblets all over the Web that they get for giving up their privacy little by little, they’ll quickly back away from how important privacy really is.

Imagine the response if you told a friend, family member, or colleague that you had a report/blog/study you were working on, and asked them, “Hey, I’m going to shoulder-surf you for a day and write down which Websites you visit, how often and how long you visit them, and who you send email to, okay?” In most cases, they’d tell you no, or tell you that you’re being weird.

Then ask them how much you’d need to pay them in order for them to let you shoulder-surf. Now they’ll be creeped out.

Finally, tell them you installed software on their computer last week, so you’ve already got the data you need – is it okay if you use that for your report? Now they’re going to probably completely overreact, and maybe even get angry (so tell them you were kidding).

More than two years ago, I discussed why do-not-track would stall out and die, and in fact, it has. This was completely predictable, and I would have been completely shocked if this hadn’t happened. It’s because there is one thing that makes the Web work at all: the cycle of micropayments of personally identifiable information (PII) that, in appropriate quantities, allows advertisers (and advertising companies) to tune their advertising. In short, everything you do on the Web is up for grabs to help profile you (and ideally, sell you something). Some might argue that you searching for “schnauzer sweaters” isn’t PII. The NSA would beg to differ. Metadata is just as valuable as the data itself, if not more so, for uniquely identifying an individual.

When Facebook tweaked privacy settings to begin “liberating” personal information, it was all about tuning advertising. When we search using Google (or Bing, or Yahoo), we’re explicitly profiling ourselves for advertisers. The free Web as we know it is sort of a mirage. The content appears free, but isn’t. Back in the late 1990s, the idea of micropayments was thrown about, and it has, in my opinion, come and gone. But it is far from dead. It just never arrived in the form that people expected. Early on, the idea was that individuals might pay a dollar here for a news story, a few dollars there for a video, a penny to send an email, etc. Personally, I never saw that idea actually taking off, primarily because the epayment infrastructure wasn’t really there, and partially because, well, consumers are cheap and won’t pay for almost anything.

In 1997, Nathan Myhrvold, Microsoft’s CTO, had a different take. Nathan said, “Nobody gets a vig on content on the Internet today… The question is whether this will remain true.”

Indeed, putting aside his patent endeavors, Nathan’s reading of the tea leaves at that time was very telling. My contention is that while users indeed won’t pay cash (payments or micropayments) for the activities they perform on the Web, they’re more than willing to pay for their use of the Web with picopayments of personal information.

If you were to ask a non-technical user how much they would expect to be paid for an advertiser to know their home address, how many children they have, what the ages of their children are, or that they suffer from psoriasis, most people would be pretty uncomfortable (even discounting the psoriasis). People like to assume, incorrectly, that their privacy is theirs, and that the little lock icon on their browser protects all of the niblets of data that matter. While it conceptually does protect most of the really high-value financial parts of an individual’s life (your bank account, your credit card numbers, and Social Security numbers), it doesn’t stop the numerous entities across the Web from profiling you. Countless crumbs you leave around the Web do allow you to be identified, and though they may not expose your financial privacy, they do expose your personal privacy for advertisers to peruse. It’s easy enough for Facebook (through the ubiquitous Like button) or Google (through search, Analytics, and AdSense) to know your gender, age, marital/parental status, any medical or social issues you’re having, what political party you favor, and what you were looking at on that one site where you almost placed an order, but wound up abandoning it.

If you could truly visualize all of the personal attributes you’ve silently shared with the various ad players through your use of the Web, you’d probably be quite uncomfortable with the resulting diagram. Luckily for advertisers, you can’t see it, and you can’t really undo it even if you could understand it all. Sure, there are ways to obfuscate it, or you could stay off the Web entirely. For most people, that’s not a tradeoff they’re willing to make.

The problem here is that human beings, as a general rule, stink at assessing intangible risk, and even when it is demonstrated to us in no uncertain terms, we do little to rectify it. Free search engines that value your privacy exist. Why don’t people switch? Conditioning to Google and the expected search result quality, and sheer laziness (most likely some combination of the two). Why didn’t people flock from Facebook to Diaspora or other alternatives when Facebook screwed with privacy options? Laziness, convenience, and most likely, the presence of a perceived valuable network of connections.

It’s one thing to look over a cliff and sense danger. But as the dancing pigs phenomenon (or the behavior of most adolescents/young adults, and some adults on Facebook) demonstrates, a little lost privacy here and a little lost privacy there is like the metaphoric frog in a pot. Over time it may not feel like it’s gotten warmer to you. But little by little, we’ve all sold our privacy away to keep the Web “free”.


20
May 14

Engage or die

I’m pretty lucky. For now, this is the view from my office window. You see all those boats? I get to look out at the water, and those boats, all the time (sun, rain, or snow). But those boats… honestly, I see most of those boats probably hundreds of days per year more than their owners do. I’d bet there’s a large number of them that haven’t moved in years.

The old adage goes, “The two happiest days in a boat owner’s life are the day he buys it, and the day he sells it.”

All too often, the tools that we acquire in order to solve our problems or “make our lives better” actually add new problems or new burdens to our lives instead. At least that’s what I have found. You buy the best hand mixer you can find, but the gearing breaks after a year and the beaters won’t stay in, so you have to buy a new one. You buy a new task-tracking application, but the act of changing your work process to accommodate it actually results in lower efficiency than simply using lined paper with a daily list of tasks. As a friend says about the whole Getting Things Done (GTD) methodology, “All you have to do is change the way you work, and it will completely change the way you work.”

Perhaps that’s an unfair criticism of GTD, but the point stands for many tools or technologies. If the investment required to take advantage of, and maintain, a given tool exceeds the value returned by it (the efficiency it provides), it’s not really worth acquiring or using.

Technology promises you the world, but then too often the best part of using it winds up being the moment you cut yourself taking it out of the hermetically sealed package it was shipped in from China. Marketing will never tell you about the sharp edges, only the parts of the product that work within the narrow scenarios product management understood and defined.

Whether it’s software or hardware, I’ve spent a lot of time over the last year or so working to eliminate tools that fail to make me more productive or to reduce day-to-day friction in my work or personal life. Basically looking around, pondering, “how often do I use this tool?”, and discarding it if the answer isn’t “often” or “all the time.” Tangentially, if there’s a tool I rarely use but that’s the best option when I do need it, I’ll keep it around. PaperKarma is a good example of this, because there’s honestly no other tool that does what it does.

However, a lot of software and hardware that I might’ve found indispensable at one point is open for consideration, and I’m tired of being a technology pack-rat. If a tool isn’t something that I really want to (or have to) use all the time, if there’s no reason to keep it around, then why should I keep it? If it’s taking up space on my phone, tablet, or computer, but I never use it, why would I keep it at all?

As technology moves forward at a breakneck pace, with new-model smartphones, tablets, and related peripherals for both arriving at incredible speed and with amazing frequency, we all have to make considered choices about when to acquire technology, when to retire it, and when to replace it. Similarly, as software purveyors all move to make you part of their own walled app and content gardens and mimic or pass each other, they also must fight to maintain relevance in the minds of their users every day.

This is why we see Microsoft building applications for iOS and Android, along with Web-based Office applications – to try and address scenarios that Apple and Google already do. It’s why we saw Apple do a reset on the iWork applications and add Web-based versions (to give PC users something to work with). Finally, it’s why we see Google building Hangout plug-ins for Outlook. Each is trying to inject its tools into a workflow where it is a foreign player.

The problem with this is that it is well-intended, but can only be modestly successful at best. As with the comment about GTD, you have to organically become a part of a user’s workflow. You can’t insert yourself into the space with your own workflow and expect to succeed. A great example of this is Apple’s iWork applications, where users on Macs are trying to collaborate with Microsoft Office users on Windows or Mac. Pages won’t seamlessly interact with Word documents – it always wants to save as a Pages document. The end result is that users are constantly frustrated throwing the documents back and forth, and will usually wind up caving and simply using Office.

Tools, whether hardware or, more likely, software, that want to succeed over the long run must follow these “rules of engagement”:

  1. Solve an actual problem faced by your potential users
  2. Seamlessly inject yourself into the workflow of the user and any collaborators the user must work with to solve that problem
  3. Deliver enough value such that users must engage regularly with your application
  4. Don’t create more friction than you remove for your users.

For me, I find that games are easily dismissed. They never solve a real problem, and are an idle-time consumer. Entertain the user or be dismissed and discarded. I downloaded a couple of photo synchronization apps, in the hopes that one could solve my fundamental annoyances with iPhoto. Both claimed to synchronize all of your photos from your iOS devices to their cloud. The problems with this were twofold.

  1. They didn’t reliably synchronize on their own in the background. Both regularly nagged me to open the app so it could sync.
  2. They synchronized to a cloud service, when I’d already made a significant investment in iPhoto.

In the end, I stopped using both apps. They didn’t help me with the task I wanted to accomplish, and in fact made it more burdensome for the little value they did provide.

My primary action item out of this post, then, is a call to action for product managers (or anybody designing app[lication]s):

Make your app easy to learn, easy to engage with, friction-free, and valuable. You may think that the scenario you’ve decided to solve is invaluable, but it may actually be nerd porn that most users couldn’t care less about. Nerd porn, as I define it, is features that the geeks creating things add to their technology that most normal users never care about (or never miss if they’re omitted).

Solving a real-world problem with a general-use application means doing so in a simple, trivial, non-technical manner, and doing it in a way that makes users fall in love with the tool. It makes them want to engage with it as a tool that feels irreplaceable – that they couldn’t live without. When you’re building a tool (app/hardware/software or other), make your tool truly engaging and frictionless, or prepare to watch users acquire it, attempt to use it, and abandon it – with your business potential going with it.


17
May 14

BMW China CEO on how quality affects sales through word of mouth

“One of the most important ways to sell a car in China is word of mouth. People are listening to their friends, customers want to know what are the experiences of others with a product. So they are listening carefully. If you do not deliver the highest quality all of the time, your customer satisfaction goes down. Dissatisfied customers always talk about that they are not satisfied. So immediately if you don’t deliver, it would affect sales, [and] sales would be going down.” Karsten Engel, CEO of BMW China in a CNBC interview.

Thing is, Engel’s point applies whether you’re talking about BMW automobiles in China or not. His point is spot on regardless of the product or geography. One of the most important ways to sell a product… any product… is word of mouth from satisfied consumers. The way to kill any product is by letting quality or your user experience suffer. Dissatisfied users share their dissatisfaction, and in doing so can kill your product, your sales, your company, and your job.


06
May 14

Live in the moment.

The younger you are, the more you wish you were older, so you could do the things you’re not old enough to do yet.

The older you get, the more you wish you were younger, so you could do the things you’re too old to do now.


27
Apr 14

Job titles are free.

“The Sunscreen song”, which is actually named “Everybody’s Free (to Wear Sunscreen)”, by Baz Luhrmann, has been a (potentially odd) source of wisdom for me since it came out in 1998, just a few years after I graduated from college. I listen to the song periodically, and try to share it with my kids who, at 9 and 13, don’t yet “get” it.

The words of the song aren’t those of the artist, and they aren’t Kurt Vonnegut’s either, regardless of what urban legend says. No, the words come from Mary Schmich’s 1997 Chicago Tribune column, “Advice, like youth, probably just wasted on the young.” Much like Desiderata, the article attempted to gently deliver nuggets of wisdom about life to a younger generation – in this case as if Mary were delivering a graduation speech.

For years, I pondered how best to share my thoughts on surviving in the work world. While college prepares us for the world by chucking text at us page by page, it often can’t show us the deeper machinations of how the work world happens.

I present to you a non-exhaustive collection of some of my thoughts about making the most of your career.

 

Ladies and gentlemen of the incoming workforce of 2014;
Job titles are free.

It’s true. You’ll bump into all sorts of people in your career, with lots of fancy, frilly titles. Chief of this. Executive of that. Founder of something you’ve never heard of.
Remember that titles cost nothing to hand out, and business cards are cheap to print.

Every time you go in for an interview, remember you’re interviewing the job just as much as the job is interviewing you. These are the people you’ll be working with as well as the job you’ll be working at.

Always ask, “Why did the last person in this position leave?”

Don’t settle.

Salary isn’t everything, but salary isn’t unimportant. Pats on the back won’t pay the electric bill. But if you’re only working somewhere because the pay is great, you’re cheating your colleagues, your employer, and yourself.

Typecasting isn’t just for actors. Don’t sit still. Always be working to improve yourself and your skills.

An employer who doesn’t value you improving your knowledge through training and doesn’t help you grow doesn’t value you. Don’t value them.

Age doesn’t equate to wisdom, and neither do words printed on a piece of paper in a frame on the wall. Wisdom almost exclusively arrives through experience, and experience results in both failures and successes. Humility comes from living through life’s failures, life’s successes, and learning over time that both can deliver valuable lessons.

“It seemed like a good idea at the time.” Whenever you run across the bad decisions of others who preceded you, shake your head, laugh, and repeat this to yourself. Make a plan and move forward. Don’t complain.

Consider yourself lucky if you ever work somewhere that an executive steps down because they, themselves (not the board) realize that someone else could do the job better than they could.

Murder your darlings. Suffer for your art. Take criticism as sunlight and water, and let it help you grow.

Simplify.

Surround yourself with people who make you wish you were smarter. Bolt from jobs where you’re always the smartest person in the room.

Value people who say “I don’t know” and ask “what do you need?”; guard yourself from people who keep secrets and never ask for help when things are going wrong.

Hiring the right people is hard. Hiring the wrong people is harder.

Firing someone, or laying someone off, is never fun.
Getting fired, or getting laid off, is never fun.

If your product or service isn’t selling, it’s probably not the marketing. It’s probably the product.

Perhaps you’ll find yourself at a startup. In such a situation, beware of strangers offering you sweat equity. Usually you’ll sweat, and they’ll get the equity.

There is no silver bullet.

You’ll probably find several stops along the way where “outsourcing” will be tossed out as the solution to a problem. With a perfect definition of the problem, a clear budget, and good management, it can be. Lacking any one of those three, you’ve got two problems instead.

Features, quality, or date. Choose any two.

In your career, you will likely have a spectrum of managers. Some will micromanage you, which is usually a result of their fear of failure and your failure to communicate with them enough to make them comfortable. Other managers will be so remote that you may fear failure, and feel like they aren’t communicating with you enough to make you comfortable. Communicate and collaborate, and it’ll all be fine.

When you find problems, point them out. If others around you tell you to keep it quiet, then they’re part of the problem too. If others above you tell you to keep it quiet, then you’ve got a real problem. Matches can become bonfires if you let them burn long enough.

If you make bad decisions, take the blame. If others make bad decisions, don’t feel the need to blame them.

Always be on the lookout for your next move. You may find yourself in a role that fits you from college to retirement. You may move to a new opportunity every few years. The main thing is to be cognizant that nothing is permanent, nothing is forever, and you should know what you would do the next day if your card-key stops working to unlock the door.

Do something you’re passionate about. If you’re not passionate about the thing you’re doing, you’re probably doing the wrong thing.

Meetings. Emails. Letters. Have a point, or there isn’t one.

Brevity.

Throughout your career, you will run into people whose primary skill is peacock language. They’ll tell you about themselves, strut around trying to look important, and talk in perfectly cromulent phrases. Smile to yourself, and remember that job titles are free.

 

An amendment: Two more sentiments I regret not adding to the above:

  • The unspoken word never needs to be taken back.
  • Burned bridges are hard to walk across when you need them.

I’m kind of surprised I forgot to put the first one in. It’s one of the earliest lessons I learned about work – through my father’s experiences, specifically around things that were said when leaving a job. Hint: If you think you might regret saying something to someone later, don’t. Just a good rule of thumb for life.