19
Oct 14

On the Design of Toasterfridges

On my flight today, I rewatched the documentary Objectified. I’ve seen it a few times before, but it has been several years. While I don’t jibe with 100% of the sentiment of the documentary, it made me think a bit about design, as I was headed to Dallas. In particular, it made me consider Apple, Microsoft, and Google, and their dramatically different approaches to design – which are in fact a reflection of the end goal of each of the companies.

One of my favorite moments in the piece is Jony Ive’s section, early on. I’ve mentioned this one before. If you haven’t read that earlier blog post, you might want to before you read on.

Let’s pause for a moment and consider Apple, Microsoft, and Google. What does each make?

  • Apple – Makes hardware.
  • Microsoft – Makes software.
  • Google – Makes information from data.

Where does each one make the brunt of its money?

  • Apple – Consumer hardware and content.
  • Microsoft – Enterprise software licensing.
  • Google – Advertising.

What does each one want more of from the user?

  • Apple – Buy more of their devices and more content.
  • Microsoft – Use their software, everywhere.
  • Google – Share more of your information.

You can also argue that Apple makes software, Microsoft makes hardware, and Google makes both. Some of you will surely do so. But at the end of the day, software is a hobby for Apple to sell more hardware and content (witness the price of their OS and productivity apps), hardware is a hobby for Microsoft to try and sell more software and content, and hardware and software are both hobbies for Google to try and get you more firmly entrenched into their data ecosystem.

Some people were apparently quite sad that Apple didn’t introduce a ~12” so-called “iPad Pro” at their recent October event. People expecting such a device were hoping for a removable keyboard, perhaps like Microsoft’s Surface (ARM) and Surface Pro (Intel) devices. Hopes were there that such a device would be the best of both worlds… a large professional-grade tablet (because those are selling well), and a laptop of sorts, and it would feature side-by-side application windows, as have been available on Windows nearly forever, and on many Android devices for some time. In many senses, it would be Apple’s own version of the Surface Pro 3 with Windows 8.1 on it. Reporters have insisted, and keep insisting, that Apple’s future will be based upon making a Surface clone of sorts. I’m not so sure.

I have a request for you. Either to yourself, in the comments below, or on Twitter, consider the following. When was the last time (since Steve Jobs’ return) that you saw Apple hardware lean away, in order to let the software compromise it? Certainly, the hardware may defer to the software, as Ive says earlier about the screen and touch on the iPhone; but the role of the hardware is omnipresent – even if you don’t notice it.

I’ve often wondered what Microsoft’s tablets would look like today if Microsoft didn’t own Office as well as Windows; if they weren’t so interested in preserving the role of both at the same time. Could the device have been a pure tablet that deferred to touch, and didn’t try so hard to be a laptop? Could it have done better in such a scenario?

Much has been said about the “lapability” of the Surface family of devices. I really couldn’t disagree more.

More than one person I know has used either a cardboard platform or other… <ahem> surface as a flattop for their Surface to rest upon while sitting on their lap. I’ve seen innumerable reporters contort themselves while sitting in chairs at conferences to balance the device between the ultra-thin keyboards and the kickstand. A colleague recently stopped using his Surface Pro 2 because he was tired of the posture required to use the device while it was on his lap. It may be an acceptable tablet, especially in Surface Pro 3 guise – but I don’t agree that it’s a very good “laptop”.

The younger people that follow me on Twitter or read this blog may not get all of these examples, but hopefully will get several. Consider all of the following devices (that actually existed).

  • TV/VCR combination
  • TV/DVD combination
  • Stand mixers with pasta-making attachments
  • Smart televisions
  • Swiss Army Knife

Each of these devices has something in common. Absent a better name to apply to it, I will call that property toasterfridgality. Sure, “toasterfridge” was a slam that Tim Cook came up with to describe the kind of tablet/laptop convergence that Microsoft’s Surface devices embody. But regardless of the semi-derogatory term, the point is, I believe, valid.

Each of the devices above compromises the integrity with which it performs one or more roles in order to try and perform two or more roles. The same is true of Microsoft’s Surface and Surface Pro line.

For Microsoft, it was imperative that the Surface and Surface Pro devices, while tablets first and foremost (witness the fact that they are sold sans keyboard), be able to run Office and the rest of Win32 that couldn’t be ported in time for Windows 8 – even if it meant a sacrifice of software usability in order to do so. Microsoft’s fixation on selling the devices not as tablets but as laptop replacements (even though they come with no keyboard) leads to a real incongruity. There’s the device Microsoft made, the device consumers want, and the way Microsoft is trying to sell it. Even taking price out of the equation, is there any wonder that Surface sales struggled until Surface Pro 3?

Lenovo more harmoniously balances their toasterfridgality. Their design always seems to focus first on the device being a laptop – then how to incorporate touch. (And on some models, “tabletude”.) Take, for example, the Lenovo ThinkPad Yoga or the Lenovo ThinkPad Helix. These devices are laptops, with a comprehensive hinge that enables them to have some role as a tablet while not completely sacrificing… well… lapability. In short, the focus is on the hinge, not on the keyboard.

To view the other end of the toasterfridge spectrum, check out the Asus Padfone X, a device that tries to be your tablet by glomming on a smartphone. I’m a pretty strong believer that the idea of “cartridge” style computing isn’t the future, as I’ve also said before. Building devices that integrate with each other to transmogrify into a new role sounds good. But it’s horrible. It results in a device that performs two or more roles, but isn’t particularly good at any of them. It’s a DVD/VCR combo all over again. The phone breaks, and now you don’t have either device anymore. If there were such a model that converted your phone into a desktop, one can only imagine how awesome it would be reporting to work on Monday, having lost your “work brain” by dropping your phone into the river.

I invite you to reconsider the task I asked of you earlier, to tell me where Apple’s hardware defers to the software. Admittedly, one can make the case that Apple is constantly deferring the software to the hardware; just try and find an actual fan of iTunes or the Podcasts app, or witness Apple’s recent software quality issues (a problem not unique to Apple). But software itself isn’t their highest priority; it’s the marriage of that software and the hardware (sometimes compromising them both a bit). Look at the iPhone 6 Plus and the iPad Air 2. Look how Apple moved – or completely removed – switchgear on them to align with both use cases (big phones are held differently) and evolving priorities (switches break, and the role of the side-switch in iOS devices is now made completely redundant by software).

Sidebar: Many people, including me, have complained that iOS devices start at 16GB of storage now. This is ridiculous. With the bloat of iOS, requirements for upgrading, and any sort of content acquisition by their users, these devices will be junk before the end of CY2016. Apple, of course, has made cohesive design, not upgradeability, paramount in their iOS devices. This has earned them plenty of low scores for repairability and consumer serviceability/upgradeability in reviews. I think it is irresponsible of Apple, given that they have no upgradeability story, to sell these devices with 16GB. The minimum on any new iOS device should be 32GB. Upgradeability and the ability to add peripherals are often touted by those dissing Apple as limitations of the platform. It’s true. They are limitations. But these limitations, along with a tight, cohesive hardware design, are what let these devices have value 4 years after you buy them. I recently got $100 credit from AT&T for my daughter’s iPhone 4 (from June 2010). A device that I had used for two years, she had used for two more, and it still worked. It was just gasping for air under the weight of iOS 6, let alone iOS 7 (and the iPhone 4 can’t run 8). There is a reason why these devices aren’t upgradeable. Adding upgradeability means building the device with serviceability in mind, and compromising the integrity of the whole device just to make it expandable. I have no issue with Apple making devices user non-serviceable for their lifespan, as I believe it tends to result in devices that actually last longer rather than falling apart when screws unwind and battery or memory doors stop staying seated.

I’ve had several friends mention a few recent tablets and the fact that USB ports on the devices are very prone to failure. This isn’t new to me. In 2002, when I was working to make Windows boot from USB, I had a Motion Computing M1200 tablet. Due to constant insertion and removal of UFDs for testing and creation, both of the USB ports on the tablet had come unseated off of the motherboard and were useless. Motion wanted over $700 to repair a year-old (admittedly somewhat abused) tablet. With <ahem> persuasion from an executive at Microsoft, Motion agreed to repair it for me for free. But this forever highlighted to me that more ports aren’t necessarily always something to be looked at in a positive light. The more things you add, the more complex the design becomes, and the more likely it becomes that one of these overwrought features added to please a product manager who has a list of competitive boxes to check will lead to a disappointed customer, product support issues and costs, or both. USB was never originally designed to have plugs inserted and removed willy-nilly (as Lightning and the now-dead Apple 30-pin connector were), and I don’t think most boards are manufactured to have devices inserted and removed as often (and perhaps as haphazardly) as they are on modern PC tablets.

Every day, we use things made of components. These aren’t experiences, and they aren’t really even designed (at least not with any kind of cohesive aesthetic). Consider the last time you used a Windows-based ATM or point-of-sale/point-of-service device. It may not seem fair that I’m glomming Windows into this, but Windows XP Embedded helped democratize embedded devices, and allowed cheap devices to handle cash and digital currency, rent DVDs on demand, and power a heretofore unimaginable self-service soda fountain.

But there’s a distinct feel of toasterfridge every time I use one of these devices. You feel the sharp edges where the subcomponents it is made of come together (but don’t align). Where the designer compromised the design of the whole in order to accommodate the needs of the subcomponents.

The least favorite device I use with any regularity is the Windows-based ATM at my credit union. It has all of the following components:

  • A display screen (which at least supports touch)
  • An input slot for your ATM/credit/debit card
  • A numeric keypad
  • An input slot for one or more checks or cash
  • An output slot for cash
  • An output slot for receipts

As you use this device, there are a handful of pain points that will start to drive you crazy if you actually consider the way you use it. When I say left or right, I mean in relation to the display.

  • The input slot for your card is on the right side.
  • The input slot for checks is on the left side.
  • The receipt printer is on the right side.
  • The output slots for cash are both below.

Arguably, there is no need for a keypad given that there is a touchscreen; but users with low vision would probably disagree with that. Besides that, my credit union has not completely replaced the role of the keypad with the touchscreen. Entering PINs, for example, still requires the keypad.

So to deposit a check, you first put in your card (right), enter your PIN (below), specify your transaction type (on-screen), and deposit a stack of checks (no envelope, which is nice) on the left. Then wait, get your receipt (top right), and get your card (next down on the right). My favorite part is that the ATM starts beeping at you to retrieve your card before it has released it.

This may all seem like a pedantic rant. But my primary point is that every day, we use devices that prioritize the business needs, requirements, or limitations of their creator or assembler, rather than their end user.

Some say that good design begins with the idea of creating experiences rather than products. I am inclined to agree with this ideology, one that I’ve also evangelized before. But to me, the most important part of designing a product is to pick the thing that your product will do best, and do that thing. If it can easily adapt to take on another role without compromising the first role? Then do that too. If adding the new features means compromising the product? Then it is probably time to make an additional product. I must admit – people who clamor for an Apple iPad Pro that would be a bit of (big) tablet and (small) notebook confuse me a bit. I have a 2013 iPad Retina Mini and a 2013 Retina MacBook Pro. Each device serves a specific purpose and does it exceptionally well.

I write for a living. I can never envision doing that just on an iPad, let alone my Mini (or even without the much larger Acer display that my rMBP connects to). In the same vein, I can’t really visualize myself lying down, turning on some music, and reading an eBook on my Mac. Yes. I had to pay twice to get these two different experiences. But if the alternative is getting a device that compromises both experiences just to save a bit of money? I don’t get that.


07
Sep 14

On the death of files and folders

As I write this, I’m on a plane at 30,000+ feet, headed to Chicago. Seatmates include a couple from Toronto headed home from a cruise to Alaska. The husband and I talk technology a bit, and he mentions that his wife particularly enjoys sending letters as they travel. He and I both smile as we consider the novelty in 2014 of taking a piece of paper, writing thoughts to friends and family, and putting it in an envelope to travel around the world to be warmly received by the recipient.

Both Windows and Mac computers today are centered around the classic files and folders nomenclature we’ve all worked with for decades. From the beginning of the computer, mankind has struggled to insert metaphors from the physical world into our digital environments. The desktop, the briefcase, files that look like paper, folders that look like hanging file folders. Even today as the use of removable media decreases, we hang on to the floppy diskette icon, a symbol that means nothing to pre-teens of today, to command an application to “write” data to physical storage.

Why?

It’s time to stop using metaphors from the physical world – or at least to stop sending “files” to collaborators in order to have them receive work we deign to share with them.

Writing this post involves me eating a bit of crow – but only a bit. Prior to me leaving Microsoft in 2004, I had a rather… heated… conversation with a member of the WinFS team about a topic that is remarkably close to this. WinFS was an attempt to take files as we knew them and treat them as “objects”. In short, WinFS would take the legacy .ppt files as you knew them, and deserialize (decompose) them into a giant central data store within Windows based upon SQL Server, allowing you to search, organize, and move them in an easier manner. But a fundamental question I could never get answered by that team (the core of my heated conversation) was how that data would be shared with people external to your computer. WinFS would always have to serialize the data back out into a .ppt file (or some other “container”) in order to be sent to someone else. The WinFS team sought to convert everything on your system into a URL, as well – so you would have navigated the local file system almost as if your local machine was a Web server rather than using the local file and folder hierarchy that we had all become used to since the earliest versions of Windows or the Mac.
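
To make the round-trip problem concrete, here is a deliberately tiny sketch of the idea. It is not the actual WinFS schema or API – sqlite3 stands in for the SQL Server-based store, and a JSON blob stands in for the .ppt container:

```python
# Toy illustration of the round-trip described above: a document is
# "deserialized" into individually queryable properties, but it still has to
# be re-serialized into a single container before it can leave the machine.
# sqlite3 and JSON are stand-ins; this is not the actual WinFS design.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (path TEXT, key TEXT, value TEXT)")

def decompose(path, properties):
    """Break a document down into queryable (path, key, value) rows."""
    conn.executemany(
        "INSERT INTO items VALUES (?, ?, ?)",
        [(path, key, str(value)) for key, value in properties.items()],
    )

def serialize(path):
    """Re-assemble the stored properties into one shareable container."""
    rows = conn.execute(
        "SELECT key, value FROM items WHERE path = ?", (path,)
    ).fetchall()
    return json.dumps(dict(rows)).encode("utf-8")

decompose("deck.ppt", {"title": "Q3 Review", "author": "Wes", "slides": 42})

# Once decomposed, searching and organizing is easy...
print(conn.execute(
    "SELECT path FROM items WHERE key = 'author' AND value = 'Wes'"
).fetchall())

# ...but sharing with someone outside your machine still means serializing
# everything back out into a single "file" (here, a JSON blob).
print(serialize("deck.ppt"))
```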

So as I look back on WinFS, some of the ideas were right, but in classic Microsoft form, at best it may have been a bit of premature innovation, and at worst it may have been nerd porn relatively disconnected from actual user scenarios and use cases.

From the dawn of the iPhone, power users have complained that iOS lacked something as simple as a file explorer/file picker. This wasn’t an error on Apple’s part; a significant percentage of Apple’s ease of use (largely aped by Android, and by Windows in WinRT and Windows Phone applications) comes from abstracting away the legacy file and folder bird’s nest of Windows, the Mac, etc.

As we enter the fall cavalcade of consumer devices ahead of the holiday, one truth appears plainly clear: standalone “cloud storage” as we know it is largely headed for the economic off-ramp. The three main platform players have now put cloud storage in as a platform pillar, not an opportunity to be filled by partners. Apple (iCloud Drive), Google (Google Drive), and Microsoft (OneDrive and OneDrive for Business – their consumer and business offerings, respectively) have all made cloud storage a firm part of their respective platforms. Lock-in now isn’t just a part of the device or the OS, it’s about where your files live, as that can help create a platform network effect (AT&T Friends and Family, but in the cloud). I know that for me, my entire family is iOS-based. I can send a link to a file in iCloud Drive to any member of my family and know they can see the photo I took or the words I wrote.

But that’s just it. Regardless of how my file is stored in Apple’s, Google’s, or Microsoft’s hosted storage, I share it through a link. Every “document” envelope as we knew it in the past is now a URL, with applications on each device capable of opening their file content.

Moreover, today’s worker generally wants their work:

  1. Saved automatically
  2. Backed up to the cloud automatically (within reason, and protected accordingly)
  3. Versioned and revertible
  4. Accessible anywhere
  5. Coauthoring capable (work with one or more colleagues concurrently without needing to save and exchange a “file”)

As these sorts of features become ubiquitous across productivity tools, the line between a “file” and a “URL” becomes increasingly blurred, and our computers start acting more and more like the WinFS team wanted them to over a decade ago.

If you look at the typical user’s desktop, it’s a dumping ground of documents. It’s a mess. So are their favorites/bookmarks, music, videos, and any other “file type” they have.

On the Mac, iTunes (music metadata), iPhoto (face/EXIF, and date info), and now the Finder itself (properties and now tags) are a complete mess of metadata. A colleague in the Longhorn Client Product Management Group was responsible for owning the photo experience for WinFS. Even then I think I crushed his spirit by pointing out what a pain in the ass it was going to be to enter in all of the metadata for photos as users returned from trips, in order to make the photos be anything more than a digital shoebox that sits under the bed.

I’m going to tell all the nerds in the world a secret. Ready? Users don’t screw around entering metadata. So anything you build that is metadata-centric that doesn’t populate the metadata for the user is… largely unused.

I mention this because, as we move towards vendor-centered repositories of our documents, it becomes an opportunity for vendors to do much of what WinFS wanted to do, and help users catalog and organize their data; but it has to be done almost automatically for them. I’m somewhat excited about Microsoft’s Delve (née Oslo) primarily because if it is done right (and if/when Google offers a similar feature), users will be able to discover content across the enterprise that can help them with their job. The written word will in so many ways become a properly archived, searchable, and collaboration-ready tool for businesses (and users themselves, ideally).

Part of the direction I think we need to see is tools that become better about organizing and cataloging our information as we create it, and keeping track of the lineage of written word and digital information. Create a file using a given template? That should be easily visible. Take a trip with family members? Photos should be easily stitched together into a searchable family album.
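
As a toy sketch of what that kind of automatic organization could look like (an illustration only – the folder path is hypothetical, and it assumes the Pillow imaging library), here is a date-keyed “album” index built from nothing but metadata the camera already recorded:

```python
# Sketch: auto-catalog a folder of photos into date-keyed "albums" with zero
# user-entered metadata. Assumes the Pillow library (pip install Pillow).
import os
from collections import defaultdict
from datetime import date
from PIL import Image

def capture_date(path):
    """Return the Exif capture date (YYYY:MM:DD), falling back to file mtime."""
    try:
        stamp = Image.open(path).getexif().get(306)  # Exif tag 306 = DateTime
        if stamp:
            return str(stamp).split(" ")[0]
    except OSError:
        pass  # not an image, or unreadable
    return date.fromtimestamp(os.path.getmtime(path)).strftime("%Y:%m:%d")

def build_albums(folder):
    albums = defaultdict(list)
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith((".jpg", ".jpeg", ".png")):
            albums[capture_date(os.path.join(folder, name))].append(name)
    return dict(albums)

# e.g. {'2014:08:30': ['IMG_0101.jpg', 'IMG_0102.jpg'], '2014:08:31': [...]}
print(build_albums(os.path.expanduser("~/Pictures/cruise")))
```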

Power users, of course, want to feel a sense of control over the files and folders on their computing devices (some of them even enjoy filling in metadata fields). These are the same users who complained loudly that iOS didn’t have a Finder or traditional file picker, and who persuaded Microsoft to add a file explorer of sorts to Windows Phone, as Windows 8 and Microsoft’s OneDrive and OneDrive for Business services began blurring out the legacy Windows File Explorer. There’s a good likelihood that next year’s release of Windows 9 could see the legacy Win32 desktop disappear on touch-centric Windows devices (much like Windows Phone 8.x, where Win32 still technically exists, but is kept out of view). I firmly expect this move will (to say it gently) irk Windows power users. These are the same type of users who freaked out when Apple removed the save functionality from Pages/Numbers/Keynote. Yet that approach is now commonplace for the productivity suites of all of the “big 3” productivity players (Microsoft, Google, and Apple), where real-time coauthoring requires an abstraction of the traditional “Save” verb we all became accustomed to in the 1980s. For Windows to succeed as a touch environment as approachable to novices as iOS, it means jettisoning a visible Win32 desktop and the File Explorer. With this, OneDrive and the simplified file pickers in Windows become the centerpiece of how users will interact with local files.

I’m not saying that files and folders will disappear tomorrow, or that they’ll really ever disappear entirely at all. But increasingly, especially in collaboration-based use cases, the file and folder metaphors will largely fall by the wayside, replaced by Web-based experiences and the use of URLs, with dedicated platform-specific local, mobile, or online apps interacting with them.


17
Jun 14

Is the Web really free?

When was the last time you paid to read a piece of content on the Web?

Most likely, it’s been a while. The users of the Web have become used to the idea that Web content is (more or less) free. And outside of sites that put paywalls up, that indeed appears to be the case.

But is the Web really free?

I’ve had lots of conversations lately about personal privacy, cookies, tracking, and “getting scroogled”. Some with technical colleagues, some with non-technical friends. The common thread is that most people (that world full of normal people, not the world that many of my technical readers likely live in) have no idea what sort of information they give up when they use the Web. They have no idea what kind of personal information they’re sharing when they click <accept> on that new mobile app that wants to upload their (Exif geo-encoded) photos, that wants to track their position, or wants to harmlessly upload their phone’s address book to help “make their app experience better”.
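
To make that concrete, here’s a small sketch (Python with the Pillow library, a hypothetical file name, and a reasonably recent Pillow assumed) of just how little it takes to pull the capture time and GPS coordinates out of a photo’s Exif data – which is exactly what an app receives when you let it upload your camera roll:

```python
# Sketch: what a single uploaded photo can reveal through its Exif metadata.
# Assumes the Pillow library (pip install Pillow); tag numbers are from the
# Exif specification (0x8825 = GPS IFD, 306 = DateTime).
from PIL import Image

def to_degrees(dms, ref):
    """Convert Exif (degrees, minutes, seconds) rationals to a signed float."""
    degrees = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -degrees if ref in ("S", "W") else degrees

def photo_leaks(path):
    exif = Image.open(path).getexif()
    leaks = {"taken": exif.get(306)}          # when the photo was shot
    gps = exif.get_ifd(0x8825)                # GPS sub-IFD, if present
    if gps:
        leaks["latitude"] = to_degrees(gps[2], gps[1])   # GPSLatitude / Ref
        leaks["longitude"] = to_degrees(gps[4], gps[3])  # GPSLongitude / Ref
    return leaks

# e.g. {'taken': '2014:06:17 18:42:10', 'latitude': 30.26, 'longitude': -97.74}
print(photo_leaks("IMG_0001.jpg"))
```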

My day job involves understanding technology at a pretty deep level and being pretty familiar with licensing terms, and previous lives have left me deeply immersed in the worlds of both privacy and security. As a result, it terrifies me to see the crap that typical users will click past in a licensing agreement to get to the dancing pigs. But Pavlov proved this all long ago, and the dancing pigs problem has highlighted this for years, to no avail. Click-through software licenses exist primarily as a legal CYA, and terms of service agreements full of legalese gibberish could just as well say that people have to eat a sock if they agree to the terms – they’ll still agree to them (because they won’t read them).

On Twitter, the account for Reputation.com posted the following:

A few days later, they posted this:

I responded to the first post with the statement that accurate search results have intrinsic value to users, but most users can’t actually quantify a loss of privacy. What did I mean by that? I mean that most normal people will tell you they value their privacy if you ask them, but if you take away the free niblets all over the Web that they get for giving up their privacy little by little, they’ll quickly backtrack on how important privacy really is to them.

Imagine the response if you told a friend, family member, or colleague that you had a report/blog/study you were working on, and asked them, “Hey, I’m going to shoulder-surf you for a day and write down which Websites you visit, how often and how long you visit them, and who you send email to, okay?” In most cases, they’d tell you no, or tell you that you’re being weird.

Then ask them how much you’d need to pay them in order for them to let you shoulder-surf. Now they’ll be creeped out.

Finally, tell them you installed software on their computer last week, so you’ve already got the data you need, and ask if it’s okay to use that for your report. Now they’re going to probably completely overreact, and maybe even get angry (so tell them you were kidding).

More than two years ago, I discussed why do-not-track would stall out and die, and in fact, it has. This was completely predictable, and I would have been completely shocked if this hadn’t happened. It’s because there is one thing that makes the Web work at all. It’s the cycle of micropayments of personally identifiable information (PII) that, in appropriate quantities, allows advertisers (and advertising companies) to tune their advertising. In short, everything you do is up for grabs on the Web to help profile you (and ideally, sell you something). Some might argue that you searching for “schnauzer sweaters” isn’t PII. The NSA would beg to differ. Metadata is just as valuable as the data itself, if not more so, for uniquely identifying an individual.

When Facebook tweaked privacy settings to begin “liberating” personal information, it was all about tuning advertising. When we search using Google (or Bing, or Yahoo), we’re explicitly profiling ourselves for advertisers. The free Web as we know it is sort of a mirage. The content appears free, but isn’t. Back in the late 1990s, the idea of micropayments was thrown about, and has in my opinion come and gone. But it is far from dead. It just never arrived in the form that people expected. Early on, the idea was that individuals might pay a dollar here for a news story, a few dollars there for a video, a penny to send an email, etc. Personally, I never saw that idea actually taking off, primarily because the epayment infrastructure wasn’t really there, and partially because, well, consumers are cheap and won’t pay for much of anything.

In 1997, Nathan Myhrvold, Microsoft’s CTO, had a different take. Nathan said, “Nobody gets a vig on content on the Internet today… The question is whether this will remain true.”

Indeed, putting aside his patent endeavors, Nathan’s reading of the tea leaves at that time was very telling. My contention is that while users indeed won’t pay cash (payments or micropayments) for the activities they perform on the Web, they’re more than willing to pay for their use of the Web with picopayments of personal information.

If you were to ask a non-technical user how much they would expect to be paid for an advertiser to know their home address, how many children they have, or what the ages of their children are, or that they suffer from psoriasis, most people would be pretty uncomfortable (even discounting the psoriasis). People like to assume, incorrectly, that their privacy is theirs, and the little lock icon on their browser protects all of the niblets of data that matter. While it conceptually does protect most of the really high financial value parts of an individual’s life (your bank account, your credit card numbers, and social security numbers), it doesn’t stop the numerous entities across the Web from profiling you. Countless crumbs you leave around the Web do allow you to be identified, and though they may not expose your personal, financial privacy, do expose your personal privacy for advertisers to peruse. It’s easy enough for Facebook (through the ubiquitous Like button) or Google (through search, Analytics, and AdSense) to know your gender, age, marital/parental status, any medical or social issues you’re having, what political party you favor, and what you were looking at on that one site that you almost placed an order on, but wound up abandoning.
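
For the mechanically curious, here’s a rough sketch of what that profiling looks like on the wire. The widget host and cookie value are made up, but the mechanism isn’t: every page that embeds a third-party button, beacon, or ad causes your browser to tell that third party which page you’re reading (via the Referer header) and who you are (via its cookie):

```python
# Sketch of the request a browser makes when a page embeds a third-party
# widget. The hostnames and cookie value are hypothetical; the headers are not.
import urllib.request

req = urllib.request.Request(
    "https://widgets.example-adnetwork.com/button.js",
    headers={
        # Names the exact page you are reading right now.
        "Referer": "https://www.example-petstore.com/schnauzer-sweaters",
        # The same ID is replayed from every site embedding this widget,
        # which is what stitches those page views into a single profile.
        "Cookie": "uid=8c1f2a90-...",
        "User-Agent": "Mozilla/5.0 (...)",
    },
)
# One request like this per page view, across thousands of sites, is the profile.
print(dict(req.header_items()))
```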

If you could truly visualize all of the personal attributes you’ve silently shared with the various ad players through your use of the Web, you’d probably be quite uncomfortable with the resulting diagram. Luckily for advertisers, you can’t see it, and you can’t really undo it even if you could understand it all. Sure, there are ways to obfuscate it, or you could stay off the Web entirely. For most people, that’s not a tradeoff they’re willing to make.

The problem here is that human beings, as a general rule, stink at assessing intangible risk, and even when it is demonstrated to us in no uncertain terms, we do little to rectify it. Free search engines that value your privacy exist. Why don’t people switch? Conditioning to Google and the expected search result quality, and sheer laziness (most likely some combination of the two). Why didn’t people flock from Facebook to Diaspora or other alternatives when Facebook screwed with privacy options? Laziness, convenience, and most likely, the presence of a perceived valuable network of connections.

It’s one thing to look over a cliff and sense danger. But as the dancing pigs phenomenon (or the behavior of most adolescents/young adults, and some adults on Facebook) demonstrates, a little lost privacy here and a little lost privacy there is like the metaphoric frog in a pot. Over time it may not feel like it’s gotten warmer to you. But little by little, we’ve all sold our privacy away to keep the Web “free”.


20
May 14

Engage or die

I’m pretty lucky. For now, this is the view from my office window. You see all those boats? I get to look out at the water, and those boats, all the time (sun, rain, or snow). But those boats… honestly, I see most of those boats probably hundreds of days per year more than their owners do. I’d bet there’s a large number of them that haven’t moved in years.

The old adage goes “The two happiest days in a boat owner’s life are the day he buys it, and the day he sells it.”

All too often, the tools that we acquire in order to solve our problems or “make our lives better” actually add new problems or new burdens to our lives instead. At least that’s what I have found. You buy the best hand mixer you can find, but the gearing breaks after a year and the beaters won’t stay in, so you have to buy a new one. You buy a new task-tracking application, but the act of changing your work process to accommodate it actually results in lower efficiency than simply using lined paper with a daily list of tasks. As a friend says about the whole Getting Things Done (GTD) methodology, “All you have to do is change the way you work, and it will completely change the way you work.”

Perhaps that’s an unfair criticism of GTD, but the point stands for many tools or technologies. If the investment required to take advantage of, and maintain, a given tool exceeds the value returned by it (the efficiency it provides), it’s not really worth acquiring or using.

Technology promises you the world, but all too often the best part of using it winds up being the moment you cut yourself taking it out of the hermetically sealed package it was shipped in from China. Marketing will never tell you about the sharp edges, only the parts of the product that work within the narrow scenarios product management understood and defined.

Whether it’s software or hardware, I’ve spent a lot of time over the last year or so working to eliminate tools that fail to make me more productive or reduce day-to-day friction in my work or personal life. Basically looking around, pondering, “how often do I use this tool?”, and discarding it if the answer isn’t “often” or “all the time.” Tangentially, if a tool is the best option for what it does, I’ll keep it around even if I rarely use it. PaperKarma is a good example of this, because there’s honestly no other tool that does what it does.

However, a lot of software and hardware that I might’ve found indispensable at one point is open for consideration, and I’m tired of being a technology pack-rat. If a tool isn’t something that I really want to (or have to) use all the time, if there’s no reason to keep it around, then why should I keep it? If it’s taking up space on my phone, tablet, or computer, but I never use it, why would I keep it at all?

As technology moves forward at a breakneck pace, with new model smartphones, tablets, and related peripherals for both arriving at incredible speed and with amazing frequency, we all have to make considered choices about when to acquire technology, when to retire it, and when to replace it. Similarly, as software purveyors all move to make you part of their own walled app and content gardens and mimic or pass each other, they also must fight to maintain relevance in the mind of their users every day.

This is why we see Microsoft building applications for iOS and Android, along with Web-based Office applications – to try and address scenarios that Apple and Google already do. It’s why we saw Apple do a reset on the iWork applications and add Web-based versions (to give PC users something to work with). Finally, it’s why we see Google building Hangout plug-ins for Outlook. It’s all about trying to inject your tools into a workflow where you are a foreign player.

The problem with this is that it is well-intended, but can only be modestly successful at best. As with the comment about GTD, you have to organically become a part of a user’s workflow. You can’t insert yourself into the space with your own workflow and expect to succeed. A great example of this is Apple’s iWork applications, where users on Macs try to collaborate with Microsoft Office users on Windows or the Mac. Pages won’t seamlessly interact with Word documents – it always wants to save as a Pages document. The end result is that users are constantly frustrated throwing the documents back and forth, and will usually wind up caving and simply using Office.

Tools, whether hardware, or more likely software, that want to succeed over the long run must follow these “rules of engagement”:

  1. Solve an actual problem faced by your potential users
  2. Seamlessly inject yourself into the workflow of the user and any collaborators the user must work with to solve that problem
  3. Deliver enough value such that users must engage regularly with your application
  4. Don’t create more friction than you remove for your users.

For me, I find that games are easily dismissed. They never solve a real problem, and are idle-time consumers. Entertain the user or be dismissed and discarded. I downloaded a couple of photo synchronization apps, in the hopes that one could solve my fundamental annoyances with iPhoto. Both claimed to synchronize all of your photos from your iOS devices to their cloud. The problems with this were twofold.

  1. They didn’t reliably synchronize on their own in the background. Both regularly nagged me to open the app so it could sync.
  2. They synchronized to a cloud service, when I’d already made a significant investment in iPhoto.

In the end, I stopped using both apps. They didn’t help me with the task I wanted to accomplish, and in fact made it more burdensome for the little value they did provide.

My primary action item out of this post, then, is a call to action for product managers (or anybody designing app[lication]s):

Make your app easy to learn, easy to engage with, friction-free, and valuable. You may think that the scenario you’ve decided to solve is invaluable, but it may actually be nerd porn that most users couldn’t care less about. Nerd porn, as I define it, is features that geeks creating things add to their technology that most normal users never care about (or would never miss if they were omitted).

Solving a real-world problem with a general-use application means doing so in a simple, trivial, non-technical manner, and doing it in a way that makes users fall in love with the tool. It makes them want to engage with it as a tool that feels irreplaceable – that they couldn’t live without. When you’re building a tool (app/hardware/software or other), make your tool truly engaging and frictionless, or prepare to watch users acquire it, attempt to use it, and abandon it – with your business potential going along with it.


05
Mar 14

Considering CarPlay

Late last week, some buzz began building that Apple, alongside automaker partners, would formally reveal the first results of their “iOS in the Car” initiative. Much as rumors had suspected, the end result, now dubbed CarPlay, was demonstrated (or at least shown in a promo video) by initial partners Ferrari, Mercedes-Benz, and Volvo. If you only have time to watch one of them, watch the video of the Ferrari. Though it is an ad-hoc demo, the Ferrari video isn’t painfully overproduced as the Mercedes-Benz video unfortunately is, and isn’t just a concept video as the Volvo’s is.

The three that were shown are interesting for a variety of reasons (though it is also notable that all three are premium brands). The Ferrari and Volvo videos demonstrate touch-based navigation, and the Mercedes-Benz video uses what (I believe) is their knob-based COMAND system. While CarPlay is navigable using all of them, using the COMAND knob to control the iOS-based experience feels somewhat contrived or forced – like using an old iPod click wheel to navigate a modern iPhone. It just looks painful (to me that’s an M-B issue, not an Apple issue).

Outside of the initial three auto manufacturers, Apple has said that Honda, Hyundai, and Jaguar will also have models in 2014 with CarPlay functionality.

So what exactly is CarPlay?

As I initially looked at CarPlay, it looked like a distinct animal in the Apple ecosystem. But the more I thought about it, the more familiar it looked. Apple pushing their UX out into a new realm, on a device that they don’t own the final interface of… It’s sort of Apple TV, for the car. In fact, pondering what the infrastructure might look like, I kept getting flashbacks to Windows Media Center Extenders, which were remote thin clients that rendered a Windows Media Center UI over a wired or wireless connection.

Apple’s CarPlay involves a cable-based connection (this seems to be a requirement at this point; I’ll talk about it a bit later) which is used to remotely display several key functions of your compatible iPhone (5s, 5c, 5) on the head unit of your car. That is, the display is that of your auto head unit – but for CarPlay features, your iPhone looks to be what’s actually running the app, and the head unit is simply a dumb terminal rendering it. All data is transmitted through your phone, not some in-car LTE/4G connection, and all of the apps reside, and are updated, on your phone, not on the head unit. CarPlay seems to be navigable regardless of the type of touch support your screen has (if it has touch), but also works with buttons, and again, works with knob-based navigation like COMAND.

Apple seems to be requiring two key triggers for CarPlay – 1) a voice command button on the steering wheel, and 2) an entry point into CarPlay itself, generally a button on the head unit (quite easy to see if you watch the Ferrari video, labeled APPLE CARPLAY). Of course these touches are in addition to integrating in the required Apple Lightning cable to tether it all together.

In short, Apple hasn’t done a complete end run around the OEM – the automaker can still have their own UI for their own in-car functions, and then Apple’s distinct CarPlay UI (very familiar to anyone who has used iOS 7) is there when you’re “in CarPlay”, if you will. It seems to me that CarPlay can best be thought of as a remote display for your iPhone, designed to fit the display of your car’s entertainment system. Some have said that “CarPlay systems” are running QNX – perhaps some are. The head unit manufacturer doesn’t really appear to be important here. The main point of all of this is it appears the OEM doesn’t have to do massive work to make it functional; it really looks to primarily be a matter of integrating the remote display functionality and the I/O to the phone. In fact, the UI of the Ferrari as demonstrated doesn’t look to be that different from head units in previous versions of the FF (from what I can see). Also, if you watch the Apple employee towards the end, you can see her press the FF “app”, exiting out to the FF’s own user interface, which is distinctly different from the CarPlay UI. The CarPlay UI, in contrast, is remarkably consistent across the three examples shown so far. While the automakers all have their own unique touches, and controls for the rest of the vehicle, the distinct things that the phone is, frankly, better at are done through the CarPlay UI.

The built-in iPhone apps supported with CarPlay at this point appear to be:

  • Phone
  • Messages
  • Maps
  • Music
  • Podcasts

The obvious scenarios here are making/receiving phone calls or sending/receiving SMS/iMessages with your phone’s native contact list, and navigation. Quick tasks. Not surfing or searching the Web while you’re driving. Yay! The Maps app has an interesting touch that the Apple employee chose to highlight in the Ferrari video, where maps you’ve been sent in messages are displayed in the list of potential destinations you can choose from. Obviously the CarPlay solution enables Apple’s turn-by-turn maps. If you’re an Apple Maps fan, that’s great news (I’m quite happy with them at this point, personally). If you like using Google Maps or another mapping/messaging or VOIP solution, it looks like you’re out of luck at this point.

In addition to touch, button, or knob-based navigation, Siri is omnipresent in CarPlay, and the system can use voice as your primary input mechanism (triggered through a voice command button on the steering wheel), and is used for reading text messages out loud to you, and responding to them. I use that Siri feature pretty often, myself.

The Music and Podcasts apps seem like obvious ones to make available, especially now that iTunes Radio is available (although most people either love or hate the Podcasts app). Just as importantly, Apple is making a handful of third-party applications available at this point. Notably:

  • Spotify
  • iHeartRadio
  • Stitcher

Though Apple’s CarPlay site does call out the Beats Music app as well, I noticed it was missing in the Ferrari demo.

Overall, I like Apple’s direction with this. Of course, as I said on Twitter, I’m so vested in the walled garden, I don’t necessarily care that it doesn’t integrate in with handsets from other platforms. That said, I do think most OEMs will be looking at alternatives and implementing one or more of them simultaneously (hopefully implementing all of them that they choose to in a somewhat consistent manner).

Personally, I see quite a few positives to CarPlay:

  • If you have an iPhone, it takes advantage of the device that is already your personal hub, instead of trying to reinvent it
  • It isolates the things the manufacturer may either be good at or may want to control from the CarPlay UX. In short, Apple gets their own UX, presented reliably
  • It uses your existing data connection, not yet another one for the car
  • It uses one cable connection. No WiFi or BLE connectivity, and charges while it works
  • I trust Apple to build a lower-distraction (Siri-centric) UI than most automakers
  • It can be updated by Apple, independent of the car head unit
  • Apple can push new apps to it independent of the manufacturer
  • Apple Maps may suck in some people’s perspective (not mine), but it isn’t nearly as bad as some in-dash nav systems (watch some of Brian’s car reviews if you don’t believe me), and doesn’t require shelling out for shiny-media based updates!

Of course, there are some criticisms I or others have already mentioned on Twitter or in reviews:

  • It requires, and uses, iOS 7. Don’t like the iOS 7 UI? You’re probably not going to be a fan
  • It requires a cable connection. Not WiFi or BLE. This is a good/bad thing. I think in time, we’ll see considerate design of integrated phone slots or the like – push the phone in, flat, to dock it. The cables look hacky, but likely enable the security, performance, low latency, and integrated charging that are a better experience overall (also discourages you from picking the phone up while driving)
  • Apple Maps. If you don’t like it, you don’t like it. I do, but lots of people still seem to like deriding it
  • It is yet another Apple walled garden (like Apple TV, or iOS as a whole). Apple controls the UI of CarPlay, how it works, and what apps and content are or are not available. Just like Apple TV is at present. The fact that it is not an open platform or open spec also bothers some.

Overall, I really am excited by what CarPlay represents. I’ve never seen an in-car entertainment system I really loved. While I don’t think I really love any of the three head units I’ve seen so far, I do relish the idea of being able to use the device I like to use already, and having an app experience I’m already familiar with. Now I just need to have it hit some lower-priced vehicles I actually want to buy.

Speaking of that, Apple has said that, beyond the makers above, the following manufacturers have also signed on to work with CarPlay:

BMW Group (which includes Mini and Rolls-Royce), Chevrolet, Ford, Kia, Land Rover, Mitsubishi, Nissan, Opel, PSA Peugeot Citroën, Subaru, Suzuki, and Toyota.

As a VW fan, I was disheartened to not see VW on the list. Frankly I wouldn’t be terribly surprised to see a higher-end VW marque opt into it before too long (Porsche, Audi, or Bentley seem like obvious ones to me – but we’ll see). Also absent? Tesla. But I wouldn’t be surprised to see that show up in time as well.

It’s an interesting start. I look forward to seeing how Google, Microsoft, and others continue to evolve their own automotive stories over the coming years – but I think one thing is for sure: the era of the phone as the hub of the car (and beyond) is just beginning.


14
Jan 14

What did I learn from Nest?

So today Google announced that they will pay US$3.2B for Nest Labs. Surely the intention here is to have the staff of Nest help Google with home automation, the larger Internet of Things (IoT) direction, and user interfaces. All three of these are, frankly, trouble spots for Google, and if they nurture the Nest team and let them thrive, it’ll be a good addition to Google. Otherwise, they will have wound up paying a premium to buy out a good company and lose the employees as soon as they can run.

In 2012, just after I received it, I wrote about my experience with the first-generation Nest thermostat. When asked on Monday evening how I liked my Nest, I said:

It hasn’t exactly changed my life, but it has saved on energy costs, and it’s not hideous like most thermostats.

As I noted on Twitter as well, today’s news makes me sad. I bought Nest because it felt like they truly cared about thoughtful design. I also got the feeling from the beginning that they genuinely cared about privacy.

Last year, I wrote the following about the dangers in relying on software (and hardware) that relies upon subscriptions:

Google exemplifies another side of this, where you can’t really be certain how long they will continue to offer a service. Whether it’s discontinuing consumer-grade services like Reader, or discontinuing the free level of Apps for Business, before subscribing to Google’s services an organization should generally not only raise questions around privacy and security, but just consider the long-term viability of the service. “Will Google keep this service alive in the future?” Perhaps that sounds cynical – but I believe it’s a legitimate concern. If you’re moving yourself or your business to a subscription service (heck, even a free one), you owe it to yourself to try and ascertain how long you’ve got before you can’t even count on that service anymore.

Unfortunately, my words feel prophetic now. If I’d known two years ago what I know today, maybe I’d have wavered more and decided against the Nest. Maybe not.

As I look back at Nest, it helps me frame the logic I’ll personally use when considering future IoT purchases. Ideally from now on, I’d like to consider instead:

  1. Buying devices with open APIs or open firmware. If the APIs or firmware of Nest were opened up, the devices could have had alternative apps built against them by the open-source community (to generally poor, but possible, effect). This is about as likely to happen now as Nest sharing their windfall with early adopters like myself.
  2. Buying devices with standards-based I/O (Bluetooth 4.0, Wi-Fi) and apps that can work without a Web point of contact. While a thermostat is a unique device that does clamor for a display, I think that most devices on the IoT should really have a limited, if any, display and rely on Web or smart phone apps over Wi-Fi or BT 4.0 in order to be configurable. Much like point 1, this would mean some way out if the company shutters its Web API.
  3. Buying devices from larger companies. Most of the major thermostat manufacturers are making smarter thermostats now, although aesthetically, most are still crap.
  4. Buying “dumb” alternatives. A minimalist programmable or simple non-programmable thermostat again.

In short, it’ll probably be a while before I spend money – especially premium money – on another IoT device.

Peter Bright wrote a great piece the other day on why “smart devices” were a disaster waiting to happen. Long story short, hardware purveyors suck at creating devices that stand any sort of chance of being updated. In many ways, the unfortunate practice we’ve seen with Android phones will likely become the norm with lots of embedded devices (in cars or major appliances). What seems so cool and awesome the day we buy a new piece of technology will become frustrating as all hell when it won’t work with your new phone or requires a paid subscription but used to be free.

In talking with a colleague today, I found myself taking almost a Luddite’s perspective on smart devices and the IoT. It isn’t that these devices, done right, can’t make our lives easier. It’s that we always must be wary of who we’re buying them from, whether they truly make our life easier or not, and what future they have. I’ve never been a huge believer in smart devices, but if designed considerately, I think they can be beneficial. As for me, I think the main thing I learned from Nest is to always consider the worst possible outcome of the startup I buy hardware from (yes, to me, Google was just shy of the worst possible outcome, which would have been seeing it shut down).

While I had hopes that Apple would buy Nest, as I noted on Twitter, that idea probably never really made sense. Nest made custom hardware and custom (non-Apple, of course) software that had far more to do with Google’s software realm than Apple’s. I also think that while the thermostat is a use case that lots of people “just get”, I’m not sure that the device fits well in Apple’s world. While the simple UI of the Nest is very Apple-like, it doesn’t seem like a war Apple would choose to fight. I think when it comes to home automation, Apple will be standing back and letting Bluetooth 4.0-interconnected home devices take the helm in the smart home, but having iOS play the role of conductor. I also had hopes that Nest could try to be bold and push the envelope of home automation beyond the hacky do-it-yourself approaches that had been around for years before the Nest arrived, but I’m unsure whether the Nest team will succeed with that at Google. I guess time will tell. It pains me to see Nest become part of Google, but I have to congratulate the Nest team on pushing the envelope as they did, and I hope for their sake and Google’s that they can continue to push that envelope successfully from within Google.


05
Jan 14

Bimodal tablets (Windows and Android). Remember them when they’re gone. Again.

I hope these rumors are wrong, but for some odd reason, the Web is full of rumors that this year’s CES will bring a glut of bimodal tablets: devices that are designed to run Windows 8.1, but also feature an integrated instance of Android. But why?

For years, Microsoft and Intel were seemingly the best of partners. While Microsoft had fleeting dalliances with other processor architectures, they always came back to Intel. There were clear lines in the sand:

  1. Intel made processors
  2. Microsoft made software
  3. Their mutual partners (ODMs and OEMs) made complete systems.

When Microsoft announced the Surface tablets, they crossed a line. Their partners (Intel and the device manufacturers) were stuck in an odd place. Continue partnering just with Microsoft (now a competitor to manufacturers, and a direct purveyor of consumer devices with ARM processors), or find alternative counterpoints to ensure that they weren’t stuck in the event that Microsoft harmed their market.

For device manufacturers, this has meant what we might have thought unthinkable three years ago, with key manufacturers (now believing that their former partner is also a competitor) building Android and Chrome OS devices. For Intel, it has meant looking even more broadly at what other operating systems they should ensure compatibility with, and evangelization of (predominantly Android).

While the Windows Store has grown in terms of app count, there are still some holes, and there isn’t really a gravitational pull of apps leading users to the platform. Yet.

So some OEMs, and seemingly Intel, have collaborated on this effort to glue together Windows 8.1 and Android on a single device, with the hopes that the two OSs combined in some way equate to “consumer value”. However, there’s really no clear sign that the consumer benefits from this approach, and in fact they really lose, as they’ve now got a Windows device with precious storage space consumed by an Android install of dubious value. If the consumer really wanted an Android device, they’re in the opposite conundrum.

Really, the OEMs and Intel have to be going into this strategy without any concern for consumers. It’s just about moving devices, and trying to ensure an ecosystem is there when they can’t (or don’t want to) bet on one platform exclusively. The end result is a device that, instead of doing task A well or task B well, does a really middling job with both of them – a device that the user regrets buying (or worse, regrets being given).

BIOS manufacturers and OEMs have gone down this road several times before, usually trying to put Linux either in firmware or on disk as a rapid-boot dual-use environment to “get online faster” or watch movies without waiting for Windows to boot/unhibernate. To my knowledge, these modes were rarely actually used on the devices that the OEM shipped them on. Users hate rebooting, they get confused by where their Web bookmarks are (or aren’t) when they need them, etc.

These kinds of approaches rarely solve problems for users; in fact, they usually create problems instead, and are a huge nightmare in terms of management. Non-technical users are generally horrible about maintaining one OS. Give them two on a single device? This will turn out quite well, don’t you think? In the end, these devices, unless executed flawlessly, are damaging to both the Windows and Android ecosystems, the OEMs, and Intel. Any bad experiences will likely result in returns, or exchanges for iPads.


29
Dec 13

My predictions for wearables in 2014

It’s the season for predictions, so I thought I’d offer mine about wearables in 2014.

  1. Wearables will continue to be nerd porn in 2014 (in other words, when you say “wearable devices”, most normal people will respond, “what?”)
  2. Many wearable devices will be proposed by vendors.
  3. Too many of those will actually make it to market.
  4. A few of those will be useful.
  5. A handful of those will be aesthetically pleasing.
  6. A minute number (possibly 0) of those will actually be usable.

17
Dec 13

Goodbye, Facebook

As I posted on Facebook earlier today. Don’t worry, FB, I’m still not using G+ either, as you two rapidly collide with each other.

I’m not going to make this complicated, Facebook. It’s not me, it’s you.

I liked it when we first met. I thought it was cool how you’d help me find friends, family, co-workers I hadn’t talked to for years, even some people I’ve known since preschool. That was nice, and you didn’t try to grab my wallet every time a friend would join, like some of the “social networks” did before you came along (looking at you, Classmates).

But over the years, you’ve gotten a little bit creepy, and you rarely tell me anything new or important anymore. In fact, as a “social network”, you don’t really do much to tell me what family and friends are actually up to. Instead, my wall isn’t about what is important to me; it’s ads and links from Upworthy, ThinkProgress, and other sites that have learned how to game the social graph to become front and center. Now your content is just as worthless as when Google let Demand Media and others game SEO to backfill the Web with crap content.

I’m not exactly sure what demographic you’re trying to tune Facebook for, and it sure seems like you may not know either.

So with that, Facebook, I’m gonna have to let you go. I’ve downloaded my archive (man, we did have some good times), and tomorrow afternoon I’m pulling the plug. If you ever need me, I’m easy enough to find on the Web, email, and Twitter.

Take care, Facebook. I hope you figure out what the heck you want to be when you grow up.

Wes Miller


27
Nov 13

Resistance is Futile or: GenTriFicatiOn

The vocal minority. You’ve heard of them, but who are they?

Companies often seek to change their status quo by modifying how they do business. Generally, this is a nice way of saying they just want more. More what, you ask? Traditionally, it would have meant they simply wanted more money, as in raising the price of the goods they are selling (or lowering the price they will pay to suppliers or partners). These, of course, are done to increase revenue or decrease operating expenses, respectively.

In today’s world, personally identifiable information (PII) isn’t just data; it’s a currency invaluable to advertisers. While Google was the first to really succeed in this economy (of sorts), Facebook, Adobe, Microsoft, and anybody else with skin in the Internet advertising or analytics game is in the same position today. For these companies, the ask is an ever-increasing cross-section of your identity. In exchange, they offer you “free” services; but like any other business, they demand ever more of your personal information in order to keep delivering them. We saw it with Facebook’s PII land grabs, beginning in earnest in 2010, and we’re seeing it now with the encroachment of Google+ across Google sites where legacy communities aren’t very welcoming to the G+ GenTriFicatiOn.

Whether you’re talking about raising prices (or reducing expenses) or asking for increasingly accurate PII, these price uplifts (or gazumps) are often not greeted warmly. In fact, there’s usually a vocal minority that speaks out and fights the change.

On Twitter yesterday, Taylor Buley asked if the uproar due to YouTube’s shift to Google+ could generate enough momentum for a real YouTube competitor.

I responded to Taylor that I didn’t think it could. Back in 2010, when Facebook made its (then) largest shift in privacy policy, there was a rather large outcry from people bothered by the changes. The alternative network Diaspora was launched out of that outcry (and failed).

There comes a point where these outcries turn into a PR problem of sorts. But that PR problem is usually short-lived. In the end, only two things can happen:

  1. The change is reversed (unlikely, as it causes a strategic retreat and a tactical reassessment)
  2. The turbulence subsides, the majority of users are retained, and some of the vocal minority are lost.

I consciously chose the term GenTriFicatiOn when I was describing Google+ earlier. Google is trying to build a community of happy PII sharers, but a lot of Google’s legacy community citizens don’t fit that mold. Google’s services are provided “free” in exchange for a price that Google deems adequate. If you don’t want to pay that price, Google seems happy to see you exit the community.

Google today, like Facebook several years ago, is in the position of the chef with a frog in a pot: slowly turning up the heat, and actually trying to excommunicate users who aren’t going to be willing participants in the Google of Tomorrow. Facebook most likely flushed out its vocal privacy critics several years ago. Consider this Google Trends chart on the query “Facebook privacy”. While there is regular churn on the topic, high-water-mark event H aligns nicely with the most contentious (to that date) privacy changes Facebook made, back in 2010.

[Chart: Google Trends interest over time for the query “Facebook privacy”]

When Google shut down Google Reader last year, there was a huge outcry. However, Google obviously knew, before it shut down the site, how much value Google Reader users provided in terms of PII sharing. (Answer? Not much.) The result? A huge outcry followed by a deafening thud. Google didn’t lose much of what it was after, namely those data-sharing, Google-loving users. See the Google Trends chart of the Google Reader outcry below. Towards the right we can see the initial outcry, followed most likely by discussion of alternatives/replacements and… resignation.

[Chart: Google Trends interest over time for the query “Google Reader”]
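
(An aside: if you’d rather pull the same interest-over-time numbers yourself than squint at the charts, the sketch below is one way to do it. It assumes Python and the third-party pytrends package, an unofficial wrapper around Google Trends rather than anything Google publishes, so treat it as illustrative only.)

  # Minimal sketch, assuming the third-party 'pytrends' package is installed
  # (pip install pytrends). Trends values are relative (0-100), not absolute counts.
  from pytrends.request import TrendReq

  trends = TrendReq(hl='en-US', tz=360)

  for query in ['Facebook privacy', 'Google Reader']:
      trends.build_payload([query], timeframe='2009-01-01 2013-11-27')
      frame = trends.interest_over_time()    # pandas DataFrame, indexed by week
      peak = frame[query].idxmax()           # week where relative interest peaked
      print(query, 'peaked the week of', peak.date())

The peaks it reports should roughly line up with the spikes visible in the charts above.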

When these sites increase their PII cost to end users (let’s call these end users producers, not consumers), they’re taking a conscious gamble. The sites are hoping that the number of users who won’t care about their privacy exceeds the number of users who do. In general, they’re likely right, especially if they carefully, consciously execute these steps one by one, and are aware of which ones will be the largest minefields. Of those Google properties remaining to be “Plussed”, Google Voice is likely the most contentious, although YouTube was also pretty likely to generate pushback, as it did. Again, those vocal users not happy with the changes aren’t going to be good Google+ users, so if Google+ is where Google believes their future lies, it’s in their best interest to churn those users out anyway.