27
Mar 13

The Stigma of Mac Shaming

I recall hearing a story about a co-worker at Microsoft, a technical assistant who had a Mac. It wouldn’t normally have been a big deal, except that he worked directly for an executive. As a result, this Mac was seen in many meetings across campus – its distinct aluminum body and fruity ghost shining through the lid a constant reminder that this was one less PC sold (even if it ran Windows through Boot Camp or virtualization software). Throughout most of Microsoft, there was a strange culture of “eww, a Mac”. Bring a Mac or an iPod to work, and you felt like an outcast. This was my first exposure to Mac Shaming.

I left Microsoft in 2004, to work at Winternals in Austin (where I had the last PC I ever really loved – a Toshiba Tecra A6). In 2006, on the day Apple announced Boot Camp, I placed an order for a white Intel iMac. This was just over three months before Winternals was acquired by Microsoft (but SHH… I wasn’t supposed to know that yet). This was my first Mac. Ever.

Even though I had worked at Microsoft for over 7 years, and was still writing for Microsoft’s TechNet Magazine as a monthly Contributing Editor, I was frustrated. My main Windows PC at home was an HP Windows XP Media Center PC. Words cannot express my frustration with this PC. It “worked” as I originally received it – but almost every time it was updated, something broke. All I wanted was a computer that worked like an appliance. I was tired of pulling and pushing software and hardware to try and get it to work reliably. I saw Windows Vista on the horizon and… I saw little hope for me coming to terms with using Windows much at home. It was a perfect storm – me being extremely underwhelmed with Windows Vista, and the Mac supporting Windows so I could dual-boot into Windows as needed in order to write. And so it began.

Writing on the Mac was fine – I used Word, and it worked well enough. Running Windows was fine (I always used VMware Fusion), and eventually I came to terms with most of the quirks of the Mac. I still try to cut and paste with the Ctrl key sometimes, but I’m getting better.

A year later, I flipped from a horrible Windows CE “smartish” phone from HTC to an iPhone, on the day that Apple dropped the price of the original iPhone to $399. Through two startups – one a Windows security startup, the other a Web startup – I used two 15″ MacBook Pros as my primary work computer – first the old stamped MBP, then the early unibody.

For the last two years, I’ve brought an iPad with me to most of the conferences I’ve gone to – even Build 2011, Build 2012, and the SharePoint Conference in 2012. There’s a reason for that. Most PCs can’t get you on a wireless network and keep you connected all day, writing, without needing to plug in (time to plug in, or plugs to use, being a rarity at conferences). Every time I whipped out my iPad and its keyboard stand with the Apple Bluetooth keyboard, people would look at me curiously. But quite often, as I’d look around, I’d see many journalists or analysts in the crowd also using Macs or iPads. The truth is, tons of journalists use Macs. Tons of analysts and journalists that cover Microsoft even use Macs – many as their primary device. But there still seems to be this weird ethos that you should use Windows as your primary device if you’re going to talk about Windows. If you are a journalist and you come to a Microsoft meeting or conference with a Mac, there’s all but guaranteed to be a bit of an awkward conversation if you bring it out.

I’m intimately familiar with Windows. I know it quite well. Perhaps a little too well. Windows 8 and I? We’re kind of going in different directions right now. I’m not a big fan of touch. I’m a big fan of a kick-ass desktop experience that works with me.

Last week, my ThinkPad died. This was a week after my iMac had suffered the same fate, and I had recovered it through Time Machine. Both died of failed Seagate HDDs. I believe that there is something deeper going on with the ThinkPad, as it was crashing regularly. While it was running Windows 8, I believe it was the hardware failing, not the operating system, that led to this pain. In general, I had come to terms with Windows 8. Because my ThinkPad wasn’t touch-capable, it didn’t work great for me, but it worked alright – though I really wasn’t using the “WinRT side” of Windows 8 at all; I had every app I used daily pinned to the Taskbar instead. Even with the Logitech t650, I struggled with the WinRT side of Windows 8.

So here, let me break this awkward silence. I bought another Mac, to use as my primary writing machine. A 13″ Retina MacBook Pro. Shun me. Look down upon me. Shake your head in disbelief. Welcome to Mac shaming. The machine is beautiful, and has a build quality that is really unmatched by any other OEM. A colleague has a new Lenovo Yoga, and I have to admit, it is a very interesting machine – likely one of the few out there that I’d really consider – but it’s just not for me. I also need a great keyboard, and the list of Windows 8 slates that compromise the keyboard in order to be tablets is long. I had contemplated getting a Mac for myself for some time. I still have a Windows 8 slate (the Samsung), and will likely end up virtualizing the workloads I really need in order to evaluate things.

My first impression is that, as an iPad power user (I use iOS gestures a lot), it’s frighteningly eerie how well those habits translate to an MBP with Mountain Lion and fullscreen apps. But I’ll talk about that later.

I went through a bit of a dilemma about whether to even post this or not, due to the backlash I expect. Post your thoughts below. All I request? I invoke Wheaton’s Law at this point.


25
Mar 13

The care and feeding of software

App hoarding. The dark, unspoken secret. We’ve all done it. I logged on to a Windows 8 tablet I hadn’t used for quite some time, and I was so ashamed of myself. So much junk, so many free apps I downloaded, tried, and abandoned. Only recently have I begun steadfastly maintaining a “two screen” limit on iOS to try and keep the applications on my devices solely to those that I use regularly.

This isn’t new, mind you. Enterprises have been doing this for years. Sometimes the “application” is an Excel spreadsheet. Sometimes it’s an old database application, or some other piece of old code, owned by a developer who long since ran from the organization.

For a long time – much as Microsoft could with comprehensive security ahead of the Windows security push – customers could turn a blind eye to application proliferation. Like feral rabbits, one will lead to many, and if you don’t manage or cut them back, they get out of control. Unfortunately, many enterprise applications are borne out of short-term necessity, without much design forethought. Just as unfortunately, in most organizations nobody goes around every year and does an “application census” to figure out which applications are dead, abandoned, unused, or worst of all – insecure or unsecurable.
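As a point of reference, a census of a single Windows machine doesn’t have to be complicated – the sketch below just reads the registry’s Uninstall keys to list per-machine installs. It’s an illustrative starting point (no usage data, no per-user installs), not a real inventory tool.

```python
# A minimal, single-machine "application census" sketch: enumerate the
# Windows Uninstall registry keys and print the applications found there.
import winreg

# Registry locations where per-machine installs register themselves
# (64-bit and 32-bit views).
UNINSTALL_PATHS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

def installed_apps():
    """Yield (name, version) for applications registered in the Uninstall keys."""
    for path in UNINSTALL_PATHS:
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as root:
                subkey_count, _, _ = winreg.QueryInfoKey(root)
                for index in range(subkey_count):
                    try:
                        with winreg.OpenKey(root, winreg.EnumKey(root, index)) as app:
                            name, _ = winreg.QueryValueEx(app, "DisplayName")
                            try:
                                version, _ = winreg.QueryValueEx(app, "DisplayVersion")
                            except OSError:
                                version = "unknown"
                            yield name, version
                    except OSError:
                        continue  # entries without a DisplayName are skipped
        except OSError:
            continue  # this registry view doesn't exist on this machine

if __name__ == "__main__":
    for name, version in sorted(set(installed_apps())):
        print(f"{name}\t{version}")
```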

A colleague today was telling me about an antivirus application from a major vendor that relies on Java. Terrifying. But that’s nothing. Java is still supported (how well supported is arguable). If your organization is of any significant size, you’ve got applications based around ancient versions of Microsoft Office, SQL Server, or other products and platforms that are likely long past their expiration date. No updates, no patches, nothing. Yet your organization depends on them, and likely has no security mitigation or migration story in play. With the current crop of vulnerabilities we’ve seen recently in Java, Flash, and Acrobat Reader, I’ve been growing increasingly concerned with how dependent so many organizations are upon all three, yet how laissez-faire they seem to be about eliminating or at least reducing their dependency upon all three.

On a similar note, as someone who helped ship Windows XP, I love how well it has stood the test of time. But it was not engineered for today’s world – from an always-on connection to the Internet to the threat vectors being thrown at it and the software running on it.

It concerns me that so many organizations aren’t cognizant of what software (operating systems, platforms, and applications) is running in their organizations. They talk big about the cloud, and about how it’s better that they run the software on their own premises. Yet they’re running old, unpatched software, often with known, never-to-be-patched vulnerabilities, and have no plan to consolidate applications and remove dead, unsupported operating systems, platforms, and applications. It’s the equivalent of every enterprise keeping a bunch of storage units full of random crap because “someone might need it someday”.

Microsoft has been beating a drum about Windows XP – if you look at it closely, it sounds more like a marketing message. But whether you view it as that or not, and whether Windows 7, Windows 8, or something else entirely is in the cards for you, your business has barely one year to get off of Windows XP (April 8, 2014). Some customers have told us they’ve heard of custom support options after that date, but those are priced on a per-desktop basis, and the adage “if you have to ask, you probably can’t afford it” appears to apply quite well. Windows XP (officially at death’s door) and Office 2003, still very widely used, both pass into the great beyond on that same day.

Whether it is Windows XP, Office 2003, more porous (hard or impossible to patch) platform components, or custom applications on top of them, it’s imperative that organizations start managing and monitoring – and deprecating/discontinuing – applications that rely on dead software to exist. They’re putting your organization at risk. For me, there are two takes to this – cut back the applications you already have, and more importantly, carefully regulate how you build and deploy new ones, with a keen eye on the support lifecycle – and the patchability/supportability – of the OS, runtimes, and applications that you build upon. Applications can seem quick and easy to build on a whim. But like a puppy, or perhaps even more like a parrot, applications aren’t free to build or maintain. They are a long-term commitment.


24
Mar 13

One release away from irrelevance

A few weeks ago on Twitter, I said something about Apple, and someone replied back something akin to, “Apple is only one release away from irrelevance.”

Ah, but you see… we all are. In terms of sustainability, if you believe “we get this version released, and we win”, you lose. Whether you have competitors today, or you have a market that is principally yours, if there is enough opportunity for you, there’s enough appeal for someone else to enter it too.

A book I recently read discussed the first-generation Ford Taurus. Started at the cusp of the 1980s, after a decade of largely mediocre vehicles from Ford, the Taurus (and a handful of other vehicles that arrived near the same time) changed the aesthetic experience we expected from cars. The book’s author comments that Ford had even largely stopped using its blue oval insignia during the 1970s, perhaps out of concern that the vehicles didn’t live up to the quality values the blue oval should represent. Thing is, you very clearly get the picture that as the vehicle neared completion, the team “hit the wall”, in marathoning parlance. They shipped, congratulated each other, and moved on to other projects. Rather than turning around and immediately beginning work on the next model to iterate the design and own the market, they stalled out for nearly a decade, only to repeat the same massive run in order to get the next iteration of the vehicle out the door (documented in yet another book). But I digress.

Many people often ask who Microsoft’s biggest competitor is. It isn’t Oracle. It isn’t startups. It’s Microsoft. Every 2-5 years, Microsoft replaces (and sometimes displaces) their own shipped X-1 products with new versions. If those new versions don’t include enough features and value for customers to feel they are getting their money’s worth, they’ll stall out on older versions. We’ve seen this with Windows, where many businesses – and consumers – have stalled out on a 12-year-old OS because “it’s good enough”, or Office 2003, because not only is it “good enough”, but the Ribbon (and its half-completed existence in Office 2007) scared away many customers. It’s gotten better in each iteration since – but the key question is always, “is there enough value in there to pull customers forward?”

I believe that the first thing you have to firmly grasp in technology – or really in business as a whole – is that nothing is forever.  You must figure out how to out-innovate yourself, to evolve and grow, even if it means jettisoning or submarining entire product lines – in order to create new ones that can take you forward again. Or disappear.

I’ve been rather surprised when I’ve said this, how defensive some people have gotten. Most people don’t like to ponder their own mortality. They sure don’t like to ponder the mortality of their employer or the platform that they build their business upon. But I think it is imperative that people begin doing exactly that.

There will come a day when we will likely talk about every tech giant of today in the past tense. Many may survive, but will be shadows (red dwarves, as I said on Twitter last night) of themselves. Look at how many tech giants of the 1970’s-1990’s are gone today – or are largely services driven organizations rather than technological innovators.

When that follower said that Apple was only one release away from irrelevance, I replied back with something similar to, “Almost every company is. It’s just a question of whether they realize it and act on it or not.”


23
Mar 13

The death of the pixel

It really didn’t hit me until recently. Something I’ve worked with for years, is being forced to retire. Well, not really retire, but at least asked to take a seat in the background.

My daughters love it when I tell them stories about “When I was little…” – the stories always begin with that saying. They usually have a lot to do with technology, and how things have changed over the last 40 years. You know the drill – phones with self-coiling cords that were stuck to the wall, payphones, Disney Read-Along books (records and then tapes), etc. Good times.

Two days ago, I had been working with a Retina MacBook Pro earlier in the day, and that evening it was time to put my 8-year-old to bed. I told her about the Apple IIe my parents had bought when I was younger – the computer that I used through my first year of college.

Though my parents had even opted for the 80-column text card, as I look back now, the things that stick out in my mind were using The Print Shop to create horribly pixelated banners and signs, and using AppleWorks to create documents – all the way through that first year of college. I told her all about the tiny, block-like dots that made up everything on the screen, and everything that we printed.

The pixel was an essential part of technology then. We were on the other end of the spectrum from today; that is, “how few pixels do you need to make something look kind of like the letter ‘o’?” I have to look back now and laugh a bit, because I recall how – while it was amazing to have computers at all – this early era of Apples and PCs was laughable from a user experience perspective. Like cars with tillers and no windscreen, these were good enough to work, for the time being.

With my iPhones, I’ve appreciated how amazing the pixel-dense “Retina” displays are. In particular, reading text is incredibly pleasant, as you can often forget you’re reading off of pixelated glass. But whether you’re consuming or creating content on that size of screen, it’s hard to get “immersed” in it.

Only as I used that Retina MacBook (a 13″) did I really realize how far we’ve come. Now it isn’t “how many pixels do you need to make it look like an ‘o’”, it’s “how small do the pixels need to be so that you can’t see the pixels in the ‘o’?” Instead of looking like a bunch of dots creating the illusion of a letter on the screen, it’s the feeling of ink and a magical typewriter that delivers a WYSIWYG experience with digital ink on digital paper. Truly amazing.
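Out of curiosity, the arithmetic behind that disappearing act is short. Here’s a rough sketch, assuming the 13″ model’s 2560×1600 panel and the usual one-arcminute figure for 20/20 vision – back-of-the-envelope numbers, not optometry:

```python
# A back-of-the-envelope sketch: at what viewing distance do the pixels on a
# 13" Retina MacBook Pro (2560x1600, ~13.3" diagonal) blur together for a
# 20/20 eye (~1 arcminute of resolving power)? Numbers are approximate.
import math

h_px, v_px, diagonal_in = 2560, 1600, 13.3

ppi = math.hypot(h_px, v_px) / diagonal_in          # pixels per inch
pixel_pitch_in = 1.0 / ppi                          # size of one pixel, in inches

one_arcminute = math.radians(1 / 60)                # 20/20 visual acuity, roughly
distance_in = pixel_pitch_in / math.tan(one_arcminute)

print(f"~{ppi:.0f} PPI; pixels vanish beyond ~{distance_in:.0f} inches")
# -> roughly 227 PPI, with individual pixels indistinguishable at around
#    15 inches -- closer than a typical laptop viewing distance.
```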


22
Mar 13

You’re only as safe as your last backup

This week, for the second time in a year, I lost the hard drive in my main computer, a 2010 ThinkPad W510 running Windows 8. I swear I was good to the computer – I don’t know why this second Seagate 500GB drive (yes, the first one was too!) decided to hit the floor. I’ve had so many hardware problems with this system – BSODs, weird display problems, and more, over the last year, that rather than try to jam it back together for one more gig with the band, I am putting my ThinkPad out to pasture, and have replaced it.

I’ll tell you what – when you have an HDD fail, Twitter is all aflutter with people offering posthumous advice on what you could have done to avoid data loss. SkyDrive, CrashPlan, Dropbox, Windows 8 backup utilities… Like free advice, everybody had wisdom to offer… Unfortunately, it was too late. The damage was done. While I didn’t lose the latest draft of my book (THANKS SkyDrive!!!), I did lose an article draft I had been working on for some time. I’m not happy about that. Here’s how it happened.

On Wednesday morning, the date of my PC’s demise, I got up early, as I often have to do, to take my eldest to ice skating before school. The day before, I had checked out a key work file from our work file server (classic SMB Windows server file share, not SharePoint). Failure 1: I skipped a step, and pulled it locally, instead of archiving it to the server and making a copy. Our process is arcane and complex at times, but it works. The document was a rather complex outline for a lengthy piece around SharePoint Search.

While I was working at the skating rink, I wrote a good 1,000 words, getting to more than half of the article. Failure 2: I was working with the file on my desktop, not my SkyDrive folder. Failure 3: I wasn’t on the Internet while I was at the skating rink – they have no free WiFi available. As I wrote the piece, I noticed that my system was behaving really erratically. Apps were hanging and whitescreening, only to eventually come back. Running Process Explorer, I couldn’t see anything pegging the CPU, so I couldn’t find an obvious culprit to blame. Looking back, the warning signs of impending HDD failure were all there. I had a bunch of USB Flash Drives (UFDs) with me, so I could have, and should have, copied the file off. At the moment, I’m so terrified of HDD data loss that I’m saving things into synchronized folders all over the place, and backing up everything to everywhere.

When my daughter was done skating, we headed home, and my wife took her and her sister to school as I headed to the office. I logged on, and my computer failed to resume – it was hibernated, and tried starting – only to BSOD. After the BSOD, it just hung at the Windows 8 whirligig on the boot screen. Once put in any other machine, the drive simply clicks away, and fails to mount. Dead.

Fortunately, I had been using Windows 8’s File History to back up my files. Failure 4: Because I was using it with an external USB HDD, I was inconsistent about backing it up, and hadn’t done so in a week. Meaning my outline file was dead. Gone. MIA.

I have to look back at my criticism of Windows To Go and even renew it a bit. Creating content on the go, unless you have WiFi or 3G/4G connectivity back to SharePoint, SkyDrive, Dropbox, etc., is an invitation to lose work, as I did.

I often say that if you make a user opt in to a process, they never will. My new backup mechanism involves technologies that all happen in the background, automatically, and don’t let me opt out, as I had done with Windows 8’s File History. Though nothing short of copying the file off before the HDD died on Wednesday could have saved it, at least I would have had the outline from an earlier backup. But through a series of lazy step-skipping on my part, I hosed myself. I am disappoint.
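To make that concrete, here’s a minimal sketch of the “no opt-out” idea: a background loop that mirrors a working folder into a second, synced location whenever files change. The paths and the interval are made-up placeholders, and in practice you’d lean on a real tool (File History, Time Machine, CrashPlan) rather than a script – but the principle is the same: it runs whether or not I remember it.

```python
# A minimal sketch of an always-on, no-opt-out backup loop: mirror a working
# folder into a second location (a stand-in for a synced/cloud folder) whenever
# files are newer there. Paths and the polling interval are illustrative.
import shutil
import time
from pathlib import Path

SOURCE = Path.home() / "Documents" / "drafts"        # hypothetical working folder
MIRROR = Path.home() / "SkyDrive" / "drafts-mirror"  # hypothetical synced folder
INTERVAL_SECONDS = 300                               # check every five minutes

def mirror_changed_files():
    """Copy any file that is missing from, or newer than, its mirror copy."""
    for src in SOURCE.rglob("*"):
        if not src.is_file():
            continue
        dest = MIRROR / src.relative_to(SOURCE)
        if not dest.exists() or dest.stat().st_mtime < src.stat().st_mtime:
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)  # copy2 preserves timestamps

if __name__ == "__main__":
    while True:
        mirror_changed_files()
        time.sleep(INTERVAL_SECONDS)
```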

Given that I’ve had three HDDs die on me over the last year, and have lost at least some data every time except when my iMac died (thanks to Time Machine), I still ponder why modern operating systems seem to have such inadequate or ineffective means of telling the user that their drive is failing and about to die.


21
Mar 13

What’s your definition of Minimum Viable Product?

At lunch the other day, a friend and I were discussing the buzzword bingo of “development methodologies” (everybody’s got one).

In particular, we homed in on Minimum Viable Product (MVP) as being an all-but-gibberish term, because it means something different to everyone.

How can you possibly define what is an MVP, when each one of us approaches MVP with predisposed biases of what is viable or not? One man’s MVP is another’s nightmare. Let me explain.

For Amazon, the original Kindle, with its flickering page turn, was an MVP. Amazon, famous for shipping… “cost-centric” products and services, was traditionally willing to leave some sharp edges in the product. For the Kindle, this meant flickering page turns were okay. It meant that Amazon Web Services (AWS) didn’t need a great portal, or useful management tools. Until their hand was forced on all three by competitors. Amazon’s MVP includes all the features they believe it needs, whether or not they’re fully baked or usable, or whether the product still has metaphoric splinters coming off from where the saw blade of feature decisions cut it. This often works because Amazon’s core customer segment, like Walmart’s, tends to be value-driven, rather than user-experience driven.

For Google, MVP means shipping minimal products that they either call “Beta”, or that behave like a beta, tuning them, and re-releasing them. In many ways, this model works, as long as customers are realistic about what features they actually use. For Google Apps, this means applications that behave largely like Microsoft Office, but include only a fraction of the functionality (enough to meet the needs of a broad category of users). However, Google traditionally pushed these products out early in order to attempt to evolve them over time. I believe that if any company of the three I mention here actually implements MVP as I believe it to be commonly understood, it is Google. Release, innovate, repeat. Google will sometimes put out products just to try them, and cull them later if the direction was wrong. If you’re careful about how often you do this, that’s fine. If you’re constantly tuning by turning off services that some segment of your customers depend on, it can cost you serious customer goodwill, as we recently saw with Google Reader (though I doubt in the long run that event will really harm Google). It has been interesting for me to watch Google build their own Nexus phones, where MVP obviously can’t work the same. You can innovate hardware Release over Release (RoR), but you can’t ever improve a bad hardware compromise after the fact – just retouch the software inside. Google has learned this. I think Amazon learned it after the original Kindle, but even the Fire HD was marred a bit by hardware design choices like a power button that was too easy to hit accidentally while reading. But Amazon is learning.

For Apple, I believe MVP means shipping products that make conscious choices about what features are even there. With the original iPhone, Apple was given grief because it wasn’t 3G (only years later to be berated because the 3GS, 4, and 4S continued to just be 3G). Apple doesn’t include NFC. They don’t have hardware or software to let you “bump” phones. They only recently added any sort of “wallet” functionality… The list goes on and on. Armchair pundits berate Apple because they are “late” (in the pundit’s eyes) with technology that others like Samsung have been trying to mainstream for 1-3 hardware/software cycles. Sometimes they are late. But sometimes they’re “on-time”. When you look at something like 3G or 4G, it is critical that you get it working with all of the carriers you want to support it, and all of their networks. If you don’t, users get ticked because the device doesn’t “just work”. During Windows XP, that was a core mantra of Jim Allchin’s – “It just works”. I have to believe that internally, Apple often follows this same mantra. So things like NFC or QR codes (now seemingly dying) – which as much as they are fun nerd porn, aren’t consumer usable or viable everywhere yet – aren’t in Apple’s hardware. To Apple, part of the M in MVP seems to be the hardware itself – only include the hardware that is absolutely necessary – nothing more – and unless the scenario can work ubiquitously, it gets shelved for a future derivation of the device. The software works similarly, where Apple has been curtailing some software (Messages, for example) for legacy OS X versions, only enabling it on the new version. Including new hardware and software only as the scenarios are perfect, and only in new devices or software, rather than throwing it in early and improving on it later, can in many ways be seen as a forcing function to encourage movement to a new device (as Siri was with the 4S).

I’ve seen lots of geeks complain that Apple is stalling out. They look at Apple TV, where Apple doesn’t have voice, doesn’t have an app ecosystem, doesn’t have this or that… Many people complain that they’re too slow. I believe quite the opposite: that Apple, rather than falling for the “spaghetti on the wall” feature matrix we’ve seen Samsung fall for (just look at the Galaxy S4 and the features it touts), takes time – perhaps too much time, according to some people – to assess the direction of the market. Apple knows the whole board they are playing, where competitors don’t. To paraphrase Wayne Gretzky, they “skate to where the puck is going to be, not where it has been.” Most competitors seem more than happy to try and “out-feature” Apple with new devices, even when those features aren’t very usable or very functional in the real world. I think they’re losing sight of what their goal should be, which is building great experiences for their users, and instead believing their brass ring is “more features than Apple”. This results in a nerd-porn arms race, adding features that aren’t ready for prime time, or aren’t usable by all but a small percentage of users.

Looking back at the Amazon example I gave earlier, I want you to think about something. That flicker on page turn… Would Apple have ever shipped that? Would Google? Would you?

I think that developing an MVP of hardware or software (or generally both, today) is quite complex, and requires the team making the decision to have a holistic view about what is most important to the entire team, to the customer, and to the long-term success of your product line and your company – features, quality, or date. What is viable to you? What’s the bare minimum? What would you rather leave on the cutting room floor? Finesse, finish, or features?

Given the choice, would you rather have a device with some rough edges but lots of value (it’s “cheap”, in many senses of the word)? A device that leads the market technically, but may not be completely finished either? A device that feels “old” to technophiles, but is usable by technophobes?

What does MVP mean to you?


19
Mar 13

Bill Hill and Homo Sapiens 2.0

Working on another blog post, and ran across an interview of Bill Hill from 2009. Bill reinvented himself many times in his career, from a newspaperman to someone who fundamentally worked to change the way the world read text on a digital screen. It harkens back to yesterday’s post, as well as my post on the machines coming for your job. Specifically, at about 19 minutes in, this conversation comes up:

Interviewer: “In this economy…What’s the relationship between fear…and taking chances…?”

Bill Hill: “Well that’s just the whole point. I mean, it’s very easy to get kinda cozy, and do ordinary stuff.” and “You can’t allow yourself to get paralyzed.”

Bill never stopped moving, never stopped reinventing himself. Weeks before he passed, he and I had a conversation about eBooks, almost 13 years after I first met him as we talked the same subject. You can’t stop moving, and can’t stop reinventing yourself.


18
Mar 13

Always Be Unique

Earlier today, this tweet showed up in my Twitter timeline. It leads with the text: “Quality to blame for declining news audiences, study suggests”

I retweeted it, and then commented, “The increased cost for news content, and the decreasing amount of truly unique content, show why people abandon news outlets.”

At first, I thought this applied just to news content. But no, it applies to many things in our lives today; however, news exemplifies it in a unique way.

I’ve said before that “The Web democratized content” (along with music). Anyone can be a “journalist” – or at least a published writer, on the Internet today. But that just gets you published. Anyone can take the time to write a book, pay for an on-demand publisher to print a copy, and voila!, they’re a published author. That doesn’t mean anyone will pay money for copies, read them, and recommend them to friends. Same on the Web.

I’m not only a producer of information, I’m also a consumer – and I have to tell you, as I browse the aisles of information that are out there, there’s a lot of digital junk food vying for our attention. There are a handful of news sources that break actual news, and a handful of news sources that perform strong analysis. But more often, the Web and Twitter (and hours or days later, television stations and newspapers, respectively) are chock full of self-aggrandizing punditry, where, like the childhood game of “telephone”, non-news begins resonating and echoing, becoming louder and louder until it sounds like news. I’ve seen this happen with news about every major technology company, and recently saw it happen to a family member of a friend.

News feeds on sensationalism. Whether it’s bad news, “exclusive” news (whatever that means in the age of Twitter), an idiotic rumor based upon a leak from a supply chain provider, or worse, rumors based upon rumors, it spreads like gossip. In the end, it’s impossible for anyone to truly stand out, because everyone is stuck in this same rut of repeating the rumor.

So back to my point. Why are news audiences declining? Because conventional news outlets are being beaten to the punch. News outlets have generally shrunk in content and quality, while increasing in price, and throwing in advertising technology that gets in the way of the content and the user’s experience. I’m not sure that content paid for primarily by advertising is sustainable. But I can tell you if every time people visit your site you give them interstitial ads, pop-ups/pop-unders/pop-ins, or other hostile advertising that gets in the way of the content, rather than adding value (yes, that’s possible), you’re going to lose readers over time. Pissing off your reader isn’t a good way to provide unique news in a sustainable way.

News has been democratized and commoditized to the point that if we buy a morning paper in one city, fly to another and grab another paper, we see the same syndicated news, with a thin veneer of geographically relevant content grafted on. Conventional journalism outlets are dying because they are slow, and don’t provide significant value to their consumer given the long wait time. They’re also often laden with ads, with articles that are slow to load (usually due to slow ad engines, ironically), and provide little value outside of news you could have seen breaking on Twitter some time ago if you were watching. News outlets that regurgitate twice-baked news without adding value are doomed to be paved over, parking lots of journalism times past.

This applies outside of journalism too. It’s called the first-mover advantage. Do something first, and you can possibly own that market. Follow others, and you have to clearly demonstrate the unique value you provide beyond the first mover – or get buried in the melee of also-rans vying to catch up.

If you want to be successful in anything, always be first, or always be unique. Those are your two choices. Much like any other job, there’s always someone waiting to fill your shoes if you stop providing unique value.


16
Mar 13

Shut up and eat your GMOs

It’s with a fair amount of disappointment (disbelief?) that I read Bruce Ramsey’s article about Initiative 522 (Washington’s GMO labeling proposition) in the Seattle Times.

My belief, after reading this piece, is that Mr. Ramsey should generally refrain from writing when his familiarity with the topic at hand leads him to include the disclaimer “I am a novice”, as he did with the statement early in this article, “I am a novice on genetically modified organisms”.

There are three modalities of belief in the GMO (genetically modified organism) debate – or really in any discussion of where our food comes from. Heck, it applies to almost any topical debate.

  1. Apathy (no real concern one way or another – perhaps no familiarity to base an opinion on)
  2. Agreement (tolerance or fanaticism for the practice)
  3. Antipathy (some disagreement or more with the practice)

In my experience, when it comes to their food, Americans generally seem to fall contentedly into the category of apathy. Happy to ignore the complexities of where their food comes from, most Americans ignore the ugly underbelly of our industrialized food system until the evening news enlightens them to a new E. coli outbreak in antibiotic-laden, undercooked beef from a CAFO, or they latch on to a buzzword used by reporters like pink slime or “meat glue”. They happily go along with the ethos that, “Everything is okay until it’s not okay.” But when that concern passes, most turn back to their bread and circuses, and spend more energy focused on reality shows than what’s in the food their family is eating.

Mr. Ramsey’s piece clearly puts him in the “agreement” camp that GMOs are acceptable because, as he states, “People are trying to make an economic case in a matter that is mostly about belief,” and that they don’t need labels “…if it makes no difference to people’s health?” But it boggles my mind that Mr. Ramsey elected to prognosticate about the need to label GMOs – one way or the other – when he clearly has no background on the topic, or on why labeling efforts ever came to be. Instead, he simply dismisses the debate about GMOs as if it were a figment of the imagination of those behind 522 – just ignorance on their part, or even more so, some evil conspiracy by “Big Organic” to foist its way of life upon the rest of the world. I don’t get where this idea can even begin to come from.

My own belief, based upon years of trying to understand the complexities of our food system, and trying to not turn a blind eye to the unpleasantness of it all, is that all of us in the US are being deceived about whether GMOs are or are not harmless – we are told “it’s fine” – but the people telling us that are the people making the GMO seed, and the profits as a result. There was no opportunity to question at the inception, and even as we face the impending likely approval of a GMO salmon, even with a huge public outcry, it appears that business may win out over unknowable, unanswerable questions about long-term health effects or environmental detriment from this fish being approved.

I’m clearly in the disagreement camp, and I am a firm believer that consumers should be transparently made aware of the possible risks of genetic modification, and given the option to know what foods (or often “foods”) contain GM crops.

Long ago, our government decided to look the other way about GMOs. Using a methodology called substantial equivalency, even over the objections of 9 FDA scientists, the FDA accepted the GMO industry’s stance that there is no difference between conventional breeding (hybridization) and bioengineering. Now maybe you have, but I’ve never seen a salmon mate with an eel, and I’ve never seen a bacterium mate with a papaya. Yet among other genetic cross-breeding, that’s what we’ve got today. If someone tells you there is no difference between hybridization and biotechnology, they’re lying to you (or trying to sell you GMO seed, and likely a pesticide cocktail to go with it).

The use of substantial equivalency is entirely biased towards the needs of producers rather than the general public. It is based around the (non-scientific) philosophy that a new food is like an old food, unless it isn’t. Dismissed by many scientists as not a safety assessment, rather a means to rubber-stamp new foods until otherwise proven hazardous, substantial equivalency has enabled GMO producers to throw countless food components onto our plates simply claiming they are safe, using a categorization called Generally Regarded As Safe (GRAS) until found to be otherwise – note the earlier article where the FDA has even turned a blind eye towards food that was found to not be safe. They aren’t even treated as an additive. Frankly, we’re still just learning what kind of baggage even comes along with GMOs. Note the two paragraphs in that Durango Herald article:

“Monsanto’s own feeding studies, however, showed that the genetic material in GMO corn that makes it pest-resistant was transferred to the beneficial bacteria in the intestinal tract of humans eating GMO corn. This potential for creating a pesticide factory in the human gut has gone untested.”

Followed by:

“Recent research has shown that GMO corn insecticidal proteins are found in the blood of pregnant women and their fetuses. Animal research has shown intestinal, liver, kidney and reproductive toxicity from both GM corn and soy. This does not bode well for the assertion of ‘substantial equivalence.’”

Yet GMOs are in almost everything you eat. Conventional corn, soy, canola, and likely soon, farm-raised, (antibiotic-laden) salmon!

People in the yes on GMO camp generally decry people saying no as “anti-science” or “nutcases”. I’m hardly anti-science, and I like to think I’m reasonably rational and well-balanced. But I believe as a species, we often jump into hasty “great ideas” only to later regret that idea. Radium. Thalidomide. Vioxx. History is littered with pharmaceuticals rushed to market only to be pulled back after fatalities exceeded the manufacturer’s clinical trials. I believe that often, our government officials err on the side of the businesses that pay for influence, rather than on the side of consumers, who merely vote them into office. In the case of GMOs in our foods, the rush to judgment isn’t on the side of the naysayers, it’s on the side of the government and industrial agricultural giants, who have foisted GMO crops on consumers, without ever questioning the long-term side effects, or offering consumers any other option aside from buying organically labeled foods, where GM ingredients are forbidden by definition.

If you don’t read anything else, Mr. Ramsey, I hope you will read this. There is no independent testing of GMOs. None. There is no long-term testing of GMOs – independent or black-box at the vendor. None. As a consumer, you have absolutely no way, outside of eating exclusively organic, to be sure you are not regularly ingesting GMOs. You dismissed the need for 522 not because you investigated and understood why GMOs are not good for us (let alone the planet), but because you inquired with one geneticist and a company trying to sell a GMO apple. Isn’t that rather like asking a WSU student what kind of education UW can provide? Yet GMOs are in almost everything you eat. You trivialize the need for labels, and point the finger at a local Co-op, Whole Foods, and other organic food proponents as renegades trying to force their world on you.

California’s recently failed proposition to label GMOs was, much as Washington’s was, created by volunteers. California’s was crushed under the weight of conventional food and agribusiness giants. They don’t want GMO labeling because, for the food producers, it will result in higher costs for ingredients (for example, replacing high-fructose corn syrup, predominantly GM, with non-GM sugars), packaging changes, and reformulation costs. The agribusiness giants? Because it crushes their revenue stream. That’s why Monsanto spent millions to defeat the initiative, and surely will here as well.

I’m not exactly sure why you elected to land on the side of supporting GMOs, given your self-admitted naiveté. But I hope that in the future you will examine and understand the whole debate before injecting yourself into it and using your column to create (my belief) wrong-headed, uninformed public opinion.

The organizations and companies you pointed a finger at as being “behind 522”? Sure – they stand to make more profit if GMOs are labeled. But I honestly don’t believe that’s why PCC (a local Co-operative), among them, is backing it. They are backing it because consumers deserve to have a choice – to be pulled from the apathy that the industrial food suppliers have happily created – to understand what is in the foods that they buy, and to make healthier and more sustainable choices. In the end, it won’t matter if 522 passes. Whole Foods has already decided to label all GMO foods in its stores by 2018. That’s still too far away, but it’s a light at the end of the tunnel.

I can go on discussing the unsustainability of GMOs at length – contrary to the public relations slogan, GMOs are not the only solution to feeding the world (http://www.huffingtonpost.com/jeffrey-smith/vilsack-mistakenly-pitche_b_319998.html). With their seed licensing costs, creation of a biologically unsustainable monoculture, high cost for other inputs matched to many of them, increasing requirements for more volume and higher toxicity herbicides and pesticides to accompany them as pests and weeds develop natural resistance, GMOs as the key to “feeding the world” are a myth.


09
Mar 13

Most smart appliances are stupid

“Smart”. Many device and appliance manufacturers toss that word around like they know what it means.

An app platform on a TV? Voila! It’s a “Smart TV”.

An LCD screen and/or an app platform on a refrigerator, washer or dryer? Voila! It’s a “Smart Appliance”.

You go ahead and keep using that word, manufacturers. Just understand that it doesn’t mean what you think it means. You’re turning it into a meaningless modifier, like “green”, “natural”, or one of my favorites, “healthy”.

Let me offer you a hint. Anything that leads the blurb describing itself with a meaningless modifier usually doesn’t actually do the thing it purports to do. Way back in 2010, I discussed how it was important for purveyors of “family room devices” and app authors to understand the difference between single-user and multiple-user applications/experiences, rather than just shoving the entire (single-user) smart phone experience into the “smart TV”. Just like task-oriented computing for any other class of user, designing proper smart TVs, and even more importantly, smart appliances, needs to start with solving the actual problems that users are having, not just crapping out LCD displays on top of old refrigerators, washers, and dryers.

Let me start with a real-world collection of problems that I have with appliances, tell you why fixing them is truly “smart”, and why adding an LCD display or “apps” to an appliance won’t solve any of them.

Dishwasher:

  • Problem: I unloaded and reloaded the dishwasher, but forgot to start it.
  • Solution: Send my phone a notification and tell me before I go to bed.
  • Problem: I unloaded, reloaded, and started the dishwasher but forgot soap.
  • Solution: Send my phone a notification and tell me after it starts so I can fix it.

Oven:

  • Problem: I set a timer for a dish I’m baking, and it has been going off for a minute.
  • Solution: Send my phone a notification or CALL ME!

Washer or dryer:

  • Problem: A load of clothes has finished and is ready for the next step.
  • Solution: Send my phone a notification so I can move them/fold/hang them.

You get the idea. Things that can hook in here? Doorbells. Smoke alarms/CO detectors. Garage doors to tell me I left them open and it’s 11 PM. The list goes on and on. Simple notifications to tell me something’s not going the way it should, or I need to pay attention to something I’m not.

None of these involve a display or that much intelligence. What they do involve is:

  1. Network interconnectivity within appliances (WiFi, Bluetooth 4, Zigbee, whatever) to a centralized service. Pick a standard across the industry and go with it – a rising tide lifts all boats.
  2. A willingness for someone to step up and build an intra-appliance API for pushing notifications (a rough sketch of what that could look like follows below)
  3. A kick-ass (simple) user interface for apps on major mobile platforms, like the one Nest has.
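To illustrate the second item, here’s a hypothetical sketch – not any vendor’s real API – of what pushing one of those notifications could look like: the appliance posts a small JSON event to a central hub, which forwards it to my phone. The hub URL, event names, and fields are all invented for illustration.

```python
# A hypothetical sketch of an appliance-to-hub notification push. The hub URL,
# event names, and payload fields are made up for illustration only.
import json
import urllib.request
from datetime import datetime, timezone

HUB_URL = "http://home-hub.local/api/notifications"  # assumed hub endpoint

def notify(appliance: str, event: str, message: str) -> None:
    """Push one notification event from an appliance to the central hub."""
    payload = {
        "appliance": appliance,                            # e.g. "dishwasher"
        "event": event,                                    # e.g. "cycle_not_started"
        "message": message,                                # human-readable text for the phone
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    request = urllib.request.Request(
        HUB_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # hub acknowledges; the phone app handles delivery/display

# Example: the dishwasher was loaded but never started.
if __name__ == "__main__":
    notify("dishwasher", "cycle_not_started",
           "Loaded at 7:42 PM but never started - start it before bed?")
```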

Yes – admitting that the smart phone or tablet is the hub of our lives, not our refrigerator or washer and dryer, is also a key tenet. Is that news?

Smart appliances aren’t about doing more things on our appliances. Sorry to break that to you, appliance vendors. It’s about appliances integrating into the way we use them, and helping us make the most of them. Stop adding apps. Stop adding screens. Start solving problems.