Mar 13

The death of the pixel

It really didn’t hit me until recently. Something I’ve worked with for years is being forced to retire. Well, not really retire, but at least asked to take a seat in the background.

My daughters love it when I tell them stories about “When I was little…” – the stories always begin with that saying. They usually have a lot to do with technology, and how things have changed over the last 40 years. You know the drill – phones with self-coiling cords that were stuck to the wall, payphones, Disney Read-Along books (records and then tapes), etc. Good times.

Two days ago, I had been working with a Retina MacBook Pro, and then it was time to put my 8-year-old to bed. I told her about the Apple IIe my parents had bought when I was younger – the computer that I used through my first year of college.

My parents had even opted for the 80-column text card, but as I look back now, the things that stick out in my mind were using The Print Shop to create horribly pixelated banners and signs, and using AppleWorks to create documents – all the way through that first year of college. I told her all about the tiny, block-like dots that made up everything on the screen, and everything that we printed.

The pixel was an essential part of technology then. We were on the other end of the spectrum from today; that is, “how many pixels do you need to make a bunch of pixels look kind of like the letter ‘o’?” I have to look back now and laugh a bit, because – while it was amazing to have computers at all – this early era of Apples and PCs is laughable from a user experience perspective. Like cars with tillers and no windscreen, these were good enough to work for the time being.

With my iPhones, I’ve appreciated how amazing the pixel-dense “Retina” displays are. In particular, reading text is incredibly pleasant, as you can often forget you’re reading off of pixelated glass. But whether you’re consuming or creating content on that size of screen, it’s hard to get “immersed” in it.

Only as I used that Retina MacBook (a 13″) did I really realize how far we’ve come. Now it isn’t, “how many pixels do you need to make it look like an ‘o'”, it’s “how small do the pixels need to be so that you can’t see the pixels in the ‘o'”. Instead of a bunch of dots creating the illusion of a letter on the screen, it’s the feeling of ink and a magical typewriter delivering a WYSIWYG experience with digital ink on digital paper. Truly amazing.

Mar 13

You’re only as safe as your last backup

This week, for the second time in a year, I lost the hard drive in my main computer, a 2010 ThinkPad W510 running Windows 8. I swear I was good to the computer – I don’t know why this second Seagate 500GB drive (yes, the first one was too!) decided to hit the floor. I’ve had so many hardware problems with this system – BSODs, weird display problems, and more, over the last year, that rather than try to jam it back together for one more gig with the band, I am putting my ThinkPad out to pasture, and have replaced it.

I’ll tell you what – when you have an HDD fail, Twitter is all aflutter with people offering posthumous advice on what you could have done to avoid data loss. SkyDrive, CrashPlan, Dropbox, Windows 8 backup utilities… Like free advice, everybody had wisdom to offer… Unfortunately, it was too late. The damage was done. While I didn’t lose the latest draft of my book (THANKS SkyDrive!!!), I did lose an article draft I had been working on for some time. I’m not happy about that. Here’s how it happened.

On Wednesday morning, the day of my PC’s demise, I got up early, as I often have to do, to take my eldest to ice skating before school. The day before, I had checked out a key work file from our work file server (a classic SMB Windows server file share, not SharePoint). Failure 1: I skipped a step and pulled it locally, instead of archiving it to the server and making a copy. Our process is arcane and complex at times, but it works. The document was a rather complex outline for a lengthy piece on SharePoint Search.

While I was working at the skating rink, I wrote a good 1,000 words, getting more than halfway through the article. Failure 2: I was working with the file on my desktop, not in my SkyDrive folder. Failure 3: I wasn’t on the Internet while I was at the skating rink – they have no free WiFi available. As I wrote the piece, I noticed that my system was behaving really erratically. Apps were hanging and whitescreening, only to eventually come back. Running Process Explorer, I couldn’t see anything pegging the CPU, so I couldn’t find an obvious culprit to blame. Looking back, the warning signs of impending HDD failure were all there. I had a bunch of USB Flash Drives (UFDs) with me, so I could have, and should have, copied the file off. At the moment, I’m so terrified of HDD data loss that I’m saving things into synchronized folders all over the place, and backing up everything to everywhere.

When my daughter was done skating, we headed home, and my wife took her and her sister to school as I headed to the office. I logged on, and my computer failed to resume – it was hibernated, and tried starting – only to BSOD. After the BSOD, it just hung at the Windows 8 whirligig on the boot screen. Once put in any other machine, the drive simply clicks away, and fails to mount. Dead.

Fortunately, I had been using Windows 8’s File History to back up my files. Failure 4: Because I was using it with an external USB HDD, I was inconsistent about backing it up, and hadn’t done so in a week. Meaning my outline file was dead. Gone. MIA.

I have to look back at my criticism of Windows To Go and even renew it a bit. Creating content on the go, unless you have WiFi or 3G/4G connectivity back to SharePoint, SkyDrive, Dropbox, etc., is an invitation to lose work as I did.

I often say that if you make a user opt in to a process, they never will. My new backup mechanism involves technologies that all happen in the background, automatically, and don’t let me opt out, as I had done with Windows 8’s File History. Though nothing short of copying the file off before the HDD died on Wednesday could have saved it, at least I would have had the outline from an earlier backup. But through a series of lazy step-skipping on my part, I hosed myself. I am disappoint.
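To illustrate the “background, no opt-out” idea, here is a minimal, purely illustrative sketch in Python of the copy-if-newer pass that a scheduled background task could run against a synchronized folder. Real sync clients like SkyDrive, Dropbox, or CrashPlan are far more sophisticated (change journals, versioning, conflict handling); the function name and behavior here are my own assumptions, not any product’s API.

```python
# Sketch: copy any file that is newer in src than its counterpart in dst.
# The point is that this runs on a timer, silently, with no opt-in step.
import os
import shutil

def sync_newer(src, dst):
    """Copy files from src to dst when the dst copy is missing or older.

    Returns the relative paths of the files that were copied.
    """
    copied = []
    for root, _dirs, files in os.walk(src):
        for name in files:
            source = os.path.join(root, name)
            rel = os.path.relpath(source, src)
            target = os.path.join(dst, rel)
            # Copy when the backup is absent, or stale relative to the source.
            if not os.path.exists(target) or \
                    os.path.getmtime(target) < os.path.getmtime(source):
                os.makedirs(os.path.dirname(target), exist_ok=True)
                shutil.copy2(source, target)  # copy2 preserves timestamps
                copied.append(rel)
    return copied
```

Because `shutil.copy2` preserves modification times, a second pass over an unchanged tree copies nothing – which is what lets a task like this run every few minutes without cost.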

Given that I’ve had three HDDs die on me over the last year, and have lost at least some data every time except when my iMac died (thanks to Time Machine), I still wonder why modern operating systems seem to have inadequate or ineffective means of telling the user that their drive is failing and about to die.

Mar 13

What’s your definition of Minimum Viable Product?

At lunch the other day, a friend and I were discussing the buzzword bingo of “development methodologies” (everybody’s got one).

In particular, we homed in on Minimum Viable Product (MVP) as being an all-but-gibberish term, because it means something different to everyone.

How can you possibly define what is an MVP, when each one of us approaches MVP with predisposed biases of what is viable or not? One man’s MVP is another’s nightmare. Let me explain.

For Amazon, the original Kindle, with its flickering page turn, was an MVP. Amazon, famous for shipping… “cost-centric” products and services, was traditionally willing to leave some sharp edges in the product. For the Kindle, this meant flickering page turns were okay. It meant that Amazon Web Services (AWS) didn’t need a great portal, or useful management tools – until their hand was forced on all three by competitors. Amazon’s MVP includes all the features they believe it needs, whether or not they’re fully baked or usable, or whether the product still has metaphoric splinters coming off from where the saw blade of feature decisions cut it. This often works because Amazon’s core customer segment, like Walmart’s, tends to be value-driven, rather than user-experience driven.

For Google, MVP means shipping minimal products that they either call “Beta”, or that behave like a beta, tuning them, and re-releasing them. In many ways, this model works, as long as customers are realistic about what features they actually use. For Google Apps, this means applications that behave largely like Microsoft Office, but include only a fraction of the functionality (enough to meet the needs of a broad category of users). However, Google has traditionally pushed these products out early in order to evolve them over time. If any of the three companies I mention here actually implements MVP as I believe it to be commonly understood, it is Google. Release, innovate, repeat. Google will sometimes put out products just to try them, and cull them later if the direction was wrong. If you’re careful about how often you do this, that’s fine. If you’re constantly tuning by turning off services that some segment of your customers depend on, it can cost you serious customer goodwill, as we recently saw with Google Reader (though I doubt in the long run that event will really harm Google).

It has been interesting for me to watch Google build their own Nexus phones, where MVP obviously can’t work the same way. You can innovate hardware Release over Release (RoR), but you can’t ever improve a bad hardware compromise after the fact – just retouch the software inside. Google has learned this. I think Amazon learned it after the original Kindle, but even the Fire HD was marred a bit by hardware design choices, like a power button that was too easy to press while reading. But Amazon is learning.

For Apple, I believe MVP means shipping products that make conscious choices about what features are even there. With the original iPhone, Apple was given grief because it wasn’t 3G (only years later to be berated because the 3GS, 4, and 4S continued to be just 3G). Apple doesn’t include NFC. They don’t have hardware or software to let you “bump” phones. They only recently added any sort of “wallet” functionality… The list goes on and on. Armchair pundits berate Apple because they are “late” (in the pundits’ eyes) with technology that others like Samsung have been trying to mainstream for one to three hardware/software cycles. Sometimes they are late. But sometimes they’re “on time”. When you look at something like 3G or 4G, it is critical that you get it working with all of the carriers you want to support, and all of their networks. If you don’t, users get ticked because the device doesn’t “just work”. During Windows XP’s development, that was a core mantra of Jim Allchin’s – “It just works”. I have to believe that internally, Apple often follows this same mantra. So things like NFC or QR codes (now seemingly dying) – which, as much fun nerd porn as they are, aren’t consumer-usable or viable everywhere yet – aren’t in Apple’s hardware.

To Apple, part of the M in MVP seems to be the hardware itself – only include the hardware that is absolutely necessary, nothing more – and unless the scenario can work ubiquitously, it gets shelved for a future derivation of the device. The software works similarly: Apple has been withholding some software (Messages, for example) from legacy OS X versions, enabling it only on the new version. Including new hardware and software only as the scenarios are perfected, and only in new devices or software, rather than throwing it in early and improving on it later, can in many ways be seen as a forcing function to encourage movement to a new device (as Siri was with the 4S).

I’ve seen lots of geeks complain that Apple is stalling out. They look at the Apple TV, where Apple doesn’t have voice, doesn’t have an app ecosystem, doesn’t have this or that… Many people complain that they’re too slow. I believe quite the opposite: that Apple, rather than succumbing to the “spaghetti on the wall” feature matrix we’ve seen Samsung fall for (just look at the Galaxy S4 and the features it touts), takes time – perhaps too much time, according to some people – to assess the direction of the market. Apple knows the whole board they are playing, where competitors don’t. To paraphrase Wayne Gretzky, they “skate to where the puck is going to be, not where it has been.” Most competitors seem more than happy to try and “out-feature” Apple with new devices, even when those features aren’t very usable or very functional in the real world. I think they’re losing sight of what their goal should be – building great experiences for their users – and instead believing their brass ring is “more features than Apple”. This results in a nerd-porn arms race, adding features that aren’t ready for prime time, or aren’t usable by all but a small percentage of users.

Looking back at the Amazon example I gave early on, I want you to think about something. That flicker on page turn… Would Apple have ever shipped that? Would Google? Would you?

I think that developing an MVP of hardware or software (or generally both, today) is quite complex, and requires the team making the decision to have a holistic view about what is most important to the entire team, to the customer, and to the long-term success of your product line and your company – features, quality, or date. What is viable to you? What’s the bare minimum? What would you rather leave on the cutting room floor? Finesse, finish, or features?

Given the choice, would you rather have a device with some rough edges but lots of value (it’s “cheap”, in many senses of the word)? A device that leads the market technically, but may not be completely finished either? A device that feels “old” to technophiles, but is usable by technophobes?

What does MVP mean to you?

Mar 13

Bill Hill and Homo Sapiens 2.0

Working on another blog post, I ran across a 2009 interview with Bill Hill. Bill reinvented himself many times in his career, from newspaperman to someone who fundamentally worked to change the way the world read text on a digital screen. It harkens back to yesterday’s post, as well as my post on the machines coming for your job. Specifically, at about 19 minutes in, this conversation comes up:

Interviewer: “In this economy…What’s the relationship between fear…and taking chances…?”

Bill Hill: “Well that’s just the whole point. I mean, it’s very easy to get kinda cozy, and do ordinary stuff.” and “You can’t allow yourself to get paralyzed.”

Bill never stopped moving, never stopped reinventing himself. Weeks before he passed, he and I had a conversation about eBooks, almost 13 years after I first met him as we talked the same subject. You can’t stop moving, and can’t stop reinventing yourself.

Mar 13

Always Be Unique

Earlier today, this tweet showed up in my Twitter timeline. It leads with the text: “Quality to blame for declining news audiences, study suggests”

I retweeted it, and then commented, “The increased cost for news content, and the decreasing amount of truly unique content, show why people abandon news outlets.”

At first, I thought this applied just to news content. But no, it applies to many things in our lives today; news just exemplifies it in a unique way.

I’ve said before that “The Web democratized content” (along with music). Anyone can be a “journalist” – or at least a published writer – on the Internet today. But that just gets you published. Anyone can take the time to write a book, pay an on-demand publisher to print a copy, and voila, they’re a published author. That doesn’t mean anyone will pay money for copies, read them, and recommend them to friends. Same on the Web.

I’m not only a producer of information, I’m also a consumer – and I have to tell you, as I browse the aisles of information that are out there, there’s a lot of digital junk food vying for our attention. There are a handful of news sources that break actual news, and a handful of news sources that perform strong analysis. But more often, the Web and Twitter (and hours or days later, television stations and newspapers, respectively) are chock full of self-aggrandizing punditry, where, like the childhood game of “telephone”, non-news begins resonating and echoing, becoming louder and louder until it sounds like news. I’ve seen this happen with news about every major technology company, and recently saw it happen to a family member of a friend.

News feeds on sensationalism. Whether it’s bad news, “exclusive” news (whatever that means in the age of Twitter), an idiotic rumor based upon a leak from a supply chain provider, or worse, rumors based upon rumors, it spreads like gossip. In the end, it’s impossible for anyone to truly stand out, because everyone is stuck in this same rut of repeating the rumor.

So back to my point. Why are news audiences declining? Because conventional news outlets are being beaten to the punch. News outlets have generally shrunk in content and quality, while increasing in price, and throwing in advertising technology that gets in the way of the content and the user’s experience. I’m not sure that content paid for primarily by advertising is sustainable. But I can tell you if every time people visit your site you give them interstitial ads, pop-ups/pop-unders/pop-ins, or other hostile advertising that gets in the way of the content, rather than adding value (yes, that’s possible), you’re going to lose readers over time. Pissing off your reader isn’t a good way to provide unique news in a sustainable way.

News has been democratized and commoditized to the point that if we buy a morning paper in one city, fly to another and grab another paper, we see the same syndicated news, with a thin veneer of geographically relevant content grafted on. Conventional journalism outlets are dying because they are slow, and don’t provide significant value to their consumer given the long wait time. They’re also often laden with ads, with articles that are slow to load (usually due to slow ad engines, ironically), and provide little value outside of news you could have seen breaking on Twitter some time ago if you were watching. News outlets that regurgitate twice-baked news without adding value are doomed to be paved over, parking lots of journalism times past.

This applies outside of journalism too. It’s called the first-mover advantage. Do something first, and you can possibly own that market. Follow others, and you have to clearly demonstrate the unique value you provide beyond the first mover – or get buried in the melee of also-rans vying to catch up.

If you want to be successful in anything, always be first, or always be unique. Those are your two choices. Much like any other job, there’s always someone waiting to fill your shoes if you stop providing unique value.

Mar 13

Shut up and eat your GMOs

It’s with a fair amount of disappointment (disbelief?) that I read Bruce Ramsey’s article about Initiative 522 (Washington’s GMO labeling proposition) in the Seattle Times.

My belief, after reading this piece, is that Mr. Ramsey should generally refrain from writing on a topic when his familiarity with it leads him to include a disclaimer like the one early in this article: “I am a novice on genetically modified organisms”.

There are three modalities of belief in the GMO (genetically modified organism) debate – or in any discussion of where our food comes from. Heck, it really applies to almost any topical debate.

  1. Apathy (no real concern one way or another – perhaps no familiarity to base an opinion on)
  2. Agreement (tolerance or fanaticism for the practice)
  3. Antipathy (some disagreement or more with the practice)

In my experience, when it comes to their food, Americans generally seem to fall contentedly into the category of apathy. Happy to ignore the complexities of where their food comes from, most Americans ignore the ugly underbelly of our industrialized food system until the evening news enlightens them to a new E. coli outbreak in antibiotic-laden, undercooked beef from a CAFO, or they latch on to a buzzword used by reporters like pink slime or “meat glue”. They happily go along with the ethos that, “Everything is okay until it’s not okay.” But when that concern passes, most turn back to their bread and circuses, and spend more energy focused on reality shows than what’s in the food their family is eating.

Mr. Ramsey’s piece clearly puts him in the “agreement” camp that GMOs are acceptable because, as he states, “People are trying to make an economic case in a matter that is mostly about belief.” and that they don’t need labels “…if it makes no difference to people’s health?” But it boggles my mind that Mr. Ramsey elected to prognosticate about the need to label GMOs – one way or the other – when he clearly has no background on the topic, or on why labeling efforts ever came to be. Instead, he simply dismisses the debate about GMOs as if it were a figment of the imagination of those behind 522 – just ignorance on their part, or even more so, some evil conspiracy by “Big Organic” to foist its way of life upon the rest of the world. I don’t see where this idea even comes from.

My own belief, based upon years of trying to understand the complexities of our food system, and trying not to turn a blind eye to the unpleasantness of it all, is that all of us in the US are being deceived about whether GMOs are or are not harmless. We are told “it’s fine” – but the people telling us that are the people making the GMO seed, and the profits, as a result. There was no opportunity to question at the inception, and now, as we face the likely approval of a GMO salmon despite a huge public outcry, it appears that business may win out over unknowable, unanswerable questions about the long-term health effects or environmental detriment of this fish being approved.

I’m clearly in the disagreement camp, and I am a firm believer that consumers should be transparently made aware of the possible risks of genetic modification, and given the option to know what foods (or often “foods”) contain GM crops.

Long ago, our government decided to look the other way about GMOs. Using a methodology called substantial equivalency, even over the objections of 9 FDA scientists, the FDA accepted the GMO industry’s stance that there is no difference between conventional breeding (hybridization) and bioengineering. Now maybe you have, but I’ve never seen a salmon mate with an eel, and I’ve never seen a bacterium mate with a papaya. Yet among other genetic cross-breeding, that’s what we’ve got today. If someone tells you there is no difference between hybridization and biotechnology, they’re lying to you (or trying to sell you GMO seed, and likely a pesticide cocktail to go with it).

The use of substantial equivalency is entirely biased towards the needs of producers rather than the general public. It is based around the (non-scientific) philosophy that a new food is like an old food, unless it isn’t. Dismissed by many scientists as not a safety assessment but rather a means to rubber-stamp new foods until otherwise proven hazardous, substantial equivalency has enabled GMO producers to throw countless food components onto our plates simply by claiming they are safe, using a categorization called Generally Recognized As Safe (GRAS) until found to be otherwise – note the earlier article where the FDA has even turned a blind eye towards food that was found to not be safe. They aren’t even treated as an additive. Frankly, we’re still just learning what kind of baggage even comes along with GMOs. Note these two paragraphs in that Durango Herald article:

“Monsanto’s own feeding studies, however, showed that the genetic material in GMO corn that makes it pest-resistant was transferred to the beneficial bacteria in the intestinal tract of humans eating GMO corn. This potential for creating a pesticide factory in the human gut has gone untested.”

Followed by:

“Recent research has shown that GMO corn insecticidal proteins are found in the blood of pregnant women and their fetuses. Animal research has shown intestinal, liver, kidney and reproductive toxicity from both GM corn and soy. This does not bode well for the assertion of ‘substantial equivalence.’”

Yet GMOs are in almost everything you eat. Conventional corn, soy, canola, and likely soon, farm-raised (antibiotic-laden) salmon!

People in the yes-on-GMO camp generally decry people saying no as “anti-science” or “nutcases”. I’m hardly anti-science, and I like to think I’m reasonably rational and well-balanced. But I believe that as a species, we often jump into hasty “great ideas” only to regret them later. Radium. Thalidomide. Vioxx. History is littered with pharmaceuticals rushed to market only to be pulled back after fatalities exceeded what the manufacturer’s clinical trials predicted. I believe that often, our government officials err on the side of the businesses that pay for influence, rather than on the side of consumers, who merely vote them into office. In the case of GMOs in our foods, the rush to judgment isn’t on the side of the naysayers; it’s on the side of the government and industrial agricultural giants, who have foisted GMO crops on consumers without ever questioning the long-term side effects, or offering consumers any option aside from buying organically labeled foods, where GM ingredients are forbidden by definition.

If you don’t read anything else, Mr. Ramsey, I hope you will read this. There is no independent testing of GMOs. None. There is no long-term testing of GMOs – independent or in-house at the vendor. None. As a consumer, you have absolutely no way, outside of eating exclusively organic, to ensure that you are not regularly ingesting GMOs. You dismissed the need for 522 not because you investigated and understood why GMOs are not good for us (let alone the planet), but because you inquired with one geneticist and a company trying to sell a GMO apple. Isn’t that rather like asking a WSU student what kind of education UW can provide? You trivialize the need for labels, and point the finger at a local co-op, Whole Foods, and other organic food proponents as renegades trying to force their world on you.

California’s recently failed proposition to label GMOs was, much as Washington’s was, created by volunteers. California’s was crushed under the weight of conventional food and agribusiness giants. They don’t want GMO labeling because, for the food producers, it would mean higher ingredient costs (for example, replacing high-fructose corn syrup, predominantly GM, with non-GM sugars), packaging changes, and reformulation costs. The agribusiness giants? Because it crushes their revenue stream. That’s why Monsanto spent millions to defeat the initiative, and surely will here as well.

I’m not exactly sure why you elected to land on the side of supporting GMOs, given your self-admitted naiveté. But I hope that in the future you will examine and understand the whole debate before injecting yourself into it and using your column to create what I believe is wrong-headed, uninformed public opinion.

The organizations and companies you pointed at as being “behind 522”? Sure – they stand to make more profit if GMOs are labeled. But I honestly don’t believe that’s why PCC (a local co-operative), among them, is doing it. They are backing it because consumers deserve to have a choice – to be pulled from the apathy that the industrial food suppliers have happily created – to understand what is in the foods that they buy, and to make healthier and more sustainable choices. In the end, it won’t matter if 522 passes. Whole Foods has already decided to label all GMO foods in their stores by 2018. That’s still too far away, but it’s a light at the end of the tunnel.

I could go on discussing the unsustainability of GMOs at length – contrary to the public relations slogan, GMOs are not the only solution to feeding the world (http://www.huffingtonpost.com/jeffrey-smith/vilsack-mistakenly-pitche_b_319998.html). With their seed licensing costs, their creation of a biologically unsustainable monoculture, the high cost of the other inputs matched to many of them, and the increasing volume and toxicity of the herbicides and pesticides needed to accompany them as pests and weeds develop natural resistance, GMOs as the key to “feeding the world” are a myth.

Mar 13

Most smart appliances are stupid

“Smart”. Many device and appliance manufacturers toss that word around like they know what it means.

An app platform on a TV? Voila! It’s a “Smart TV”.

An LCD screen and/or an app platform on a refrigerator, washer or dryer? Voila! It’s a “Smart Appliance”.

You go ahead and keep using that word, manufacturers. Just understand that it doesn’t mean what you think it means. You’re turning it into a meaningless modifier, like “green”, “natural”, or one of my favorites, “healthy”.

Let me offer you a hint. Anything that leads the blurb describing itself with a meaningless modifier usually doesn’t actually do the thing it purports to do. Way back in 2010, I discussed how it was important for purveyors of “family room devices” and app authors to understand the difference between single-user and multiple-user applications/experiences, rather than just shoving the entire (single-user) smartphone experience into the “smart TV”. Just like task-oriented computing for any other class of user, designing proper smart TVs, and even more importantly, smart appliances, needs to start with solving the actual problems that users are having, not just crapping out LCD displays on top of old refrigerators or washers and dryers.

Let me start with a real-world collection of problems that I have with appliances, tell you why fixing them is truly “smart”, and why adding an LCD display or “apps” to an appliance won’t solve any of them.


Dishwasher:

  • Problem: I unloaded and reloaded the dishwasher, but forgot to start it.
  • Solution: Send my phone a notification and tell me before I go to bed.
  • Problem: I unloaded, reloaded, and started the dishwasher but forgot soap.
  • Solution: Send my phone a notification and tell me after it starts so I can fix it.

Oven:

  • Problem: I set a timer for a dish I’m baking, and it has been going off for a minute.
  • Solution: Send my phone a notification or CALL ME!

Washer or dryer:

  • Problem: A load of clothes has finished and is ready for the next step.
  • Solution: Send my phone a notification so I can move them/fold/hang them.

You get the idea. Things that can hook in here? Doorbells. Smoke alarms/CO detectors. Garage doors to tell me I left them open and it’s 11 PM. The list goes on and on. Simple notifications to tell me something’s not going the way it should, or I need to pay attention to something I’m not.

None of these involve a display or that much intelligence. What they do involve is:

  1. Network interconnectivity within appliances (WiFi, Bluetooth 4, Zigbee, whatever) to a centralized service. Pick a standard across the industry and go with it – a rising tide lifts all boats.
  2. A willingness for someone to step up and build an intra-appliance API for pushing notifications
  3. A kick-ass (simple) user interface for apps on major mobile platforms like Nest has.
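To make the notification idea concrete, here is a minimal sketch of what such an intra-appliance API could look like. Everything here – the `NotificationHub` class and its method names – is hypothetical, since no such industry standard exists: appliances publish small events to a hub, and the hub fans them out to subscribers (in practice, a push notification service that reaches your phone).

```python
# Hypothetical sketch of an intra-appliance notification hub.
# Appliances push small events; the hub fans them out to subscribers.

class NotificationHub:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        """Register a callback - in real life, one that sends a phone push."""
        self._subscribers.append(callback)

    def publish(self, appliance, event):
        """Called by an appliance when something needs the owner's attention."""
        message = f"{appliance}: {event}"
        for notify in self._subscribers:
            notify(message)

# Example: the dishwasher reports that it was loaded but never started.
hub = NotificationHub()
received = []
hub.subscribe(received.append)  # stand-in for a phone push service
hub.publish("dishwasher", "loaded but not started")
```

The whole thing is a few dozen lines of plumbing – which is exactly the point: the hard part is industry agreement on the transport and the event format, not the intelligence.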

Yes – admitting that the smart phone or tablet is the hub of our lives, not our refrigerator or washer and dryer, is also a key tenet. Is that news?

Smart appliances aren’t about doing more things on our appliances. Sorry to break that to you, appliance vendors. It’s about appliances integrating into the way we use them, and helping us make the most of them. Stop adding apps. Stop adding screens. Start solving problems.

Mar 13

Windows desktop apps through an iPad? You fell victim to one of the classic blunders!

I ran across a piece yesterday discussing one hospital’s lack of success with iPads and BYOD. My curiosity piqued, I examined the piece looking for where the project failed. Interestingly, but not surprisingly, it seemed that it fell apart not on the iPad, and not with their legacy application, but in the symphony (or more realistically, the cacophony) of the two together. I can’t be certain that the hospital’s solution is using Virtual Desktop Infrastructure (VDI) or Remote Desktop (RD, formerly Terminal Services) to run a legacy Windows “desktop” application remotely, but it sure sounds like it.

I’ve mentioned before that I believe legacy applications – applications designed for large displays, a keyboard, and a mouse, running on Windows 7/Windows Server 2008 R2 and earlier – are doomed to fail in the touch-centric world of Windows 8 and Windows RT. iPads are no better. In fact, they’re worse. You have no option for a mouse on an iPad, and no vendor-provided keyboard solution (versus the Surface’s two keyboard options which are, take them or leave them, keyboards – complete with trackpads). Add in the licensing and technical complexity of using VDI, and you have a recipe for disappointment.

If you don’t have the time or the funds to redesign your Windows application, but VDI or RD makes sense for you, use Windows clients, Surfaces, dumb terminals with keyboards and mice – even Chromebooks were suggested by a follower on Twitter. All possibly valid options. But don’t use an iPad. Putting an iPad (or a keyboardless Surface or other Windows or Android tablet) between your users and a legacy Windows desktop application is a sure-fire recipe for user frustration and disappointment. Either build secure, small-screen, touch-savvy native or Web applications designed for the tasks your users need to complete, ready to run on tablets and smartphones, or stick with legacy Windows applications – but don’t try to duct-tape the two worlds together as the primary application environment you provide to your users, if all they have are touch tablets.

Feb 13

The machines are coming for your job. Big deal.

This blog post is in response to the TechCrunch piece entitled Get Ready To Lose Your Job.

For my entire life, my father was a physician (until he retired). He had to subscribe to medical journals and take courses to keep his skills up to snuff, but medicine, and his specialty, never evolved to the point that his career was replaced. That said, his specialty (gastroenterology) now has some amazing tools at its disposal that can obviate the need for some procedures or tools. But the point is – he never had to shift jobs, only keep his skills up to date.

I recently read the book Punching Out about a steel stamping plant in Detroit being spun down over a year’s time. While the book left a little bit to be desired (still, a good read), the events were what got me thinking. Here’s a story from a worker the author worked alongside – a man who had been a part of Ford’s assembly line for a long time but had been let go:

 “They built a new assembly line. One day, we went over for a tour of the new line, and they showed me a machine that was doing my job. The line that I was working on was built in 1942, and this was in 1979. They turned the lights out, and the machine was still doing the job. So I said to myself, ‘Now I gotta learn how to build machines.’”

Humans are toolmakers. We find tasks that need repeating, and we find ways to make those tasks more efficient, cheaper, faster, or all of the above. Cotton gins. Threshing machines (or the modern-day combine harvesters which replaced them). Assembly lines. Steel mini-mills. Scripting languages… All of them exist to make repetitive tasks less tedious.

Often when new technology comes along that makes these tedious tasks less cumbersome, the technology is called “disruptive”. Regardless of how disruptive it may actually be in the long run to society overall, when a piece of technology can replace a worker or two… or three… or more… it becomes a socially significant event. The Swing Riots in the early 1800s are a good example of technology leading directly to social complications.

However, even technology isn’t permanent. Let me tell you a secret. NOTHING is permanent. Nothing. Technology is ever-changing, ever-evolving, in a perpetual movement toward better efficiency. As the thresher replaced people, the thresher itself was eventually subsumed by the combine harvester of today, which combined three previous innovations in one – obviating the need for all three (and likely for most of the manufacturers making them).

Innately, humans become comfortable, almost sedentary, in their ways. We think things won’t change, and the status quo will continue. However, I think Isaac Asimov said it well,

“The only constant is change, continuing change, inevitable change, that is the dominant factor in society today. No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be.”

It’s easy to look at technology like threshing machines or steel stamping machines – which both replaced individual, slow labor with automation – and see how technology replaces the individual. But the same is true with software.

In my recent Task-Oriented Computing post, I mentioned this as well:

“Rather than making users take hammers to drive in screws, smaller, task-oriented applications can enable them to process workflow that may have been cumbersome before and enable workers to perform other more critical tasks instead.”

Software is a tool. We use that software to make our work more efficient, cheaper, faster, or all of the above. Does that sound familiar? The key value that people add, and always will add, to any tool – whether it is a device or a software solution – is the human mind. The article I mentioned early on is similar to articles we could find throughout the 1900s about machines “stealing jobs” from auto workers, textile workers, and more. Yes. Many of the jobs of today will not be jobs for humans in the future. They will be jobs for machines and software. Get used to it. This isn’t a threat – it’s an opportunity. Machines and software can free us from the rote tasks of our jobs, if we let them, and if we let ourselves continue to grow and learn throughout our lives. Don’t stand still. You shouldn’t be doing that even if technology weren’t coming to get your job.

I ran across this work that Alan Turing wrote about machines, masters, and servants. As technology continues to accelerate and perform amazing things we would have thought impossible years before, we must not just innovate the technology. All of us – regardless of our role in society – must be constantly reinventing, reinvigorating, and renewing our own role in society. Not content to let our skills and thinking lie dormant, we must push to make a role for ourselves in the ever-changing world. Don’t fear the machines. Don’t fear the software. Don’t fear the change. Be a part of it.

Feb 13

Microsoft Account – Bring Your Own Identity

When you start a new job, there’s only one you. You don’t get a new identity just because you started at a new company. You have the same Social Security number, you have the same fingerprints, same birthdate, same home town. You get a collection of credentials that give you access to company resources, but you don’t really get a new “identity”.

In fact, pretty much the only time you get a completely new identity is if you enter the Federal Witness Protection Program.

What does happen is, you go into a new job, you tell them who you are, you provide your actual identity cards/passport to them, and they establish a pseudo-identity for you within the organization.

For some reason, along the way, it became normal to have your corporate identity be who you are. When we look at Active Directory (AD), and any other LDAP-based directory over the last decade or so, much of their growth has been around trying to make that the single identity you use within the company, and even have it federated out when you need to connect to external resources outside of the organization.
But again, when you leave that company, you take your identity with you. The email address, the AD access, the server access, the application access, the database access – it was all part of your role in the company, and it ceases to be the day you leave.

Last year when Microsoft announced Windows RT, a lot of us… well… kind of freaked out, because Windows RT didn’t include Active Directory membership, let alone any ability to manage the device through Group Policy (GP). After over 12 years, Microsoft was saying “no… no… no… you don’t have to use AD to manage this machine. In fact, it can’t even join AD.” It was Windows 9X all over again in terms of centralized management.

What’s most fascinating to me about Windows RT though, is that your identity, when you log on to that machine, is a Microsoft account; the thing we used to call Passport, Live ID, etc. Microsoft has made your personal Microsoft Account the central hub of everything you do now – from Windows, to SkyDrive, Outlook.com, Office, and the Windows Store – because the device is a personal device, which could possibly be used with work resources. Most importantly though, this account is yours. Your employer has no control over the account. Regrettably, that includes a lack of manageability for such things as password complexity, or how you handle data that crosses from company systems over the threshold of your device and out to the unmanaged SkyDrive service – or any cloud storage service.

A blog post I ran across a few weeks ago about “BYOI” caught my attention. That’s bring your own identity, for those of you keeping score of the acronyms at home. Microsoft hasn’t stated anything of the sort, but I have to look at what’s in Windows RT and wonder if BYOI isn’t indeed part of a bigger trend.

In many ways BYOI reflects the whole BYOD or COPE, or whatever we want to call it… the idea that IT doesn’t own or manage devices any longer, users do. And just as you never would’ve brought a home computer into your company and had it join AD then, why would you do that today? As I alluded to almost a year and a half ago in my post where I stated that hypervisors on phones were a bad idea, it’s not about managing devices anymore. At best, it’s about managing applications – and even more about managing access to data from within applications. Lose the device? Who cares. Brick it. Lose the application? Who cares – you didn’t lose the credentials. Lose the credentials? Nuke them and provide new ones to the user. We’ve lived in this world of device dictatorship not because it was the best way to create productive users, but because Windows was created as an “anything goes, all users are admin” world, with a common filesystem shared promiscuously by any code that can run on the system. We’re moving towards a world where data, not devices, is the hub – and identity, not the device, is the key that unlocks access to that data.

The BYOI post that I mentioned earlier didn’t really talk about this aspect – it went off on a different tangent – but my point is: what if it doesn’t matter what set of credentials you use to log on to a device? Technologies like Exchange ActiveSync and other mobile device management technologies give IT the ability to nuke a device from orbit if they want to. It’s not that AD is dead, it’s just that Microsoft understands that AD isn’t an active part of users’ personal machines (those they personally acquired).

An iPad or iPhone never asks for credentials to log on to the device, but of course it never really establishes your identity either (there’s exactly one user on any iOS device – everyone shares a single identity across the OS, for better or worse). Instead, in iOS it really becomes the applications that hold your identity and authenticate you to assets of the company (or Apple, or Netflix, etc.). An even better aspect of this is the fact that these applications then usually don’t hold much application state. What they do is allow authentication and state to be managed and secured by the application instead of by the operating system (just like Windows Store applications and many well-managed IT applications do). All iOS owns is the management and security of which applications are allowed to be installed and run on the device, and the secure storage of data. It also owns destruction of the operating system and all of the data on the device if the device is lost or compromised and the passcode is entered incorrectly too many times, or the device is forcibly wiped through Exchange or other device management software.

In a somewhat fascinating turn of events, even Office 2013/Office 365 do the same. While you can store data locally, your Microsoft Account or Office 365 account can be different from your AD account, and is used to license the software to you and provide shared storage in the cloud (yes, an Office 365 account can be tied back to AD – but the point is that Office, not Windows, is providing that authentication gateway). Identity is moving up the stack, from an OS-level service to an application-level service, where you can just as easily bring your own identity – which can, but doesn’t have to be, a single directory used across a device for everything.
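Here's a minimal sketch of what "identity at the application level" looks like in practice. The endpoints and token flow below are hypothetical stand-ins for the pattern described above, not any real Microsoft API: the application authenticates the user with an identity provider, holds the resulting token itself, and attaches it to each request – the OS account on the device never enters the picture.

```python
import json
import urllib.request

# Hypothetical identity-provider and cloud-storage endpoints -- stand-ins
# for the Microsoft Account / Office 365 pattern, not a real API.
TOKEN_URL = "https://identity.example/oauth/token"
DOCS_URL = "https://storage.example/v1/documents"

def auth_header(token):
    """Attach the app-held token to a request. Identity lives in the
    application, not in the OS login session."""
    return {"Authorization": "Bearer " + token,
            "Content-Type": "application/json"}

def sign_in(username, password):
    """The application itself authenticates the user and receives a
    token it stores and manages. (Network call; shape only.)"""
    body = json.dumps({"username": username, "password": password}).encode("utf-8")
    req = urllib.request.Request(TOKEN_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]

def fetch_documents(token):
    """Access cloud storage with the app-level identity."""
    req = urllib.request.Request(DOCS_URL, headers=auth_header(token))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The payoff is the one described above: revoking access means invalidating a token at the identity provider, and the device itself never has to be touched.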