21
Mar 13

What’s your definition of Minimum Viable Product?

At lunch the other day, a friend and I were discussing the buzzword bingo of “development methodologies” (everybody’s got one).

In particular, we homed in on Minimum Viable Product (MVP) as an all-but-gibberish term, because it means something different to everyone.

How can you possibly define what is an MVP, when each one of us approaches MVP with predisposed biases of what is viable or not? One man’s MVP is another’s nightmare. Let me explain.

For Amazon, the original Kindle, with its flickering page turn, was an MVP. Amazon, famous for shipping… “cost-centric” products and services, was traditionally willing to leave some sharp edges in the product. For the Kindle, this meant flickering page turns were okay. It meant that Amazon Web Services (AWS) didn’t need a great portal or useful management tools. Until their hand was forced on all three by competitors. Amazon’s MVP includes all the features they believe it needs, whether or not they’re fully baked or usable, and whether or not the product still has metaphoric splinters coming off from where the saw blade of feature decisions cut it. This often works because Amazon’s core customer segment, like Walmart’s, tends to be value-driven rather than user-experience-driven.

For Google, MVP means shipping minimal products that they either call “Beta” or that behave like a beta, tuning them, and re-releasing them. In many ways, this model works, as long as customers are realistic about what features they actually use. For Google Apps, this means applications that behave largely like Microsoft Office, but include only a fraction of the functionality (enough to meet the needs of a broad category of users). However, Google traditionally pushed these products out early in order to evolve them over time. I believe that if any company of the three I mention here actually implements MVP as I believe it to be commonly understood, it is Google. Release, innovate, repeat. Google will sometimes put out products just to try them, and cull them later if the direction was wrong. If you’re careful about how often you do this, that’s fine. If you’re constantly tuning by turning off services that some segment of your customers depend on, it can cost you serious customer goodwill, as we recently saw with Google Reader (though I doubt in the long run that event will really harm Google). It has been interesting for me to watch Google build their own Nexus phones, where MVP obviously can’t work the same way. You can innovate hardware Release over Release (RoR), but you can’t ever improve a bad hardware compromise after the fact – just retouch the software inside. Google has learned this. I think Amazon learned it after the original Kindle, but even the Fire HD was marred a bit by hardware design choices, like a power button that made it too easy to turn the device off while reading. But Amazon is learning.

For Apple, I believe MVP means shipping products that make conscious choices about what features are even there. With the original iPhone, Apple was given grief because it wasn’t 3G (only years later to be berated because the 3GS, 4, and 4S continued to be 3G-only). Apple doesn’t include NFC. They don’t have hardware or software to let you “bump” phones. They only recently added any sort of “wallet” functionality… The list goes on and on. Armchair pundits berate Apple because they are “late” (in the pundits’ eyes) with technology that others like Samsung have been trying to mainstream for 1-3 hardware/software cycles. Sometimes they are late. But sometimes they’re “on time”. When you look at something like 3G or 4G, it is critical that you get it working with all of the carriers you want to support, and all of their networks. If you don’t, users get ticked because the device doesn’t “just work”. During Windows XP, that was a core mantra of Jim Allchin’s – “It just works”. I have to believe that internally, Apple often follows this same mantra. So things like NFC or QR codes (now seemingly dying) – which, as much fun nerd porn as they are, aren’t consumer-usable or viable everywhere yet – aren’t in Apple’s hardware. To Apple, part of the M in MVP seems to be the hardware itself – only include the hardware that is absolutely necessary, nothing more – and unless the scenario can work ubiquitously, it gets shelved for a future iteration of the device. The software works similarly, where Apple has withheld some software (Messages, for example) from legacy OS X versions, enabling it only on the new version. Including new hardware and software only once the scenarios are perfect, and only in new devices or software, rather than throwing it in early and improving on it later, can in many ways be seen as a forcing function to encourage movement to a new device (as Siri was with the 4S).

I’ve seen lots of geeks complain that Apple is stalling out. They look at Apple TV, where Apple doesn’t have voice, doesn’t have an app ecosystem, doesn’t have this or that… Many people complain that Apple is simply too slow. I believe quite the opposite: rather than falling for the “spaghetti on the wall” feature matrix we’ve seen Samsung fall for (just look at the Galaxy S4 and the features it touts), Apple takes time – perhaps too much time, according to some people – to assess the direction of the market. Apple knows the whole board they are playing, where competitors don’t. To paraphrase Wayne Gretzky, they “skate to where the puck is going to be, not where it has been.” Most competitors seem more than happy to try and “out-feature” Apple with new devices, even when those features aren’t very usable or very functional in the real world. I think they’re losing sight of what their goal should be – building great experiences for their users – and instead believing their brass ring is “more features than Apple”. This results in a nerd-porn arms race, adding features that aren’t ready for prime time, or are usable by only a small percentage of users.

Looking back at the Amazon example I gave earlier, I want you to think about something. That flicker on page turn… Would Apple have ever shipped that? Would Google? Would you?

I think that developing an MVP of hardware or software (or, today, generally both) is quite complex, and requires the team making the decision to have a holistic view of what is most important to the entire team, to the customer, and to the long-term success of the product line and the company – features, quality, or date. What is viable to you? What’s the bare minimum? What would you rather leave on the cutting room floor? Finesse, finish, or features?

Given the choice would you rather have a device with some rough edges but lots of value (it’s “cheap”, in many senses of the word)? A device that leads the market technically, but may not be completely finished either? A device that feels “old” to technophiles, but is usable by technophobes?

What does MVP mean to you?


19
Mar 13

Bill Hill and Homo Sapiens 2.0

While working on another blog post, I ran across an interview with Bill Hill from 2009. Bill reinvented himself many times in his career, from newspaperman to someone who fundamentally worked to change the way the world read text on a digital screen. It harks back to yesterday’s post, as well as my post on the machines coming for your job. Specifically, at about 19 minutes in, this conversation comes up:

Interviewer: “In this economy…What’s the relationship between fear…and taking chances…?”

Bill Hill: “Well that’s just the whole point. I mean, it’s very easy to get kinda cozy, and do ordinary stuff.” and “You can’t allow yourself to get paralyzed.”

Bill never stopped moving, never stopped reinventing himself. Weeks before he passed, he and I had a conversation about eBooks, almost 13 years after I first met him discussing the same subject. You can’t stop moving, and you can’t stop reinventing yourself.


18
Mar 13

Always Be Unique

Earlier today, this tweet showed up in my Twitter timeline. It leads with the text: “Quality to blame for declining news audiences, study suggests”

I retweeted it, and then commented, “The increased cost for news content, and the decreasing amount of truly unique content, show why people abandon news outlets.”

At first, I thought this applied just to news content. But no, it applies to many things in our lives today; news simply exemplifies it in a unique way.

I’ve said before that “The Web democratized content” (along with music). Anyone can be a “journalist” – or at least a published writer, on the Internet today. But that just gets you published. Anyone can take the time to write a book, pay for an on-demand publisher to print a copy, and voila!, they’re a published author. That doesn’t mean anyone will pay money for copies, read them, and recommend them to friends. Same on the Web.

I’m not only a producer of information, I’m also a consumer – and I have to tell you, as I browse the aisles of information out there, there’s a lot of digital junk food vying for our attention. There are a handful of news sources that break actual news, and a handful of news sources that perform strong analysis. But more often, the Web and Twitter (and hours or days later, television stations and newspapers, respectively) are chock-full of self-aggrandizing punditry, where, like the childhood game of “telephone”, non-news begins resonating and echoing, becoming louder and louder until it sounds like news. I’ve seen this happen with news about every major technology company, and recently saw it happen to a family member of a friend.

News feeds on sensationalism. Whether it’s bad news, “exclusive” news (whatever that means in the age of Twitter), an idiotic rumor based upon a leak from a supply chain provider, or worse, rumors based upon rumors, it spreads like gossip. In the end, it’s impossible for anyone to truly stand out, because everyone is stuck in this same rut of repeating the rumor.

So back to my point. Why are news audiences declining? Because conventional news outlets are being beaten to the punch. News outlets have generally shrunk in content and quality, while increasing in price, and throwing in advertising technology that gets in the way of the content and the user’s experience. I’m not sure that content paid for primarily by advertising is sustainable. But I can tell you if every time people visit your site you give them interstitial ads, pop-ups/pop-unders/pop-ins, or other hostile advertising that gets in the way of the content, rather than adding value (yes, that’s possible), you’re going to lose readers over time. Pissing off your reader isn’t a good way to provide unique news in a sustainable way.

News has been democratized and commoditized to the point that if we buy a morning paper in one city, fly to another and grab another paper, we see the same syndicated news, with a thin veneer of geographically relevant content grafted on. Conventional journalism outlets are dying because they are slow, and don’t provide significant value to their consumers given the long wait time. They’re also often laden with ads, with articles that are slow to load (usually due to slow ad engines, ironically), and provide little value beyond news you could have seen breaking on Twitter some time ago if you were watching. News outlets that regurgitate twice-baked news without adding value are doomed to be paved over – parking lots of journalism’s past.

This applies outside of journalism too. It’s called the first-mover advantage. Do something first, and you can possibly own that market. Follow others, and you have to clearly demonstrate the unique value you provide beyond the first mover – or get buried in the melee of also-rans vying to catch up.

If you want to be successful in anything, always be first, or always be unique. Those are your two choices. Much like any other job, there’s always someone waiting to fill your shoes if you stop providing unique value.


16
Mar 13

Shut up and eat your GMOs

It’s with a fair amount of disappointment (disbelief?) that I read Bruce Ramsey’s article about Initiative 522 (Washington’s GMO labeling proposition) in the Seattle Times.

My belief, after reading this piece, is that Mr. Ramsey should generally refrain from writing on a topic when his familiarity with it leads him to include a disclaimer like the one he offered early in this article: “I am a novice on genetically modified organisms.”

There are three modalities of belief in the GMO (genetically modified organism) debate – or in any discussion of where our food comes from. Heck, it really applies to almost any topical debate.

  1. Apathy (no real concern one way or another – perhaps no familiarity to base an opinion on)
  2. Agreement (tolerance or fanaticism for the practice)
  3. Antipathy (some disagreement or more with the practice)

In my experience, when it comes to their food, Americans generally seem to fall contentedly into the category of apathy. Happy to ignore the complexities of where their food comes from, most Americans ignore the ugly underbelly of our industrialized food system until the evening news enlightens them to a new E. coli outbreak in antibiotic-laden, undercooked beef from a CAFO, or they latch on to a buzzword used by reporters like “pink slime” or “meat glue”. They happily go along with the ethos that “everything is okay until it’s not okay.” But when that concern passes, most turn back to their bread and circuses, and spend more energy focused on reality shows than on what’s in the food their family is eating.

Mr. Ramsey’s piece clearly puts him in the “agreement” camp that GMOs are acceptable, because, as he states, “People are trying to make an economic case in a matter that is mostly about belief,” and foods don’t need labels “…if it makes no difference to people’s health.” But it boggles my mind that Mr. Ramsey elected to prognosticate about the need to label GMOs – one way or the other – when he clearly has no background on the topic, or on why labeling efforts ever came to be. Instead, he simply dismisses the debate about GMOs as if it were a figment of the imagination of those behind 522 – just ignorance on their part, or worse, some evil conspiracy by “Big Organic” to foist its way of life upon the rest of the world. I don’t see where that idea can even begin to come from.

My own belief, based upon years of trying to understand the complexities of our food system, and trying not to turn a blind eye to the unpleasantness of it all, is that all of us in the US are being deceived about whether GMOs are or are not harmless. We are told “it’s fine” – but the people telling us that are the people making the GMO seed, and the profits as a result. There was no opportunity to question at the inception, and as we face the likely approval of a GMO salmon despite a huge public outcry, it appears that business may win out over unknowable, unanswerable questions about the long-term health effects or environmental detriment of this fish being approved.

I’m clearly in the disagreement camp, and I am a firm believer that consumers should be transparently made aware of the possible risks of genetic modification, and given the option to know what foods (or often “foods”) contain GM crops.

Long ago, our government decided to look the other way about GMOs. Using a methodology called substantial equivalence, even over the objections of nine FDA scientists, the FDA accepted the GMO industry’s stance that there is no difference between conventional breeding (hybridization) and bioengineering. Now maybe you have, but I’ve never seen a salmon mate with an eel, and I’ve never seen a bacterium mate with a papaya. Yet among other genetic cross-breeding, that’s what we’ve got today. If someone tells you there is no difference between hybridization and biotechnology, they’re lying to you (or trying to sell you GMO seed, and likely a pesticide cocktail to go with it).

The use of substantial equivalence is entirely biased towards the needs of producers rather than the general public. It is based around the (non-scientific) philosophy that a new food is like an old food, unless it isn’t. Dismissed by many scientists not as a safety assessment but as a means to rubber-stamp new foods until otherwise proven hazardous, substantial equivalence has enabled GMO producers to throw countless food components onto our plates simply by claiming they are safe, using a categorization called Generally Recognized As Safe (GRAS) until found to be otherwise – note the earlier article where the FDA has even turned a blind eye towards food that was found not to be safe. They aren’t even treated as additives. Frankly, we’re still just learning what kind of baggage comes along with GMOs. Note these two paragraphs in that Durango Herald article:

“Monsanto’s own feeding studies, however, showed that the genetic material in GMO corn that makes it pest-resistant was transferred to the beneficial bacteria in the intestinal tract of humans eating GMO corn. This potential for creating a pesticide factory in the human gut has gone untested.”

Followed by:

“Recent research has shown that GMO corn insecticidal proteins are found in the blood of pregnant women and their fetuses. Animal research has shown intestinal, liver, kidney and reproductive toxicity from both GM corn and soy. This does not bode well for the assertion of ‘substantial equivalence.’”

Yet GMOs are in almost everything you eat. Conventional corn, soy, canola, and likely soon, farm-raised (antibiotic-laden) salmon!

People in the yes-on-GMO camp generally decry people saying no as “anti-science” or “nutcases”. I’m hardly anti-science, and I like to think I’m reasonably rational and well-balanced. But I believe that as a species, we often jump into hasty “great ideas” only to later regret them. Radium. Thalidomide. Vioxx. History is littered with pharmaceuticals rushed to market only to be pulled back after fatalities exceeded what the manufacturer’s clinical trials predicted. I believe that too often, our government officials err on the side of the businesses that pay for influence, rather than on the side of consumers, who merely vote them into office. In the case of GMOs in our foods, the rush to judgment isn’t on the side of the naysayers; it’s on the side of the government and industrial agricultural giants, who have foisted GMO crops on consumers without ever questioning the long-term side effects, or offering consumers any option aside from buying organically labeled foods, where GM ingredients are forbidden by definition.

If you don’t read anything else, Mr. Ramsey, I hope you will read this. There is no independent testing of GMOs. None. There is no long-term testing of GMOs – whether independent or black-box at the vendor. None. As a consumer, you have absolutely no way, outside of eating exclusively organic, to ensure you are not regularly ingesting GMOs. You dismissed the need for 522 not because you investigated and understood why GMOs are not good for us (let alone the planet), but because you inquired with one geneticist and a company trying to sell a GMO apple. Isn’t that rather like asking a WSU student what kind of education UW can provide? You trivialize the need for labels, and point the finger at a local Co-op, Whole Foods, and other organic food proponents as renegades trying to force their world on you.

California’s recently failed proposition to label GMOs was, much like Washington’s, created by volunteers. California’s was crushed under the weight of conventional food and agribusiness giants. Food producers don’t want GMO labeling because it would mean higher ingredient costs (for example, replacing high-fructose corn syrup, predominantly GM, with non-GM sugars), packaging changes, and reformulation costs. The agribusiness giants? Because it crushes their revenue stream. That’s why Monsanto spent millions to defeat that initiative, and surely will here as well.

I’m not exactly sure why you elected to land on the side of supporting GMOs, given your self-admitted naiveté. But I hope that in the future you will examine and understand the whole debate before injecting yourself into it and using your column to create what I believe is wrong-headed, uninformed public opinion.

The organizations and companies you pointed a finger at as being “behind 522”? Sure – they stand to make more profit if GMOs are labeled. But I honestly don’t believe that’s why PCC (a local Co-operative), among them, is doing it. They are backing it because consumers deserve to have a choice – to be pulled from the apathy that the industrial food suppliers have happily created – to understand what is in the foods they buy, and to make healthier and more sustainable choices. In the end, it won’t matter if 522 passes. Whole Foods has already decided to label all GMO foods in their stores by 2018. That’s still too far away, but it’s a light at the end of the tunnel.

I could go on discussing the unsustainability of GMOs at length – contrary to the public relations slogan, GMOs are not the only solution to feeding the world (http://www.huffingtonpost.com/jeffrey-smith/vilsack-mistakenly-pitche_b_319998.html). With their seed licensing costs, their creation of a biologically unsustainable monoculture, the high cost of the other inputs matched to many of them, and the ever-increasing volume and toxicity of the herbicides and pesticides required as pests and weeds develop natural resistance, GMOs as the key to “feeding the world” are a myth.


09
Mar 13

Most smart appliances are stupid

“Smart”. Many device and appliance manufacturers toss that word around like they know what it means.

An app platform on a TV? Voila! It’s a “Smart TV”.

An LCD screen and/or an app platform on a refrigerator, washer or dryer? Voila! It’s a “Smart Appliance”.

You go ahead and keep using that word, manufacturers. Just understand that it doesn’t mean what you think it means. You’re turning it into a meaningless modifier, like “green”, “natural”, or one of my favorites, “healthy”.

Let me offer you a hint. Anything that leads the blurb describing itself with a meaningless modifier usually doesn’t actually do the thing it purports to do. Way back in 2010, I discussed how it was important for purveyors of “family room devices” and app authors to understand the difference between single-user and multiple-user applications/experiences, rather than just shoving the entire (single-user) smartphone experience into the “smart TV”. Just like task-oriented computing for any other class of user, designing proper smart TVs – and even more importantly, smart appliances – needs to start with solving the actual problems that users are having, not just crapping out LCD displays on top of old refrigerators or washers and dryers.

Let me start with a real-world collection of problems that I have with appliances, tell you why fixing them is truly “smart”, and why adding an LCD display or “apps” to an appliance won’t solve any of them.

Dishwasher:

  • Problem: I unloaded and reloaded the dishwasher, but forgot to start it.
  • Solution: Send my phone a notification and tell me before I go to bed.
  • Problem: I unloaded, reloaded, and started the dishwasher but forgot soap.
  • Solution: Send my phone a notification and tell me after it starts so I can fix it.

Oven:

  • Problem: I set a timer for a dish I’m baking, and it has been going off for a minute.
  • Solution: Send my phone a notification or CALL ME!

Washer or dryer:

  • Problem: A load of clothes has finished and is ready for the next step.
  • Solution: Send my phone a notification so I can move, fold, or hang them.

You get the idea. Things that can hook in here? Doorbells. Smoke alarms/CO detectors. Garage doors to tell me I left them open and it’s 11 PM. The list goes on and on. Simple notifications to tell me something’s not going the way it should, or I need to pay attention to something I’m not.

None of these involve a display or that much intelligence. What they do involve is:

  1. Network interconnectivity within appliances (WiFi, Bluetooth 4, Zigbee, whatever) to a centralized service. Pick a standard across the industry and go with it – a rising tide lifts all boats.
  2. A willingness for someone to step up and build an intra-appliance API for pushing notifications (see the sketch below).
  3. A kick-ass (simple) user interface for apps on major mobile platforms, like Nest has.
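
To make the second item concrete, here’s a minimal sketch – in Python, with purely hypothetical names, since no such industry standard exists – of what a transport-agnostic notification payload from an appliance to a central hub could look like:

    import json
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class ApplianceEvent:
        """One notification from an appliance to the central hub."""
        appliance_id: str       # e.g. "dishwasher-kitchen"
        event: str              # e.g. "not_started", "no_detergent", "cycle_done"
        message: str            # human-readable text for the phone notification
        timestamp: float        # when the condition was detected
        severity: str = "info"  # "info" | "warning" | "alert" (alert might ring the phone)

    def publish(event: ApplianceEvent) -> str:
        """Serialize an event for delivery to the hub. The payload is
        transport-agnostic: the same JSON could ride over WiFi, Bluetooth,
        or Zigbee to whatever centralized service the industry settles on."""
        return json.dumps(asdict(event))

    # The dishwasher was loaded but never started -- tell me before bed.
    print(publish(ApplianceEvent(
        appliance_id="dishwasher-kitchen",
        event="not_started",
        message="The dishwasher is loaded but hasn't been started.",
        timestamp=time.time(),
    )))

The point isn’t this particular schema – it’s that every problem and solution in the lists above reduces to one tiny, display-free message like this.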

Yes – admitting that the smart phone or tablet is the hub of our lives, not our refrigerator or washer and dryer, is also a key tenet. Is that news?

Smart appliances aren’t about doing more things on our appliances. Sorry to break that to you, appliance vendors. It’s about appliances integrating into the way we use them, and helping us make the most of them. Stop adding apps. Stop adding screens. Start solving problems.


06
Mar 13

Windows desktop apps through an iPad? You fell victim to one of the classic blunders!

I ran across a piece yesterday discussing one hospital’s lack of success with iPads and BYOD. My curiosity piqued, I examined the piece looking for where the project failed. Interestingly, but not surprisingly, it seemed that it fell apart not on the iPad, and not with their legacy application, but in the symphony (or more realistically, the cacophony) of the two together. I can’t be certain that the hospital’s solution is using Virtual Desktop Infrastructure (VDI) or Remote Desktop (RD, formerly Terminal Services) to run a legacy Windows “desktop” application remotely, but it sure sounds like it.

I’ve mentioned before that I believe legacy applications – applications designed for large displays, a keyboard, and a mouse, running on Windows 7/Windows Server 2008 R2 and earlier – are doomed to fail in the touch-centric world of Windows 8 and Windows RT. iPads are no better. In fact, they’re worse. You have no option for a mouse on an iPad, and no vendor-provided keyboard solution (versus the Surface’s two keyboard options, which are, take them or leave them, keyboards – complete with trackpads). Add in the licensing and technical complexity of using VDI, and you have a recipe for disappointment.

If you don’t have the time or the funds to redesign your Windows application, but VDI or RD makes sense for you, use Windows clients, Surfaces, dumb terminals with keyboards and mice – even Chromebooks, as a follower on Twitter suggested. All possibly valid options. But don’t use an iPad. Putting an iPad (or a keyboardless Surface or other Windows or Android tablet) between your users and a legacy Windows desktop application is a sure-fire recipe for user frustration and disappointment. Either build secure, small-screen, touch-savvy native or Web applications designed for the tasks your users need to complete, ready to run on tablets and smartphones, or stick with legacy Windows applications – don’t try to duct-tape the two worlds together as the primary application environment you provide to your users, if all they have are touch tablets.


20
Feb 13

The machines are coming for your job. Big deal.

This blog post is in response to the TechCrunch piece entitled Get Ready To Lose Your Job.

For my entire life (until he retired), my father was a physician. He had to subscribe to medical journals and take courses to keep his skills up to snuff, but medicine, and his specialty, did not evolve to such a degree that his career was replaced. That said, his specialty (gastroenterology) now has some amazing tools at its disposal that can obviate the need for some procedures or tools. But the point is – he never had to shift jobs, only keep his skills up to date.

I recently read the book Punching Out, about a steel stamping plant in Detroit being spun down over a year’s time. While the book left a little to be desired (still, a good read), the events it describes are what got me thinking. Here’s a story from a worker the author worked alongside, a man who had been part of Ford’s assembly line for a long time before being let go:

 “They built a new assembly line. One day, we went over for a tour of the new line, and they showed me a machine that was doing my job. The line that I was working on was built in 1942, and this was in 1979. They turned the lights out, and the machine was still doing the job. So I said to myself, ‘Now I gotta learn how to build machines.’”

Humans are toolmakers. We find tasks that need repeating, and we find ways to make those tasks more efficient, cheaper, faster, or all of the above. Cotton gins. Threshing machines (or the modern-day combine harvesters that replaced them). Assembly lines. Steel mini-mills. Scripting languages… All of them exist to make repetitive tasks less tedious.

Often when new technology comes along that makes these tedious tasks less cumbersome, the technology is called “disruptive”. Regardless of how disruptive it may actually be in the long run to society overall, when a piece of technology can replace a worker or two… or three… or more… it becomes a socially significant event. The Swing Riots of the early 1800s are a good example of technology leading directly to social complications.

However, even technology isn’t permanent. Let me tell you a secret. NOTHING is permanent. Nothing. Technology is ever-changing, ever-evolving, in a perpetual movement forward for better efficiency. As the thresher replaced people, the thresher itself was eventually subsumed by the combine harvester of today, which combined three previous innovations in one – obviating the need for all three (and likely most of the manufacturers making them).

Innately, humans become comfortable, almost sedentary, in their ways. We think things won’t change, and the status quo will continue. However, I think Isaac Asimov said it well,

“It is change, continuing change, inevitable change, that is the dominant factor in society today. No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be.”

It’s easy to look at technology like threshing machines or steel stamping machines – both of which replaced individual, slow labor with automation – and see how technology replaces the individual. But the same is true with software.

In my recent Task-Oriented Computing post, I mentioned this as well:

“Rather than making users take hammers to drive in screws, smaller, task-oriented applications can enable them to process workflow that may have been cumbersome before and enable workers to perform other more critical tasks instead.”

Software is a tool. We use that software to make our work more efficient, cheaper, faster, or all of the above. Does that sound familiar? The key value that people add, and always will add, to any tool – whether it is a device or a software solution – is the human mind. The article I mentioned early on is similar to articles we could find throughout the 1900s about machines “stealing jobs” from auto workers, textile workers, and more. Yes. Many of the jobs of today will not be jobs for humans in the future. They will be jobs for machines and software. Get used to it. This isn’t a threat – it’s an opportunity. Machines and software can free us from the rote tasks of our jobs, if we let them, and if we let ourselves continue to grow and learn throughout our lives. Don’t stand still. You shouldn’t be doing that even if technology weren’t coming to get your job.

I ran across this work that Alan Turing wrote about machines, masters, and servants. As technology continues to accelerate and perform amazing things we would have thought impossible years before, we must not just innovate the technology. All of us – regardless of our role in society – must be constantly reinventing, reinvigorating, and renewing our own role in society. Not content to let our skills and thinking lie dormant, we must push to make a role for ourselves in the ever-changing world. Don’t fear the machines. Don’t fear the software. Don’t fear the change. Be a part of it.


19
Feb 13

Microsoft Account – Bring Your Own Identity

When you start a new job, there’s only one you. You don’t get a new identity just because you started at a new company. You have the same Social Security number, the same fingerprints, same birthdate, same hometown. You get a collection of credentials that give you access to company resources, but you don’t really get a new “identity”.

In fact, pretty much the only time you get a completely new identity is if you enter the Federal Witness Protection Program.

What does happen is, you go into a new job, you tell them who you are, you provide your actual identity cards/passport to them, and they establish a pseudo-identity for you within the organization.

For some reason, along the way, it became normal to have your corporate identity be who you are. When we look at Active Directory (AD), and any other LDAP-based directory over the last decade or so, a lot of their growth has been around trying to make that the single identity that you use within the company, and even have it federated out when you need to connect to external resources outside of the organization.
But again, when you leave that company, you take your identity with you. The email address, the AD access, the server access, the application access, the database access – those were all part of your role in the company, and cease to be the day you leave.

Last year when Microsoft announced Windows RT, a lot of us… well… kind of freaked out, because Windows RT didn’t include Active Directory membership, let alone any ability to manage the device through Group Policy (GP). After over 12 years, Microsoft was saying, “No… no… no… you don’t have to use AD to manage this machine. In fact, it can’t even join AD.” It was Windows 9x all over again in terms of centralized management.

What’s most fascinating to me about Windows RT, though, is that your identity, when you log on to that machine, is a Microsoft Account – the thing we used to call Passport, Live ID, etc. Microsoft has made your personal Microsoft Account the central hub of everything you do now – from Windows, to SkyDrive, Outlook.com, Office, and the Windows Store – because the device is a personal device, which could possibly be used with work resources. Most importantly, though, this account is yours. Your employer has no control over the account. Regrettably, that includes a lack of manageability for such things as password complexity, or how you handle data that crosses from company systems over the threshold of your device and out to the unmanaged SkyDrive service – or any cloud storage service.

A blog post I ran across a few weeks ago about “BYOI” caught my attention. That’s “bring your own identity”, for those of you keeping score of the acronyms at home. Microsoft hasn’t stated anything of the sort, but I have to look at what’s in Windows RT and wonder if BYOI isn’t indeed part of a bigger trend.

In many ways, BYOI reflects the whole BYOD or COPE trend, or whatever we want to call it… the idea that IT doesn’t own or manage devices any longer; users do. And just as you never would’ve brought a home computer into your company and had it join AD then, why would you do that today? As I alluded to almost a year and a half ago in my post where I stated that hypervisors on phones were a bad idea, it’s not about managing devices anymore. At best, it’s about managing applications – and even more about managing access to data from within applications. Lose the device? Who cares. Brick it. Lose the application? Who cares – you didn’t lose the credentials. Lose the credentials? Nuke them and provide new ones to the user. We’ve lived in this world of device dictatorship not because it was the best way to create productive users, but because Windows was created as an “anything goes, all users are admin” world, with a common filesystem shared promiscuously by any code that can run on the system. We’re moving towards a world where data, not devices, is the hub – and identity, not devices, is the key that unlocks access to that data.
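
To make that concrete, here’s a minimal sketch – Python, with hypothetical names and an in-memory store standing in for a real token service; this is illustration, not any real Microsoft API – of what “identity as the key” looks like: the app holds only short-lived, scoped, revocable tokens, so cutting off access never requires touching a device.

    import secrets
    import time

    # Hypothetical in-memory token store; a real service would persist this.
    _issued: dict = {}

    def issue_token(user_id: str, scope: str, ttl_seconds: int = 3600) -> str:
        """Exchange a verified identity for a short-lived, scoped token.
        The app stores only this token -- never the identity itself."""
        token = secrets.token_urlsafe(32)
        _issued[token] = {"user": user_id, "scope": scope,
                          "expires": time.time() + ttl_seconds}
        return token

    def check(token: str, scope: str) -> bool:
        """Gate access to data by token, not by device."""
        meta = _issued.get(token)
        return bool(meta and meta["scope"] == scope
                    and meta["expires"] > time.time())

    def revoke_all(user_id: str) -> None:
        """'Nuke the credentials': kill every token for a user, leaving the
        device -- and the user's own identity -- untouched."""
        for t in [t for t, m in _issued.items() if m["user"] == user_id]:
            del _issued[t]

    t = issue_token("alice@example.com", scope="crm:read")
    assert check(t, "crm:read")        # the device is irrelevant; the token grants access
    revoke_all("alice@example.com")    # employee leaves: access dies, identity doesn't
    assert not check(t, "crm:read")

Lose the device, lose the app, lose the token – each is recoverable without ever touching the user’s actual identity.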

The BYOI post that I mentioned earlier didn’t really talk about this aspect – it was on a different tangent – but my point is: what if it doesn’t matter what set of credentials you use to log on to a device? Technologies like Exchange ActiveSync and other mobile device management technologies give IT the ability to nuke a device from orbit if they want to. It’s not that AD is dead; it’s just that Microsoft understands that AD isn’t an active part of users’ personal machines (those they personally acquired).

An iPad or iPhone never asks for credentials to log on to the device, but of course it never really establishes your identity either (there is at most one user on any iOS device – everyone shares a single identity across the OS, for better or worse). Instead, in iOS it really becomes the applications that hold your identity and authenticate you to the assets of the company (or Apple, or Netflix, etc.). An even better aspect of this is that these applications usually don’t hold much application state. What they do is allow authentication and state to be managed and secured by the application instead of by the operating system (just like Windows Store applications and many well-managed IT applications do). All iOS owns is the management and security of which applications are allowed to be installed and run on the device, and the secure storage of data. It also owns destruction of the operating system and all of the data on the device if the device is lost or compromised and the passcode is entered incorrectly, or the device is forcibly wiped through Exchange or other device management software.

In a somewhat fascinating turn of events, even Office 2013/Office 365 do the same. While you can store data locally, your Microsoft Account or Office 365 account can be different from your AD account, and is used to license the software to you and provide shared storage in the cloud (yes, an Office 365 account can be tied back to AD – but the point is that Office, not Windows, is providing that authentication gateway). Identity is moving up the stack, from an OS-level service to an application-level service, where you can just as easily bring your own identity – which can, but doesn’t have to, be a single directory used across a device for everything.


11
Feb 13

Delight the customer

At an annual Microsoft company meeting early in my Microsoft career (likely around 1999), Steve Ballmer interrupted the lively flow of the event to read a few letters that had been sent to him by executives around the world. As I recall, Microsoft technology was not working perfectly for these customers, and they weren’t happy. After he read the letters, Steve broke into a speech about “delighting the customer” – a mantra he adopted for some time, and one I continue to use to this day. Unfortunately, while that credo ran for a few years, I distinctly remember not hearing it during the last several years of my career at Microsoft, before I left in 2004. Instead, the saying I remember hearing more was about shareholder value. Perhaps I over-remember the negative aspects, but that’s what sticks in my head.

My father helped me land my first job as a teenager. I worked at a Taco Bell in Montana that was privately owned. While many corporate-owned and franchised stores had a very forgiving policy on taco sauce packets (the customer always being right and all) and offered free refills, we included only two packets of hot sauce unless you paid for more, and had the soda fountain behind the counter – refills weren’t full-price, but they weren’t free either. The owner was steadfast about these policies, and became quite irate if you violated them – even when a customer became upset at these policies that differed wildly from any other Taco Bell they had ever been to. I hated it, and so did my peers, and our customers.

As I’ve mentioned before, my first job after college was selling VWs and Subarus. The dealership I worked at was notoriously stingy, and would “roll you” – as the terminology for selling you a car goes – without floor mats (even the ones that had come in the car from the manufacturer) or a full tank of gas, unless the customer specifically asked for them. Customers would inevitably leave the dealership pissed, and possibly in a position where they wouldn’t be likely to return to us in the future for sales or service. Eventually, I decided to play a few games with the sales process, and began telling customers, “You’re going to want to ask me about floor mats and a full tank of gas.” Inevitably, they’d reply, “What about floor mats and a full tank of gas?” – and I’d say, “Great! We’ll make sure you’ve got floor mats and a full tank of gas.” It didn’t come out of my pocket in the sale, and frankly, I felt it wasn’t the dealership’s money to keep. More importantly, I understood even then that sales is all about making your customer feel great about their purchase – not making your customer feel like they just got shafted. Customers don’t tell friends, “Hey, I got shafted at that dealership on floor mats and gas. You should buy your car there.” No. They don’t do that.

For the past year or so, my VW GTI has had a slow leak in a tire on the driver’s side. Whenever the temperature dropped, I knew that the tire pressure monitoring system (TPMS) would kick in and tell me that the tire had finally dropped enough pressure to be a problem. Over the last three days, beginning on Friday, this has gotten progressively worse, and I’ve had to inflate the tire every day (yes, I’m getting it fixed tomorrow).

On my way to work, in the northern end of Kirkland, there’s a 76 gas station that offers incredible service. Most importantly for me, they offer free air and water for your car if you need it – no purchase necessary.

This last Saturday morning, I had to stop by my office in Kirkland to pick up a coat that I had left there before my eldest daughter and I went skiing. As we left, I realized I needed to inflate the tire before I left town. I looked in my wallet, knowing I would have to pay $1.00 at the nearby Shell station to fill up the tire. Only three quarters, and $3 in single bills. Digging deeper, I found two dimes and a nickel. I headed over to the Shell station where I would blow $1 on 20 lbs of pressure for one tire – for the day.

I pulled the car up, and – since the machine only took quarters – headed in to the station for a quarter. The attendant was talking quite loudly on the phone, and even though he saw me, continued rambling on his (personal) call while I waited at the counter… for a quarter. After a minute or so, he asked, “What do you need?” in a terse tone. I said, “I need to swap this change for a quarter for the air machine.” He huffed at me, got up, opened the till, and swapped my change for a quarter. I left, filled up the tire, got in the car, and told my daughter, “I’m never stopping here again for air – or gas.”

I don’t have a problem with someone charging me for air or water. It’s their business. But then don’t be an a-hole when the extent of my transaction with you for the day is that purchase of air. The 76 station in Kirkland gives away air and water. Not because buying that equipment or running those services is free. No, it’s a loss leader. You give those away, and when a customer needs gas, they’ll keep you front of mind. Delight your customer. Tonight, as I drove home, I had to fill up my tire again before taking it in for service in the morning. I stopped and filled up my gas tank while I was there, as thanks to them.

When you nickel and dime your customers, you make their lives more complex, you can frustrate them, and make them angry and vengeful. They don’t forget that. When you treat your customers with respect – and go out of your way to help them – they also don’t forget that. Delight your customers.


08
Feb 13

Task-Oriented Computing

Over the past six years, as the iPhone, then iPad, and similar devices have caused a ripple within the technology sector, the industry and pundits have struggled to define what these devices are.

From the beginning, they were always classified as “content consumption devices”. But this was a misnomer then, and it’s definitely wrong today. Whether we’re talking about Apple’s devices, Android phones or tablets, Blackberry’s new phones, or devices running Windows 8/RT and Windows Phone, calling them content consumption devices is just plain wrong.

A while ago, I wrote about hero apps and promiscuous apps. I didn’t say it then, but I’ll clarify it now. Promiscuous apps hit first not because they are standout applications for a device to run, but rather because they’re easy!

Friends who know me well know that I’m often comparing the auto industry of the early 1900s with today’s computing/technology fields. When you consider Henry Ford at the sunrise of the auto industry, the Quadricycle was his first attempt to build a car. This wasn’t the car he made his name with. But it’s the car that got him started. This car featured no safety equipment and no windscreen – it didn’t even have a steering wheel, instead opting for the still-common (at the time) tiller to control the vehicle.

Promiscuous applications show up on new platforms for the same reason that Henry’s Quadricycle didn’t feature rollover protection and side-impact beams. It’s easy to design the basics. It’s hard to a) think beyond what you’ve seen and b) build something complex without understanding the risks/benefits necessary to build it to begin with.

As a result, we see content portals like Netflix, Skype, Dropbox, and Amazon Kindle Reader show up first, because they have a clear and well-understood workflow that honestly isn’t that hard to bring to new platforms, so long as the platforms deliver certain fundamentals. Also, most mobile platforms are “close enough” that with a little work, these promiscuous apps can get there quickly.

But when we look farther out in the future – in fact, when we look at Windows RT, criticized for a lack of best-of-breed apps that exploit the platform less than four months after it first released – it’s easy to see why those apps aren’t on Windows RT or in the Windows Store (yet), and why they take a while to arrive on any new platform to begin with.

Developing great new apps on any platform is a combination of having the skills to exploit the platform while also intimately understanding the workflow of your potential end users. Each of these takes time; together, they can be a very complicated undertaking. When we look at apps like Tweetie (now Twitter for iPhone) and Sparrow (acquired by Google), the unique ways they stepped back, examined the workflow requirements of their users, and built clean, constrained feature sets to meet those requirements – often with innovative interface approaches to deliver them – are key to what made them successful.

The iPad being (wrongfully, I believe) categorized as a content consumption device has everything to do with those applications that first arrived on the device (the easy ones). It took time to build applications that both exploited the platform and met the requirements of their users in a way that would drive both application adoption and platform adoption. People looked at the iPad as a consumption device from the beginning because it is easy to do so. “Look, it’s a giant screen. All it’s good for is reading books and watching cat videos.” Horsefeathers. The iPad, like Windows RT, is a “clean slate”. Given built-in WiFi and optional 3G+ connectivity, tablets become a means to perform workflow tasks in ways we’d never have considered with a computer before. From point-of-service tasks to business workflow, anytime a human needs to be fed information and asked to provide a decision or input to a workflow, a tablet or a phone can make a suitable vehicle for performing that task. Rather than the monolithic Line of Business (LOB) apps we’ve become used to over the first 20 years of Windows’ life, we’re approaching a world where – although they take time to design and implement correctly – more finite, task-oriented applications are coming into vogue. Using what I refer to as “task-oriented computing”, where we focus less on the business requirements of the system and more on what users need to get done during their workday, this new class of applications can be readily integrated into existing back-office systems, but offers a much easier and more constrained user workflow, faster iteration, and easier deployment when improving it, versus the classic “fat client” LOB apps of yore.

The key in task-oriented computing, of course, is understanding the workflow of your users (or your potential users, if this is a new application – whether inside or outside of a business), and distilling that workflow into the discrete steps necessary to result in an app that flows efficiently for the end users and runs on the devices they need it to. A key tenet here is, of course, “less is more”: when given the choice of throwing in a complex or cumbersome feature or workflow, jettison the feature until time and understanding enable it to be executed correctly. When we look at the world of ubiquitous computing before us, the role that task-oriented computing plays is quite clear. Rather than making users take hammers to drive in screws, smaller, task-oriented applications can enable them to process workflow that may have been cumbersome before, and enable workers to perform other, more critical tasks instead.
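
To illustrate the idea (a hypothetical sketch – the names and workflow are invented for this post, not drawn from any real system), a task-oriented app reduces one slice of workflow to “context in, a single decision out”, and everything behind that boundary stays in the back office:

    from dataclasses import dataclass
    from enum import Enum

    class Decision(Enum):
        APPROVE = "approve"
        REJECT = "reject"

    @dataclass
    class Task:
        """One discrete unit of workflow: just enough context to decide."""
        task_id: str
        summary: str

    def complete_task(task: Task, decision: Decision, note: str = "") -> dict:
        """Record the user's decision. Integration with back-office systems
        happens behind this boundary; the app stays small and single-purpose."""
        return {"task": task.task_id, "decision": decision.value, "note": note}

    # A purchase approval as one task: context, a single decision, done.
    po = Task(task_id="PO-1042", summary="Approve $1,200 for lab supplies?")
    print(complete_task(po, Decision.APPROVE, note="Within quarterly budget."))

Everything jettisoned from the app’s surface – approval routing, ERP integration, auditing – lives behind complete_task(), where it can iterate quickly without ever complicating the user’s workflow.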

When talking about computing today in relation to the auto industry, I often bring up the electric starter. After the death of a friend in 1910, due to a crank starter kicking back and injuring him, Henry Leland pushed to get electric starters in place on his vehicles, and opened up motoring to a populace that may have shunned motorcars before then, due to the physical strength necessary to start them, and the potential for danger if something went wrong with the crank.

When we stand back and approach computing from the perspective of “what does the software need to do in order to accommodate the user” instead of “what does the user need to do in order to accommodate the software” as we have for the last 20 years, we can begin to remove much of the complexity that computing, still in its infancy, has shoved into the face of users.