The Cliff Clavin Effect – AI and the Truth

When another human tells you something, you may question it.

When a machine tells you something, you likely take it as the gospel truth.

On the 1980s American TV show “Cheers”, the character Cliff Clavin, a mailman played by actor John Ratzenberger, often doled out “facts” as if they were established truths.

He would say, “It’s a little known fact…” followed by something that you might or might not believe to be true. Over time, “It’s a little known fact” became shorthand for something that was not, in fact, a fact.

As technology has evolved over the last 30 years and become more personal with phones, tablets, and truly personal computers (versus the ubiquitous “family PC” of the 1990s), humans have become increasingly dependent on their devices, and the Web accessed through those devices, to learn things, complete tasks, and keep track of current events.

As an increasing number of services take on so-called “AI” behaviors, the danger grows that humans will treat them as human and never question their output.

With mobile phones, we’ve already seen navigation services direct people into rivers or, in some cases, onto unserviced roads, potentially leading to loss of life.

While these services surely have disclaimers buried in their license agreements telling users to proceed at their own risk, consumers never read those. As the old security adage goes, consumers just want the dancing pigs, and they’ll do whatever they need to, and click past whatever they need to, in order to get to those darned dancing pigs.

Amazon’s Alexa service, running on the prophetically named “Echo” device, was shown in the not-so-distant past to parrot false information about the 2020 US presidential election as fact. Microsoft’s own short-lived “Tay” bot on Twitter lasted only 16 hours before being decommissioned, after it began spewing hateful language and Holocaust-denial theories, regurgitating the effluent users had tweeted at the account.

Looking ahead to the next several years, particularly with Microsoft pouring Copilot “AI” into all of its enterprise and consumer services, and pushing it harder than the company has pushed any technology before, it becomes increasingly important to question the information these systems return to us as stated truth.

Again, consumers should not be expected to do this; it’s up to the nerds. But I fear the rogue Alexa disinformation and the malicious Tay are just the start of this problem. Expect to see people hurt or worse, and broad political mis- and disinformation battlefields as campaigns figure out how to manipulate “AI” into vomiting believable “facts” at end users. And expect to see more normals than ever confused by the truth they want to read, the truth these services deliver, and the actual truth that exists.