AI is Sea Monkeys
When I was a child in the 1970s, I distinctly remember the ads in the back of magazines and comic books for “Sea Monkeys”.
The top headline touted, “Own a BOWLFUL of HAPPINESS – Instant PETS!”, and featured a (shockingly unrealistic) image of a vaguely human-like family with a mother, father, and two children. Now, I never asked my parents for Sea Monkeys, but I’m sure enough kids did to keep the ruse going for some time. In the end, “Sea Monkeys” were… brine shrimp.
I bring this up because it’s pretty apparent to me that we’ve reached “peak sea monkey” with AI. It has become a routine go-to for far too many things, and people view it as this weird cure-all for so many of the ills plaguing businesses today. At the end of the day, though, AI is just a weird promissory note that far too many people have started accepting at far more than face value.
Need to cut staff? Add AI. Need to spin down a division of your business? Add AI. Hemorrhaging money and need to try something new? Add AI. And if all else fails and you don’t know what to do next to solve your problems, you can always… ask AI.
Some may disagree with some or all of this post, but I’ve spent a lot of time thinking about AI over the last 2+ years (far more time than I wanted to, honestly). While on vacation in California last month, I started thinking about some of the myths of AI that are driving me crazy. So, in no particular order, I present to you what I believe to be ten truths about AI:
- AI doesn’t replace people
- AI doesn’t replace programming (“Prompt engineering” is not engineering.)
- AI doesn’t replace code
- AI will always require oversight and error checking.
- Using AI doesn’t make you more effective. It just makes you faster at being done.
- “Coding using AI” is not development.
- Scaling out AI or scaling up AI does not remove the inherent flaws in the technology.
- AI is not conscious. These systems do not “think”.
- Over-reliance on AI clearly seems to cause problems. Social, mental, and societal.
- AI is intrinsically non-deterministic and non-reproducible. Particularly as algorithms change.
Let’s go through each one of these individually, so I can explain what I mean.
- AI doesn’t replace people – This is one of the most misunderstood things about AI. People—particularly “business leaders” and pundits—will tell you that they’re letting go of staff because they’re being replaced with AI. AI isn’t robotic process automation (RPA); it’s not literally automating someone out of a basic, rote role – at least not without considerable work. However, if you can take your ten staff members, fire two, hand their responsibilities over to the remaining “lucky” eight, and provide them with “AI tools”, you can try squeezing just a little bit more juice out of those poor staff members that remain. BOOM, instant cost savings. I’m 99% positive this is what a huge number of the layoffs attributed to AI really are. It’s not really about AI; it’s about labor compression: putting more pressure on the remaining staff (without adjusting compensation, of course) to eke out improvements in earnings without adding real costs.
- AI doesn’t replace programming (“Prompt engineering” is not engineering) – This one will be contentious with some people, but… tough shit. For some time, people have been saying that “AI will replace coding.” No, it won’t. AI may be handy at spot-checking code or solving isolated problems. But it’s not like you can tell AI, “Build me an operating system that will run Windows applications perfectly” and have it build that from scratch. Frankly, it’s kind of idiotic to think that it ever could. If someone tells you it’s doable now, they’re lying. If they say it’s possible in the near future, they’re still lying. I look back at the worst bugs I ever had to get fixed in Windows, and I have absolutely zero faith that an AI dev tool could resolve them without breaking something else. Windows is composed of layers of weird historical exceptions. Violate one of them and you may get code that compiles and runs, but you’ve broken something else critical.
- AI doesn’t replace code – There’s this other concept that says we’d replace code completely with prompts. So you could have a set of pre-fab prompts that you used to do task X yesterday, and run those each day to replicate the current version of your application from the ether. Thing is (see item 10, below), AI is non-deterministic. Running the same set of prompts ten times and getting the same output all ten times is… effectively impossible. So sure, you might have some aspect of code that you feed to AI and save the output for use within your larger application. But it’s a complete myth that you’d be able to have a set of “build scripts” that run every day to build the latest version of your application. It won’t be an iteration. It’ll be a completely new application.
- AI will always require oversight and error checking – Because of the space I work and write in (Microsoft licensing in particular), this one frustrates me to no end. You can’t ask AI to answer a question in any highly technical or complicated field without having a domain expert proof it (and inevitably correct it). And let me tell you, as a writer, nothing pisses me off more than being asked to proofread AI output for technical accuracy. That isn’t worth my time. AI will inevitably invent new things and new concepts out of whole cloth, and it’ll muddle things that are “near” each other – such as thinking the rules for licensing a piece of software for clients are the same as licensing it for servers, or getting confused because two or more technologies have names that vaguely overlap. (Try asking any AI a question about “Microsoft Defender”, for example. It’s almost certainly going to get confused with the answer.)
- Using AI doesn’t make you more effective. It just makes you faster at being done – I believe very strongly in this one too. Microsoft in particular promotes so many of their Copilots as tools that will help you get your job done faster. But the reality is that most will help you with a narrow range of tasks that you might or might not need help with. I’ve been worried from the beginning about Microsoft 365 Copilot in particular, because so many of its features are designed to help you deal with the overwhelming amount of information you’re receiving. Meetings, emails, follow-ups. If information overload is a chronic problem across your organization… your staff might just be using their tools wrong and ineffectively to begin with! Honestly, one of the first things any AI-driven meeting summary should include is an annotation at the top that grades the meeting: “This meeting was inefficient.” “This meeting should have been an email.” “This meeting cost your organization tens of thousands of dollars, and did not result in quantifiable action items.”
- “Coding using AI” is not development. (Telling the architect what you want the building to look like) – So this is kind of related to the above, but different. As a writer who has done development (albeit a long time ago, and of shady quality), it annoys me when people say they’re programming when they’re feeding commands to a prompt. No, that’s like going in to the architect and telling them you want a building that’s x square feet, fits on this existing lot, and features a brick exterior. That doesn’t make one an architect, or a builder, or… a developer. It makes someone a person with ideas who doesn’t know how to reduce them to practice without feeding them to a machine first.
- Scaling out AI or scaling up AI does not remove the inherent flaws in the technology – As with item 6, this is conceptually redundant, but I feel it’s an important point nonetheless. Throwing more hardware at a deficiency of AI (its unpredictability or illogical wording, for example) doesn’t make that problem go away. It’s still there; you’ve just spent more money and energy trying to smooth it out by running it through the planer three more times.
- AI is not conscious. These systems do not “think” – As someone with a psychology background, I feel like my brain is going to explode every time I see someone say that “AI is conscious”. The next time you see someone saying this, I want you to look closely at who they are, what organization they represent, and what field they’re in. Nine times out of ten, when I read someone’s quote touting AI self-awareness or consciousness, it’s a technologist. A nerd. Not a psychologist. Not a biologist. They’re literally standing on the other side of the AI, squinting, seeing the code’s behavior, and calling it “consciousness”. It’s honestly kind of disturbing.
- Over-reliance on AI clearly seems to cause problems. Social, mental, societal – For some time now, I’ve referred to AI as a “Dunning-Kruger accelerator”. What do I mean by this? I mean that people unfamiliar with a particular topic, art, or concept will look at AI, see what it tells them, and… just believe it. (See item 4, above.) Unfortunately, what this means is that with a few keystrokes, anyone can become dangerously “familiar” with any topic. This isn’t like Neo downloading “Kung Fu” as a skill in The Matrix. No, it’s about garnering just enough familiarity with a field to quite possibly be literally dangerous. If there were any sort of bounds checking built into AI to ensure that it was always correct or always telling you the truth, it would be one thing. But the sheer confidence and conviction with which AI will often feed you a complete line of bullshit is… depressingly impressive.
- AI is intrinsically non-deterministic and non-reproducible. Particularly as algorithms change – That is, the result you got from the AI in January is not the same response you’ll get in March, making testing effectively impossible. It may not be apparent why I’m including this as a distinct item. But as someone who has built software, run dev and QA teams, and shipped numerous pieces of commercial software, I think it’s very important. If you cannot reliably build and test your software over and over again without observing different results, it becomes impossible to debug, test, iterate on, maintain, secure, and upgrade. Treating software like it’s a Genmoji you’re generating on your iPhone is a recipe for disaster, and for increasingly shitty, unmaintainable, and insecure software and services.
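The non-determinism in items 3 and 10 isn’t mystical; it falls out of how generation works. Here’s a toy sketch (this is not any real model – just weighted random sampling over a hypothetical four-word vocabulary) of why the same prompt need not produce the same output twice:

```python
import random
from collections import Counter

# Toy "model": the next token is *sampled* from a probability
# distribution rather than chosen deterministically.
VOCAB = ["fast", "quick", "rapid", "speedy"]
WEIGHTS = [0.4, 0.3, 0.2, 0.1]  # hypothetical next-token probabilities

def generate(prompt, seed=None):
    # No seed means a fresh random state, so output varies run to run.
    rng = random.Random(seed)
    token = rng.choices(VOCAB, weights=WEIGHTS, k=1)[0]
    return f"{prompt} {token}"

# Same prompt, ten runs: the completions differ.
outputs = [generate("The code is") for _ in range(10)]
print(Counter(outputs))
```

With a fixed seed the toy becomes reproducible, but hosted AI services generally don’t give you that control – and even when they do, model and algorithm updates change the underlying distribution, which is exactly the January-versus-March problem described above.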
AI has been adopted around the world without adequate forethought or oversight. I’m really concerned about where we’re at now, where we’re headed, and the knock-on effects this will have on non-software technologies and products, which will make them also start to be unpredictable and unreliable.
I miss building software slowly and with consideration for quality, reliability, and stability. And I’m really concerned with where we’re headed in terms of technology, sociology, and culture.