I have a personal web site that consists mostly of blog posts about software on Linux. When I solve a software problem, I write about it to help other programmers.
I recently wrote an article about solving a problem with the Hugo static site generator. I listed in detail the changes that I had to make to three separate files. Anybody familiar enough with Linux to want to use Hugo should have no problem using my instructions.
So today I received an email from somebody who thanked me for the Hugo article, but complained that his favorite AI (some piece of crap called Claude) was unable to read my article, and thus he couldn’t follow its instructions. He then asked me to turn off Anubis, the AI blocker that I use on my web site.
So apparently, this person, who is using software (Hugo) that is aimed at people who are somewhat technically competent, is unable to actually operate said software himself. And apparently, he has no interest in learning enough to operate said software, but would rather have this Claude crap do the work for him.
Yes, I’m a grumpy old programmer, and yes, I block AI bots, because they are ruthless, stupid, and unethical.
The ruthless part of AI is that these bots scrape the internet without observing the usual norms that have governed the use of web crawlers for decades. They ignore robots.txt, and they use hundreds of IP addresses at once to scrape your data. At one point, a server that I maintain for a Vermont library was so overloaded that its ILS (catalog and circulation management system) was brought to a standstill. When I looked into the problem, I saw that the server was being attacked by AI scrapers from more than 900 different IP addresses. I installed Anubis, and now the library system no longer runs like maple syrup in January.
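If you want to check whether your own server is getting this treatment, a rough sketch of the kind of log check I mean looks like this. The log format and file path are assumptions (a standard combined-format access log; yours may live at /var/log/nginx/access.log or similar); here I build a tiny sample log so the commands are self-contained:

```shell
# Hypothetical sample log in combined format; on a real server you would
# point these commands at your actual access log instead.
cat > /tmp/sample_access.log <<'EOF'
203.0.113.5 - - [10/Oct/2024:13:55:36 +0000] "GET / HTTP/1.1" 200 512 "-" "SomeBot/1.0"
203.0.113.7 - - [10/Oct/2024:13:55:37 +0000] "GET /feed HTTP/1.1" 200 2048 "-" "SomeBot/1.0"
203.0.113.5 - - [10/Oct/2024:13:55:38 +0000] "GET /about HTTP/1.1" 200 1024 "-" "SomeBot/1.0"
198.51.100.9 - - [10/Oct/2024:13:55:39 +0000] "GET / HTTP/1.1" 200 512 "-" "OtherBot/2.0"
EOF

# The client IP is the first field of each line in combined format.
# Count how many distinct IPs are hitting the server:
awk '{print $1}' /tmp/sample_access.log | sort -u | wc -l
```

A human reader or a polite crawler shows up as a handful of addresses; a scraper swarm shows up as hundreds of distinct IPs making near-identical requests, which is what I saw on the library server.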
The stupid part of AI is that “intelligent” is just marketing bullshit. These things are clever monkeys that recognize patterns. Humans are good at recognizing patterns, too, but they are good at other things that AIs aren’t, like intuition and critical thinking and ethics. The stupid part can only get worse, as more and more programmers write buggy and inefficient code using AIs, and the resulting crappy code gets fed back into the AIs, making future code even more crappy.
The unethical part of AI is so huge, I’m hesitant to write about it. But one thing that really bugs me is the learned laziness and helplessness that AI encourages. When I wrote that article about Hugo, I had just spent several hours solving a problem. I read documentation, I tried dozens of things, I looked at my previous Hugo installation — in short, I worked hard on the problem. I want people reading my article to also do some work: they can take the code I wrote, but I want them to apply the fixes and do the testing themselves, so that maybe they’ll actually understand what they’re doing, and not depend on some highfalutin trained monkey to do the work for them.
The learned laziness and helplessness is a great tool for authoritarians, though, and maybe that’s the whole idea. It’s like a super-powered version of the Scamdemic’s “Trust the Experts™” psyop, only now it’s buggy crap software we’re supposed to trust instead of crooks like Anthony Fauci.
In other news, I’m nearly done with the Tiny House, something that I designed and built without AI. Update soon.
