December 14, 2025

AI is dangerous--for more reasons than you think.

Virtually all AI systems now used by the public are built on some variation of the "large language model" (LLM), in which the program learns statistical patterns from enormous amounts of written material and then strings words together to produce plausible-sounding answers to questions.
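To make "stringing words together" concrete, here's a toy sketch of my own (nothing like the scale or machinery of a real LLM, which uses neural networks trained on billions of documents): a little program that only learns which word tends to follow which, then generates text from those counts.  Notice that nothing in it checks whether the output is true--only whether it resembles the training text.

    # Toy next-word predictor: counts which word follows which in a tiny
    # "training" text, then generates fluent-sounding output from those counts.
    # Nothing here has any notion of truth -- only of what usually comes next.
    import random
    from collections import defaultdict, Counter

    corpus = ("the court ruled in favor of the plaintiff . "
              "the court ruled against the defendant . "
              "the judge dismissed the case . ").split()

    follows = defaultdict(Counter)   # word -> counts of the words seen after it
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(start, length=10):
        word, out = start, [start]
        for _ in range(length):
            options = follows.get(word)
            if not options:
                break
            # pick the next word in proportion to how often it followed this one
            word = random.choices(list(options), weights=list(options.values()))[0]
            out.append(word)
        return " ".join(out)

    print(generate("the"))   # a fluent-sounding string of words -- with no guarantee it's true

Scale that idea up by a few billion documents and you have the basic flavor of what these systems are optimized for: plausible continuation, not truth.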

And indeed, the output looks totally plausible--polished.  Let's examine that:

When an article is loaded with mistakes in spelling or grammar, people understandably discount whatever point the writer is trying to make.  Conversely, perfection in spelling and grammar is more convincing, even if the content is nonsense.  And AI output is always polished in spelling and grammar--that's the first thing that makes people defer to it.

The second thing that makes people defer to AI output is that it's produced by a computer, and citizens of advanced countries have been conditioned to believe computers are unbiased and don't make mistakes.  Needless to say (but we have to because of the conditioning), that's horseshit.  AI output simply looks plausible, like an impeccably dressed con-man.  Because he looks wealthy (plausible), most people don't bother looking behind the facade.  Bernie Madoff and Barack Obama come to mind.

AI isn't designed to find truth, but to create a coherent, plausible Narrative.  It's perfectly happy to quote fictitious court cases, with authentic-looking legal citations, to produce the desired result.  You may have read about this: several attorneys have used AI to help write briefs and find court decisions that support their case, only to have a law clerk for the judge discover that the cases cited don't exist.

The people who programmed AI were aiming for "intelligence," but what they actually built is a machine for producing plausible, polished language.  That polish makes bad conclusions convincing.

Large language models--AI--aren't programmed to say "I don't know."  When there are opposing viewpoints on a topic, AI almost always backs the one with more citations in newspapers and tech journals--a process very similar to politicians or companies hiring "influencers" to give the impression they have a huge fan base.  AI gives the reader the consensus view, but it rarely says "Many experts disagree."

As a result, when AI gives an answer it sounds confident, glossing over flaws in whatever position the article count favors.  The programmers may not have intended this, but users interpret confidence as truth.  We shouldn't need to point out that a conclusion uttered with conviction isn't necessarily right.

Unfortunately, humans are hard-wired to judge things by appearance--and the more polished something looks, the less likely people are to question it.  In fact, even questioning what a computer prints out often prompts waves of derision from people: "Who do you think you are to challenge a computer?" 

"How dare you question a recognized 'authority,' dude?"  Anyone who does so is automatically branded arrogant, subversive.  It's like claiming bribem's FBI was corrupt!  Just not done, sport.  

This isn't just speculation.  Hundreds of experiments, going back 70 years, confirm that humans tend to believe anyone who has the trappings of authority, no matter how absurd their statements or orders may be.  And today, AI has become the ultimate authority to people who don't understand how it works. 

That phenomenon is where the danger lies.  If an "authority" sounds sure of itself, people defer.  But if the "authority"--whether a human or AI--sounds uncertain, people often step back and do some thinking for themselves.  And as noted earlier, AI is always confident, never uncertain.  So unless some human authority counters, AI's error isn't discovered--until the system is stressed.

A couple of months ago in Minneapolis an illegal-alien driver hit a six-year-old girl in a crosswalk, and AI swapped the names of the driver and the victim because of a sloppy comma in a post on a local TV station's website.  AI didn't consult any other source to confirm its bizarre claim--it just ran with the literal reading of the misplaced comma.

One attraction of AI is that it reaches conclusions fast--which of course makes it seem more authoritative.  But again, is speed an advantage if the conclusions are wrong?  

Errors slip through undiscovered because they look reasonable.  Normal human skepticism is turned off or ignored.  "Authority" saying plausible things wins every time.  And at this point, people who know nothing about the real workings of AI consider it the authority.
