
Dear Editor,

The Luddites were an early-19th-century uprising of textile workers in and around Nottingham.

They smashed machines not because they hated technology but because they saw that the machines would steal their livelihoods and reduce human craft to mechanical repetition.

They weren’t rejecting technology; they were rejecting the social costs of automation.

Today we are facing threats posed by a new kind of automation. According to OpenAI CEO Sam Altman, more than 800 million people use ChatGPT each week, and those numbers will only grow larger.

But let’s not call it AI, because it isn’t intelligence. Let’s call it the automation of human creativity. The automation delivered by ChatGPT, Google’s Gemini, and Meta’s Llama series, and hyped to excess by tech companies, does not reproduce human thinking.

These platforms are Large Language Models (LLMs). Simply put, they are trained on massive amounts of information (supplied by, or more accurately plagiarized from, the internet) to be giant statistical prediction machines that repeatedly fill in the next word in a sequence. Some tasks can undoubtedly be automated usefully this way (though that will mean the loss of certain jobs).
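To see the idea in miniature, here is a toy sketch of next-word prediction, assuming nothing about any commercial model's actual design: it simply counts which word most often follows each word in a tiny sample text and "predicts" accordingly. Real LLMs are vastly larger and more sophisticated, but the core task is the same.

```python
from collections import Counter, defaultdict

# A tiny sample text standing in for "massive amounts of information".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the sample text."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat", the most frequent follower of "the"
```

Note that the prediction is purely statistical: the program has no idea what a cat is, which is exactly the point about truth and falsehood below.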

The data centers that run these programs consume massive amounts of water and electricity, raising the costs of these household utilities and occasionally depriving whole communities of their water supply.

There are as many bad uses of AI as useful ones. These systems threaten human creativity, and they point toward a dystopian future plagued by mass surveillance, disinformation, and very believable scams.

AI has already automated surveillance. Corporations are sucking up your data on social media. They monitor your “smart” home. They monitor your health. Can the government be far behind?

Automated LLMs will vastly increase the amount and believability of disinformation in an environment already saturated with it. Inaccuracy is built into the system. One study found that about 20% of chatbot answers gave false or outdated information.

That is not surprising, since a chatbot does not know what is true or false, moral or immoral. It only predicts the next word in a sequence.

Finally, automated LLMs are a scam artist’s dream. Expect more believable scams in your email, texts, social media, and snail mail.

There are even companies that provide scammers with the tools they need, like face- or voice-changing software, to increase the believability of scams.

The cost of this technology will be fewer jobs, higher utility costs, more surveillance, lies, scams, and environmental degradation. 

Laurie Finke

Gambier, Ohio