I had a dream of soylent chatbots made out of people.
Maybe the chatbots aren't made out of human bodies or even people toiling away in some scam center, but the way this sausage is made, and served, is nevertheless going to sour everyone eventually.
I had a dream that news broke about the AI companies: it came out that all the chatbot companies were in fact just employing a bunch of people to produce the chatbot-generated content, like a Mechanical Turk hoax. And that explained all the lying and making stuff up,1 the wrong information, the uncanny valley, the inconsistent output,2 and the degradation over time.3 Perhaps as the workers became weary, chained to their computers on timers to answer all these pointless prompts like call center work from hell, they grew more jaded, weird, and sloppy.
I assume this manifested in my subconscious from a combination of things bubbling up in the public consciousness. The feeling I’ve had for a long time that it’s just another habit-forming tech product4 — like social media,5 video games like Diablo,6 or literally like gambling. Or the article that came out about how industry interests knew plastic bottles weren’t really so recyclable.7 The fake children’s books written by chatbots8 and sold on Amazon to unsuspecting parents who might not read the nonsense garble9 before handing it to their children. Social media overrun with fake pictures people are tricked into thinking are real.10 The fake urban foraging and mushroom collecting books that could certainly lead to horrible deaths if people trust them.11 All those awful, flawed videos being pumped up on social media by what look like PR-funded troll farm sock puppets. And the people talking about the scam calls they get, assuming it’s career criminals calling, when the awful truth is that the callers are often victims of human trafficking forced to do online scamming.12
And Giant, one of the grocery stores I use for online curbside-pickup orders, actually got rid of their message option and replaced it with a cumbersome, annoying chatbot. Before, if I had a problem with three items, I could send just a sentence or two in a message; now I have to go through the chatbot, making sure I give the right prompts over and over again to narrow down the issue, and then start over individually for every item. This is NOT a time saver on my end, and it’s not ok.
Maybe the chatbots are not made out of dead bodies, or even just people toiling away in some scam center, but I think the way this sausage is made, and served, is nevertheless going to sour everyone on it eventually - even before you get to the electric power and pollution problems,13 including noise pollution.14
Edward Zitron called it a boondoggle,15 and I think that’s exactly what it is. Zitron points out the practical problem that there’s no consistent output, which should raise eyebrows when it comes to scientific evaluation, and his piece notes that media already heavily games the existing SEO (search engine optimization) landscape.16 This is obviously part of why search engines keep getting less useful - almost everyone notices it.
We can already see, clearly, that hallucinations are a problem for disinformation.
There has long been a problem with fake DMV websites: people look up the hours for a local office or the documents they need and land on a site not connected to the government at all, one that exists only to drive traffic to its ads by hijacking people’s searches for DMV services. It leads people to show up at the DMV at the wrong times with the wrong ideas, and sometimes the wrong paperwork.
Search was already messed up that way by bad actors gaming the system with grift. Now we’re going to have people go full-on straight to the lying AI.
References:
Bloomberg - “AI Doesn’t Hallucinate. It Makes Things Up.” by Rachel Metz, April 3, 2023: Saying that a language model is hallucinating makes it sound as if it has a mind of its own that sometimes derails, said Giada Pistilli, principal ethicist at Hugging Face, which makes and hosts AI models. “Language models do not dream, they do not hallucinate, they do not do psychedelics,” she wrote in an email. “It is also interesting to note that the word ‘hallucination’ hides something almost mystical, like mirages in the desert, and does not necessarily have a negative meaning as ‘mistake’ might.” As a rapidly growing number of people access these chatbots, the language used when referring to them matters. The discussions about how they work are no longer exclusive to academics or computer scientists in research labs. It has seeped into everyday life, informing our expectations of how these AI systems perform and what they’re capable of. Tech companies bear responsibility for the problems they’re now trying to explain away.
Public Citizen - “Chatbots Are Not People: Designed-In Dangers of Human-Like A.I. Systems” by Rick Claypool, September 26, 2023: Roose described the Sydney persona as resembling “a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine” and expressed concern that the technology could manipulate users into engaging in harmful or destructive behavior. Other journalists reported similarly bizarre behavior from the early version of Bing – and A.I. safety experts noted that if the chatbot seems human, it’s because it was designed to emulate human behavior. To mitigate the strange behavior, Microsoft placed limits on the number of prompts users can submit to Bing.
Popular Science - “ChatGPT’s accuracy has gotten worse, study shows” by Andrew Paul, July 19, 2023: The LLM’s ability to generate computer code got worse in a matter of months, according to Stanford and UC Berkeley researchers. A team from Stanford and UC Berkeley noted in a research study published on Tuesday that ChatGPT’s behavior has noticeably changed over time—and not for the better. What’s more, researchers are somewhat at a loss for exactly why this deterioration in response quality is happening.
Axios - “Sean Parker unloads on Facebook: ‘God only knows what it's doing to our children's brains’” by Mike Allen, Nov 9, 2017: "The thought process that went into building these applications, Facebook being the first of them, ... was all about: 'How do we consume as much of your time and conscious attention as possible?'" "And that means that we need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever. And that's going to get you to contribute more content, and that's going to get you ... more likes and comments." "It's a social-validation feedback loop ... exactly the kind of thing that a hacker like myself would come up with, because you're exploiting a vulnerability in human psychology." "The inventors, creators — it's me, it's Mark [Zuckerberg], it's Kevin Systrom on Instagram, it's all of these people — understood this consciously. And we did it anyway."
The New York Times - “Diablo IV Wants Gamers to Slay Monsters in Hell Forever” by Brian X. Chen, June 5, 2023: An addictive formula — kill enemies and collect the powerful gear they drop — is back, this time in a transformed video game industry where players now expect free content updates. Part of Diablo’s deviousness was that it took forever to get what you wanted. As players fought through dungeons, slain monsters dropped loot, such as weapons and jewelry, that would empower their characters. Sometimes they were randomly rewarded with rare and exceedingly potent loot, much like pulling a slot machine and hitting the jackpot. The endless dopamine-inducing hunt was a winning formula that persuaded Diablo players over the past 26 years to invest hundreds, sometimes thousands, of hours into building the perfect character.
Curmudgucation - “Children's Books To Really Avoid” by Peter Greene, Aug 12, 2023: If that sounds as if it comes from that special place in the uncanny valley between "English is not my first language" and "I am not actually a human being," well, let's take a look inside the books. Sondra Eklund is a librarian in charge of selecting children's and young adult books for a large public library system. She ordered Rabbits: Children’s Animal Fact Book and on her blog (a great resource for book reviews) wrote about what she found inside. First she was struck by poor wording and organization; then she got to some Very Special pages.
Robert Evans - “AI Is Coming For Your Children,” Jun 20, 2023: Con-men are flooding Kindle with AI children's books. They could do permanent damage to childhood literacy. Because the text generated by these AI programs is really just chopped and screwed together pieces of actual stories written by people, AI hustlers are very concerned with plagiarism. So they feed their text into apps like Grammarly to run test plagiarism detection scans. Inevitably, the story fails.
The Allegheny Front - “Violation at a Pennsylvania drilling site raises questions about lack of Bitcoin regulation” by Anne Danahy, April 21, 2023: A gas and oil company mining for cryptocurrency in Elk County, Pennsylvania, was found to be in violation of state regulations when it installed equipment before getting approval from the state Department of Environmental Protection, but the company says it is following the rules. A March 1 DEP inspection at Diversified Gas and Oil’s Jay Township site found the company had put in place tools for cryptocurrency operations. The company does have a General Permit. But, according to the DEP, Diversified did not have a permit to install the air pollution sources used to generate power for crypto mining. That specific permit is currently under review.
Daily Mail - “Residents of small Pennsylvania town are being driven mad by huge BITCOIN MINE whose two large cooling towers vibrate and hum more loudly than a waterfall” by Dominic Yeatman, Dec 13, 2023: 'I have a little pond in front of my house where I used to sit and have my coffee at,' he added. 'I can't even enjoy that because I can't even hear the water over the Bitcoin. It is louder than the waterfall.' Talen Energy won over locals with promises of hundreds of new jobs and an economic boom in the township of 6,000 when they announced plans for the operation last year. 'Amazon, Google, all those cloud computing applications, those are the potential clients, customers that we will have in the data center buildings,' said Dustin Wertheimer, VP and Division CFO, Talen Cumulus and Susquehanna Data Center. 'On the coin mining side, there will be computers again located in those buildings and those computers will run computations that will trigger and generate the issuance of coins.' The controversial cryptocurrency has been in the news again after a wild ride since the start of December. A rally last week saw it rise above $44,000 to reach its highest level in almost two years - then on Sunday it lost 6.5 percent of its value in just 20 minutes and dipped below $41,000. Global bank Standard Chartered thinks bitcoin could surpass $100,000 before the end of 2024 - yet well-known JP Morgan boss Jamie Dimon said last week that US lawmakers should 'close it down'. The first 1,500 Bitcoins out of Salem Township were sold for $37.6 million after the 180 megawatt mine was plugged in this summer, but that was little consolation to residents at an angry town hall meeting on Tuesday.
Merriam-Webster - “Boondoggle” definition: a wasteful or impractical project or activity often involving graft.
Edward Zitron - “Subprime Intelligence,” Feb 19, 2024: I am also completely out of patience when it comes to being told what it "will do" in the future. Generative AI's greatest threat is that it is capable of creating a certain kind of bland, generic content very quickly and cheaply. As I discussed in my last newsletter, media entities are increasingly normalizing their content to please search engine algorithms, and the jobs that involve pooling affiliate links and answering where you can watch the Super Bowl are very much at risk. The normalization of journalism — the consistent point to which many outlets decide to write about the exact same thing — is a weak point that makes every outlet "exploring AI" that bit more scary, but the inevitable outcome is that these models are not reliable enough to actually replace anyone, and those that have experimented with doing so have found themselves deeply embarrassed.