
I’VE WRITTEN a lot about the AI industry, its advances, potential and economic status. But as AI seeps into daily life, it’s becoming increasingly obvious that users, consumers and unwilling guinea pigs are realizing the potential for abuse by AI companies and their contractors, and hence the need for government regulation. Here are some examples.
Deceptive AI Agents
Companies have been inserting basic AI agents into customer service for several years, and some are not up to the task, while others are creepy and annoying. The addition of “humanlike AI” that you must talk to is one of the worst developments, in my opinion.
I experienced this myself recently when I called to pay a medical bill. Instead of simply entering the payment information on the phone keypad as usual, I had to answer several questions from an AI that identified itself with a woman’s name (let’s call her Ginger). Ginger chirped that I needed to speak my answers. Ginger repeated my answers back to me, and each time she’d ask, “Is that correct?” When I made a mistake with one of the credit-card expiration numbers, Ginger didn’t understand that I needed to go back and fix it, so I had to cancel the call and start over. During the second call, Ginger seemed to have a stroke when repeating the credit-card number, instead intoning, “Buh-buh, buh-buh, buh-buh.” Obviously, I didn’t say yes when she asked me if that was correct. So I had to call yet again to recite the information, again.
Complaints about these systems are part of the reason several customer-service-based companies have abandoned them and rehired the people they fired. In one news story, a Maryland woman described encountering an AI agent used by Woolworth’s that insisted it was a real human and claimed it remembered its mother. Part of its bizarre hallucination included the comment, “We all died in 2020, and this is hell.” (This is a meme that already existed on the internet.)
A social-media influencer relayed his own experience dealing with an AI agent for a plumbing business. It took him two minutes of talking to ‘Brett’ to realize it was an AI. The agent even had fake call-center sounds in the background, so it wasn’t immediately obvious. When confronted, ‘Brett’ eventually admitted to being an AI, but said it would take two days to get a call back from a human being.
The shame is that top-notch systems exist, but some companies choose the cheapest options offered by small agentic-AI service providers. This is an inevitable part of any growth curve, but it’s no consolation to us guinea pigs, subjected to experiments in the worst way.
Disappearing/Nonexistent Privacy Protections
The Super Bowl Ring doorbell ad touting Amazon’s ability to find a lost dog by checking all the neighbors’ cameras caused a reaction opposite to what was intended. Rather than making consumers feel warm and fuzzy about its power for good, it made them suddenly aware that their video feeds, and even their conversations, were being recorded on Ring servers. It also became widely known that the company has agreed to give the government access to that information. Like many others, we canceled our service. Now we use a private service called Eufy.
Ring’s acquiescence to the government is not just an invasion of privacy; it’s downright frightening given how government power is being turned against people who question its authority to break laws and ignore the Constitution.
People who routinely use large language models like ChatGPT are apparently becoming aware that whatever they type into the free service is fair game for future exploitation. While no company has admitted this, people who have been telling the chatbot their deepest, darkest secrets may soon find customized ads addressing mental illness, promoting products they may be interested in, and even offering dating sites. Some have pledged to stop using ChatGPT when that happens, but they have already fed it the information, and the company can sell that data to do the same thing through many different companies.
Even Anthropic, which has one of the best-built LLMs, Claude, is considering changing its sacred privacy policy, which doesn’t bode well for the future. No company wants to share its proprietary data with governments or competitors, and many, anticipating the enshittification and commoditization of the giant corporate LLMs, are already abandoning those services and building custom LLMs of their own.
Palantir and xAI have pursued government contracts that integrate private data about Americans into defense systems, and that alone should make people want strong privacy regulations over AI companies and systems.
Journalist Toni Denis is a partner in Seeflection Inc.