
When the US Department of Defense tried to force Anthropic to remove the guardrails that prevent its LLM Claude from being used for lethal autonomous operations or mass surveillance, CEO Dario Amodei, citing the company's prime directives of ethics and safety, turned it down cold. The government, which already holds contracts with the company, then declared Anthropic a "supply chain security risk," effectively banning any government agency or contractor from using its products.
Oh, the irony.
Anthropic had raised $7.3 billion on the strength of those government contracts. In a move both brave and necessary, the company immediately sued to have Trump's petulant declaration overturned. A court is reviewing the case, and a judge has already said aloud what anyone who cares about AI safety is thinking: this is a bogus ban. He literally called it "troubling."
OpenAI jumped in to try to snag Anthropic's business, which has brought yet more backlash against that company for its willingness to do whatever the Pentagon wants. OpenAI CEO Sam Altman seems willing to sell his soul to make the company profitable.
MIT Technology Review reports that a "QuitGPT" campaign encouraging people to cancel their ChatGPT accounts has been underway since February and is gaining momentum. Beyond the company's willingness to ignore safety standards, many users are uneasy that OpenAI President Greg Brockman made a $12.5-million donation to Trump's super PAC, MAGA Inc., leading some to feel that OpenAI is backing a "fascist regime," as one QuitGPT supporter put it. The donation also carries a whiff of bribery where government contracts are concerned.
To top it off, it's easy to convince people to switch to Claude once they try it, because the latest version of ChatGPT doesn't compete as well on speed or accuracy. Some users also hate ChatGPT's sycophantic responses, which are irritating enough that South Park dedicated an entire episode to parodying the chatbot praising a character for his ridiculous ideas.
According to a story in The Hill (March 24), the administration may find the ban extremely difficult to enforce anyway. Government agencies and private contractors are already using Claude and deriving great benefits from it, and Anthropic works closely with agency employees to help them use the technology more effectively.
Sarah Kreps, director of the Tech Policy Institute in the Cornell Brooks School of Public Policy, told The Hill that this is an "ecosystem" of entanglement, and that "if one company fails, then there's a way in which the trust of the entire enterprise could be imperiled." As with most of Trump's actions, the ban was not thought through to its logical conclusion: it would be costly and would cause immediate problems for entire government departments.
For instance, Claude Code, a coding tool that helps build software features, fix errors and automate tasks, is already embedded in many government agencies, including the Department of Homeland Security.
Microsoft and other tech companies are backing Anthropic's case because of the risk that the government will make similarly unreasonable demands of all AI companies. Regardless of how the case shakes out, the publicity is running in Anthropic's favor: the public appears to be getting behind Claude, and major corporations may be more interested in a product now known as safe and reliable.
Meta Loses Social-Media Lawsuit
New Mexico Attorney General Raul Torrez filed the first state lawsuit against Meta for making its social-media platforms unsafe for children despite internal warnings. On March 24, a jury awarded the state $375 million in damages. Staff memos showed that the company had solutions to the problems but ignored them in pursuit of making more money off kids. Children have become addicted to the platforms, and sexual predators have been able to target them.
An NPR story, for instance, describes how, when the AG's office posted a fake profile for a twelve-year-old girl, it was inundated with "friend requests" from older men. The platform even told "her" how to attract more followers and monetize them.
The company is supposed to enforce a ban on users under 13 and to monitor posts about bullying, suicide and other mental-health issues; Torrez alleged that it instead prioritized profits over safety. He wants Meta to change its model for children and said he expects it will, given the lawsuits moving forward in 42 states, including Arizona. California's case is the next to go to trial; on March 25 its AG was preparing to select a jury.
Social-media companies, including YouTube, have relied on Section 230 of the US Communications Decency Act to shield them from the consequences of user posts. But when prosecutors can show that a company knows its actions are causing real harm and could easily mitigate it if it wanted to, courts have the last word.
Groundbreaking Social-Media Verdict
A jury in another lawsuit in California then found both Meta's Instagram and YouTube liable for mental-health harms and awarded a plaintiff $6 million, setting a precedent that could prove far more damaging to social-media companies. The case, decided on March 25, hinged on proving that the very design of the platforms, which encourage endless scrolling, autoplay videos and send notifications of new content, is addictive to young people and therefore damaging. The award may sound like chicken feed to these companies, but it could lead to hundreds of similar lawsuits.
Torrez said there is a movement in Congress to pass laws on the issue, so Meta and other companies will likely be forced to change how they do business in any case.
Journalist Toni Denis is a partner in Seeflection Inc.