Can artificial intelligence be objectively good or bad? Many people have a negative image of AI because of speculation about its potential to think for itself. Hollywood plots in the vein of The Terminator and The Matrix, built on ‘humanity versus machines,’ have helped ingrain that perception.
Large language models (LLMs) like ChatGPT, however, are only as ‘evil’ as we are, since they are trained on data scraped from the internet. OpenAI trained its early model GPT-2 on largely unfiltered web data, and the biases in that data surfaced in its output. The same dynamic has played out across AI products, including hiring tools that discriminated against women and minorities. Newer models trained on better-curated data are correcting those early mistakes, and legal challenges and government oversight are forcing that change.
The Biden administration recently issued AI guidelines and announced plans to establish a regulatory structure governing AI across all areas of life. This comes as the European Union, as of March 18, codified AI regulations of its own that will force most American companies to adjust how they operate to maintain access to the European market.
President Biden’s executive order on AI, issued at the end of October, established an array of new standards, including requirements to label AI-generated content and to be more transparent about models and how they are created. Developers must share safety test results and ensure that their models won’t threaten national security. Because they rest on an executive order, these regulations can be altered or overturned by subsequent administrations, of course, as can the oversight infrastructure the government is putting in place.
To codify these rules securely, and most importantly to fund them, Congress must pass sweeping legislation. Unfortunately, the current Congress has shown little interest in tech regulation, owing both to the politicization of nearly everything and to a general lack of understanding of AI’s importance within a body made up predominantly of older members. Meanwhile, the technology is advancing without those legislative checks and balances.
The European Union has put together the most comprehensive AI regulations to date. Its Artificial Intelligence Act requires AI products to demonstrate compliance with the law before the public can access them. The EU is concerned about “AI hallucinations (when models make errors and invent things), the viral dissemination of deepfakes, and the automated manipulation of AI that could mislead voters in elections.” While some tech companies are unhappy about this, many researchers advocate for stronger protections.
Despite the media hype and hand-wringing over potential harms, most people won’t be able to avoid AI applications.
AI is a far more powerful tool than anything we’ve had access to before. It can save organizations money, time and effort on a monumental scale. Within a few years, large companies that don’t get on board with AI will find themselves unable to compete with those that take advantage of it. Workers who don’t learn how to use it may eventually be shut out of the office workplace.
That’s why it’s in everyone’s best interest to ensure that AI has ethical guardrails and that regulations are enforced to protect users and the public at large. The world already has an example of what can go wrong when a technology is left to proliferate without regulation: social media. The disasters of the 2000s and 2010s ranged from the attempted genocide of the Rohingya in Myanmar to suicides by American teens to elections around the world influenced by propaganda and lies.
Before the potential development of true artificial general intelligence, and the runaway leap some in the industry call “the Singularity,” we are in a position to make sure that AI remains a miraculous tool. It has already generated revolutionary ideas and crunched data to produce complex solutions to hard problems, and it is on the cusp of enabling many lifesaving treatments and medicines.
If governments divert even a small fraction of the effort and money they put into developing weapons toward developing AI protections instead, we’ll be in a position to make sure that AI never becomes a threat to us.
AI itself is not ‘bad’ — it’s the bad human actors we should worry about. That’s why AI regulation should be a top priority now.