Many people are worried about what AI is doing to their jobs and the future of human society. And rightly so: There’s no doubt that, eventually, our computers will take over much of what we do for a living. That’s why governments worldwide are considering how to mitigate the dangers of emerging technology.
Last year, the White House’s Office of Science and Technology Policy released its Blueprint for an “AI Bill of Rights,” which, among other things, calls for protecting consumers from algorithmic discrimination. In late April, top officials from the Federal Trade Commission, the Department of Justice, the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission committed to applying existing laws prohibiting deceptive trade practices and discrimination to the new threats presented by AI.
As these discussions progress, Senate Majority Leader Chuck Schumer is calling for “comprehensive legislation” to advance and ensure safeguards on one of the most consequential technologies in the world: artificial intelligence. But according to sources familiar with his efforts to get ahead of the issue, Schumer’s framework will fall short of addressing the most immediate risks of AI.
A prime example is the soaring popularity of so-called generative AI, which uses data to create new content like ChatGPT’s human-sounding prose. Whether it’s answering Stack Overflow questions, writing articles or essays for the New York Times, or creating jokes for your favorite sitcom, generative AI is changing how we communicate with the machines that increasingly dominate our lives.
Ask a generative model almost any question and you’re likely to get an answer, but those answers are often incorrect or misleading and exhibit a glaring lack of context. That’s because generative AI doesn’t have the self-awareness to admit it’s wrong; it merrily Dunning-Krugers its way along, pumping out a stream of words as if there were no other choice.
The other risk of generative AI is that it might be a precursor to the automation of the creative process, in which the human factor that makes the world interesting is replaced by a machine that can produce any text from an endless pool of possibilities. This could have catastrophic implications for humanity, particularly in art and other fields in which creativity is a defining trait.
The plan Schumer’s office is putting together will advance “four guardrails” to deliver transparent, responsible AI without stifling innovation or preventing the technology from being developed for military purposes. The first three of those guardrails—Who, Where, and How—will focus on requiring companies to make their AI technologies available for independent testing and review by experts. The fourth, Protect, will aim to align these systems with American values and reduce the risk of harm. But sources involved in the discussions say they fear that lawmakers, in their rush to put forward a solution to this new threat, are skipping over the most immediate concerns of AI ethicists and civil rights advocates.