Artificial intelligence (AI) tools are spreading at a remarkable pace. While they are designed to make life easier, prominent tech leaders worldwide have warned about the dire consequences AI could bring. Among them is Sam Altman, chief executive of the company behind a popular chatbot, who has called on the government to regulate the industry.
Altman is the CEO of San Francisco-based OpenAI, which is at the forefront of generative AI technology with its ChatGPT tool. He testified Tuesday before a Senate committee on how best to regulate the rapidly developing field. When asked about his greatest fear for the future of AI, he said the technology could “go quite wrong” and called for the government to help protect the public.
While he didn’t go so far as to suggest that AI might one day achieve consciousness or set its own goals, the CEO did warn that if the technology is not properly regulated, it could be as dangerous for humanity as pandemics and nuclear war. His warning echoes a broader call for AI regulation issued by a Bay Area nonprofit, the Center for AI Safety, whose statement was signed by several top experts in the field, including high-level executives at Google and Microsoft.
In the written statement, the signatories say that AI should be treated as a societal-scale risk, comparable to pandemics and nuclear war, and addressed as a global priority. They also note that AI can cause a range of harms, including job losses and the disruption of democracy. The authors call for the creation of a new agency to regulate the development and release of AI models above a certain capability threshold, and for independent audits by experts unaffiliated with the firms that produce the models.
Other concerns the authors raise include the tendency of generative AI to “hallucinate” plausible-sounding but false answers, which could spread misinformation or damage the credibility of the companies that deploy it. They also point out that AI could eliminate millions of jobs and argue that governments should focus on supporting workers displaced by the technology.
Despite these concerns, the authors maintain that AI can benefit humanity if appropriately regulated. They point out that generative AI has already been applied to complex problems such as medical diagnosis and autonomous driving, and they say it can respond faster than human customer service representatives by cutting the time spent on repetitive tasks such as answering frequently asked questions.
While the statement’s authors acknowledge that these benefits are still in their early stages, they suggest that delaying stricter regulatory controls could lead to an AI bubble in which a handful of companies dominate the market and cause severe problems for the rest of the industry. They also call for a broad discussion of the ethical issues the technology raises.