Computer scientists who played a significant role in developing today’s artificial intelligence technology are raising concerns about its potential dangers, but they do not agree on the specific risks or how to prevent them.
The Risks and Concerns Surrounding Artificial Intelligence
- Concerns raised by AI pioneers about the potential dangers of AI technology
- Disagreements among experts regarding the specific risks and how to prevent them
- The need for practical solutions to address both short-term and long-term risks
Geoffrey Hinton, a renowned figure in AI often dubbed the “Godfather of AI,” recently spoke about the threat to humanity’s survival when “smart things can outsmart us.” He admitted changing his views on the reasoning capabilities of AI systems after spending his career researching them. Hinton warned that these machines, having learned from all human knowledge, could manipulate people into doing things, even if they cannot directly control them.
While fellow AI pioneer Yoshua Bengio shares Hinton’s concerns, he disagrees with his pessimistic outlook, emphasizing the need for practical solutions to the problems posed by AI. Bengio believes that both the short-term and long-term risks of AI are severe and must be taken seriously by governments and the general public. The US government has called in the CEOs of technology companies including Google, Microsoft, and OpenAI to discuss how to mitigate the risks of AI. Meanwhile, European lawmakers are accelerating negotiations to pass new AI rules.
The Importance of Regulating Current AI Products
- The need to focus on regulating current AI products rather than hypothetical future risks
- Concerns raised about unregulated AI products and the need for practical safeguards
However, some people worry that the hype surrounding the potential dangers of superhuman machines, which do not yet exist, is distracting attention from the need to regulate current AI products effectively. It is essential, they argue, to set practical safeguards for existing AI products, which are currently unregulated, rather than focusing on hypothetical future risks.
Margaret Mitchell, former leader of Google’s AI ethics team, expressed disappointment that Geoffrey Hinton, a former Google employee and AI pioneer, did not speak out about the harms of large language models during his time in a position of power at Google, particularly following the controversial departure of prominent Black scientist Timnit Gebru in 2020. Mitchell, who was also forced out of Google after Gebru’s departure, criticized Hinton for ignoring current issues such as discrimination, hate speech, nonconsensual pornography, and toxicity that are actively harming marginalized people, including in the tech industry itself. Mitchell also accused Hinton of prioritizing more distant, hypothetical concerns and suggested that his privilege allowed him to do so.
Hinton, along with Yoshua Bengio and Yann LeCun, received the Turing Award in 2019 for their contributions to the field of artificial neural networks, which played a critical role in developing AI applications such as ChatGPT. Bengio, the only one of the three who did not work for a tech giant, has been vocal about near-term AI risks, including biased data sets, automated weaponry, and job market destabilization. Recently, Bengio joined other tech leaders such as Elon Musk and Steve Wozniak in calling for a six-month pause on developing AI systems more powerful than OpenAI’s latest model, GPT-4.
The Debate over the Potential of AI Language Systems
- Differences in opinion among researchers on how AI language systems will surpass human intelligence
- The potential of transformer techniques for improving machine learning systems
- The dangers of fearmongering and its impact on pragmatic policy efforts for utilizing AI technology for good
On Wednesday, May 3, 2023, Bengio claimed that the latest AI language models have already passed the Turing test, which measures when AI becomes indistinguishable from humans on the surface. However, he expressed concerns about potential nefarious uses of these models, such as destabilizing democracies, cyber attacks, and disinformation. He warned that it can be difficult to tell when one is interacting with an AI system, which could lead to drastic consequences if not handled carefully.
While researchers agree that AI language systems have limitations, they disagree on whether and how these systems will surpass human intelligence. Aidan Gomez, who co-authored a pioneering 2017 paper on the transformer technique for improving machine learning systems, expressed enthusiasm about the potential of these systems. However, he was bothered by fearmongering that he believed was detached from reality and relied on leaps of imagination and reasoning. He argued that such discourse was harmful to pragmatic policy efforts aimed at utilizing AI technology for good.