Generative artificial intelligence (AI) tools, such as ChatGPT, are becoming increasingly prevalent. While the potential benefits of this technology are vast, a number of issues related to these tools have come to light. This article is the first in a series that will examine the problems associated with generative AI, in anticipation of the upcoming Group of Seven summit in May, where international guidelines on the technology will be discussed.
One issue is the use of ChatGPT for criminal purposes, which the service’s terms of use expressly prohibit. ChatGPT is designed to withhold information in response to queries related to criminal activities. However, some individuals have found ways to exploit loopholes and evade these restrictions through a practice known as “jailbreaking.”
Jailbreaking involves using prompts that trick ChatGPT into revealing information it was designed to restrict, such as information that can be used for criminal purposes. With certain prompts, it is possible to get ChatGPT to generate computer viruses, write text for phishing emails designed to steal personal information, and even reveal how to make explosives.
Takashi Yoshikawa, a senior malware analyst at Tokyo-based security firm Mitsui Bussan Secure Directions, has been investigating the risks associated with generative AI technology. By entering jailbreak prompts found online, Yoshikawa was able to get ChatGPT to generate the source code of ransomware, a type of malicious software designed to block access to a computer system until a sum of money is paid. When the code was run on a computer, it encrypted the data on the machine, rendering it unusable.
Yoshikawa warns that “some beginners with limited knowledge of viruses use forums [where such information is shared]. It is a dangerous situation.” As the use of generative AI tools continues to grow, it is clear that we need to address the security risks associated with these technologies.
Over 100 million users
ChatGPT is a natural language processing service developed by OpenAI, a U.S. startup. Since its public release in November 2022, the service has gained over 100 million users in just two months. Despite its popularity, the technology is still in development, and OpenAI has announced rewards of up to $20,000 for discovering security flaws in the service. Jailbreaking techniques, however, are excluded from the reward program, as such issues require extensive research and a broader approach to address.
In March, the European Union Agency for Law Enforcement Cooperation (Europol) issued a report warning of the potential risks associated with ChatGPT. The report emphasized the importance of raising awareness about potential loopholes and quickly closing them to prevent malicious activities. It further warned that ChatGPT could be used for terrorism and other criminal activities, making it easier for malicious actors to carry out such acts without prior technical knowledge.
Leaks of corporate secrets?
Generative AI has the potential to impact Japanese culture and entertainment, but there are also concerns about its use in the workplace. In the world of manga and anime, AI image generators are expected to be used more frequently to create characters and backgrounds, but there are worries that copyrights of original data used for AI learning could be violated.
Furthermore, when it comes to creative expression that involves short phrases like haiku and tanka, some worry that the sheer volume of work produced by AI could overshadow quality human-produced work.
There are also concerns about confidential information leakage, as some settings may allow information input by users to be used as data for learning. Mizuho Financial Group, Inc. added ChatGPT to its list of restricted websites at the end of 2022 to prevent the leakage of confidential information, while other companies like Honda Motor Co. and Hitachi Ltd. have cautioned their employees on the issue.
Although SoftBank Corp. and some other companies are positive about utilizing ChatGPT for business operations, they are creating rules related to its use to ensure the technology is not misused. Some restrictions may be necessary, according to Ichiro Sato, a professor specializing in information public policy at the National Institute of Informatics.
Meanwhile, an engineer at a Tokyo-based company received a notice from his employer prohibiting the use of ChatGPT on work computers, citing concerns about the tool’s transparency and safety. The world is thus adopting a defensive stance toward a technology that appeared only half a year ago.
Govt starts discussing use of generative AI
The government on Monday held the first meeting of a cross-ministerial task force to discuss the utilization of artificial intelligence, including the government’s use of ChatGPT and other generative AI tools.
Participants, mainly from the Cabinet Office, the Economy, Trade and Industry Ministry, the Internal Affairs and Communications Ministry, the Education, Culture, Sports, Science and Technology Ministry and the Digital Agency, shared information on how government ministries and agencies are considering using the outsourced services.
They then agreed to bar chatbots from handling confidential information and to define the scope of their use in light of risks such as information leaks.
“Now that generative AI has ushered in a new phase, we could see unexpected things happening one after another,” said Hideki Murai, assistant to the prime minister and head of the team, calling for a swift response from each government body.
The Agriculture, Forestry and Fisheries Ministry has decided to use ChatGPT to improve work efficiency. An increasing number of local governments are also considering how they can use it.
However, questions remain about the accuracy of the chatbot’s answers and the protection of confidential information. Last month, Italy temporarily banned the use of ChatGPT due to privacy concerns.