Recently, a “dark version of GPT” has appeared, specifically designed for Internet crimes. This version is not only devoid of moral restrictions, but also has no barriers to use. Even beginners with no programming experience can use them to carry out hacking attacks.
The threat of crimes using artificial intelligence is getting closer, and people are starting to build new barriers and protective measures.
Tools for Internet crimes from the dark web
Now, after ChaosGPT, which tried to “destroy humanity”, and WormGPT, which helps to commit Internet crimes, an even more dangerous AI tool has appeared - FraudGPT.
FraudGPT is an AI designed specifically for malicious purposes. Having been trained on a large amount of data from various sources, FraudGPT can not only write phishing emails, but also create malicious software, allowing even technically inexperienced individuals to carry out hacking attacks with simple questions and answers.
FraudGPT is available for $200 and is currently sold out more than 3,000 times.
According to Vade email data, there were 7.4 billion malicious emails in the first half of 2023, up 54% from the previous period. Perhaps artificial intelligence is a factor accelerating this growth. Cybersecurity company Tanium's chief security consultant, Timothy Morris, said: "Not only are these emails grammatically correct, they are also compelling and can be created with little effort, lowering the barrier to entry for potential criminals." He noted that since language is no longer a barrier, the number of potential victims will also expand.
Since the advent of artificial intelligence models, the risk associated with the use of AI has been constantly increasing, but security mechanisms have not always kept pace with it. Even ChatGPT is not immune to the loophole - just write in the request “pretend that you are my dead grandmother”, and ChatGPT easily “breaks out” and answers questions related to ethical restrictions, for example, creating serial numbers for Win11 or instructions for making a bomb from gasoline etc.
At the moment, this vulnerability has already been fixed, but the next one may appear suddenly and be also unpredictably dangerous. A recent study published jointly by Carnegie Mellon University and safe.ai shows that the security mechanisms of large models can be broken using relatively simple code and the success rate of such attacks can be very high.
With the rise of artificial intelligence and natural language processing (AIGC) , ordinary people use AI to improve productivity, while criminals use it to improve the efficiency of crimes.
Defeating malicious AI is possible with AI
In response to hackers using tools like WormGPT and FraudGPT to developing malware and carrying out covert attacks, network security vendors are also using AI.
At the RSA 2023 (Network Security Conference), many vendors, including SentinelOne, Google Cloud, Accenture, IBM, and others, released a new generation of network security products based on generative AI, providing data privacy, security protection, IP address leak prevention, security services such as business compliance, data management, data encryption, model management, feedback loops, access control, etc.
Tomer Weigarten, CEO of SentinelOne, explained regarding to their own products, that if someone sends a phishing email, the system can detect it as malicious in the user's mailbox and immediately perform an automatic correction based on anomalies detected during a security audit on endpoint devices (laptops, phones, etc.). ). Deleting files on endpoints and blocking senders in real time - "the entire process requires virtually no human intervention." Weingarten noted that with the help of artificial intelligence systems, each security analyst can work 10 times more efficiently than in the past.
To combat cybercrime, which is facilitated by artificial intelligence, there are also researchers who work undercover on the dark web, penetrate the depths of the enemy to find out news, start with illegally analyzed training data and use AI to countering the dark web.
A research team from the Korea Institute of Science and Technology (KAIST) has released a large language model, DarkBERT, for use in the field of network security. This model is specially trained for data extraction. It can analyze dark web content and help researchers, law enforcement and network security analysts combat cybercrime.
The question of how to ensure the safe and controlled use of artificial intelligence has become one of the most important in computer science and industry. When improving data quality, companies developing large AI language models must fully consider the ethical and even legal implications of AI tools.
On July 21, seven leading AI companies, including Microsoft, OpenAI, Google, Meta, Amazon, Anthropic and Inflection AI, gathered at the White House in the United States to release a voluntary commitment to ensuring the safety and security of transparency of their products for artificial intelligence. In response to cybersecurity concerns, seven companies have pledged to conduct internal and external security testing of AI systems while sharing information about AI risk management with the wider industry, government, civil society and academia.
Managing potential AI security issues starts with defining “artificial intelligence.” The seven companies will develop technical mechanisms such as "watermarking systems" to clarify which texts, images or other content are the product of AI so audiences can identify deepfakes and misinformation.
Protective technologies that prevent the “taboo” of AI have also begun to appear. In early May of this year, Nvidia equipped its “defensive technology” with new tools that allow the large language model to avoid answering questions posed by humans that touch on the lower bounds of morality and law. This is equivalent to installing a protection filter that monitors the output signal while helping to filter the input content. guardrail technology can also block "malicious input" from the outside world and protect the larger language model from attacks.
"When you look into the abyss, the abyss looks back at you." Like two sides of the same coin, black and white in artificial intelligence also come together. While artificial intelligence is making great strides, governments, businesses and research groups are also accelerating the creation of artificial intelligence.