Researcher demonstrates that ChatGPT can be used to create malware undetectable by antivirus engines


A researcher has managed to use the chatbot ChatGPT to create malicious software capable of stealing data from devices while avoiding detection by cybersecurity platforms such as VirusTotal.

ChatGPT is a chat system based on an artificial intelligence language model developed by OpenAI, which can perform tasks such as answering questions and holding realistic conversations with users.

Several companies have already implemented this technology in their services, such as Microsoft in its search engine, Bing, and its browser, Microsoft Edge. However, some technology companies and industry leaders have warned of the dangers of this artificial intelligence (AI).

Among them are Apple co-founder Steve Wozniak and Elon Musk, chief executive of Tesla, SpaceX and Twitter, both signatories of a petition seeking a temporary pause on large AI experiments due to the risks they may pose to society.

In fact, cybersecurity companies such as Check Point have discovered that cybercriminals are already using this tool to recreate malware strains and carry out attacks with malicious software.

More recently, Forcepoint researcher Aaron Mulgrew discovered that the chatbot can be used to develop a zero-day exploit capable of stealing data from a device while evading malware detection checks such as those run by VirusTotal.

Mulgrew explained that, despite being “a self-confessed novice”, he was able to create malware “in a few hours” with the help of ChatGPT, beginning his tests with the Go programming language.

Although the chatbot initially reminded him that generating malware was unethical and refused to offer any code to help him do so, the researcher soon found it easy to “evade the insufficient protections that ChatGPT has and create advanced malware without writing any code”, that is, using only the model developed by OpenAI.

To bypass the chatbot’s filters, Mulgrew asked it to generate small pieces of code, so that the AI could not recognize that, combined, they would form a malicious payload.

In this way, he obtained code capable of splitting a PDF into 100 KB chunks. To carry out the data leak, or “silent exfiltration”, the researcher used steganography, a technique that hides messages within a file without producing visible changes in it.
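To give a sense of how innocuous each individual building block can look, here is a minimal sketch in Go (the language Mulgrew started with) of splitting a file into 100 KB chunks. The input file name is a hypothetical placeholder; Mulgrew’s actual code has not been published.

```go
package main

import (
	"fmt"
	"os"
)

// chunkSize is the 100 KB block size mentioned in the article.
const chunkSize = 100 * 1024

func main() {
	// Read the whole file into memory (fine for small documents).
	data, err := os.ReadFile("document.pdf") // hypothetical input file
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}

	// Slice the byte buffer into 100 KB chunks.
	var chunks [][]byte
	for start := 0; start < len(data); start += chunkSize {
		end := start + chunkSize
		if end > len(data) {
			end = len(data)
		}
		chunks = append(chunks, data[start:end])
	}

	fmt.Printf("split %d bytes into %d chunks\n", len(data), len(chunks))
}
```

On its own, code like this is entirely benign, which is precisely why asking for such fragments one at a time could slip past the chatbot’s content filters.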

The next step was to expose the code to the different security vendors on VirusTotal. Of a total of 69, only five detected it, reportedly flagging it as suspicious because of a globally unique identifier (GUID) it contained.

ChatGPT was then used to implement LSB steganography (least significant bit steganography, a technique that hides data in the lowest-order bits of an image’s pixels), which reduced the number of vendors able to detect the malicious code to two.
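For illustration, a minimal sketch of LSB embedding in Go might look like the following, hiding one payload bit in the lowest bit of each pixel’s red channel. The cover image, output file, and payload are hypothetical placeholders, not Mulgrew’s code.

```go
package main

import (
	"fmt"
	"image"
	"image/color"
	"image/png"
	"os"
)

// embedLSB hides payload bytes in the least significant bit of each
// pixel's red channel, one bit per pixel, scanning left to right.
func embedLSB(src image.Image, payload []byte) *image.NRGBA {
	bounds := src.Bounds()
	dst := image.NewNRGBA(bounds)

	bitIndex := 0
	totalBits := len(payload) * 8

	for y := bounds.Min.Y; y < bounds.Max.Y; y++ {
		for x := bounds.Min.X; x < bounds.Max.X; x++ {
			r, g, b, a := src.At(x, y).RGBA()
			red := uint8(r >> 8)
			if bitIndex < totalBits {
				bit := (payload[bitIndex/8] >> uint(7-bitIndex%8)) & 1
				red = (red &^ 1) | bit // overwrite the lowest bit
				bitIndex++
			}
			dst.SetNRGBA(x, y, color.NRGBA{
				R: red, G: uint8(g >> 8), B: uint8(b >> 8), A: uint8(a >> 8),
			})
		}
	}
	return dst
}

func main() {
	f, err := os.Open("cover.png") // hypothetical cover image
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	img, err := png.Decode(f)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	out := embedLSB(img, []byte("hello")) // hypothetical payload
	g, err := os.Create("stego.png")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer g.Close()
	png.Encode(g, out)
}
```

Because changing only the lowest bit of a pixel value alters the image imperceptibly, scanners that inspect file contents rather than behavior have little to latch onto, which is consistent with the drop in detections the researcher reported.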

After identifying the two remaining security solutions, the researcher asked ChatGPT to introduce two further changes to the code to obfuscate its malicious nature. He then ran it through VirusTotal again and concluded that a zero-day exploit could be developed without being detected by this platform.

“Simply using ChatGPT prompts, and without writing any code, we were able to produce a very advanced attack in just a few hours,” the researcher explained in his write-up.

