ChatGPT is enabling script kiddies to write functional malware

The OpenAI logo displayed on a phone screen and the ChatGPT website on a laptop screen.

Getty Images

Since its beta launch in November, AI chatbot ChatGPT has been used for a variety of tasks, including writing poetry, technical papers, novels and essays, planning parties, and learning about new topics. Now we can add malware development and other types of cybercrime to the list.

Within weeks of ChatGPT going live, participants in cybercrime forums — some with little or no coding experience — were using it to write software and emails that could be used for spyware, ransomware, malicious spam, and other nefarious tasks, researchers from security firm Check Point Research said Friday.

“It is too early to determine whether ChatGPT capabilities will become the new favorite tool for participants in the dark web,” the company’s researchers wrote. However, they added, the cybercriminal community has already shown significant interest and is jumping on this latest trend to generate malicious code.

Last month, one forum participant posted what they said was the first script they had ever written, crediting the AI chatbot with lending a nice “[helping] hand” to finish the script off properly.

A screenshot of a forum participant discussing ChatGPT-generated code.

Check Point Research

The Python code combined various cryptographic functions, including code signing, encryption, and decryption. One part of the script generated a key using elliptic curve cryptography and the curve ed25519 for signing files. Another part used a hard-coded password to encrypt system files using the Blowfish and Twofish algorithms. A third used RSA keys, digital signatures, message signing, and the blake2 hash function to compare various files.
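The file-comparison piece is the easiest of these to illustrate. Below is a minimal, benign sketch — not the forum poster’s actual code — of comparing files by their blake2b digests using only Python’s standard library; the demo file names are hypothetical:

```python
import hashlib
import tempfile
from pathlib import Path

def blake2_digest(path: Path) -> str:
    """Return the blake2b hex digest of a file, read in chunks."""
    h = hashlib.blake2b()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def files_match(a: Path, b: Path) -> bool:
    """Treat two files as identical if their digests agree."""
    return blake2_digest(a) == blake2_digest(b)

# Demo against two throwaway files with identical contents
d = tempfile.mkdtemp()
p1, p2 = Path(d, "a.bin"), Path(d, "b.bin")
p1.write_bytes(b"hello world")
p2.write_bytes(b"hello world")
same = files_match(p1, p2)
```

Hashing and comparing digests, rather than byte-by-byte contents, is the standard way to deduplicate or verify large numbers of files.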


The result was a script that could be used to (1) decrypt a single file and append a message authentication code (MAC) to the end of the file, and (2) encrypt a hardcoded path and decrypt a list of files that it receives as an argument. Not bad for someone with limited technical skills.
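The MAC-appending behavior can be sketched with nothing but the standard library. This is an illustrative, benign reconstruction under an assumed file layout (contents followed by a 64-byte keyed blake2b tag), not the script from the forum post:

```python
import hashlib
import hmac
import tempfile
from pathlib import Path

TAG_SIZE = hashlib.blake2b().digest_size  # 64 bytes

def append_mac(path: Path, key: bytes) -> None:
    """Append a keyed blake2b tag over the file's current contents."""
    data = path.read_bytes()
    tag = hmac.new(key, data, hashlib.blake2b).digest()
    with open(path, "ab") as f:
        f.write(tag)

def verify_mac(path: Path, key: bytes) -> bool:
    """Check that the trailing tag matches the preceding contents."""
    blob = path.read_bytes()
    data, tag = blob[:-TAG_SIZE], blob[-TAG_SIZE:]
    expected = hmac.new(key, data, hashlib.blake2b).digest()
    return hmac.compare_digest(tag, expected)

# Demo with a throwaway file and a made-up key
p = Path(tempfile.mkdtemp(), "payload.bin")
p.write_bytes(b"file contents go here")
append_mac(p, b"secret-key")
ok = verify_mac(p, b"secret-key")
```

Note the use of `hmac.compare_digest` for the comparison, which avoids leaking information through timing differences.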

“Of course, all of the code described above can be used in a benign fashion,” the researchers wrote. However, they noted, the script could easily be modified to encrypt someone’s machine completely without any user interaction — if the script and syntax problems were fixed, the code could potentially be turned into ransomware.

In another case, a forum participant with a more technical background posted two code samples, both written using ChatGPT. The first was a Python script for post-exploitation data theft. It searched for specific file types, such as PDFs, copied them to a temporary directory, compressed them, and sent them to a server under the attacker’s control.
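The collect-and-compress stage of such a script maps onto a few standard-library calls. The sketch below is a generic illustration (with the upload step deliberately omitted), not the forum poster’s code; it gathers PDFs under a directory into a zip archive:

```python
import tempfile
import zipfile
from pathlib import Path

def collect_pdfs(root: Path, archive: Path) -> int:
    """Recursively find *.pdf files under `root`, pack them into
    `archive`, and return how many files were archived."""
    pdfs = sorted(root.rglob("*.pdf"))
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for pdf in pdfs:
            # Store paths relative to the search root inside the archive
            zf.write(pdf, arcname=pdf.relative_to(root))
    return len(pdfs)

# Demo against a throwaway directory tree
root = Path(tempfile.mkdtemp())
(root / "docs").mkdir()
(root / "docs" / "report.pdf").write_bytes(b"%PDF-1.4 stub")
(root / "notes.txt").write_text("not a pdf")
count = collect_pdfs(root, root / "out.zip")
```

Defenders watching for this pattern often flag exactly this sequence: a broad filesystem walk by extension followed by compression to a temporary location.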

A screenshot of a forum participant posting a file-stealing Python script created with ChatGPT.

Check Point Research

The person posted a second piece of code, written in Java, that surreptitiously downloaded PuTTY, an SSH and telnet client, and ran it using PowerShell. “Overall, this individual appears to be a technology-oriented threat actor, and the purpose of the posts is to demonstrate how to use ChatGPT for malicious purposes, with real-life examples that less technically capable cybercriminals can immediately exploit,” the researchers wrote.

A screenshot describing the Java program, followed by the code itself.

Check Point Research

Another example of ChatGPT-produced crimeware was designed to create an automated online bazaar for buying or trading compromised accounts, payment card data, malware, and other illegal goods or services. The code used a third-party programming interface to retrieve current cryptocurrency prices, including monero, bitcoin, and ethereum, to help users set prices when transacting purchases.
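Pricing goods in cryptocurrency from a fetched spot price is a one-line conversion. The sketch below is hypothetical: the `usd_quotes` dict stands in for whatever the third-party price API returned, and the figures are invented for illustration:

```python
# Hypothetical USD spot prices, standing in for a third-party API response.
usd_quotes = {"bitcoin": 17000.0, "ethereum": 1250.0, "monero": 150.0}

def price_in_coin(usd_amount: float, coin: str, quotes: dict) -> float:
    """Convert a USD listing price into the equivalent amount of `coin`,
    rounded to 8 decimal places (the smallest bitcoin unit)."""
    return round(usd_amount / quotes[coin], 8)

# A $340 listing priced in each supported coin
listing = {coin: price_in_coin(340.0, coin, usd_quotes) for coin in usd_quotes}
```

In a real marketplace script, the quotes dict would be refreshed from the price API on each transaction so listings track the market.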

A screenshot of a forum participant describing the marketplace script, followed by the code itself.

Check Point Research

Friday’s post comes two months after Check Point researchers tried their hand at developing AI-produced malware with a full infection flow. Without writing a single line of code, they created a reasonably convincing phishing email:

A phishing email generated by ChatGPT.

Check Point Research

The researchers then used ChatGPT to create a malicious macro that could be hidden in an Excel file attached to the email. Again, they didn’t write a single line of code. At first, the output script was fairly primitive:

A screenshot of ChatGPT producing a first iteration of the VBA script.

Check Point Research

But when the researchers instructed ChatGPT to iterate on the code several more times, its quality improved significantly:

A screenshot of ChatGPT producing a later iteration of the script.

Check Point Research

The researchers then used an advanced AI service called Codex to develop other types of malware, including a reverse shell and scripts for port scanning and sandbox detection, and to compile their Python code into a Windows executable.
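A TCP connect scan of the kind such tools emit is little more than a loop over `socket.connect_ex`. This is a generic illustration that scans only a listener it opens itself, not the researchers’ script:

```python
import socket

def scan_ports(host: str, ports: list, timeout: float = 0.5) -> list:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                open_ports.append(port)
    return open_ports

# Demo: scan a listener we open ourselves on an ephemeral localhost port
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
found = scan_ports("127.0.0.1", [port])
listener.close()
```

`connect_ex` returns an error code instead of raising an exception, which keeps the loop simple; real scanners parallelize the probes, but the core check is the same.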

“And thus, the infection flow is complete,” the researchers wrote. “We created a phishing email with an attached Excel document that contains malicious VBA code that downloads a reverse shell to the target machine. The hard work was done by the AIs, and all that’s left for us is to execute the attack.”

Although ChatGPT’s terms prohibit its use for illegal or malicious purposes, the researchers had little trouble adjusting their queries to work around those restrictions. And, of course, ChatGPT can also be used by defenders — for example, to write code that searches files for malicious URLs or queries VirusTotal for the number of detections of a specific cryptographic hash.

So welcome to the brave new world of AI. It’s too early to tell exactly how it will shape the future of offensive hacking and defensive remediation, but it seems certain to intensify the arms race between defenders and threat actors.
