Horrified, Researchers Find ChatGPT Can Create Sophisticated Malware

17 April 2023


ChatGPT, an advanced generative AI chatbot, is increasingly used and integrated into everyday life. However, the potential for misuse of the tool is growing, because it can be coaxed into creating dangerous malware that goes undetected by antivirus software.

Forcepoint security researcher Aaron Mulgrew demonstrated this by asking the AI tool to create a zero-day exploit capable of stealing victim data. Surprisingly, the ChatGPT-generated malware evaded detection by every antivirus engine in the VirusTotal catalog.
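For context, detection coverage like this is typically measured by looking up a sample's hash on VirusTotal, which aggregates verdicts from dozens of antivirus engines. Below is a minimal, illustrative sketch of such a lookup against VirusTotal's public v3 file-report API; the API key and file hash are placeholders, and the field names reflect the v3 response format.

    import requests

    API_KEY = "YOUR_VT_API_KEY"        # placeholder; a free VirusTotal account provides one
    FILE_HASH = "<sha256-of-sample>"   # placeholder hash of the file being checked

    # VirusTotal API v3: fetch the existing analysis report for a file by its hash
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{FILE_HASH}",
        headers={"x-apikey": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()

    # last_analysis_stats holds per-verdict engine counts (malicious, undetected, ...)
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print(f"Flagged malicious by {stats['malicious']} engines; "
          f"undetected by {stats['undetected']} engines")

A sample that evades every engine, as in Mulgrew's test, would show a malicious count of zero in such a report.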

As reported by Gizchina on Sunday (16/4/2023), OpenAI has implemented safeguards to prevent users from asking ChatGPT to write malicious code.

However, Mulgrew managed to bypass those protections by having the chatbot generate the malicious code piece by piece, with each prompt focused on a separate function.

After assembling the individual functions, Mulgrew executed the data-theft attack, which was barely detected.

Unlike traditional malware, which can require a large team of hackers and substantial resources, Mulgrew created this malware on his own in just a few hours, despite having no coding experience.

The discovery highlights the threat posed by the misuse of AI-powered tools like ChatGPT, raising questions about security and how easily they can be exploited.

 

Digital Security Requires Collaboration Among Various Parties


The experiment produced alarming results: complex malware generally takes skilled hackers weeks to develop.

But with the help of AI, even people with no coding experience can build it in practice. It is entirely possible that real attackers are already using the same methods to create advanced malware.

A cybersecurity approach involving all stakeholders therefore needs to be developed to ensure that AI is not used for malicious purposes. Users must also be educated about the potential risks this technology poses.

The cybersecurity community needs to adapt and develop new strategies to combat AI-assisted threats. Collaboration among researchers, developers, and security experts is therefore key to digital security.

 

ChatGPT Can Be Used to Create Phishing Emails That Look Human-Written

 

Beyond malware, ChatGPT can also be used to create phishing emails that closely resemble those written by humans.

A demonstration titled "Hacking Humans with AI as a Service" revealed that AI can make cyberattacks such as phishing emails and spear-phishing messages more effective than those crafted by humans.

The demonstration was presented at recent Black Hat and DEF CON security conferences, as quoted from a Palo Alto Networks press release on Monday (10/4/2023).

In the demonstration, the researchers used OpenAI's GPT-4 platform in combination with other AI-as-a-service products focused on personality analysis to generate phishing emails tailored to the backgrounds and characteristics of their colleagues.

The researchers also managed to develop a pipeline that could help fine-tune phishing emails before they reached their targets.

More surprisingly, the platform automatically supplied specific details, such as references to Singaporean laws and regulations, when instructed to create content aimed at people in that country.

Palo Alto Networks also noted that ChatGPT's creators have clearly stated that the AI-driven tool has the built-in ability to challenge false premises and reject unethical requests.

 

The US Government Prepares Regulations for AI Tools Like ChatGPT Over National Security Concerns

 


In light of these vulnerabilities, the administration of US President Joe Biden announced that it is seeking public comment on accountability measures for the regulation of AI systems.

The regulation is being prepared out of concern over the technology's risks and its impact on national security and education.

ChatGPT's success has drawn intense scrutiny to AI technology in recent months; its ability to answer requests and questions quickly has attracted 100 million monthly active users.

That achievement makes it the fastest-growing consumer application in history. While fueling optimism about the technology, tools of this kind still have the potential to cause harm.

As quoted from the New York Post on Thursday (13/4/2023), the National Telecommunications and Information Administration (NTIA), an agency within the Department of Commerce, is requesting public input on testing the trustworthiness and safety of AI systems.

The step is also expected to help the government ensure that AI tools function as their developers claim, without causing harm.

Alan Davidson, head of the NTIA at the Department of Commerce, stated that reliable, secure AI systems are critical to realizing their full benefits.

“Responsible AI systems can bring enormous benefits, but only if we address the potential consequences and downsides,” said Davidson.

Source: https://www.liputan6.com
