Wednesday, March 22, 2023

  

TECH



BlackMamba: the new malware powered by ChatGPT

HYAS researcher and cybersecurity expert Jeff Sims has developed BlackMamba, a new proof-of-concept malware powered by ChatGPT.

In January of this year, cybersecurity researchers at CyberArk also reported how ChatGPT can be used to develop polymorphic malware. By "using an authoritative tone", they were able to bypass ChatGPT's content filter and create polymorphic malware.

According to the HYAS report, the BlackMamba keylogger can collect confidential data such as usernames, debit/credit-card numbers, passwords, and other sensitive information the user enters on a device.

After capturing the data, BlackMamba uses Microsoft Teams to send it to the attacker's Teams channel, where the attacker can then sell it on the dark web.

Sims created the polymorphic malware by leveraging ChatGPT's language and code-generation capabilities: at runtime, the malware asks the chatbot to regenerate its malicious logic, so each execution produces a modified variant.

Polymorphic malware is a type of malware that changes its code whenever it replicates or infects a new system. This makes it difficult for traditional antivirus software to detect and analyze, because the malware looks different on each new system it infects, even though it performs the same malicious activities. The use of polymorphic malware has become more common in recent years as cybercriminals look for new and innovative ways to bypass traditional security measures. The ability to transform and alter code makes it difficult for security researchers to develop effective countermeasures, making it a significant threat to organizations and individuals alike.
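To see why this defeats signature-based detection, here is a minimal, deliberately harmless sketch (the function names `generate_variant` and `_rand_name` are illustrative, not from BlackMamba): the same benign behaviour is regenerated with randomised identifiers and inert filler lines, so every variant hashes differently even though all variants behave identically.

```python
import hashlib
import random
import string

def _rand_name(prefix: str) -> str:
    # Randomised identifier, e.g. "f_kqzmwabc"
    return prefix + "".join(random.choices(string.ascii_lowercase, k=8))

def generate_variant() -> str:
    """Return source code for an add-two-numbers function, mutated cosmetically."""
    fn, a, b = _rand_name("f_"), _rand_name("va_"), _rand_name("vb_")
    filler = f"    _ = {random.randint(0, 10**6)}  # inert filler\n"
    return (
        f"def {fn}({a}, {b}):\n"
        + filler * random.randint(1, 3)
        + f"    return {a} + {b}\n"
    )

v1, v2 = generate_variant(), generate_variant()

# Different "signatures" (hashes of the source text)...
h1 = hashlib.sha256(v1.encode()).hexdigest()
h2 = hashlib.sha256(v2.encode()).hexdigest()
assert h1 != h2

# ...but identical behaviour once executed.
ns1, ns2 = {}, {}
exec(v1, ns1)
exec(v2, ns2)
f1 = next(v for k, v in ns1.items() if k.startswith("f_"))
f2 = next(v for k, v in ns2.items() if k.startswith("f_"))
assert f1(2, 3) == f2(2, 3) == 5
print("variants hash differently but behave identically")
```

An antivirus signature keyed to the hash (or byte pattern) of one variant will never match the next one, which is why defenders must fall back on behavioural analysis rather than static signatures.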

Sims created the keylogger so that EDR (endpoint detection and response) filters cannot detect it. Attackers can use ChatGPT to modify the code and make it more elusive; they could even develop tooling that malware and ransomware developers use to launch attacks.

The malware can run on macOS, Windows, and Linux systems, and it can be delivered into the target environment via social engineering or email.

Of course, as ChatGPT's capabilities advance, these threats will continue to emerge and become more sophisticated and difficult to detect over time. Automated security controls are not foolproof, so organizations must remain proactive in developing and implementing their strategies to protect against such threats.

by: mundophone

