OpenAI Finds Growing Exploitation of AI Tools by Foreign Threat Groups
OpenAI’s new report warns hackers are combining multiple AI tools for cyberattacks, scams, and influence ops linked to China, Russia, and North Korea.
Cybersecurity researchers at Varonis have discovered two new plug-and-play cybercrime toolkits, MatrixPDF and SpamGPT. Learn how these AI-powered tools make mass phishing and PDF malware accessible to anyone, redefining online security risks.
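The report does not detail MatrixPDF's internals, but booby-trapped PDFs of the kind such kits reportedly generate tend to rely on a handful of well-known PDF features (auto-run actions, embedded JavaScript, file launches). A minimal defensive triage sketch along those lines might look like this; the function names are illustrative, not from any of the tools or reports mentioned:

```python
def suspicious_pdf_markers(data: bytes) -> list[str]:
    """Return the names of risky PDF features found in raw PDF bytes.

    These markers are legitimate parts of the PDF standard, but
    phishing kits commonly abuse them: /OpenAction triggers an action
    automatically when the file opens, /JavaScript embeds script,
    /Launch can start an external program, and /EmbeddedFile can
    smuggle a payload inside the document.
    """
    markers = [b"/JavaScript", b"/OpenAction", b"/Launch", b"/EmbeddedFile"]
    return [m.decode() for m in markers if m in data]


def scan_pdf(path: str) -> list[str]:
    """Scan a PDF on disk and return any suspicious markers found."""
    with open(path, "rb") as f:
        return suspicious_pdf_markers(f.read())
```

A hit is only a signal that the file deserves closer inspection, not proof of malice; plenty of benign forms use embedded JavaScript.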
Cato CTRL uncovers new WormGPT variants on Telegram powered by jailbroken Grok and Mixtral. Learn how cybercriminals jailbreak top LLMs for uncensored, illegal activities in this latest threat research.
Xanthorox AI, a darknet-exclusive tool, uses five custom models to launch advanced, autonomous cyberattacks, ushering in a new AI threat era.
The post Xanthorox AI: A New Breed of Malicious AI Threat Hits the Darknet appeared first on eSecurity Planet.
New Xanthorox AI hacking platform spotted on dark web with modular tools, offline mode, and advanced voice, image, and code-based cyberattack features.
Hacker Stephanie “Snow” Carruthers and her team found that phishing emails written by security researchers achieved a 3% higher click rate than phishing emails written by ChatGPT.
By Deeba Ahmed
Cybercriminals on multiple hacker forums claim to jailbreak AI chatbots to write malicious content, including phishing emails, a new report from SlashNext has revealed.
This is a post from HackRead.com. Read the original post: Hackers Cla…
WormGPT, a private new chatbot service advertised as a way to use Artificial Intelligence (AI) to help write malicious software without all the pesky prohibitions on such activity enforced by ChatGPT and Google Bard, has started adding restrictions on how the service can be used. Faced with customers trying to use WormGPT to create ransomware and phishing scams, the 23-year-old Portuguese programmer who created the project now says his service is slowly morphing into “a more controlled environment.”
The large language models (LLMs) made by ChatGPT parent OpenAI, Google, and Microsoft all include safety measures designed to prevent people from abusing them for nefarious purposes, such as creating malware or hate speech. In contrast, WormGPT has promoted itself as a new LLM created specifically for cybercrime activities.
By Waqas
FraudGPT: Yet Another AI-Driven Chatbot Assisting Cybercriminals in Malicious Activities.
This is a post from HackRead.com. Read the original post: Following WormGPT, FraudGPT Emerges for AI-Driven Cyber Crime
WormGPT, a black-hat tool recently launched by cybercriminals, can be used to conduct a range of social engineering and Business Email Compromise (BEC) attacks. The tool places no limitations on its use and has no boundar…