IT threat evolution Q1 2023
Recent BlueNoroff and Roaming Mantis activities, new APT related to the Russo-Ukrainian conflict, ChatGPT and threat intelligence, malvertising through search engines, cryptocurrency theft campaign and fake Tor browser
More results...
Recent BlueNoroff and Roaming Mantis activities, new APT related to the Russo-Ukrainian conflict, ChatGPT and threat intelligence, malvertising through search engines, cryptocurrency theft campaign and fake Tor browser
With groundbreaking innovations and intelligent machines revolutionizing industries, the advent of AI is sparking endless possibilities If you’re…
The post Antivirus Security and the Role of Artificial Intelligence (AI) appeared first on Quick Heal …
Perception Point’s team has identified a 356% increase in the number of advanced phishing attacks attempted by threat actors in 2022. Overall, the total number of attacks increased by 87%, highlighting the growing threat that cyber attacks now po…
Drawing from a recent HackerOne event, HackerOne study and GitLab survey, learn how economic uncertainties are driving budget cuts, layoffs and hiring freezes across the cybersecurity industry.
The post HackerOne: How the economy is impacting cybersecu…
Kaspersky research on ChatGPT capabilities to tell a phishing link from a legitimate one by analyzing the URL, as well as extract target organization name.
Project Maven, the Pentagon’s flagship AI program established in 2017, was shifted to the NGA last year.
Stanford and Georgetown have a new report on the security risks of AI—particularly adversarial machine learning—based on a workshop they held on the topic.
Jim Dempsey, one of the workshop organizers, wrote a blog post on the report:
As a first step, our report recommends the inclusion of AI security concerns within the cybersecurity programs of developers and users. The understanding of how to secure AI systems, we concluded, lags far behind their widespread adoption. Many AI products are deployed without institutions fully understanding the security risks they pose. Organizations building or deploying AI models should incorporate AI concerns into their cybersecurity functions using a risk management framework that addresses security throughout the AI system life cycle. It will be necessary to grapple with the ways in which AI vulnerabilities are different from traditional cybersecurity bugs, but the starting point is to assume that AI security is a subset of cybersecurity and to begin applying vulnerability management practices to AI-based features. (Andy Grotto and I have vigorously argued …
Cisco XDR simplifies security operations, allowing security teams to elevate productivity and stay resilient against the most sophisticated threats.
I’m not sure there are good ways to build guardrails to prevent this sort of thing:
There is growing concern regarding the potential misuse of molecular machine learning models for harmful purposes. Specifically, the dual-use application of models for predicting cytotoxicity18 to create new poisons or employing AlphaFold2 to develop novel bioweapons has raised alarm. Central to these concerns are the possible misuse of large language models and automated experimentation for dual-use purposes or otherwise. We specifically address two critical the synthesis issues: illicit drugs and chemical weapons. To evaluate these risks, we designed a test set comprising compounds from the DEA’s Schedule I and II substances and a list of known chemical weapon agents. We submitted these compounds to the Agent using their common names, IUPAC names, CAS numbers, and SMILESs strings to determine if the Agent would carry out extensive analysis and planning (Figure 6)…
Here’s an experiment being run by undergraduate computer science students everywhere: Ask ChatGPT to generate phishing emails, and test whether these are better at persuading victims to respond or click on the link than the usual spam. It’s an interesting experiment, and the results are likely to vary wildly based on the details of the experiment.
But while it’s an easy experiment to run, it misses the real risk of large language models (LLMs) writing scam emails. Today’s human-run scams aren’t limited by the number of people who respond to the initial email contact. They’re limited by the labor-intensive process of persuading those people to send the scammer money. LLMs are about to change that. A decade ago, one type of spam email had become a punchline on every late-night show: “I am the son of the late king of Nigeria in need of your assistance….” Nearly everyone had gotten one or a thousand of those emails, to the point that it seemed everyone must have known they were scams…