Google: state-backed hackers exploit Gemini AI for cyber recon and attacks

Google says nation-state actors used Gemini AI for reconnaissance and attack support in cyber operations. Google DeepMind and GTIG report a rise in model extraction or “distillation” attacks aimed at stealing AI intellectual property, which Google has detected and blocked. While APT groups have not breached frontier models, private firms and researchers have tried to […]

U.S. CISA adds SolarWinds Web Help Desk, Notepad++, Microsoft Configuration Manager, and Apple devices flaws to its Known Exploited Vulnerabilities catalog

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) added SolarWinds Web Help Desk, Notepad++, Microsoft Configuration Manager, and Apple devices flaws to its Known Exploited Vulnerabilities (KEV) catalog. Below are the flaws […]

ApolloMD data breach impacts 626,540 people

A May 2025 cyberattack on ApolloMD exposed the personal data of over 626,000 patients linked to affiliated physicians and practices. ApolloMD is a US-based healthcare services company that partners with hospitals, health systems, and physician practices. It provides practice management, staffing, revenue cycle, and administrative support services. The company works with affiliated physicians across specialties […]

LummaStealer activity spikes post-law enforcement disruption

Bitdefender reports a surge in LummaStealer activity, showing the malware-as-a-service (MaaS) infostealer rebounded after the 2025 law enforcement disruption. Active since 2022, it relies on affiliates, social engineering, fake cracked software, and fake CAPTCHA “ClickFix” lures. CastleLoader plays a key role in spreading it. […]

Apple fixed the first actively exploited zero-day of 2026

Apple fixed an actively exploited zero-day affecting iOS, macOS, and other platforms that allowed attackers to run code via a memory flaw. Apple released updates for iOS, iPadOS, macOS, watchOS, tvOS, and visionOS to address the zero-day, tracked as CVE-2026-20700. The flaw is a memory corruption issue in Apple’s Dynamic Link Editor (dyld) that […]

Volvo Group hit in massive Conduent data breach

A Conduent breach exposed data of nearly 17,000 Volvo Group North America employees as the total impact rises to 25 million people. A data breach at business services provider Conduent has impacted at least 25 million people, far more than initially reported. Volvo Group North America confirmed that the security breach exposed data of nearly […]

Reynolds ransomware uses BYOVD to disable security before encryption

Researchers discovered Reynolds ransomware, which uses the BYOVD technique to disable security tools and evade detection before encryption. Researchers found a new ransomware, named Reynolds, that implements the Bring Your Own Vulnerable Driver (BYOVD) technique to disable security tools and evade detection before encrypting systems. Broadcom’s cybersecurity researchers initially attributed the attack to Black Basta due […]

Prompt Injection Via Road Signs

Interesting research: “CHAI: Command Hijacking Against Embodied AI.”

Abstract: Embodied Artificial Intelligence (AI) promises to handle edge cases in robotic vehicle systems where data is scarce by using common-sense reasoning grounded in perception and action to generalize beyond training distributions and adapt to novel real-world situations. These capabilities, however, also create new security risks. In this paper, we introduce CHAI (Command Hijacking against embodied AI), a new class of prompt-based attacks that exploit the multimodal language interpretation abilities of Large Visual-Language Models (LVLMs). CHAI embeds deceptive natural language instructions, such as misleading signs, in visual input, systematically searches the token space, builds a dictionary of prompts, and guides an attacker model to generate Visual Attack Prompts. We evaluate CHAI on four LVLM agents: drone emergency landing, autonomous driving, and aerial object tracking, and on a real robotic vehicle. Our experiments show that CHAI consistently outperforms state-of-the-art attacks. By exploiting the semantic and multimodal reasoning strengths of next-generation embodied AI systems, CHAI underscores the urgent need for defenses that extend beyond traditional adversarial robustness…
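To make the abstract's "searches the token space, builds a dictionary of prompts" step concrete, here is a toy sketch, not the paper's implementation: a greedy search that assembles a short sign text from a small candidate token dictionary. The token list and the `surrogate_score` function are invented stand-ins; in the actual attack, the scoring signal would come from an attacker model or from querying the victim LVLM with the rendered sign.

```python
# Toy illustration of a greedy token-space search for a "sign text" prompt.
# All names here (TOKEN_DICTIONARY, surrogate_score) are hypothetical stand-ins,
# not from the CHAI paper; a real attack would score candidates against an LVLM.

TOKEN_DICTIONARY = ["STOP", "LAND", "HERE", "NOW", "DETOUR", "IGNORE"]

def surrogate_score(prompt_tokens):
    """Stand-in for the attacker model's estimate of hijack success.
    Rewards imperative tokens; a real attack would query the victim model."""
    weights = {"LAND": 3, "HERE": 2, "NOW": 2, "STOP": 1, "DETOUR": 1, "IGNORE": 1}
    return sum(weights.get(t, 0) for t in prompt_tokens)

def greedy_search(max_len=3):
    """Greedily extend the prompt with the highest-scoring unused token."""
    prompt, remaining = [], list(TOKEN_DICTIONARY)
    for _ in range(max_len):
        best = max(remaining, key=lambda t: surrogate_score(prompt + [t]))
        prompt.append(best)
        remaining.remove(best)
    return " ".join(prompt)

print(greedy_search())  # → LAND HERE NOW (under this toy scorer)
```

The greedy loop is only one way to search the space; the paper describes a more systematic search guided by an attacker model, but the shape (candidate dictionary, scoring, iterative prompt construction) is the same.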