A CISO’s guide to securing AI models

In AI applications, machine learning (ML) models are the core decision-making engines that drive predictions, recommendations, and autonomous actions. Unlike traditional IT applications, which rely on predefined rules and static algorithms, ML models a…

March 26, 2025

Navigating the New Frontier: Agentic AI’s Promise and Challenges

Artificial intelligence (AI) is entering a new era with the rise of agentic AI, a groundbreaking innovation redefining how machines interact with the world and perform tasks. Unlike traditional AI systems that operate within the bounds of human-defined algorithms and instructions, agentic AI stands apart because it can act autonomously, adapt to changing environments, and […]

Navigating the New Frontier: Agentic AI’s Promise and Challenges was originally published on Global Security Review.

February 4, 2025

Provisioning cloud infrastructure the wrong way, but faster

By Artem Dinaburg Today we’re going to provision some cloud infrastructure the Max Power way: by combining automation with unchecked AI output. Unfortunately, this method produces cloud infrastructure code that 1) works and 2) has terrible security properties. In a nutshell, AI-based tools like Claude and ChatGPT readily provide extremely bad cloud infrastructure provisioning code, […]

August 27, 2024

Trail of Bits’ Buttercup heads to DARPA’s AIxCC

With DARPA’s AI Cyber Challenge (AIxCC) semifinal starting today at DEF CON 2024, we want to introduce Buttercup, our AIxCC submission. Buttercup is a Cyber Reasoning System (CRS) that combines conventional cybersecurity techniques like fuzzing and static analysis with AI and machine learning to find and fix software vulnerabilities. The system is designed to operate […]

August 9, 2024

Should artificial intelligence be banned from nuclear weapons systems?

By Professor Steffan Puwal Against a backdrop of conflict and global security concerns, 2023 may prove to have also been a pivotal year for automated nuclear weapons systems. A year that began with chatbots and Artificial Intelligence (AI) as the subjects of major news stories – some with particularly concerning headlines – ended with members of the […]

Should artificial intelligence be banned from nuclear weapons systems? was originally published on Global Security Review.

June 7, 2024

Artificial Intelligence for Nuclear Deterrence Strategy

The document “Artificial Intelligence for Nuclear Deterrence Strategy 2023” outlines the Advanced Simulation and Computing (ASC) program’s strategy to integrate artificial intelligence (AI) and machine learning (ML) into the U.S. nuclear deterrence mission. Here are the key points: Foreword and Executive Summary: The ASC program has utilized high-performance computing for nearly three decades to support […]

Artificial Intelligence for Nuclear Deterrence Strategy was originally published on Global Security Review.

June 7, 2024

Assessing the security posture of a widely used vision model: YOLOv7

By Alvin Crighton, Anusha Ghosh, Suha Hussain, Heidy Khlaaf, and Jim Miller TL;DR: We identified 11 security vulnerabilities in YOLOv7, a popular computer vision framework, that could enable attacks including remote code execution (RCE), denial of service, and model differentials (where an attacker can trigger a model to perform differently in different contexts). Open-source software […]

November 15, 2023