The Relevance of Prompts in AI and Cybersecurity

Introduction to Prompting

Artificial Intelligence (AI) has become an increasingly popular topic in recent years due to its potential to revolutionize various industries. The ability to automate tasks, analyze vast amounts of data, and make predictions has made AI a valuable tool for businesses and researchers alike. However, developing effective AI systems can be a […]

April 30, 2023

Security Risks of AI

Stanford and Georgetown have a new report on the security risks of AI—particularly adversarial machine learning—based on a workshop they held on the topic.

Jim Dempsey, one of the workshop organizers, wrote a blog post on the report:

As a first step, our report recommends the inclusion of AI security concerns within the cybersecurity programs of developers and users. The understanding of how to secure AI systems, we concluded, lags far behind their widespread adoption. Many AI products are deployed without institutions fully understanding the security risks they pose. Organizations building or deploying AI models should incorporate AI concerns into their cybersecurity functions using a risk management framework that addresses security throughout the AI system life cycle. It will be necessary to grapple with the ways in which AI vulnerabilities are different from traditional cybersecurity bugs, but the starting point is to assume that AI security is a subset of cybersecurity and to begin applying vulnerability management practices to AI-based features. (Andy Grotto and I have vigorously argued …

April 27, 2023