The risk of pasting confidential company data into ChatGPT

Experts warn that employees are providing sensitive corporate data to the popular artificial intelligence chatbot model ChatGPT. Researchers from Cyberhaven Labs analyzed the use of ChatGPT by 1.6 million workers at companies across industries. They reported that 5.6% of them have used it in the workplace and 4.9% have provided company data to the popular […]

The post The risk of pasting confidential company data into ChatGPT appeared first on Security Affairs.

March 13, 2023
Read More >>

Prompt Injection Attacks on Large Language Models

This is a good survey on prompt injection attacks on large language models (like ChatGPT).

Abstract: We are currently witnessing dramatic advances in the capabilities of Large Language Models (LLMs). They are already being adopted in practice and integrated into many systems, including integrated development environments (IDEs) and search engines. The functionalities of current LLMs can be modulated via natural language prompts, while their exact internal functionality remains implicit and unassessable. This property, which makes them adaptable to even unseen tasks, might also make them susceptible to targeted adversarial prompting. Recently, several ways to misalign LLMs using Prompt Injection (PI) attacks have been introduced. In such attacks, an adversary can prompt the LLM to produce malicious content or override the original instructions and the employed filtering schemes. Recent work showed that these attacks are hard to mitigate, as state-of-the-art LLMs are instruction-following. So far, these attacks assumed that the adversary is directly prompting the LLM…

March 7, 2023
Read More >>