Bard AI Causes Google Losses of $100 Billion

By Habiba Rashid
It turns out that the intelligence in Google's newly announced ChatGPT rival, Bard AI, is a bit artificial at the moment.
This is a post from HackRead.com. Read the original post: Bard AI Causes Google Losses of $100 Billion

February 10, 2023
Read More >>

Google Introduces Bard: New ChatGPT Rival

By Habiba Rashid
Google’s CEO, Sundar Pichai, described the ChatGPT rival, Bard, as an “experimental conversational AI service” powered by LaMDA.
This is a post from HackRead.com. Read the original post: Google Introduces Bard: New ChatGPT Rival

February 7, 2023
Read More >>

Attacking Machine Learning Systems

The field of machine learning (ML) security—and corresponding adversarial ML—is rapidly advancing as researchers develop sophisticated techniques to perturb, disrupt, or steal the ML model or data. It’s a heady time; because we know so little about the security of these systems, there are many opportunities for new researchers to publish in this field. In many ways, this circumstance reminds me of the cryptanalysis field in the 1990s. And there is a lesson in that similarity: the complex mathematical attacks make for good academic papers, but we mustn’t lose sight of the fact that insecure software will be the likely attack vector for most ML systems…
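
As one concrete illustration of the "perturb" class of attacks, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), a classic technique for crafting adversarial inputs against a classifier. It assumes PyTorch and a trained model; the function and parameter names are illustrative, not taken from the post.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return x plus a small adversarial perturbation of magnitude epsilon.

    Illustrative sketch: `model` is assumed to be a trained PyTorch
    classifier returning logits, `x` an input batch scaled to [0, 1],
    and `y` the true class labels.
    """
    x = x.clone().detach().requires_grad_(True)
    # Take the gradient of the loss with respect to the input, not the weights.
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the
    # valid pixel range so the perturbed input stays plausible.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

What the sketch shows is how little machinery an attack like this needs, which underlines the post's warning: the hard math makes for good papers, but an attacker facing insecure software around the model may not need it at all.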

February 6, 2023
Read More >>

Threats of Machine-Generated Text

With the release of ChatGPT, I’ve read many random articles about this or that threat from the technology. This paper is a good survey of the field: what the threats are, how we might detect machine-generated text, and directions for future research. It’s a solid grounding amongst all of the hype.

Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods

Abstract: Advances in natural language generation (NLG) have resulted in machine generated text that is increasingly difficult to distinguish from human authored text. Powerful open-source models are freely available, and user-friendly tools democratizing access to generative models are proliferating. The great potential of state-of-the-art NLG systems is tempered by the multitude of avenues for abuse. Detection of machine generated text is a key countermeasure for reducing abuse of NLG models, with significant technical challenges and numerous open problems. We provide a survey that includes both 1) an extensive analysis of threat models posed by contemporary NLG systems, and 2) the most complete review of machine generated text detection methods to date. This survey places machine generated text within its cybersecurity and social context, and provides strong guidance for future work addressing the most critical threat models, and ensuring detection systems themselves demonstrate trustworthiness through fairness, robustness, and accountability…
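
One common detection heuristic in this literature is statistical: score a passage by its perplexity under a reference language model, on the rough assumption that machine-generated text is more predictable than human writing. The sketch below is a minimal illustration of that idea, assuming the Hugging Face transformers library; the model choice (GPT-2) and how you interpret the score are assumptions for illustration, not recommendations from the survey.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; unusually low values can be a
    weak signal that the text was machine generated."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing the inputs as labels makes the model return the mean
        # next-token cross-entropy, which exponentiates to perplexity.
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

# Illustrative use: rank passages and flag the lowest-perplexity ones
# for human review rather than treating any single score as proof.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```

As the abstract's framing suggests, single signals like this are brittle on their own, which is why the robustness and accountability of the detectors themselves matter.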

January 13, 2023
Read More >>

How hackers might be exploiting ChatGPT

The popular AI chatbot ChatGPT could be used by threat actors to easily hack into target networks. Original post at https://cybernews.com/security/hackers-exploit-chatgpt/ The Cybernews research team discovered that the AI-based chatbot ChatGPT, a recently launched platform that caught the online community’s attention, could provide hackers with step-by-step instructions on how to hack websites. Cybernews […]

The post How hackers might be exploiting ChatGPT appeared first on Security Affairs.

January 5, 2023
Read More >>