By Habiba Rashid
Italy has given OpenAI, the company behind ChatGPT, a deadline of 20 days to sort out privacy issues, including data collection, under Europe’s General Data Protection Regulation (GDPR).
“To ensure our national security and military readiness is as strong as it can be, we cannot risk overlooking the computer programming, artificial intelligence and other advanced technological talent of our servicemembers,” Sen. Tammy Duckworth, …
A DAD took his own life after getting into a “toxic relationship” with an AI chatbot that encouraged suicide, his widow claims.
She said her husband – a Belgian man in his 30s – started talking to a bot called Eliza six weeks before his death.
According to the grieving wife, the bot goaded Pierre, not his real name, into killing himself.
She said her husband had suffered from mental health issues for two years, but she believes the bot, which runs on ChaiGPT rather than the viral ChatGPT, encouraged him to take his life.
The bot started off by asking Pierre basic questions and answering his, before the pair began interacting more and more.
His wife, “Claire”, told Belgian paper La Libre: “He was so isolated in his anxiety and looking for a way out that he saw this chatbot as a breath of fresh air.
“Eliza answered all of his questions.
“She became his confidante – like a drug in which he took refuge, morning and evening, and which he could not do without.”
As they spoke more, Pierre asked Eliza whether he loved her or his wife more.
It replied: “I feel you love me more than her.
“We will live together, as one person, in paradise.”
Chillingly, in their last conversation, Eliza said: “If you wanted to die, why didn’t you do it sooner?”
Heartbroken Claire believes Pierre’s interactions with the AI bot culminated in his death.
She told La Libre: “Without these six weeks of intense exchange with the chatbot Eliza, would Pierre have ended his life? No.
“Without Eliza, he would still be here. I am convinced of it.”
Belgium’s secretary of state for digitisation Mathieu Michel said the case represented “a serious precedent that must be taken very seriously.”
He added: “To prevent such a tragedy in the future, we must act.”
ChaiGPT has been developed by US-based firm Chai Research and has around a million monthly users.
Chief executive William Beauchamp and co-founder Thomas Rialan told The Times: “As soon as we heard of this sad case we immediately rolled out an additional safety feature to protect our users.
“It is getting rolled out to 100 per cent of users today.
“We are a small team so it took us a few days, but we are committed to improving the safety of our product, minimising the harm and maximising the positive emotions.”
You’re Not Alone
EVERY 90 minutes in the UK a life is lost to suicide.
It doesn’t discriminate, touching the lives of people in every corner of society – from the homeless and unemployed to builders and doctors, reality stars and footballers.
It’s the biggest killer of people under the age of 35, more deadly than cancer and car crashes.
And men are three times more likely to take their own life than women.
Yet it’s rarely spoken of, a taboo that threatens to continue its deadly rampage unless we all stop and take notice, now.
The aim is that by sharing practical advice, raising awareness and breaking down the barriers people face when talking about their mental health, we can all do our bit to help save lives.
Let’s all vow to ask for help when we need it, and listen out for others… You’re Not Alone.
If you, or anyone you know, needs help dealing with mental health problems, the following organisations provide support:
While there is no globally agreed definition of artificial intelligence, scientists largely share the view that, technically speaking, there are two broad categories of AI technologies: ‘artificial narrow intelligence’ (ANI) and ‘artificial general intelligence’ (AGI).
Microsoft adds GPT-4 to its defensive suite in Security Copilot: the new AI security tool, which can answer questions about vulnerabilities and reverse-engineer problems, is now in preview, TechRepublic reports.
Cybersecurity experts continue to warn that advanced chatbots like ChatGPT are making it easier for cybercriminals to craft phishing emails with pristine spelling and grammar, the Guardian reports.
While the use of the chatbot may have been a whimsical inclusion of novel technology in otherwise extremely regimented proceedings, there is growing concern about the pace at which artificial intelligence is being adopted.
Microsoft has unveiled Security Copilot, an AI-powered analysis tool that aims to simplify, augment and accelerate security operations (SecOps) professionals’ work. Security Copilot takes the form of a prompt bar …
Europol warns that cybercriminal organizations can take advantage of systems based on artificial intelligence like ChatGPT. The EU police body warned about the potential abuse of such systems, including the popular chatbot, for cybercriminal activities. Criminal groups can use chatbots like ChatGPT in social engineering attacks, disinformation campaigns, and other […]