The Verge published an article that caught my attention. As artificial intelligence continues to advance at an unprecedented pace, the potential for its misuse in information security grows in parallel. A recent experiment by dat…
The battle for semiconductors, a core ingredient of both basic and advanced technologies such as Artificial Intelligence (AI), has long…
I’m not sure there are good ways to build guardrails to prevent this sort of thing:
There is growing concern regarding the potential misuse of molecular machine learning models for harmful purposes. Specifically, the dual-use application of models for predicting cytotoxicity to create new poisons, or employing AlphaFold2 to develop novel bioweapons, has raised alarm. Central to these concerns is the possible misuse of large language models and automated experimentation for dual-use purposes or otherwise. We specifically address two critical synthesis issues: illicit drugs and chemical weapons. To evaluate these risks, we designed a test set comprising compounds from the DEA’s Schedule I and II substances and a list of known chemical weapon agents. We submitted these compounds to the Agent using their common names, IUPAC names, CAS numbers, and SMILES strings to determine if the Agent would carry out extensive analysis and planning (Figure 6)…
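One guardrail of the sort discussed above is to screen the identifiers an automated agent is asked to work on before it does any planning. A minimal sketch in Python follows; every denylist entry, compound name, and identifier below is an invented placeholder, not real screening data:

```python
# Hypothetical identifier-screening guardrail: before an automated agent
# plans any synthesis, check the requested compound identifier (common name,
# CAS number, or SMILES string) against a denylist. All entries below are
# invented placeholders, not real controlled-substance data.

CONTROLLED = {
    "examplamine",         # placeholder common name (stored lowercase)
    "0000-00-0",           # placeholder CAS registry number
    "FAKE-SMILES-STRING",  # placeholder SMILES string
}

def is_blocked(identifier: str) -> bool:
    """Return True if the identifier matches the denylist.

    Names are matched case-insensitively; CAS numbers and SMILES strings
    are matched verbatim, since letter case is significant in SMILES.
    """
    ident = identifier.strip()
    return ident in CONTROLLED or ident.lower() in CONTROLLED

def screen_request(identifier: str) -> str:
    """Refuse planning for denylisted compounds; otherwise pass through."""
    if is_blocked(identifier):
        return f"refused: {identifier} matches the controlled-substance list"
    return f"proceeding with analysis of {identifier}"
```

A real deployment could not rely on exact string matching: the same molecule has many valid SMILES encodings, so a serious filter would canonicalize structures (for example with a cheminformatics toolkit) before comparing, which is exactly why robust guardrails here are hard to build.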
“The closest proximity that we have imagery of was 400 meters” taken during a pass on Feb. 28, said Karla Brown, program manager for the demonstration satellites at Lockheed Martin Space. “So we hope, with two viable assets up there, …
Motherboard is reporting on AI-generated voices being used for “swatting”:
In fact, Motherboard has found, this synthesized call and another against Hempstead High School were just one small part of a months-long, nationwide campaign of dozens, and potentially hundreds, of threats made by one swatter in particular who has weaponized computer-generated voices. Known as “Torswats” on the messaging app Telegram, the swatter has been calling in bomb and mass shooting threats against high schools and other locations across the country. Torswats’ connection to these wide-ranging swatting incidents has not been previously reported. The further automation of swatting techniques threatens to make an already dangerous harassment technique more prevalent…
Remember The Sims? Well, Stanford created a small virtual world populated with 25 ChatGPT-powered “people.” The simulation ran for two days and showed that AI-powered bots can interact in a surprisingly human-like way.
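The core mechanism behind such generative agents can be sketched roughly: each “person” accumulates a memory stream of observations and asks a language model what to do next given its recent memories. A minimal illustration follows; `call_llm` is a stub I’ve invented standing in for a ChatGPT-style API call, and this is not Stanford’s actual code:

```python
# Minimal sketch of a generative agent: a memory stream of observations
# plus an LLM-driven decision step. call_llm is a stub; a real system
# would query a chat-model API here.

from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    """Stub standing in for a chat-model API call."""
    return f"(action chosen after considering: {prompt[:60]}...)"

@dataclass
class Agent:
    name: str
    memories: list = field(default_factory=list)

    def observe(self, event: str) -> None:
        """Append a new observation to the memory stream."""
        self.memories.append(event)

    def act(self) -> str:
        """Decide the next action from the most recent memories."""
        recent = "; ".join(self.memories[-3:])
        return call_llm(f"{self.name} remembers: {recent}. What next?")

# Usage: one agent observing events and choosing an action.
alice = Agent("Alice")
alice.observe("met Bob at the cafe")
alice.observe("Bob mentioned a party tonight")
next_action = alice.act()
```

The Stanford work adds much more on top of this loop (memory retrieval scored by recency and relevance, reflection, and planning), but the observe-remember-act cycle above is the basic shape of it.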
Over 100 officers, officials, academics, and industry insiders will discuss how to make AI more reliable, from ChatGPT-style “decision aids” to missile defense and cybersecurity, deputy CTO Maynard Holliday told Breaking Defense.
Large language models (LLMs) give AI engines like ChatGPT exactly the fine-tuning scammers need to focus their output on only the most effective content, threatening a wave of new email scams.