OpenAI explains how its AI agents avoid malicious links and prompt injection

0
84

OpenAI explains how its AI agents avoid malicious links

AI agents can perform tasks on behalf of the user, and this often involves controlling a web browser, sorting through emails, and interacting with the internet at large. And since there are lots of places on the internet that can steal your personal data or otherwise cause harm, it's important that these agents know what they're doing.

So, as users migrate away from web browsers and Google Search to AI browsers and agents, AI companies like OpenAI need to make sure these tools don't fall straight into a phishing attempt or click on malicious links.

In a new blog post, OpenAI explains exactly how its AI agents protect users.

One possible solution to this problem would be for OpenAI to simply adopt a curated list of trusted websites its agents are allowed to access. However, as the company explained in the blog post, that would probably be too limiting and would harm the user experience. Instead, OpenAI uses something called an independent web index, which records public URLs that are already known to exist on the internet, independent of any user data.

Mashable Light Speed

So, if a URL is on the index, then the AI agent can open it without a problem. If not, the user will see a warning asking for their permission to move forward.

OpenAI example image of a warning pop-up about an unverified web link

You might see this if the agent tries to access something it shouldn't. Credit: OpenAI

As OpenAI explains in its blog post, "This shifts the safety question from 'Do we trust this site?' to 'Has this specific address appeared publicly on the open web in a way that doesn’t depend on user data?'"

You can see a more technical explainer in a lengthy research paper OpenAI published last year, but the main thing to know is that it's possible for web pages to manipulate AI agents into doing things they shouldn't do. A common form of this is prompt injection, which gives clandestine instructions to the AI model, asking it to retrieve sensitive data or otherwise compromise your cybersecurity.

To be clear, as OpenAI states in the blog post, this is just one layer of security that doesn't necessarily guarantee that what you're about to click on is entirely safe. Websites can contain social engineering or other bad-faith constructs that an AI agent wouldn't necessarily be able to notice.


Disclosure: Ziff Davis, Mashable’s parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

Site içinde arama yapın
Kategoriler
Read More
Oyunlar
Following Ubisoft's huge restructure, leakers can't decide whether the Watch Dogs franchise is dead or not
Following Ubisoft's huge restructure, leakers can't decide whether the Watch Dogs franchise is...
By Test Blogger6 2026-01-28 08:00:31 0 97
Technology
The Eufy E25 robot vacuum is $250 off at Amazon — act fast to save with this limited-time deal
Best robot vacuum deal: Save $250 on eufy E25...
By Test Blogger7 2026-01-24 01:00:52 0 148
Music
How To Watch The 2026 Grammy Awards + Ozzy Tribute
How to Watch/Stream the 2026 Grammy Awards + All-Star Ozzy Osbourne TributeThe 2026 Grammy Awards...
By Test Blogger4 2026-01-31 20:00:08 0 75
Oyunlar
Code Vein 2 may not be a soulslike masterpiece, but it proves that the rule of cool is much more fun
Code Vein 2 may not be a soulslike masterpiece, but it proves that the rule of cool is much more...
By Test Blogger6 2026-01-28 06:01:41 0 109
Technology
Google Chrome unveils Gemini-powered auto-browsing feature
Google Chrome unveils Gemini-powered auto-browsing feature...
By Test Blogger7 2026-01-30 00:00:19 0 116