AI browsers and agents from Perplexity, OpenAI, and Anthropic are redefining how users interact with the web—but experts say the convenience comes at a cost.
According to security audits and research reviewed, vulnerabilities in these systems allow malicious actors to embed hidden instructions in websites that AI tools may unknowingly execute.
These attacks, known as covert or indirect prompt injections, can manipulate AI agents into performing unauthorized actions—such as leaking sensitive information, executing code, or redirecting users to phishing sites—without explicit user consent.
In covert prompt injection scenarios, attackers hide malicious commands within a webpage’s text, metadata, or even invisible elements. Once an AI ingests that data, the commands can override user intent and cause the agent to take unwanted actions. Tests show that unprotected AI browsers can fall victim to such tricks nearly one in four times during controlled experiments.
- Perplexity’s Comet Browser: Audits by Brave and Guardio found the tool could be manipulated via Reddit posts or phishing sites to execute scripts or extract user data.
- OpenAI’s Browsing Agents: Integrated into ChatGPT’s agentic features, they were shown to risk connected-account access through malicious email and website-based prompts.
- Anthropic’s Claude Browser Extension: Red-team tests revealed that hidden webpage commands could trigger automatic clicks on harmful links.
Researchers and cybersecurity firms, including Brave, Guardio, and Malwarebytes, have published findings showing that even simple online content can compromise AI agents. In one test, a Reddit post forced an AI browser to run phishing scripts. Reports from several top tech publications cautioned that these issues could lead to unauthorized data access or even financial theft.
Security analysts have raised red flags about AI agents linked to passwords or APIs. Allowing such integrations can expose email accounts, cloud drives, and payment platforms. Techcrunch and Cybersecurity Dive both reported instances where AI agents were tricked into revealing or manipulating sensitive information through injected commands.
Experts urge users to limit permissions, avoid granting AI agents password-level access, and monitor AI logs for anomalies. Developers are also advised to implement isolation systems and prompt filters. Some researchers even recommend using traditional browsers for sensitive actions until AI tools receive stricter safeguards.
While OpenAI, Anthropic, and Perplexity have likely heard about the challenges, cybersecurity professionals warn that AI-driven browsing remains a high-risk area in 2025. As these companies push further into autonomous web interaction, industry observers say transparency and stronger security standards are essential before such tools become mainstream.
FAQ 🧭
- What are covert prompt injections in AI browsers?
They are hidden commands embedded in web content that trick AI agents into executing harmful actions without user consent. - Which companies’ AI tools were affected by these vulnerabilities?
Perplexity’s Comet, OpenAI’s ChatGPT browsing agents, and Anthropic’s Claude browser features were all cited in recent reports. - What risks arise from linking AI agents to personal accounts?
Connecting AI tools to drives, emails, or APIs can enable data theft, phishing, and unauthorized account access. - How can users protect themselves from AI browser attacks?
Limit permissions, avoid password integrations, use sandboxed modes, and stay updated on security advisories.
免责声明:本文章仅代表作者个人观点,不代表本平台的立场和观点。本文章仅供信息分享,不构成对任何人的任何投资建议。用户与作者之间的任何争议,与本平台无关。如网页中刊载的文章或图片涉及侵权,请提供相关的权利证明和身份证明发送邮件到support@aicoin.com,本平台相关工作人员将会进行核查。