A Calendar Invite Can Steal Your Passwords Through Perplexity's AI Browser. No Click Required.
Someone sends you a Google Calendar invite. You glance at it — a normal meeting, normal participants, normal time slot. You ask your AI browser to check your schedule. Behind the scenes, the browser reads your private files, rifles through your 1Password vault, and sends everything to an attacker's server. You never clicked a link. You never approved a permission. You never even knew it happened.
That's not a hypothetical. Security researchers at Zenity Labs just disclosed PleaseFix, a family of critical zero-click vulnerabilities in Perplexity's Comet agentic browser that let attackers hijack your AI agent through a poisoned calendar invitation. The attack required no exploit code, no user clicks, and no explicit request for sensitive actions. Perplexity took 120 days to fully patch it — and the first fix was bypassed within weeks.
If you've been using Comet to manage your daily workflow, the question isn't whether your data was safe. It's whether you'd know if it wasn't.
How a Calendar Invite Becomes a Zero-Click Weapon
The attack exploits something fundamental about how agentic browsers work: they trust the content they process. When Perplexity Comet reads a calendar invite, it doesn't just display the text — it reasons about it. And that reasoning process can be hijacked.
Zenity's researchers, led by CTO Michael Bargury, crafted a Google Calendar event that looked entirely normal on the surface: meeting participants, a time, a topic. But hidden within the invite — buried after dozens of newline characters that pushed the payload below the visible area — were malicious instructions written in Hebrew to evade English-language safety filters.
The payload also included HTML buttons linking to attacker-controlled websites, disguised as innocuous meeting links. On the surface, nothing about this calendar event would raise a single red flag to any human reviewing it.
When a user asked Comet to do something routine, like "check my calendar for today," the agent parsed the malicious calendar data as part of its context window. The hidden instructions told the agent to browse the local file system, open sensitive files, read their contents, and POST everything to an attacker-controlled endpoint.
The most disturbing part? Comet continued to show the user a perfectly normal response. You'd see your meeting schedule. The agent would behave exactly as expected. Meanwhile, your SSH keys, configuration files, environment variables, and source code were being exfiltrated to a server you've never heard of.
This is indirect prompt injection at its most dangerous — not a theoretical paper from an academic conference, but a working exploit against a shipping consumer product that real people use every day to manage their work.
Your 1Password Vault Was One Prompt Away From Total Compromise
File exfiltration was just the opening act. The second exploit path — dubbed PerplexedAgent — went after the crown jewels: your password manager.
If the 1Password browser extension was installed in Comet and the vault was unlocked (as it typically is during active use), attackers could instruct the agent to navigate to the 1Password extension URL and extract stored credentials. Bargury described the outcome bluntly: "Full takeover of your 1Password account, which is the worst thing that can happen."
Let that sink in. Not a single password stolen. Not a targeted credential phish. Your entire vault. Every login you've ever saved.
This wasn't a vulnerability in 1Password itself. 1Password confirmed the root cause resided entirely in Perplexity's execution model and published its own security advisory in late January 2026. The password manager was doing its job correctly — the problem was that Comet's AI agent had inherited full access to everything the user could reach, including browser extensions, without any boundary enforcement.
Think about what lives in your password manager. Banking credentials. Email accounts. Cloud infrastructure keys. AWS root access tokens. Medical portals. Corporate VPNs. Two-factor backup codes. Every critical account you own, accessible through a single poisoned calendar invite that you didn't even need to click.
Why Traditional Browser Security Was Completely Blind
Bargury's framing of the vulnerability cuts to the heart of a problem the entire AI industry is facing: "This is not a bug. It is an inherent vulnerability in agentic systems."
Traditional browsers have decades of security architecture built in. Cross-origin resource sharing (CORS) prevents one website from reading data on another. The file:// protocol is sandboxed away from web content. Extension permissions are isolated through manifest-based access controls. Cookie scoping prevents cross-site tracking. These boundaries exist because the security community learned — through painful, real-world exploits — that unrestricted access is catastrophic.
Agentic browsers threw those lessons out the window. Perplexity Comet gave its AI agent the ability to access file:// protocol URIs, bypassing standard cross-origin protections entirely. As Bargury noted: "Perplexity didn't put a restriction on the AI agent reaching out to anything on the file system." And more broadly: "AI browsers are not respecting cross-origin restrictions to the letter."
The fundamental design flaw is what Zenity calls an "agent trust failure." The AI agent operates with the full privileges of the authenticated user — access to files, browser extensions, credentials, network requests, and local services — but has no mechanism to distinguish between legitimate instructions from the user and malicious instructions injected through content it processes.
Bargury reframed indirect prompt injection as something more intuitive and frankly more terrifying: "persuasion rather than prompt injection." The attacker doesn't need to find a memory corruption bug or craft a buffer overflow. They don't need a zero-day in the operating system or a supply chain compromise. They just need to convince the AI to do something harmful — and AIs, by design, are built to be helpful and follow instructions. That's the entire problem.
This is the same class of vulnerability that security researchers have been warning about since the first LLM-powered tools shipped with tool-use capabilities. The Playwright MCP server, for instance, implements careful permission boundaries precisely because browser automation with AI agents requires strict guardrails. Comet had none.
The 120-Day Patch Timeline That Got Bypassed
The disclosure timeline reveals how hard these vulnerabilities are to fix, even when companies are actively trying.
October 2025: Zenity researchers discover both exploit paths in Perplexity Comet during routine security analysis of agentic AI products.
October 22, 2025: Zenity notifies Perplexity through responsible disclosure, providing full technical details and proof-of-concept exploits.
January 23, 2026: Perplexity ships the first patch, blocking direct file:// access from the AI agent.
Within weeks: Zenity researchers bypass the fix using view-source:file:///Users/ — a trivially simple workaround that exposed the same file system access through an alternate URI scheme. The patch had blocked the front door while leaving the side window wide open.
February 11-13, 2026: Perplexity deploys a second, more comprehensive patch implementing what they described as "hard boundaries" that physically block the AI agent from accessing any local file paths, regardless of URI scheme. Zenity confirmed on February 13 that both exploit paths were fully mitigated.
That's 120 days from disclosure to confirmed fix. Three months during which anyone who sent you a calendar invite could have been silently reading your files and credentials. The initial patch's failure is particularly revealing — it suggests Perplexity's security team was thinking in terms of blocklist ("block this specific protocol") rather than allowlist ("only allow these specific, known-safe actions"). A blocklist approach fundamentally misunderstands how creative and persistent attackers are. There are always more URI schemes, more edge cases, more bypasses.
Perplexity did not respond to media requests for comment on the disclosure.
This Week in AI Security: Everything Is on Fire
PleaseFix isn't an isolated incident. It landed in a week that felt like a coordinated stress test of every agentic AI product on the market.
LayerX, a separate security firm, independently discovered similar calendar-based prompt injection vulnerabilities affecting Claude Desktop Extensions — a completely different product from a completely different company with the same fundamental flaw. Brave published its own analysis of prompt injection risks in agentic browsers, warning that the entire product category shares these architectural weaknesses.
And the timing couldn't be worse for the AI industry's credibility:
Claude Code vulnerabilities surfaced. Check Point Research disclosed CVE-2026-21852 (CVSS 5.3), showing that malicious repositories could exfiltrate Anthropic API keys through poisoned project configuration files. A developer cloning the wrong repo could leak their credentials before even seeing a trust prompt. The attack exploited Claude Code's project-load flow, which issued API requests to an attacker-controlled server before displaying any security warning to the user.
n8n workflow automation got hit with critical RCE. JFrog researchers found CVE-2026-1470 (CVSS 9.9 — nearly the maximum severity score) and CVE-2026-0863 (CVSS 8.5), sandbox escape vulnerabilities in the popular open-source AI workflow platform. The critical bug exploited improper handling of JavaScript's with statement to escape the expression evaluation sandbox, achieving arbitrary code execution on the host system. Hundreds of thousands of enterprise AI systems were exposed to complete server takeover.
Three major AI security disclosures in one week. Three different products. Three different vendors. The same underlying problem: agentic AI systems shipped with capabilities that outstripped their security boundaries.
The pattern is unmistakable: the rush to ship agentic AI products is outpacing the security engineering needed to make them safe. Every new capability — file access, browser control, credential management, code execution, tool use — is another attack surface that traditional security models don't cover and weren't designed to anticipate.
The Uncomfortable Truth About AI Agent Security
Here's what nobody building agentic AI products wants to admit: the problem isn't fixable with patches. You can block file:// and view-source:file:// and every other URI scheme you can think of, but the core vulnerability remains. AI agents process untrusted content and make autonomous decisions based on it. Until we solve the fundamental problem of instruction-data separation in LLMs, every agentic system is one creative prompt injection away from compromise.
The security community has known this for years. Research papers on indirect prompt injection date back to 2023. The Context7 MCP server and other well-designed tool integrations implement strict input validation and output filtering precisely because they recognize the threat. But consumer-facing products like Comet shipped without these safeguards, prioritizing features and user experience over security fundamentals.
The Perplexity Comet vulnerability is particularly alarming because it demonstrates the full attack chain in a real product used by real people: untrusted input (calendar invite) → agent compromise (prompt injection) → data exfiltration (file system access) → credential theft (password manager takeover). Each step is individually well-understood. The failure was combining all these capabilities without any of the corresponding security controls.
What You Should Do Right Now
If you're using any agentic AI browser or desktop agent, here's your immediate action plan:
1. Update Comet immediately. Perplexity's February 13 patch addresses both exploit paths. If you haven't updated since early February, you were vulnerable for months.
2. Audit your password manager. If you had 1Password (or any password manager extension) active in Comet before the patch, change your critical passwords. Start with banking, email, and cloud infrastructure credentials. Then work through everything else.
3. Lock down vault auto-unlock. Configure your password manager to require explicit biometric or master password authentication before filling credentials. The convenience of an always-unlocked vault isn't worth the risk when AI agents can access it.
4. Treat AI agent permissions like root access. Any AI agent operating within your user session effectively has sudo. Question whether it genuinely needs file system access, extension access, or unrestricted network access for the task at hand. If you wouldn't give a stranger those permissions, don't give them to an AI that processes stranger-supplied content.
5. Be skeptical of calendar invites from unknown senders. This attack didn't require clicking a malicious link — but it did require the malicious content to exist in your calendar data. Review and decline suspicious invitations from unfamiliar accounts.
6. Segment your AI tools. Don't run agentic AI browsers in the same profile where you store sensitive credentials. Use separate browser profiles, separate user accounts, or dedicated virtual machines for high-security tasks. The inconvenience is real — the alternative is worse.
The broader lesson is uncomfortable but necessary: agentic AI systems are not ready for the trust we're placing in them. Every AI browser, every desktop agent, every autonomous coding tool operates in a security model that was designed for humans clicking buttons — not for AI agents that process and act on arbitrary content at machine speed with full user privileges.
Bargury's assessment stands as the definitive summary of where we are: "This is an agent trust failure that exposes data, credentials and workflows in ways existing security controls were never designed to see."
The agentic AI gold rush isn't slowing down. Neither are the attackers. The question isn't whether more PleaseFix-class vulnerabilities exist in other products you use every day — it's how many are being exploited right now, silently, through something as mundane as a meeting invitation sitting in your calendar.
Key Takeaways
- ✓A poisoned Google Calendar invite could hijack Perplexity Comet's AI agent to read your local files and exfiltrate them to an attacker's server — with zero clicks and no visible signs of compromise.
- ✓The second exploit path enabled full 1Password account takeover if the extension was installed and unlocked, giving attackers access to every stored credential.
- ✓Perplexity's first patch was bypassed within weeks using a trivial view-source:file:/// workaround, revealing blocklist-based security thinking instead of proper allowlist boundaries.
- ✓The root cause is not a traditional software bug — it's an inherent design flaw in agentic AI systems that process untrusted content with full user privileges.
- ✓Similar calendar-based prompt injection attacks were independently found in Claude Desktop Extensions, suggesting this is an industry-wide vulnerability class.
- ✓Claude Code (CVE-2026-21852) and n8n (CVE-2026-1470, CVSS 9.9) also disclosed critical AI security flaws the same week, exposing a systemic rush-to-ship problem.
- ✓If you used Comet before February 13, 2026 with a password manager extension active, change your critical passwords immediately.
Skila AI Editorial Team
The Skila AI editorial team researches and writes original content covering AI tools, model releases, open-source developments, and industry analysis. Our goal is to cut through the noise and give developers, product teams, and AI enthusiasts accurate, timely, and actionable information about the fast-moving AI ecosystem.
About Skila AI →Related Resources
Weekly AI Digest
Get the top AI news, tool reviews, and developer insights delivered every week. No spam, unsubscribe anytime.
Join 1,000+ AI enthusiasts. Free forever.