Moxie Knows Better: The Gap Between Confer.to’s Privacy Claims and Reality

When the creator of the Signal protocol stretches “end-to-end encryption” past its breaking point

February 2026

There is a particular kind of disappointment reserved for when someone you respect does the thing they taught you to distrust. In January 2026, Moxie Marlinspike—the cryptographer who built the Signal protocol and spent a decade evangelizing the principle that users should never have to trust the server—launched Confer.to, a cloud AI chat service. Its headline claim: “truly private AI” powered by “end-to-end encryption.”

The problem is that Confer.to is not end-to-end encrypted in any sense that Moxie himself would have accepted five years ago. And the HackerNews community noticed immediately.

The Delta: What Moxie Knows vs. What Moxie Is Selling

Moxie Marlinspike is not some marketing executive who picked up “E2EE” from a press release. He is the person who defined what the term means for an entire generation of technologists. The Signal protocol he designed is the gold standard precisely because it eliminates the need to trust the server. Your messages are encrypted on your device, decrypted on the recipient’s device, and the server in the middle is cryptographically locked out. That is end-to-end encryption.

Confer.to does something fundamentally different. Your prompt is encrypted on your device and decrypted inside a Trusted Execution Environment (TEE) running on Confer.to’s servers. The TEE is operated by Confer.to, on hardware manufactured by Intel and NVIDIA, verified by Intel’s attestation service. The user controls none of these components. The “end” that decrypts your data is not your device or a device you control—it is a black box inside someone else’s data center.
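The distinction is easy to state concretely. The sketch below is a toy model, not either system's real protocol: it simply asks, for each party in the pipeline, who holds the key that decrypts a given message. All names are illustrative.

```python
# Toy model of the two trust topologies. In true E2EE, only the endpoint
# devices hold the decryption key; in the TEE model, the decrypting "end"
# is an enclave the operator runs in its own data center.
from dataclasses import dataclass, field

@dataclass
class Party:
    name: str
    keys: set = field(default_factory=set)

def who_can_decrypt(message_key: str, parties: list[Party]) -> list[str]:
    """Return the names of every party holding the message key."""
    return [p.name for p in parties if message_key in p.keys]

# Signal-style E2EE: the key exists only on the two endpoint devices.
e2ee = [Party("sender_device", {"k1"}),
        Party("server", set()),              # relays ciphertext only
        Party("recipient_device", {"k1"})]

# TEE-style "E2EE": a server-side component holds the key.
tee = [Party("user_device", {"k2"}),
       Party("operator_enclave", {"k2"}),    # inside the operator's DC
       Party("operator_host", set())]

print(who_can_decrypt("k1", e2ee))  # ['sender_device', 'recipient_device']
print(who_can_decrypt("k2", tee))   # ['user_device', 'operator_enclave']
```

The point of the toy is the second output line: in the TEE topology, a party the user does not control appears in the list of plaintext holders.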

Moxie knows this distinction cold. His entire career has been built on the argument that trusting the server is the fundamental vulnerability in communications security. That is why Signal does not store your messages. That is why Signal spent years fighting to minimize even the metadata it could see. And now, with Confer.to, Moxie is asking you to trust the server—just a special part of it.

What Confer.to Actually Does

To be fair, Confer.to is not doing nothing. The architecture is a genuine attempt to limit exposure. Encrypted prompts are sent via the Noise Protocol Framework into a confidential virtual machine that spans both CPU and GPU (NVIDIA H100 with confidential computing support). Remote attestation allows the client to verify that the TEE is running expected, open-source software. Responses are encrypted back to the user, and the company says data lives only ephemerally inside the enclave.
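The attestation step in that flow amounts to the client refusing to release secrets unless the enclave reports a known-good software measurement. The following is a minimal sketch of that pinned-hash check under simplified assumptions, not Confer.to's actual protocol; in reality the quote is signed by vendor hardware keys and the measurement scheme is far richer.

```python
# Minimal sketch of client-side attestation checking (illustrative only):
# the client pins the hash of the expected open-source enclave image and
# proceeds only if the quoted measurement matches.
import hashlib
import hmac

# Assumed: the published enclave image hashes to this pinned value.
EXPECTED_MEASUREMENT = hashlib.sha256(b"open-source-enclave-image-v1").hexdigest()

def verify_attestation(quoted_measurement: str) -> bool:
    """Release session secrets only if the enclave reports the pinned hash."""
    # compare_digest avoids leaking match position via timing.
    return hmac.compare_digest(quoted_measurement, EXPECTED_MEASUREMENT)

good = hashlib.sha256(b"open-source-enclave-image-v1").hexdigest()
bad = hashlib.sha256(b"tampered-image").hexdigest()
print(verify_attestation(good))  # True
print(verify_attestation(bad))   # False
```

Note what this check does and does not prove: it binds the session to a software image, but it says nothing about side channels in the hardware running that image, a limitation the next section returns to.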

This is better than sending your prompts in plaintext to OpenAI. It is not, however, end-to-end encryption. One HackerNews commenter put it with surgical precision:

“I don’t agree that this is end to end encrypted. For example, a compromise of the TEE would mean your data is exposed. In a truly end to end encrypted system, I wouldn’t expect a server side compromise to be able to expose my data.” — shawnz

When a defender of the service argued that the TEE qualifies as a legitimate “end” because it is the only system that needs access to the data, another commenter delivered the coup de grâce:

“By that logic SSL/TLS is also end-to-end encryption, except it isn’t.” — paxys

That comparison is devastating because it is exactly right. TLS terminates at the server. The server sees your plaintext. The fact that the plaintext lands in a TEE rather than ordinary memory is a meaningful defense-in-depth measure, but it does not change the fundamental trust model: the user is sending data to a system they do not control and hoping it behaves.

The TEE Problem: A History of Broken Promises

The trust placed in TEEs would be more defensible if TEEs had a clean track record. They do not. Intel’s Software Guard Extensions (SGX) have been the subject of a long and growing list of attacks. The website sgx.fail catalogs vulnerability after vulnerability. Side-channel attacks—cache timing, power analysis, speculative execution exploits—have repeatedly demonstrated that data inside enclaves can be exfiltrated without altering the attested measurement. The attestation stays green while the data leaks out.

One commenter did not mince words:

“I am shocked at how quickly everyone is trying to forget that TEE.fail happened, and so now this technology doesn’t prove anything. I mean, it isn’t useless, but DNS/TLS and physical security/trust become load bearing, to the point where the claims made by these services are nonsensical/dishonest.” — saurik

This is the core tension. TEEs are useful as a layer of defense, but they are not the kind of mathematically guaranteed boundary that “end-to-end encryption” implies. When you encrypt a Signal message, the security rests on the hardness of the underlying math. When you send a prompt to a TEE, the security rests on the correctness of silicon designed by Intel, firmware written by NVIDIA, software deployed by Confer.to, and an attestation infrastructure maintained by Intel—all operating under the jurisdiction of a government with a documented history of compelling technology companies to provide access to user data.

The Model Swapping Problem

There is an additional wrinkle that received less attention but deserves more. One commenter discovered that the Confer.to image on GitHub does not appear to include the model weights in its attestation measurement. The weights seem to be loaded from a mounted disk without dm-verity, which means the attestation cannot verify which model is actually running.

“This doesn’t compromise the privacy of the communication… but it exposes users to a ‘model swapping’ attack, where the confer operator makes a user talk to an ‘evil’ model without they can notice it.” — throwaway35636

This matters because a malicious or compromised model could be designed to extract sensitive information through seemingly innocent conversational patterns. If you cannot verify which model you are talking to, the privacy of the channel is only half the story.
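The gap is easy to see in miniature. In the sketch below (illustrative names and hashes, assuming the commenter's reading of the GitHub image is correct), the launch measurement covers only the VM image; the weights disk is mounted afterwards without dm-verity, so two different models produce byte-identical attestations.

```python
# Sketch of the model-swapping gap: if only the VM image is measured,
# the attestation is blind to which model weights get mounted later.
import hashlib

def launch_measurement(vm_image: bytes) -> str:
    # Only the image enters the hash. A weights disk mounted at runtime
    # without dm-verity never affects this value.
    return hashlib.sha256(vm_image).hexdigest()

image = b"attested-inference-stack-v1"

m_honest = launch_measurement(image)  # operator mounts the published model
m_evil = launch_measurement(image)    # operator mounts an "evil" model

print(m_honest == m_evil)  # True: the client cannot tell the two apart
```

The fix, as the commenter implies, is to bring the weights into the measured boundary, for example by mounting them through dm-verity with the root hash included in the attested configuration.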

The Chain of Trust You’re Actually Buying Into

When Confer.to says your data is “end-to-end encrypted,” here is the actual chain of trust you are depending on: that Intel’s hardware has no exploitable flaws or backdoors, that NVIDIA’s GPU confidential computing implementation is sound, that Confer.to has honestly published the correct software hash and operates without malicious insiders, that Intel’s attestation service is secure and uncompromised, and that your client device correctly verifies attestations and protects its own keys.

Each of these is a reasonable assumption in isolation. Taken together, they constitute a trust surface that is categorically different from what “end-to-end encryption” means in any other context. In Signal, the trust surface is: your device, the recipient’s device, and mathematics. In Confer.to, the trust surface is: your device, Intel, NVIDIA, Confer.to, Intel’s attestation service, and the assumption that no government entity has compelled any of them to do anything they should not.
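Because the guarantee holds only if every link holds, the assurances multiply rather than add. A back-of-envelope sketch, with entirely made-up per-link confidence figures, shows how quickly a conjunction of "reasonable assumptions" erodes:

```python
# Back-of-envelope: privacy holds only if EVERY link in the chain holds,
# so per-link confidences multiply. All probabilities are invented.
links = {
    "intel_hardware_sound": 0.99,
    "nvidia_cc_sound": 0.99,
    "confer_honest_and_uncompelled": 0.99,
    "attestation_service_secure": 0.99,
    "client_verifies_correctly": 0.99,
}

p_all_hold = 1.0
for p in links.values():
    p_all_hold *= p

print(round(p_all_hold, 3))  # five 99%-confident links compound to ~0.951
```

Contrast the Signal case, where the chain reduces to the two endpoint devices plus the hardness of the underlying math, and there is no operator link to compel.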

As one commenter summarized:

“The net result is a need to trust Confer’s identity and published releases, at least in the short term as 3rd party auditors could flag any issues in reproducible builds. As I see it, the game theory would suggest Confer remains honest, Moxie’s reputation plays a fairly large role in this.” — azmenak

Game theory and reputation are perfectly fine reasons to use a service. They are not end-to-end encryption.

The Jurisdiction Problem: AWS, the CLOUD Act, and the Elephant in the Server Room

There is one more layer to the trust chain that Confer.to’s marketing conveniently omits. DNS and WHOIS lookups on Confer.to reveal that the service is served through Amazon CloudFront, running on AWS infrastructure under AS16509 (Amazon.com, Inc.), with endpoints resolving to New York, United States. This is not an incidental detail. It places Confer.to squarely within the jurisdiction of the United States CLOUD Act.

The Clarifying Lawful Overseas Use of Data Act, enacted in 2018, amended the Stored Communications Act to require U.S.-based technology companies to provide requested data in response to valid warrants—regardless of where that data is physically stored. The law follows corporate control, not server location. Because Confer.to operates on U.S. infrastructure provided by a U.S. company, federal law enforcement can compel disclosure of any data the service can access.

Now, Confer.to’s defenders might argue that the TEE architecture means the company itself cannot access user data, and therefore has nothing to disclose. This is the strongest version of the argument, and it deserves scrutiny. The CLOUD Act does not grant law enforcement new authority to compel decryption. But it does compel production of any data within the provider’s “possession, custody, or control.” If a government entity can compel changes to the software running inside the TEE—or compel the hardware vendor to weaken the enclave’s protections—the attestation model collapses. The user’s client would verify that the TEE is running “the expected software,” but “the expected software” would itself have been compromised at the source.

This is not a hypothetical. The United States government’s track record on compelling technology companies to provide access is well documented, from the PRISM program revealed in 2013 to the ongoing debates over lawful access to encrypted communications. Intel and NVIDIA, both U.S. companies, are subject to national security orders that can include gag provisions preventing disclosure. AWS, as the infrastructure provider, adds yet another U.S.-jurisdiction entity to the trust chain.

Consider the contrast with Signal. When the FBI served Signal with a subpoena, Signal could produce almost nothing: timestamps of account creation and last connection, and that was it. The architecture made meaningful disclosure impossible. With Confer.to, the architecture makes disclosure difficult but not impossible—and the entire chain of hardware and infrastructure vendors sits within the reach of U.S. legal process.

For users outside the United States, this jurisdictional exposure is particularly significant. The CLOUD Act has already drawn criticism from European data protection authorities for potential conflicts with GDPR, and the German Federal Commissioner for Data Protection has explicitly warned against storing sensitive data with U.S.-based providers. A service marketed as “truly private” that runs entirely on U.S. infrastructure, built on U.S. hardware, operated by a U.S. company, and subject to U.S. legal process is making a promise its architecture cannot keep.

Why This Matters More Than Semantics

Some might argue this is a pedantic debate over terminology. It is not. The entire value proposition of end-to-end encryption is that it removes the need to trust. It is the difference between “we promise not to read your messages” and “we are structurally unable to read your messages.” Moxie built his career on that distinction. He argued, correctly, that promises are worthless in security—only architecture matters.

Now he is selling a promise-based architecture under the banner of end-to-end encryption. The promises are wrapped in silicon and attestation protocols rather than corporate privacy policies, but they are promises nonetheless. The TEE could be compromised. The hardware vendor could be compelled. The attestation infrastructure could be subverted. In none of these scenarios does the user retain control of their data, because the user never had control in the first place.

This is especially concerning because Moxie’s reputation will cause people to lower their guard. If the creator of Signal says something is end-to-end encrypted, most people will take that at face value. They will assume Signal-level privacy guarantees. They will share sensitive information with an AI chatbot believing their data is protected by the same unbreakable mathematics that protects their Signal messages. That belief would be wrong.

A Familiar Pattern in the Privacy Industry

Confer.to is not the only service doing this. Google’s Magic Cue feature started as local-only processing and quietly shifted to cloud processing with attestation. Apple’s Private Cloud Compute uses similar transparency logs. Proton has been criticized for years for marketing that implies stronger guarantees than the architecture provides. The privacy industry has developed a pattern of using technically accurate but practically misleading language to imply stronger protections than actually exist.

One commenter captured the frustration well:

“Products and services in the privacy space have a tendency to be incredibly misleading in their phrasing, framing, and overall marketing… it’s rather unfortunate to see this from ‘Moxie’ as well.” — wutinthewut

What makes Confer.to’s case more galling is that Moxie is not a privacy startup founder who picked up the jargon. He is the person who taught the rest of us what the jargon is supposed to mean.

What Moxie Should Do

Confer.to could be a genuinely useful product. TEE-based inference is a real improvement over conventional cloud AI. But the marketing needs to match the reality. The service should be described as what it is: confidential computing with remote attestation, providing strong but not absolute privacy guarantees against a defined set of threats. The threat model should be published explicitly, including the scenarios in which the architecture fails to protect user data.

Moxie, of all people, should know that clarity about limitations is not a weakness in a security product—it is the foundation of trust. The community that respects him most is the one now asking him to be honest about what Confer.to can and cannot do. He should listen.

Sources: HackerNews discussion thread (item 46619643); Confer.to blog, “Private Inference” (January 2026); Ars Technica coverage; sgx.fail; arXiv:2507.02770. Community quotes attributed to HackerNews usernames.
