We’ve built the digital economy on blind trust: trusting vendors, contracts, and terms of service instead of verifying what software actually does. Trust is not a security property. Verifiable compute changes that: you can get cryptographic proof that the code running on a server matches its auditable source. With major regulatory deadlines arriving in 2026, this technology is more relevant than ever.
Table of contents
- The question no one asks
- We delegate trust constantly
- We hand over data and hope for the best
- Trust is not security
- What transparency actually means
- Confidential compute changes the game
- Why this matters beyond security
The question no one asks
When was the last time you actually verified that a piece of software does what it claims? Not read the terms of service. Not taken a vendor’s word for it. Actually verified it.
For most people the answer is never. And that’s the problem.
We delegate trust constantly
The default model for security today is delegation. You buy cybersecurity insurance. You pay for proprietary tools. You sign contracts with vendors. In every case, you’re paying someone else to make the problem go away.
This works until it doesn’t. And it often doesn’t.
SolarWinds was supposed to be the company that kept others secure. They were the leading security and IT management vendor, trusted by Fortune 500 companies and U.S. government agencies alike. Then in 2020, attackers injected a backdoor into SolarWinds’ own software updates. Because none of their customers could verify what code they were actually running, the compromise spread silently through trusted update channels to thousands of organisations. The company whose entire job was security became the attack vector.
This isn’t an isolated case. It’s the natural consequence of a system built on blind trust.
We hand over data and hope for the best
Think about how many times a day you send data to services you can’t inspect.
You type a prompt into a chatbot. What happens to that data? Is it logged? Used for training? Shared with third parties? You have no way to know. The terms of service say one thing, but terms of service are a legal obligation, not a technical constraint. They describe what a company promises to do, not what the software actually does.
You send a message to a friend over a chat platform. Once you hit send, all you know is that your message went to some server at some IP address. You have no idea how it’s stored, who can read it, or whether the encryption the company advertises is real.
You store files in the cloud. You use a password manager. You submit medical information through a health portal. In each case, you’re trusting that the software behind the interface behaves the way someone told you it would. You have no way to check.
The same problem exists at every scale. A financial institution deploys an AI model to analyse client data. What code is actually running on that server? Is it the model they audited? Has it been modified since deployment? The vendor’s documentation says one thing, but documentation is a promise, not a proof.
We’ve gotten used to this because it’s been the only option. Software services are black boxes. Some people try to protect themselves with privacy-preserving tools like VPNs or Tor. Others just trust blindly because they have no alternative. Neither group can actually verify what the services they depend on are doing with their data.
Trust is not security
There’s a widespread belief that well-funded proprietary tools are inherently more secure than open alternatives. That belief doesn’t hold up. Funding can buy the resources needed to do security well, but it doesn’t produce security by default.
If you can’t inspect it, you can’t verify it. If you can’t verify it, you’re just trusting. And trust is not a security property; it’s the absence of one. Verifiability is a prerequisite for any reasonable level of security.
Legal frameworks help, but they’re reactive. They punish breaches after they happen. They don’t prevent them. A contract that says “we won’t misuse your data” does very little in practical terms to stop software from misusing your data. Only the technical controls in the system can do that.
What transparency actually means
Real transparency means being able to verify what software is running on a server, what it’s capable of, and what it does with data you send it. Not by reading a blog post or a privacy policy, but by inspecting the actual code and proving it matches what’s deployed.
Almost no system works this way today. When you interact with a web service, you’re interacting with a black box. You send a request and get a response. Everything in between is invisible to you.
Public blockchains got this right in one narrow domain: every transaction is verifiable, every state change is auditable. But blockchains are impractical for most software. The question is whether we can bring a similar level of verifiability to general-purpose computing. It turns out we can.
Confidential compute changes the game
Confidential compute hardware, specifically secure enclaves combined with remote attestation, makes it possible to provide cryptographic proof of what software is running behind a given domain or IP. Combined with full-source bootstrapping and reproducible builds, this means anyone can independently verify exactly what code a server is executing.
This isn’t theoretical. It’s how Caution works today. Deploy software to an enclave, and anyone can rebuild the image from source, compare it against the live attestation, and get cryptographic proof that the running code matches the auditable source.
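The verification step above can be sketched in a few lines. This is a simplified illustration, not Caution’s actual implementation: the function names and the use of a plain SHA-256 digest as the “measurement” are assumptions for clarity. Real attestation schemes (SGX, SEV-SNP, Nitro) measure the loaded memory state rather than the raw image file, and a real verifier must first validate the hardware vendor’s certificate chain over the attestation report.

```python
import hashlib
import hmac

def measurement_of(image_bytes: bytes) -> str:
    """Digest standing in for an enclave measurement of a reproducibly
    built image. Hypothetical simplification: real schemes hash the
    enclave's loaded memory layout, not the raw image file."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_attestation(rebuilt_image: bytes, attested_measurement: str) -> bool:
    """Compare the measurement of a locally rebuilt image against the
    measurement reported in the enclave's (already signature-checked)
    attestation document."""
    expected = measurement_of(rebuilt_image)
    # Constant-time comparison avoids leaking digest prefixes via timing.
    return hmac.compare_digest(expected, attested_measurement)

# Rebuild the image from audited source (a reproducible build), then
# check it against the measurement the live server attests to.
image = b"deterministic build output"
attested = measurement_of(image)  # what an honest enclave would report
print(matches_attestation(image, attested))             # True
print(matches_attestation(b"tampered build", attested))  # False
```

The reproducible build is what makes this check meaningful: if the build were not deterministic, an honest rebuild from the audited source could never match the attested measurement, and the comparison would prove nothing.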
The shift is fundamental: from “trust us” to “verify it yourself.” Users no longer have to hope that companies are telling the truth about how their data is handled. Companies no longer have to hope that the code running in their mission-critical systems is what they expect it to be.
Regulatory deadlines are making this urgent. Multiple major compliance frameworks are reaching enforcement milestones simultaneously. The EU AI Act’s obligations for most high-risk AI systems take effect August 2, 2026, requiring organisations in healthcare, finance, government, and critical infrastructure to demonstrate how their workloads operate, not just claim they are secure. In parallel, the HIPAA Security Rule overhaul (expected to be finalised in 2026, pending regulatory approval) introduces mandatory, prescriptive cybersecurity controls across the entire healthcare sector for the first time, with stricter audit requirements and faster breach notification timelines. Organisations across multiple verticals now face hard deadlines with real enforcement consequences. Verifiable compute is no longer a nice-to-have. It is an essential tool for meeting compliance requirements.
Why this matters beyond security
Verifiability is a missing building block. Not just for security, but for individual freedom and the entire digital economy.
The ability to verify what software does is a prerequisite for trust in a digital world. Without it, even the most carefully architected systems are incomplete. You can choose your own tools, control your own data, build on open standards, but the moment you interact with a service you can’t inspect, you’re back to trusting someone else. Verifiable compute closes that gap. It gives any system the one thing that’s been missing: a way to prove that remote software respects the rules it claims to follow.
This is also why the technology has to be truly open source. Not open core with the important parts behind a paywall. Fully open, and auditable by anyone. If the goal is to remove the need for blind trust, the tool that does it can’t require blind trust either. Anything less would be a contradiction.
We think of this as infrastructure for the open internet. The same way public key cryptography gave individuals the power to communicate privately, verifiable compute gives them the power to interact with services confidently. It’s a primitive that makes other freedoms possible.