Know what runs on a server
No more black boxes. Caution gives you cryptographic proof, total portability and minimal complexity.
Problem: An enclave without verifiability is still a black box. Attestation reports a hash, not whether it matches your source. You cannot confirm what the enclave is running.
Problem: Trusting one type of hardware creates a single point of failure. All companies, including hardware manufacturers, are susceptible to a wide range of attacks.
Remove single points of failure
Reproducible stack down to the kernel
Create enclave images with a full-source-bootstrapped, fully deterministic toolchain so the software can be fully reproduced.
This allows one to inspect the code used to build an image, hash it, and compare it to the hashes provided by the TEE attestation.
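For example, on AWS Nitro (the hardware Caution supports today), a generic sketch of that comparison looks like the following. This illustrates the underlying idea rather than Caution's own tooling, and the image and file names are placeholders.

    # Build the enclave image from a container image you built and reviewed locally.
    # nitro-cli prints the measurements (PCR0/PCR1/PCR2) of the resulting .eif file.
    nitro-cli build-enclave --docker-uri myapp:latest --output-file myapp.eif

    # With a deterministic toolchain, anyone rebuilding from the same source gets a
    # byte-identical image, and therefore the same PCR0 to compare against the value
    # reported in the live enclave's attestation document.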
Multi-hardware, multi-cloud resilience
Leverage a diverse set of TEE hardware for isolation and attestations to mitigate single points of failure.
Then leverage seamless multi-cloud workload deployments for never-before-seen levels of resilience. We distribute risk at the network level; why not at the hardware level?
From the security engineers trusted by industry leaders
Fast, verifiable deployments
Replace months of custom infrastructure and security engineering with one unified workflow that runs in minutes. Deploying to a TEE has never been this fast.
Verifiable compute, explained
Learn what verifiable compute is, how it replaces blind trust with independent proof, and how it closes the security gaps that current TEE solutions leave open.
Get early access
6-minute overview by Caution co-founder Anton Livaja
Deploy in minutes
From source to a verifiable enclave in minutes.
Initialize
Run caution init to capture the build environment and lock it for reproducible enclave builds.
Deploy
Push with git push caution main. Caution builds a reproducible enclave image and provisions the TEE.
Verify
Run caution verify --reproduce to rebuild the image, compare hashes, and confirm exactly what the enclave is running.
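Put together, the three steps above look like this in a shell (only the commands named on this page are shown; any additional flags or output would be illustrative):

    caution init                # capture and lock the build environment
    git push caution main       # build a reproducible enclave image and provision the TEE
    caution verify --reproduce  # rebuild locally, compare hashes, confirm what's running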
One workflow, any TEE
Caution runs on AWS Nitro today, with Intel TDX, AMD SEV-SNP, and TPM 2.0 attestations coming in 2026.
Learn about multi-hardware attestation
Verifiably run VPNs
Swipe to explore real-world applications of verifiable compute.
Essentially all server-side computation requires trusting the operator. There's no way to verify what code is actually running or how data is being handled.
Run any workload in a verifiable manner to cryptographically prove what software is running on a server. The technology is finally here. Use it.
Users must take a VPN provider's word they won't log traffic, weaken encryption, or run outdated software.
Run VPNs in a verifiable manner to have cryptographic evidence there's no logging or tampering.
Users can't see how oracle data is sourced or computed. Operators could alter inputs, swap feeds, or run patched binaries that bias results.
Run oracles in a verifiable manner to cryptographically prove the code, configuration, and data flows behave as intended.
Remote LLMs could swap models, tweak parameters, modify weights, or capture and leak input data without visibility.
Run LLMs in a verifiable manner to cryptographically prove the model, weights, parameters, and runtime are behaving as intended.
A compromised Tor node could log traffic, inject headers, downgrade cryptographic algorithms, or run unpatched software without detection.
Run Tor nodes in a verifiable manner to cryptographically prove that the node runs trusted code, free of backdoors.
Node operators can alter the behavior of their nodes as they see fit, which can introduce unexpected behavior into a network.
Run nodes in a verifiable manner to cryptographically prove the exact version of software that's being used.
Keep learning about verifiable compute
Explore our latest posts to understand the platform, the ideas behind it, and the problems verifiable compute solves. An email newsletter is coming soon.
Read blog
Open source, not open core
Caution is fully open source. You get the entire platform with nothing held back. A hosted managed service will be available in Q1 2026.
Frequently asked questions
Core Concepts
Verifiable compute is a way of running confidential compute / TEE workloads so that they can prove exactly what software is running on a server, using cryptography and hardware. Instead of trusting that a system behaved as intended, you get cryptographic evidence of the code, configuration, dependencies, and kernel that were actually used during execution.
It turns opaque infrastructure into something you can inspect, reproduce, and verify. No black boxes. No guesswork. No relying on a provider's promises.
Verifiable compute solves the fundamental problem of trust in remote systems. When you run code on infrastructure you don't physically control, you're trusting that the operator hasn't modified the software, that the system hasn't been compromised, and that what's running matches what was deployed.
With verifiable compute, you eliminate this trust requirement. You get cryptographic proof of exactly what's running, allowing third parties to independently verify your claims, customers to trust your security posture, and auditors to confirm compliance without taking your word for it.
Confidential compute protects data while it's being processed, but it doesn't tell anyone what code is doing the processing. An enclave could be running malicious software or a backdoored version of your application, and confidential compute alone wouldn't reveal that.
Verifiability completes the picture. It lets external parties confirm not just that data is protected, but that the right software is protecting it. Without verifiability, you're asking people to trust your word about what's running inside the enclave.
Zero-knowledge proof (ZKP) technology lets you prove that a computation was done correctly without revealing the inputs or intermediate steps. ZKPs are powerful for privacy-preserving verification and pre-computation, but they're computationally expensive and only prove the math was right, not what software actually ran.
Verifiable compute proves what code executed, how it was built, and where it ran. It gives you full-stack transparency: the source, the build, the dependencies, the kernel, and the runtime environment. You're not just verifying a result — you're verifying the entire system that produced it.
In other words, these technologies are complementary. ZKP systems running on top of verifiable compute carry stronger trust guarantees, because without verifiable compute the software generating the proofs can itself be tampered with.
Verifiable compute can prove exactly what code was used to build the software running in an enclave. It ties the source code, which can therefore be reviewed, to the code that's actually running on the server (in the enclave).
It cannot prove: that the code itself is free of bugs or vulnerabilities, that the logic does what you intend, that the hardware and firmware implementation is secure, or that external dependencies behave correctly. Verifiable compute proves what's running, not that what's running is correct.
Yes, verifiable compute relies on confidential compute and TEE hardware; this is what provides the hardware "trust anchors" for the system. These technologies include AWS Nitro, AMD SEV-SNP, Intel TDX, and TPM 2.0.
Using Caution
Caution is currently available via closed alpha. We're working closely with initial users to refine the experience. Learn more on the Early Access page.
Without rebuilding from source, you're trusting whoever built the image. Caution solves this with two verification modes: reproduce (full verification) and PCR (quick verification).
Reproduce mode is the gold standard. It rebuilds the enclave image from source and compares it against the live attestation, giving you the strongest possible guarantee that the runtime matches the code you can audit.
PCR mode is faster. If you've already done a reproduce verification (or trust someone who has), you can verify future attestations against known PCR values without rebuilding. This requires trusting the source of the PCR file, making it useful for quick checks but not true verification.
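As a rough sketch of what PCR-mode checking amounts to, assuming you recorded known-good PCR values after a full reproduce run (the file names and formats below are hypothetical, not Caution's actual flags or files):

    # Compare a previously recorded PCR0 against the PCR0 reported in a fresh attestation.
    expected=$(jq -r '.Measurements.PCR0' known-pcrs.json)   # saved after a full reproduce verification
    actual=$(jq -r '.Measurements.PCR0' attestation.json)    # parsed from the live attestation document
    [ "$expected" = "$actual" ] && echo "PCR0 matches" || echo "PCR0 MISMATCH"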
The overhead is negligible. Verifiable compute adds transparency, not runtime cost: the verification and attestation processes run alongside your workload and don't affect its performance. Once a workload begins executing, it runs at native speed, and the attestation endpoint simply runs alongside it.
Security and Trust Model
Caution protects against tampering with deployed software, unauthorized modifications by cloud providers or administrators, supply chain attacks where build artifacts are swapped, and situations where operators claim to run one thing but actually run another. It provides cryptographic proof that what's running matches what was built from a specific source.
Verifiable compute does not protect against bugs or vulnerabilities in the code itself, flaws in the hardware TEE implementation, compromised source repositories, or denial of service attacks. It also doesn't verify the correctness of business logic. The security guarantees are about what code runs, not whether that code is secure or correct.
A failed attestation means the running enclave doesn't match what you expected. This could indicate tampering, a deployment error, or a version mismatch. You should treat it as a serious security event: stop interacting with the enclave, investigate the cause, and redeploy from known source if necessary. The appropriate response depends on your threat model and operational procedures.
No. Every deployment is verifiable: enclave images are reproducible (rebuild them yourself and check the hashes), all components are open source, and the CLI runs locally with verification happening on your machine.
Even if Caution's infrastructure were completely compromised, an attacker couldn't deploy malicious code without it being detectable via reproduce verification.
Yes. Caution is fully open source, which is essential for verifiable compute. You can't ask people to trust a verification system they can't inspect. Open source means you can audit the build process, verify our tooling does what we claim, and even run the entire system yourself if you prefer.
Yes. Caution can be self-hosted entirely on your own infrastructure. You run the build system, the deployment tools, and the enclaves on hardware you control. This is important for organizations that can't rely on external services or need to maintain complete control over their verification infrastructure.