Frequently asked questions

Core concepts

Verifiable compute is a way of running confidential compute / TEE workloads so that they can prove, using cryptography and hardware, exactly what software is running on a server. Instead of trusting that a system behaved as intended, you get cryptographic evidence of the code, configuration, dependencies, and kernel that were actually used during execution.

It turns opaque infrastructure into something you can inspect, reproduce, and verify. No black boxes. No guesswork. No relying on a provider's promises.

Verifiable compute solves the fundamental problem of trust in remote systems. When you run code on infrastructure you don't physically control, you're trusting that the operator hasn't modified the software, that the system hasn't been compromised, and that what's running matches what was deployed.

With verifiable compute, you eliminate this trust requirement. You get cryptographic proof of exactly what's running, allowing third parties to independently verify your claims, customers to trust your security posture, and auditors to confirm compliance without taking your word for it.

Confidential compute protects data while it's being processed, but it doesn't tell anyone what code is doing the processing. An enclave could be running malicious software or a backdoored version of your application, and confidential compute alone wouldn't reveal that.

Verifiability completes the picture. It lets external parties confirm not just that data is protected, but that the right software is protecting it. Without verifiability, you're asking people to trust your word about what's running inside the enclave.

Zero-knowledge proof (ZKP) technology lets you prove that a computation was done correctly without revealing the inputs or intermediate steps. ZKPs are powerful for privacy-preserving verification and pre-computation, but they're computationally expensive and only prove the math was right, not what software actually ran.

Verifiable compute proves what code executed, how it was built, and where it ran. It gives you full-stack transparency: the source, the build, the dependencies, the kernel, and the runtime environment. You're not just verifying a result — you're verifying the entire system that produced it.

In other words, these technologies are complementary. ZKP systems running on top of verifiable compute gain stronger trust guarantees: without verifiable compute, the software that generates and handles the proofs remains unverified.

Verifiable compute can prove exactly what code was used to build the software running in an enclave. It ties the source code, which can therefore be reviewed, to the code actually running on the server (in the enclave).

It cannot prove: that the code itself is free of bugs or vulnerabilities, that the logic does what you intend, that the hardware and firmware implementation is secure, or that external dependencies behave correctly. Verifiable compute proves what's running, not that what's running is correct.
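At its core, this linkage is a digest comparison between what you built and what an attestation says is running. The sketch below illustrates the idea only; the function names and the SHA-384 measurement scheme are assumptions for illustration, not Caution's actual API:

```python
import hashlib
import hmac

def measure_image(image_bytes: bytes) -> str:
    """Hypothetical measurement: a SHA-384 digest of the enclave image,
    standing in for the platform's real measurement scheme."""
    return hashlib.sha384(image_bytes).hexdigest()

def verify(local_image: bytes, attested_measurement: str) -> bool:
    """Compare the digest of a locally rebuilt image against the value an
    attestation document claims is running. Constant-time comparison
    avoids leaking how many leading characters matched."""
    return hmac.compare_digest(measure_image(local_image), attested_measurement)
```

If the digests match, the running code is byte-for-byte what you can audit in source form; if they differ, the attestation fails. Note that a matching digest says nothing about whether that code is bug-free.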

Yes, verifiable compute relies on confidential compute and TEE hardware. This is what provides the hardware "trust anchors" for the system. These technologies include AWS Nitro, AMD SEV, Intel TDX, and TPM 2.0.

Using Caution

To join early access, reach out to us at info@caution.co to request an alpha code. Once you have a code, you can register at alpha.caution.co on desktop.

Without rebuilding from source, you're trusting whoever built the image. Caution solves this with two verification modes: reproduce (full verification) and PCR (quick verification).

Reproduce mode is the gold standard. It rebuilds the enclave image from source and compares it against the live attestation, giving you the strongest possible guarantee that the runtime matches the code you can audit.

PCR mode is faster. If you've already done a reproduce verification (or trust someone who has), you can verify future attestations against known PCR values without rebuilding. This requires trusting the source of the PCR file, making it useful for quick checks but not true verification.
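The PCR-mode shortcut amounts to comparing the attested register values against a known-good set from an earlier reproduce verification. A minimal sketch, assuming a simple name-to-hex-string mapping for the PCR file (the structure and names here are illustrative, not Caution's actual format):

```python
import hmac

def pcr_verify(known_pcrs: dict[str, str], attested_pcrs: dict[str, str]) -> bool:
    """PCR-mode check (sketch): compare the PCR values in a live attestation
    against a known-good PCR file. This is only as trustworthy as the source
    of known_pcrs, which is why it's a quick check, not full verification."""
    if known_pcrs.keys() != attested_pcrs.keys():
        return False
    return all(
        hmac.compare_digest(known_pcrs[name], attested_pcrs[name])
        for name in known_pcrs
    )
```

Because there is no rebuild step, this runs in effectively no time, at the cost of trusting whoever produced the known-good values.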

The overhead is negligible. Verifiable compute adds transparency, not runtime cost: the verification and attestation processes run alongside your workload and don't affect its performance. Once a workload begins executing, it runs at native speed, and the attestation endpoint is served alongside it.

Security and trust model

Caution protects against tampering with deployed software, unauthorized modifications by cloud providers or administrators, supply chain attacks where build artifacts are swapped, and situations where operators claim to run one thing but actually run another. It provides cryptographic proof that what's running matches what was built from a specific source.

Verifiable compute does not protect against bugs or vulnerabilities in the code itself, flaws in the hardware TEE implementation, compromised source repositories, or denial of service attacks. It also doesn't verify the correctness of business logic. The security guarantees are about what code runs, not whether that code is secure or correct.

A failed attestation means the running enclave doesn't match what you expected. This could indicate tampering, a deployment error, or a version mismatch. You should treat it as a serious security event: stop interacting with the enclave, investigate the cause, and redeploy from known source if necessary. The appropriate response depends on your threat model and operational procedures.

No. Every deployment is verifiable: enclave images are reproducible (rebuild it yourself and check the hashes), all components are open source, and the CLI runs locally with verification happening on your machine.

Even if Caution's infrastructure were completely compromised, an attacker couldn't deploy malicious code without it being detectable via reproduce verification.

Yes. Caution is fully open source, which is essential for verifiable compute. You can't ask people to trust a verification system they can't inspect. Open source means you can audit the build process, verify our tooling does what we claim, and even run the entire system yourself if you prefer.

Yes. You can deploy Caution on your infrastructure under AGPLv3 or a commercial license.

Alternatively, you can host in your own AWS account with Caution-managed provisioning and deployment.

Learn more about deployment options.

Get started for free

Try Caution for free. Self-host, or join early access for managed on‑premises and fully managed services to get verifiable compute running in minutes.