Allow me to add some context. The idea behind "proving" computation relies on the assumption that someone wants to verify it.
Some systems require verification. Most notably: distributed systems that don't rely on a centralized authority to enforce rules.

Historically, you either don't verify compute at all: you don't verify that AWS runs your code correctly in the cloud, nor do you verify the algorithm used to compute your taxes.
Or you verify compute via re-execution (to make sure a program ran correctly, just run it locally on the given inputs): e.g. every participant in the Bitcoin protocol re-executes all transactions on the network. This means that every payment on the platform is verified by peer-to-peer nodes, and anyone who wants to convince themselves that the system runs with integrity can simply boot a node and join the network. This gives robust security characteristics! However, it also creates great friction: every time a transaction is verified on these distributed systems, it is re-executed. This does not scale.

Enter "ZK" (zero-knowledge), a technology that grants verifiability to computation without relying on re-execution: instead, you check a succinct certificate of correct computation. Anything involving succinct proofs of computation is currently labeled ZK. Strictly speaking, ZK can grant either scalability or privacy; the term is usually associated with privacy, but ZK-VMs are mostly used to achieve scalability.

The disruptive advantage of ZK is the separation of two actors: Prover and Verifier. The prover runs a computation once and generates a proof that it did so with integrity (following some pre-defined rules such as "I can't create new money"). Verifiers can then check the integrity of the network with very low hardware requirements.

TL;DR: ZK compresses compute. The first use case for ZK (in terms of scalability) is proving blockchain transactions, greatly increasing the scalability of blockchains. Many more use cases will follow.
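
To make the prover/verifier split concrete, here is a minimal toy sketch in Python (hypothetical function names, and deliberately not an actual ZK proof): the "proof" is just a plain certificate, a factorization, but it shows the key asymmetry. The prover does the expensive work once; every verifier checks a short certificate far more cheaply than re-executing. Real systems replace this toy certificate with SNARK/STARK-style proofs that stay short even for arbitrary programs.

    # Toy prover/verifier asymmetry. NOT zero-knowledge and NOT a real proof
    # system; it only illustrates "check a short certificate instead of
    # re-running the computation".

    def prove_factorization(n: int) -> tuple[int, int]:
        """Prover: expensive trial division to find a nontrivial factorization of n."""
        for p in range(2, int(n ** 0.5) + 1):
            if n % p == 0:
                return p, n // p
        raise ValueError("n is prime, no nontrivial factorization exists")

    def verify_factorization(n: int, proof: tuple[int, int]) -> bool:
        """Verifier: one multiplication and two range checks, no re-execution."""
        p, q = proof
        return 1 < p < n and 1 < q < n and p * q == n

    if __name__ == "__main__":
        n = 1_000_003 * 1_000_033              # a large composite number
        proof = prove_factorization(n)         # slow: done once, by the prover
        assert verify_factorization(n, proof)  # fast: done by every verifier
        print(f"verified: {n} = {proof[0]} * {proof[1]}")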