TL;DR:
Ingonyama has open-sourced a new way to accelerate Zero Knowledge using FPGAs in the AWS cloud. Easy. Accessible. Cheap.

Zero Knowledge (ZK) technology is a fundamental building block for decentralized computing. Its two main applications are privacy-preserving computation and verifiable computation. Specific types of ZK, such as SNARK- and STARK-based systems, offer additional properties including public verifiability, smaller proof sizes, and fast verification, making these kinds of ZK a perfect fit for blockchains, for both scalability and privacy purposes.
At present, ZK tech is being developed and used by leading blockchains (e.g. Filecoin, Aleo), L2s (e.g. Starknet, zkSync) and decentralized applications (e.g. Dark Forest, ZKAttestor).

Unfortunately, nothing great ever comes easily. The prover, responsible for generating the proof, must run a computationally intensive algorithm with significant data blowup along the way. Estimates suggest a factor of up to 10 million in prover overhead while producing the proof, compared to directly running the computation. Today, prover overhead is considered the main computational bottleneck for applied ZK. Without exception, every project built on ZK technology is facing, or will face, this bottleneck, which manifests adversely in latency, throughput, memory, power consumption, or cost.

A unique property of Zero Knowledge computation is that under the hood it runs modular arithmetic over enormous field sizes. This requirement, and how poorly it maps onto CPUs, leads to the conclusion that modern CPU architecture is simply not built to handle this form of computation efficiently. As a result, the need for specialized hardware to accelerate the ZK prover is clear.

This is a GAME-CHANGER in the world of hardware acceleration.
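To make the arithmetic concrete, here is a toy sketch (not Ingonyama's implementation) of the kind of modular arithmetic a ZK prover runs constantly. Real systems work over roughly 256- to 377-bit primes, such as the BLS12-377 scalar field, which do not fit in machine words and force multi-limb arithmetic; the small 61-bit Mersenne prime below is only to keep the example self-contained and runnable.

```rust
// Toy field arithmetic: real ZK fields are far larger and need multi-limb
// representations (often in Montgomery form), which is exactly what CPUs
// struggle with and FPGAs can pipeline efficiently.
const P: u128 = (1 << 61) - 1; // 2^61 - 1, a Mersenne prime (stand-in field)

fn mul_mod(a: u128, b: u128, p: u128) -> u128 {
    // Safe here only because p < 2^64, so a * b fits in u128.
    (a * b) % p
}

fn pow_mod(mut base: u128, mut exp: u128, p: u128) -> u128 {
    // Square-and-multiply: the modular-multiplication pattern provers
    // repeat millions of times per proof.
    let mut acc = 1u128;
    base %= p;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = mul_mod(acc, base, p);
        }
        base = mul_mod(base, base, p);
        exp >>= 1;
    }
    acc
}

fn main() {
    // Sanity check via Fermat's little theorem: a^(p-1) = 1 (mod p).
    let a = 123_456_789u128;
    assert_eq!(pow_mod(a, P - 1, P), 1);
    println!("field arithmetic sanity check passed");
}
```

Scaling this to a 377-bit prime means every single multiplication becomes a sequence of dependent 64-bit multiply-and-carry steps, which is where dedicated hardware pays off.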
Introducing Cloud-ZK
With Cloud-ZK, which Ingonyama has made open source, the benefits of ZK hardware acceleration on FPGAs are accessible to anyone, anywhere, anytime, at a cost comparable to any other standard CPU instance on an hourly basis.

As part of the initial release we have included:

1. An example AFI for BLS12-377 multiscalar multiplication (MSM) on an AWS F1 instance.
2. A Rust library for developers to run variable-size MSM in their code, accelerated in the cloud, using the Ingonyama FPGA-accelerated MSM at any size.
3. A Rust implementation of the AWS XDMA driver, to transfer data between the AWS FPGA and the host CPU.
4. Benchmarks and tests using arkworks for reference.
5. A developer guide for creating your own Amazon FPGA Images (AFIs).

Our FPGA code achieves 4x the baseline of the FPGA MSM competition, where the maximum-prize criterion is 2x :)

As an example application, Ingonyama's AWS F1 instance can be used to accelerate the Aleo prover. In terms of proofs per second, the accelerator can generate up to ~5x as many proofs compared to running on the CPU alone.

In the future, we plan to improve the toolkit in the following ways:

1. Improving the MSM accelerator core: power consumption, speed, functionality (more curves)
2. Adding an NTT accelerator AFI
3. Supporting Ali Cloud and other FPGA-based instances

Follow the Journey on Twitter:
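For readers new to MSM, the operation the AFI accelerates is the sum s_1*G_1 + s_2*G_2 + ... + s_n*G_n of scalars times group elements. Below is a dependency-free toy version using the additive group of integers mod a prime in place of BLS12-377 curve points; production accelerators use bucketed algorithms such as Pippenger's over real elliptic-curve points, but the shape of the computation is the same.

```rust
// Toy MSM: the "group" is (Z_p, +) instead of an elliptic curve, so the
// example stays self-contained; the scalar-multiplication loop mirrors
// double-and-add on curve points.
const P: u64 = 2_147_483_647; // 2^31 - 1, a Mersenne prime

// Scalar multiplication s * g in the additive group, by double-and-add.
fn scalar_mul(mut s: u64, mut g: u64, p: u64) -> u64 {
    let mut acc = 0u64;
    g %= p;
    while s > 0 {
        if s & 1 == 1 {
            acc = (acc + g) % p; // "add" step
        }
        g = (g + g) % p; // "double" step
        s >>= 1;
    }
    acc
}

// MSM: accumulate s_i * G_i over all pairs.
fn msm(scalars: &[u64], points: &[u64], p: u64) -> u64 {
    scalars
        .iter()
        .zip(points)
        .fold(0u64, |acc, (&s, &g)| (acc + scalar_mul(s, g, p)) % p)
}

fn main() {
    let scalars = [3u64, 5, 7];
    let points = [11u64, 13, 17];
    // 3*11 + 5*13 + 7*17 = 33 + 65 + 119 = 217
    assert_eq!(msm(&scalars, &points, P), 217);
    println!("msm = {}", msm(&scalars, &points, P));
}
```

In a real prover, n runs into the millions and each group operation is full elliptic-curve addition over a 377-bit field, which is why offloading MSM dominates the speedup.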
Github:
Blog: https://medium.com/@ingonyama/
Join the team: