As cryptographic technologies continue to advance and find new and ever more important uses in our lives, the processes these technologies carry out grow ever more complex. While a tremendous amount can be done with simple cryptographic primitives, it is what they can do when combined that is the most exciting.
Even more impressive is the idea that some cryptographic protocols can be designed to express arbitrary computations, much as a hardware description language describes circuits, granting them the power to tackle general-purpose problems. This idea, fittingly called “programmable cryptography,” has the promise of making more complicated actions possible by, to paraphrase Brian Gu, turning the mathematical problem of designing new protocols into the programming problem of combining existing ones.
In this article, we will explore the layers of cryptographic application, from high-level goals to low-level algorithms, to understand where these ideas come from. Then, we will have a look at where they are going.
Before we start, let's take a moment to reflect on the fundamental motivation driving cryptographers to delve into their craft. After all, it is much easier to stay home and not do anything than to work on mathematical proofs that a new protocol is secure, feasible, and a meaningful improvement over existing models.
It is because of the ever-increasing importance of the digital data we store, share, and process that new and improved methods of ensuring privacy and safeguarding that data against tampering are needed. It is the desire to fill that need that gets cryptographers out of bed in the morning.
It is truly staggering to think of how much information is processed online these days. More immediate to most people is how much more time they spend interacting with data now than they did even a few years ago. All of this information they produce, engage with, review, and send is at risk of being spied on, stolen, or manipulated if it is not properly protected.
This is why there is always a need for cryptography. This is why new and improved methods of keeping data private continue to be developed.
Like many other disciplines, cryptography is based on simple concepts that are scaled up as the task becomes more interesting. These simple concepts, often referred to in modern cryptography as “cryptographic primitives,” are basic on their own but can be combined to build something complex.
For example, consider one of the oldest codes: the Caesar cipher. Named after its most famous user, this code produces its ciphertext by shifting each letter of the original message three places back in the alphabet. In this scheme, the word “the” would be written “qeb”: each letter is replaced with the one three spots before it in the English alphabet.
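To make the mechanics concrete, here is a minimal sketch of that shift in Python (the three-places-back shift that turns “the” into “qeb”):

```python
def caesar_encrypt(message: str, shift: int = 3) -> str:
    """Shift each letter `shift` places back in the alphabet."""
    result = []
    for ch in message:
        if ch.isalpha():
            base = ord("a") if ch.islower() else ord("A")
            result.append(chr((ord(ch) - base - shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation untouched
    return "".join(result)

print(caesar_encrypt("the"))  # -> "qeb"
```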
While this code is fairly simple, it is well understood, easy to apply, and not at all experimental. If you need to encrypt data, it will encrypt it. And while it is far from the most secure code in the world, it can be combined with other techniques to make something stronger.
To take another example, the Vigenère cipher encodes a message using several different Caesar ciphers. In this system, each message is combined with a key; let's imagine “eagle” and “lemon,” respectively. The key tells you how many places to shift the letters in the message, but each letter gets a different shift, determined by the position of the corresponding key letter in the alphabet (counting from A = 0). The “L” in lemon tells you to shift the first letter of the message eleven places, the “E” tells you to shift the second letter four places, and so on.
So, “eagle” becomes “peszr.” Without access to the key, it becomes much more difficult to decode the message. While it still retains weaknesses of the Caesar cipher (given enough time, a brute-force search can recover the message), combining existing tools in a new way increases the level of security dramatically.
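The same logic extends naturally to code. Here is a small sketch that reproduces the example above, assuming a lowercase alphabetic message and key, with letters numbered from A = 0:

```python
def vigenere_encrypt(message: str, key: str) -> str:
    """Apply a different Caesar shift to each letter, taken from the key."""
    out = []
    for i, ch in enumerate(message):
        shift = ord(key[i % len(key)]) - ord("a")  # 'l' -> 11, 'e' -> 4, ...
        out.append(chr((ord(ch) - ord("a") + shift) % 26 + ord("a")))
    return "".join(out)

print(vigenere_encrypt("eagle", "lemon"))  # -> "peszr"
```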
As you can probably guess, it is often much, much easier to combine existing ciphers such as these together in new, more complex ways than it is to invent a new system. Caesar died a long time ago, and we are still using his codebook.
Just as the wisdom of older codes persists, much of modern cryptographic technology rests on a similar foundation. Getting a cryptographer to write a new proof that a novel system will keep your digital secrets safe is fantastic, but it is also quite time-consuming, and it isn’t guaranteed to work. On the other hand, cryptographic primitives such as RSA (Rivest-Shamir-Adleman), AES (Advanced Encryption Standard), or digital signature schemes are known to work and can readily be applied to a wide range of problems. For instance, RSA is widely used for secure data transmission, while AES is a standard for encrypting sensitive data. Combined, they can provide new functionality and solve more complex problems than any one of them could alone.
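As a simple illustration of combining primitives, here is a sketch of hybrid encryption using the Python `cryptography` package: fast symmetric AES encrypts the bulk data, and RSA protects the AES key. This is one common pattern, not the only way to combine them:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's RSA key pair (normally generated once and reused).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Encrypt the message itself with fast symmetric AES-GCM...
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"sensitive record", None)

# ...then wrap the short AES key with asymmetric RSA-OAEP.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(aes_key, oaep)

# The recipient reverses both steps with the private key.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"sensitive record"
```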
While combining simple methods together is a great way to make more complex systems, there are limitations to it. Each of these primitives is designed to be good at a particular task, and mistakes made when combining them can leave their weaknesses exposed.
Building upon low-level primitives, mid-level protocols target more advanced features and functionalities. In the following, we will explore some of the most widely adopted and discussed mid-level protocols.
Homomorphic encryption is a protocol that allows encrypted data to be processed without having to decrypt it first. Examples of it exist today, though it is still in its comparatively early phases: fully homomorphic encryption was only shown to be feasible in 2009, with Craig Gentry's first construction. Existing schemes are sometimes limited in which operations can be performed on the encrypted data.
However, the concept is extremely interesting and has many obvious possible applications. Consider how often sensitive yet useful data like medical records or credit information is stolen from the organizations that need access to it to help you. What if it were possible to interact with your encrypted medical information without ever decoding it? The benefits of this improvement to security go without saying.
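To give a taste of the idea, here is a toy sketch in pure Python of textbook RSA's multiplicative homomorphism: multiplying two ciphertexts yields an encryption of the product of the plaintexts. This is an insecure classroom example with tiny numbers, not a usable scheme; modern fully homomorphic systems are far more elaborate:

```python
# Toy textbook RSA with insecure, tiny parameters (illustration only).
p, q = 61, 53
n = p * q                   # 3233
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, coprime to phi
d = pow(e, -1, phi)         # private exponent

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
# Multiply the two ciphertexts without ever decrypting them...
product_ct = (encrypt(a) * encrypt(b)) % n
# ...and the result decrypts to the product of the plaintexts.
assert decrypt(product_ct) == (a * b) % n   # 42
```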
Multi-Party Computation (MPC) is a set of techniques that lets several actors jointly compute a common output while keeping their individual inputs hidden from one another. It is often introduced through Yao's “Millionaires' Problem.”
Imagine that there are two millionaires who want to learn which of them has more money. However, they don’t want to just come out and say what their net worth is. They can use MPC to resolve this problem. The first millionaire feeds their encrypted net worth into a program designed to compare the two values and sends it along to the second. The second millionaire adds their own net worth without being able to see the first value.
They can then both decrypt the output and learn which of them entered the larger value, all without ever seeing either input.
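An actual secure comparison circuit is fairly involved, but the core MPC trick of hiding inputs behind randomness while still computing on them can be sketched with additive secret sharing. In this toy example the parties jointly compute the sum of their inputs rather than a comparison; no single share reveals anything about any input:

```python
import secrets

PRIME = 2**61 - 1  # work modulo a prime so shares look uniformly random

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Each party secret-shares its private input with the others.
inputs = [40_000_000, 75_000_000, 12_000_000]
all_shares = [share(x, 3) for x in inputs]

# Party i locally sums the i-th share of every input; each partial
# sum reveals nothing on its own...
partial = [sum(s[i] for s in all_shares) % PRIME for i in range(3)]

# ...but together the partial sums reconstruct the sum of the inputs.
assert sum(partial) % PRIME == sum(inputs) % PRIME
```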
Lastly, let's look at Zero-Knowledge Proofs (ZKPs). These are likely well known to the reader, as they are widely used, so we will consider them only briefly. A ZKP allows a prover to convince another party, called the verifier, that a statement is true without revealing anything beyond the fact that it is true. Typically, they provide this service to a single user; a person asks for a proof, and they get it. There are a number of ZKP constructions, including zk-SNARKs and zk-STARKs, each with its own advantages and disadvantages.
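As a flavor of the prover/verifier interaction, here is a toy sketch of Schnorr's classic identification protocol, in which the prover convinces the verifier that they know a discrete logarithm x without revealing it (tiny, insecure parameters for readability):

```python
import secrets

# Public parameters: prime p and a generator g of a subgroup of prime order q.
p, q, g = 1019, 509, 4        # toy numbers; real deployments use ~256-bit q
x = secrets.randbelow(q)      # prover's secret
y = pow(g, x, p)              # public value; the claim is "I know x"

# 1. Prover commits to a random nonce.
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2. Verifier sends a random challenge.
c = secrets.randbelow(q)

# 3. Prover responds; s leaks nothing about x because r masks it.
s = (r + c * x) % q

# 4. Verifier checks g^s == t * y^c without ever learning x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```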
As research on these advanced protocols has progressed, the focus has expanded toward developing general-purpose cryptographic protocols. These initiatives aim to show that cryptography can, in principle, support universal computation performed securely and privately. Initially, these endeavors were purely theoretical, prioritizing feasibility over practical implementation efficiency. However, as research has deepened, cryptographers have shifted their attention toward making these concepts practically applicable. They enhance, combine, and invent new protocols and components. Often, the ultimate protocol ends up being a hybrid, leveraging the strengths of multiple approaches. For example, homomorphic encryption schemes may use zero-knowledge range proofs to ensure calculations stay within a valid range. Meanwhile, MPC protocols might incorporate homomorphic elements for executing non-linear operations.
Among the plethora of experimental protocols, some have edged close enough to practical utility that they've paved the way for real-world development. These tools function similarly to compilers, interpreting high-level languages and converting them into circuits that cryptographic protocols can process. This transformation is akin to converting software into CPU register operations or translating Solidity into EVM state transitions. Achieving this compiler-like capability, complete with support for Turing-complete computation, marks the advent of what we call programmable cryptography. While this might sound as though any program can simply be run as-is, the reality is more nuanced. Bit-oriented hash functions, for instance, are expensive to express inside a zero-knowledge proof protocol, whereas hashes built from modular multiplication (arithmetization-friendly designs such as Poseidon or MiMC) are far more efficient. Hence, it's often advisable to steer clear of algorithms like SHA-3 inside a circuit. Moreover, avoiding floating-point calculations is a common practice, as cryptographic protocols predominantly operate within finite fields. Tricks like these exist everywhere when bringing programmable cryptography to life.
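To illustrate what “compiling to a circuit” means, here is a toy sketch of the kind of arithmetic constraint system such tools emit (in the spirit of R1CS): the statement x³ + x + 5 = 35 is flattened into multiplication and addition gates over a finite field, and a witness either satisfies every constraint or fails. Real toolchains automate this flattening, and a real proof system would verify the constraints without revealing the witness:

```python
PRIME = 2**61 - 1  # a prime field; SNARK toolchains use curve-specific fields

# The statement "x**3 + x + 5 == 35" flattened into gates, as a circuit
# compiler would do. The prover supplies the full witness (x, v1, v2).
def generate_witness(x: int) -> dict:
    v1 = (x * x) % PRIME       # gate 1: v1 = x * x
    v2 = (v1 * x) % PRIME      # gate 2: v2 = v1 * x  (so v2 = x**3)
    return {"x": x, "v1": v1, "v2": v2}

def verify(w: dict) -> bool:
    """Each gate becomes one constraint; all must hold for a valid witness."""
    return (
        w["v1"] == (w["x"] * w["x"]) % PRIME and
        w["v2"] == (w["v1"] * w["x"]) % PRIME and
        (w["v2"] + w["x"] + 5) % PRIME == 35
    )

assert verify(generate_witness(3))       # 27 + 3 + 5 == 35
assert not verify(generate_witness(4))
```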
Programmable cryptography is still a new concept, but one that offers the chance to make very complicated problems much simpler. It is easy to speculate on the directions it will take. It is all but certain that attempts will be made to program with all manner of cryptographic tools, though how much success each will find remains to be seen.
However, some of these experiments will work. Some of them will work well, and those that work well will provide powerful functionalities and high levels of security without having to go to the expense of having a cryptographer create a brand-new system for one application. This possibility alone will likely drive a great deal of interest in the field.
The problem of how to do this in a way that works with existing systems will have to be addressed, and it is probable that what gets adopted will largely be whatever works efficiently with the data it needs to handle.
The impact of this technology on data security, privacy, and the broader field of digital security would be difficult to overstate. A great number of complex actions will become easier to implement. While bad programming will cause problems, as it always does, where the technology works, we will see better security and more robust privacy systems.
Perhaps most encouraging of all, it is still fairly early in terms of the uses of this technology. Zero-knowledge proofs were devised in the 1980s, but practical, general-purpose constructions only emerged around 2012. There may be many possible mechanisms and combinations of mechanisms that nobody has dreamed of yet. The next world-shaking idea could arrive tomorrow. We may not even be able to guess what it will do.