3. QAP: The S in Succinct

R1CS to QAP

QAPs are the primary reason ZK-SNARKs are able to be succinct.

What is a Quadratic Arithmetic Program (QAP)?

  • It is the polynomial representation of our R1CS vector form via Lagrange interpolation.

What do we achieve by converting it into the QAP form?

Instead of checking each of the constraints in the R1CS individually, we can now check all of the constraints at the same time by doing the dot product check on the polynomials.

  • A program can be represented as an R1CS, but evaluating it is not succinct, because of the many operations involved in the matrix multiplications.

  • A complex circuit would have thousands of constraints, and therefore thousands of corresponding rows in the R1CS form.

  • Imagine doing row-wise operations on a matrix with thousands of rows. The sheer number of row-by-row checks would be computationally intensive for both proving and verification (a minimal sketch of this row-wise check follows the mapping below).

  • Instead of having to deal with so many rows/constraints, if we could condense it all down to a single polynomial expression, the computational complexity would move from O(n) to O(1).

  • We all know how straightforward it is to evaluate a polynomial. QAP is the polynomial expression of R1CS: compression without any information loss.

                                R1CS —> QAP
    
                     `Cw = Aw * Bw`  —> `(W.a) = (U.a)(V.a)`
     
           # This conversion is achieved by Lagrange interpolation
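
To make the naive side of that arrow concrete, here is a minimal sketch (with placeholder matrices and a placeholder witness, not the ones from this example) of the row-wise R1CS check: every constraint row has to be touched, so the work grows linearly with the number of constraints.

import numpy as np

def r1cs_holds(A, B, C, w):
    # Naive R1CS check: one Hadamard (element-wise) comparison per constraint
    # row, i.e. O(n) work for n constraints.
    return np.array_equal(C @ w, (A @ w) * (B @ w))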

Interpolating A to U

import numpy as np
from scipy.interpolate import lagrange

# arbitrary x-values
# 3 elements per col.
x = np.array([1,2,3])

#  y^2 + 4yx^2 -2 
# Cw = Aw * Bw

# A ---> U 
#| 0  0  0  1  0  0 | 
#| 0  0  1  0  0  0 | 
#| 0  0  0  0  0  4 |  

# interpolation has to be done per column of A
# The zero vectors get turned into the polynomial 0 (this obviously interpolates all the zero values in the column). 
a_3 = np.array([0, 1, 0])
a_4 = np.array([1, 0, 0])
a_5 = np.array([0, 0, 4])

poly_a3 = lagrange(x, a_3)
poly_a4 = lagrange(x, a_4)
poly_a5 = lagrange(x, a_5)

print(poly_a3)     
#-x^2 + 4x - 3

print(poly_a4)
# 0.5x^2 - 2.5x + 3
print(poly_a5)
#2x^2 - 6x + 4
  • Since there are 3 elements per column, we will be interpolating across x = [1,2,3].

  • Only the non-zero columns need to be interpolated; in the case of A, those are the 3rd, 4th, and last columns.

  • Each column is interpolated separately to obtain a polynomial (see the quick sanity check below).
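
As a quick sanity check (reusing the variables from the snippet above), evaluating each interpolated polynomial at x = 1, 2, 3 returns exactly the column it came from, which is what "compression without information loss" means here:

# Each polynomial agrees with its source column at the interpolation points.
for poly, col in [(poly_a3, a_3), (poly_a4, a_4), (poly_a5, a_5)]:
    assert np.allclose(poly(x), col)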

Visually, this is what we are doing with the matrix A

  • Notice that through interpolation, we have transformed the matrix into a polynomial expressed in terms of x

  • Note that the x in the witness vector is not synonymous with the x of the interpolated polynomial

  • Thus, for some given user input of witness values, we simply need to substitute those values into the polynomial, as sketched below.
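
Concretely, (U.a) is just the witness-weighted sum of the column polynomials. The helper below is only a sketch: it assumes the witness entries are ordered exactly like the columns of A, and it relies on the fact that the all-zero columns contribute nothing.

# (U.a): dot product of the witness vector `a` with the interpolated column
# polynomials of A. Only the 3rd, 4th and last columns are non-zero
# (0-indexed entries a[2], a[3], a[5]).
def u_dot_a(a):
    return poly_a3 * a[2] + poly_a4 * a[3] + poly_a5 * a[5]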

A similar process occurs for both matrices B and C. Dotting each set of interpolated column polynomials with the witness vector gives:

  • From A: (U.a) = 29.5x^2 - 87.5x + 61

  • From B: (V.a) = -x^2 + 4x

  • From C: (W.a) = 84.5x^2 - 246.5x + 171

Thus, instead of checking equality between matrices in the form: Cw = Aw * Bw, we can easily check equality between polynomials.

  • In R1CS form the complexity is O(n), where n is the number of constraints

  • We can make it O(1) via QAP.

  • Instead of checking the constraints in the R1CS individually, we can now check all of the constraints at the same time by doing the dot product check on the polynomials.

Cw = Aw * Bw —> (W.a) = (U.a)(V.a)

Common literature tends to represent the resulting polynomials as matrices still. The polynomial that A has been interpolated into is represented as U, seen below:

  • Even though we are presenting this in matrix form, don’t forget this is really just shorthand for stacking polynomials.

  • In a similar vein,

    • C → W

    • B → V

Remember, a here is the witness vector: a vector of values provided by the person trying to create a proof for their claim.
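
As a sketch of that shorthand, U can be stored as a matrix whose rows are the coefficient vectors of the column polynomials computed earlier (the all-zero columns of A become the zero polynomial); the row ordering below simply mirrors the column order of A:

# U as a "stack" of polynomials: one row of coefficients per column of A.
U = np.array([
    [ 0.0,  0.0,  0.0],   # column 1 -> zero polynomial
    [ 0.0,  0.0,  0.0],   # column 2 -> zero polynomial
    [-1.0,  4.0, -3.0],   # column 3 -> -x^2 + 4x - 3
    [ 0.5, -2.5,  3.0],   # column 4 -> 0.5x^2 - 2.5x + 3
    [ 0.0,  0.0,  0.0],   # column 5 -> zero polynomial
    [ 2.0, -6.0,  4.0],   # column 6 -> 2x^2 - 6x + 4
])
# (U.a) is then a single polynomial: np.poly1d(a @ U) for a witness vector a.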


CHECKPOINT

Now that we have converted our matrices into a compressed polynomial form, our O(n) problem becomes O(1).

  • (W.a) = (U.a)(V.a)

  • 84.5x^2 - 246.5x + 171 = (29.5x^2 - 87.5x + 61)(-x^2 + 4x)

The proof is succinct.


Balancing QAP

Let’s see if the interpolated polynomials line up:

(W.a) = (U.a)(V.a)

84.5x^2 - 246.5x +171 = (29.5x^2 - 87.5x + 61)( -x^2 + 4x)

They do not; due to the multiplication on the RHS, the degree is off by 2. We need to add a balancing term to honour the equality.
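
A quick numpy check with the three polynomials above makes the mismatch concrete: the two sides agree at the interpolation points x = 1, 2, 3, but not as polynomials, because the right-hand side has degree 4.

import numpy as np

Ua = np.poly1d([29.5, -87.5, 61.0])    # (U.a)
Va = np.poly1d([-1.0, 4.0, 0.0])       # (V.a)
Wa = np.poly1d([84.5, -246.5, 171.0])  # (W.a)

xs = np.array([1, 2, 3])
assert np.allclose(Wa(xs), Ua(xs) * Va(xs))    # equal at the constraint points
assert (Ua * Va).order == 4 and Wa.order == 2  # but the degrees do not match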

To that end, we will introduce a degree 4 polynomial on the LHS that effectively represents the 0 vector in R1CS land - this way we can balance the equality without meaningfully influencing the values themselves.

If we add a zero vector, we aren’t changing the equality. And if the zero vector is transformed into a degree-four polynomial, then both the LHS and the RHS are degree-four polynomials, making it possible for them to be equal everywhere.

Essentially,

0 + Cw = Aw * Bw → h(x)t(x) + (W.a) = (U.a)(V.a)

h(x)t(x) is the interpolated polynomial representing the 0 vector in R1CS land.

  • Addresses the imbalance in degree of polynomials

  • Since it represents the 0 vector, it does not affect the equality relationship

How do we compute h(x)t(x)?

t(x)

When we say the “zero vector” that does not mean a polynomial y = 0. Rather, it means a polynomial that is zero at x = {1,2,3}, as per our earlier interpolation.

  • t(x): (x-1)(x-2)(x-3)

  • 3 factors, since we used x = 1, 2, 3 as the interpolation points (see the one-liner below).
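
In numpy terms, t(x) can be built directly from its roots, and it vanishes at every x-value we interpolated over:

# t(x) = (x - 1)(x - 2)(x - 3), constructed from its roots.
t = np.poly1d([1, 2, 3], r=True)
assert np.allclose(t(np.array([1, 2, 3])), 0)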

h(x)

We need to multiply t(x) by yet another polynomial, h(x), chosen so that the product h(x)t(x) balances out the equation while still interpolating the zero vector.

h(x) = ((U.a)(V.a) - (W.a)) / t(x)

Here’s a handy theorem: when two non-zero polynomials are multiplied, the roots of the product are the union of the roots of the individual polynomials. Therefore, we can multiply t(x) by anything except zero, and it will still correspond to a zero vector in vector land.
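
Putting the pieces together (continuing with Ua, Va, Wa and t from the snippets above), computing h(x) is a single polynomial division, and the division must come out exact:

# h(x) = ((U.a)(V.a) - (W.a)) / t(x); a non-zero remainder would mean some
# constraint at x = 1, 2 or 3 is not satisfied.
h, remainder = np.polydiv(Ua * Va - Wa, t)
assert np.allclose(remainder, 0)

# The balanced QAP now holds as a polynomial identity, not just pointwise:
# h(x)t(x) + (W.a) = (U.a)(V.a)
assert np.allclose((h * t + Wa).coeffs, (Ua * Va).coeffs)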

Why not just have h(x)t(x) be one polynomial?

  • The fact that t(x) is a public polynomial matters. It forces whatever the prover adds, h(x)t(x), to vanish at x = 1, 2, 3.

  • Otherwise, the prover might pick a polynomial that satisfies the equation, but doesn’t actually correspond to anything from the R1CS.

  • We want to know that the polynomial (U.a)(V.a) - (W.a) has roots at x = 1, 2, 3. If it does, then the polynomial equation is a faithful homomorphism of the underlying R1CS vector math (see the check below).
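
As a final check in the same sketch: if the prover's (U.a)(V.a) - (W.a) did not actually vanish at x = 1, 2, 3 (i.e. some R1CS constraint is violated), dividing by the public t(x) leaves a non-zero remainder, so no valid h(x) exists.

# Tamper with the relation so it no longer vanishes at x = 1, 2, 3:
cheating = Ua * Va - Wa + np.poly1d([1.0])  # off by a constant everywhere
_, remainder = np.polydiv(cheating, t)
assert not np.allclose(remainder, 0)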
