diff --git a/img/starknet/query.png b/img/starknet/query.png new file mode 100644 index 0000000..dafe050 Binary files /dev/null and b/img/starknet/query.png differ diff --git a/rfcs/starknet/fri.html b/rfcs/starknet/fri.html index bb4baad..c74e490 100644 --- a/rfcs/starknet/fri.html +++ b/rfcs/starknet/fri.html @@ -272,8 +272,7 @@
To prove that two polynomials and exist and are of degree at most , a prover simply shows using FRI that a random linear combination of and exists and is of degree at most .
-TODO: what if the different polynomials are of different degrees?
-TODO: we do not make use of aggregation here, the way the first layer polynomial is created is sort of transparent here, is it still worth having this section?
+Note that the FRI check might need to take into account the different degree checks that are being aggregated. For example, if one polynomial should be of degree at most but the other polynomial should be of degree at most , then a degree correction needs to happen. We refer to the ethSTARK paper for more details, as this is out of scope for this specification. (As used in the STARK protocol targeted by this specification, it is enough to show that the polynomials are of low degree.)
This means that the verifier computes the queries on at points on the original subgroup. So the queries of the first layer are produced using (assuming no skipped layers).
+After that, everything proceeds as normal (except that the prover now uses the original blown-up trace domain instead of a coset to evaluate and commit to the layer polynomials).
Note that these changes can easily be generalized to work when layers are skipped.
@@ -322,9 +321,11 @@
See the Channel specification for details.
-As part of the protocol, the prover must provide a number of evaluations of the first layer polynomial (based on the FRI queries that the verifier generates).
-We abstract this here as an oracle that magically provides evaluations. It is the responsibility of the user of this protocol to ensure that the evaluations are correct. See the Starknet STARK verifier specification for a concrete usage example.
+As part of the protocol, the prover must provide a number of evaluations of the first layer polynomial (based on the FRI queries that the verifier generates in the query phase of the protocol).
+We abstract this here as an oracle that magically provides evaluations. It is the responsibility of the user of this protocol to ensure that the evaluations are correct (which most likely includes verifying a number of decommitments). See the Starknet STARK verifier specification for a concrete usage example.
+Finally, when all FRI queries have been generated, they are sorted in ascending order.
-TODO: include how we provide the y_value and how we verify the first layer's evaluations still
-TODO: also talk about how the first query is fixed to move away from the coset
-A query (a value within for the log-size of the evaluation domain) can be converted to an evaluation point in the following way.
-First, compute the bit-reversed exponent:
+A query (a value within for the log-size of the evaluation domain) can be converted to an evaluation point in the following way. First, compute the bit-reversed exponent:
Then compute the element of the evaluation domain in the coset (with the generator of the evaluation domain):
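As an illustration, this conversion can be sketched as follows over a toy prime field. The modulus, generator, and coset offset below are illustrative placeholders, not the actual Starknet parameters:

```rust
/// Reverse the lowest `n_bits` bits of `index` (the bit-reversed exponent).
fn bit_reverse(index: u64, n_bits: u32) -> u64 {
    let mut rev = 0u64;
    for i in 0..n_bits {
        rev |= ((index >> i) & 1) << (n_bits - 1 - i);
    }
    rev
}

/// Map a query to the evaluation point `offset * g^bit_reverse(query, log_size)`
/// over a toy field of size `m`, with `g` a generator of the evaluation domain.
fn query_to_point(query: u64, log_size: u32, g: u64, offset: u64, m: u64) -> u64 {
    let mut acc = 1u64;
    let mut base = g % m;
    let mut e = bit_reverse(query, log_size);
    // Square-and-multiply exponentiation.
    while e > 0 {
        if e & 1 == 1 {
            acc = acc * base % m;
        }
        base = base * base % m;
        e >>= 1;
    }
    acc * offset % m
}
```

For example, with the subgroup of order 16 generated by 3 in the field of size 17, query 1 maps to the exponent 8 (its 4-bit reversal) and hence to the point 3^8 mod 17.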
TODO: explain why not just do
+Finally, the expected evaluation can be computed using the API defined in the Verifying the first FRI layer section.
-TODO: refer to the section on the first layer evaluation stuff (external dependency)
-Besides the last layer, each layer verification of a query happens by simply decommitting a layer's queries.
-table_decommit(commitment, paths, leaves_values, witness, settings);
-
+Besides the last layer, each layer verification of a query happens by:
+We illustrate this in the following diagram, pretending that associated evaluations are not grouped under the same path in the Merkle tree commitment (although in practice they are).
+To verify the last layer's query, as the last layer polynomial is received in the clear, simply evaluate it at the queried point 1/fri_layer_query.x_inv_value and check that it matches the expected evaluation fri_layer_query.y_value.
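A sketch of this last-layer check over a toy prime field, taking the already-inverted queried point as input (the names and the field here are illustrative, not the verifier's actual API):

```rust
/// Evaluate the last layer polynomial, given in the clear as `coefficients`
/// (lowest degree first), at `x` over a toy field of size `m` (Horner's rule).
fn eval_poly(coefficients: &[u64], x: u64, m: u64) -> u64 {
    coefficients.iter().rev().fold(0, |acc, &c| (acc * x + c) % m)
}

/// Last-layer check: the evaluation at the queried point (the inverse of
/// the stored `x_inv_value`) must match the expected `y_value`.
fn check_last_layer(coefficients: &[u64], x: u64, y_value: u64, m: u64) -> bool {
    eval_poly(coefficients, x, m) == y_value
}
```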
TODO: As explained in the section on Merkle Tree Decommitment, witness leaf values have to be given as well.
-TODO: link to section on merkle tree
-Each reduction will produce queries to the next layer, which will expect specific evaluations.
+Each query verification (except on the last layer) will produce queries for the next layer, which will expect specific evaluations.
The next queries are derived as:
index / coset_size
point^coset_size
where coset_size is 2, 4, 8, or 16 depending on the layer (but always 2 for the first layer). The coset size is determined by the layer's step size: coset_size = 2^step_size.
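The derivation of the next layer's query can be sketched as follows (toy modular arithmetic, not the actual verifier code):

```rust
/// Derive the next layer's query from the current one: the index is divided
/// by the coset size, and the point is raised to the coset size (modulo `m`).
fn next_query(index: u64, point: u64, coset_size: u64, m: u64) -> (u64, u64) {
    let mut p = 1u64;
    for _ in 0..coset_size {
        p = p * point % m;
    }
    (index / coset_size, p)
}
```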
-The next evaluations expected at the queried layers are derived as:
Queries between layers verify that the next layer is computed correctly based on the current layer .
The next layer is either the direct next layer or a layer further away if the configuration allows layers to be skipped.
Specifically, each reduction is allowed to skip 0, 1, 2, or 3 layers (see the MAX_FRI_STEP
constant).
TODO: why MAX_FRI_STEP=3?
-no skipping:
+The formula with no skipping is:
-1 skipping with the generator of the 4-th roots of unity (such that ):
+The formula with 1 layer skipped with the generator of the 4-th roots of unity (such that ):
As you can see, this requires 4 evaluations of p_{i} at , , , .
-2 skippings with the generator of the 8-th roots of unity (such that and ):
+The formula with 2 layers skipped with the generator of the 8-th roots of unity (such that and ):
As you can see, this requires 8 evaluations of p_{i} at , , , , , , , .
-3 skippings with the generator of the 16-th roots of unity (such that , , and ):
+The formula with 3 layers skipped with the generator of the 16-th roots of unity (such that , , and ):
-as you can see, this requires 16 evaluations of p_{i} at , , , , , , , , , , , , , , , .
-TODO: reconcile with section on the differences with vanilla FRI
+As you can see, this requires 16 evaluations of p_{i} at , , , , , , , , , , , , , , , .
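As a hedged illustration, one common way to write the no-skip reduction is p_{i+1}(x^2) = (p_i(x) + p_i(-x))/2 + beta * (p_i(x) - p_i(-x))/(2x), where beta is the verifier's challenge. The sketch below implements that rule over a toy prime field (the field size and helper names are illustrative, and the exact constants may differ from the formulas of this specification):

```rust
/// Modular exponentiation by squaring.
fn pow_mod(mut base: u64, mut exp: u64, m: u64) -> u64 {
    let mut acc = 1u64;
    base %= m;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % m;
        }
        base = base * base % m;
        exp >>= 1;
    }
    acc
}

/// Multiplicative inverse via Fermat's little theorem (`m` prime).
fn inv(a: u64, m: u64) -> u64 {
    pow_mod(a, m - 2, m)
}

/// One no-skip FRI folding step: from p_i(x), p_i(-x) and the challenge
/// `beta`, compute p_{i+1}(x^2) = (p_i(x)+p_i(-x))/2 + beta*(p_i(x)-p_i(-x))/(2x).
fn fold(p_x: u64, p_neg_x: u64, x: u64, beta: u64, m: u64) -> u64 {
    let even = (p_x + p_neg_x) % m * inv(2, m) % m;
    let odd = (p_x + m - p_neg_x) % m * inv(2 * x % m, m) % m;
    (even + beta % m * odd) % m
}
```

A skip of k layers iterates this rule k+1 times, which is why 2^(k+1) evaluations of p_i are needed.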
TODO: reconcile with constants used for elements and inverses chosen in subgroups of order (the s)
FriDecommitment { values, points }
fri_verify_initial(queries, fri_commitment, decommitment). Takes the FRI queries, the FRI commitments (each layer's committed polynomial), as well as the evaluation points and their associated evaluations of the first layer (in decommitment).
decommitment.
FriLayerQuery { index, y_value, x_inv_value: 3 / x_value }
for each x_value and y_value given in the decommitment. (This is a correction that will help achieve the differences in subsequent layers outlined in Notable Differences With Vanilla FRI.)
(
FriVerificationStateConstant {
n_layers: config.n_layers - 1,
@@ -623,23 +615,22 @@ Full Protocol
step_sizes: config.fri_step_sizes[1:], // the number of reductions at each step
last_layer_coefficients_hash: hash_array(last_layer_coefficients),
},
- FriVerificationStateVariable { iter: 0, queries: fri_queries }
+ FriVerificationStateVariable { iter: 0, queries: fri_queries } // the initial queries
)
fri_verify_step(stateConstant, stateVariable, witness, settings).
stateVariable.iter <= stateConstant.n_layers
(tracked with the iter counter).
fri_verify_final(stateConstant, stateVariable, last_layer_coefficients)
(to be called once iter == n_layers).
It checks that the given last_layer_coefficients matches the hash contained in the state (TODO: only relevant if we created that hash in the first function).
fn fri_verify_final(
stateConstant: FriVerificationStateConstant,
stateVariable: FriVerificationStateVariable,
@@ -663,16 +654,17 @@ Full Protocol
Test Vectors
-TKTK
+Refer to the reference implementation for test vectors.
Security Considerations
+The bit security is currently computed with the following formula:
+n_queries * log_n_cosets + proof_of_work_bits
+
+Where:
-- number of queries?
-- size of domain?
-- proof of work stuff?
-security bits: n_queries * log_n_cosets + proof_of_work_bits
+n_queries is the number of queries generated
+log_n_cosets is the log2 of the blow-up factor
+proof_of_work_bits is the number of proof-of-work bits required of the prover
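The formula can be turned into a trivial helper (a sketch; the names simply follow the formula above):

```rust
/// Bit security estimate: n_queries * log_n_cosets + proof_of_work_bits.
fn bit_security(n_queries: u32, log_n_cosets: u32, proof_of_work_bits: u32) -> u32 {
    n_queries * log_n_cosets + proof_of_work_bits
}
```

For example, 18 queries with a blow-up factor of 2^4 and 30 bits of proof of work yield 18 * 4 + 30 = 102 bits of security (the parameter values here are illustrative, not Starknet's).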