moving things around + hash-based commitment spec
mimoo committed Oct 16, 2024
1 parent 5efdb03 commit a7a7ff0
Showing 4 changed files with 98 additions and 104 deletions.
3 changes: 2 additions & 1 deletion source/starknet/channel.md
@@ -1,9 +1,10 @@
---
title: "Starknet Channel"
title: "Starknet Channels for Fiat-Shamir Instantiation"
abstract: "TKTK"
sotd: "none"
shortName: "starknet-channel"
editor: "David Wong"
tags: ["starknet", "fiat-shamir"]
---

## Overview
95 changes: 18 additions & 77 deletions source/starknet/fri.md
@@ -8,6 +8,7 @@ abstract: "<p>The <strong>Fast Reed-Solomon Interactive Oracle Proofs of Proximi
sotd: "none"
shortName: "starknet-fri"
editor: "David Wong"
tags: ["starknet", "fri"]
---

## Overview
@@ -331,13 +332,11 @@ TODO: why the alternate use of hash functions?

See the [Channel](channel.html) specification for more details.

### Evaluation of the first FRI layer
### Evaluations of the first FRI layer

As part of the protocol, the prover must provide a number of evaluations of the first layer polynomial $p_0$. This is abstracted in this specification as the function `eval_oods_polynomial` which acts as an oracle from FRI's perspective.
As part of the protocol, the prover must provide a number of evaluations of the first layer polynomial $p_0$ (based on the FRI queries that the verifier generates).

TODO: not a very satisfying explanation

Note that this function is not fixed here, as the polynomial being "tested" could be computed in different ways. See the [Starknet STARK verifier specification](stark.html) for a concrete example (and for an explanation of why the function is named this way).
We abstract this here as an oracle that magically provides evaluations; it is the responsibility of the user of this protocol to ensure that the evaluations are correct. See the [Starknet STARK verifier specification](stark.html) for a concrete usage example.
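To make the abstraction concrete, the oracle can be thought of as an opaque evaluation callback. The sketch below is purely illustrative: the trait name, the toy polynomial, and the use of plain `u64` arithmetic are all assumptions, since the real protocol evaluates over the Starknet prime field.

```rust
// Hypothetical sketch (names and types are illustrative, not from the
// specification): the first-layer evaluations come from an opaque oracle,
// which FRI treats as a black box.
trait FirstLayerOracle {
    /// Returns the evaluation of the first layer polynomial p_0 at `point`.
    fn eval(&self, point: u64) -> u64;
}

/// Toy oracle over plain integers: p_0(x) = 3x^2 + 2x + 1.
/// A real instantiation would evaluate over the Starknet prime field
/// (see the STARK verifier specification for the concrete construction).
struct ToyOracle;

impl FirstLayerOracle for ToyOracle {
    fn eval(&self, point: u64) -> u64 {
        3 * point * point + 2 * point + 1
    }
}

fn main() {
    let oracle = ToyOracle;
    // The verifier queries the oracle at the points derived from the FRI queries.
    let points = [0u64, 1, 2];
    let values: Vec<u64> = points.iter().map(|&p| oracle.eval(p)).collect();
    assert_eq!(values, vec![1, 6, 17]);
}
```

FRI only consumes the returned `values`; checking that they are consistent with the committed first layer is out of scope for this protocol.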

## Constants

@@ -450,64 +449,6 @@ TODO: validate(cfg, log_n_cosets, n_verified_friendly_commitment_layers):
* TODO: why is log_n_cosets passed? and what is it? (number of additional cosets with the blowup factor?)
* where `log_expected_input_degree = sum_of_step_sizes + log_last_layer_degree_bound`

## Commitments

Commitments of polynomials are done using [Merkle trees](). The Merkle trees can be configured to hash some parameterized number of the lower layers using a circuit-friendly hash function (Poseidon).

* TODO: why montgomery form?

### Table commitments

A table commitment in this context is a vector commitment where leaves are potentially hashes of several values (tables of multiple columns and a single row).

### Vector commitments

A vector commitment is simply a Merkle tree.

TODO: diagram.

![vector commit](/img/starknet/fri/vector_commit.png)

### Vector membership proofs

A vector decommitment/membership proof must provide a witness (the sibling nodes missing to compute the root of the Merkle tree) ordered in a specific way. The following algorithm dictates in which order the node hash values provided in the proof are consumed:

![vector decommit](/img/starknet/fri/vector_decommit.png)

### Note on committing multiple evaluations under the same leaf

* the following array contains all the 16th roots of unity, handily ordered
* that is, the first value represents the subgroup of order 1, the first two values represent the subgroup of order 2, the first four values represent the subgroup of order 4, and so on
* furthermore, these values are chosen in relation to how evaluations are ordered in a leaf of a commitment
* each value tells you exactly what to multiply 1/(something * x) by in order to obtain 1/x
* TODO: but wait, how is inv_x obtained... that doesn't make sense no?
* it seems like the following values are used to "correct" the x value depending on where x pointed at

```
array![
0x1,
0x800000000000011000000000000000000000000000000000000000000000000,
0x625023929a2995b533120664329f8c7c5268e56ac8320da2a616626f41337e3,
0x1dafdc6d65d66b5accedf99bcd607383ad971a9537cdf25d59e99d90becc81e,
0x63365fe0de874d9c90adb1e2f9c676e98c62155e4412e873ada5e1dee6feebb,
0x1cc9a01f2178b3736f524e1d06398916739deaa1bbed178c525a1e211901146,
0x3b912c31d6a226e4a15988c6b7ec1915474043aac68553537192090b43635cd,
0x446ed3ce295dda2b5ea677394813e6eab8bfbc55397aacac8e6df6f4bc9ca34,
0x5ec467b88826aba4537602d514425f3b0bdf467bbf302458337c45f6021e539,
0x213b984777d9556bac89fd2aebbda0c4f420b98440cfdba7cc83ba09fde1ac8,
0x5ce3fa16c35cb4da537753675ca3276ead24059dddea2ca47c36587e5a538d1,
0x231c05e93ca34c35ac88ac98a35cd89152dbfa622215d35b83c9a781a5ac730,
0x00b54759e8c46e1258dc80f091e6f3be387888015452ce5f0ca09ce9e571f52,
0x7f4ab8a6173b92fda7237f0f6e190c41c78777feabad31a0f35f63161a8e0af,
0x23c12f3909539339b83645c1b8de3e14ebfee15c2e8b3ad2867e3a47eba558c,
0x5c3ed0c6f6ac6dd647c9ba3e4721c1eb14011ea3d174c52d7981c5b8145aa75,
]
```

* that is, if x pointed at the beginning of a coset, then we don't need to correct it (the first evaluation committed to contains x)
* but if x pointed at the first value, it actually points to an evaluation of -x, so we need to correct the -x we have by multiplying by -1 again so that we get x (or so that -1/x becomes 1/x, same thing)
* if x points to the 2 value, then

## Protocol

The FRI protocol is split into two phases:
@@ -749,22 +690,22 @@ struct FriVerificationStateVariable {

We give more detail to each function below.

**`fri_commit(channel, cfg)`**.
**`fri_commit(channel)`**.

1. Take a channel with a prologue (see the [Channel](#channel) section). A prologue contains any context relevant to this proof.
1. Produce the FRI commits according to the [Commit Phase](#commit-phase) section.
2. Produce the proof of work according to the [Proof of Work](#proof-of-work) section.
3. Generate `n_queries` queries in the `eval_domain_size` according to the [Generating Queries](#generating-the-first-queries) section.
4. Convert the queries to evaluation points following the [Converting A Query To An Evaluation Point](#converting-a-query-to-an-evaluation-point) section, producing `points`.
5. Evaluate the first layer at the queried `points` using the external dependency (see [External Dependencies](#external-dependencies) section), producing `values`.
6. Produce the fri_decommitment as `FriDecommitment { values, points }`.

**`fri_verify_initial(queries, fri_commitment, decommitment)`**.

* enforce that the number of queries matches the number of values to decommit
* enforce that last layer has the right number of coefficients (TODO: how?)
* compute the first layer of queries `gather_first_layer_queries` as `FriLayerQuery { index, y_value, x_inv_value: 3 / x_value }` for each `x_value` and `y_value`
* initialize and return the two state objects
2. Produce the FRI commits according to the [Commit Phase](#commit-phase) section.
3. Produce the proof of work according to the [Proof of Work](#proof-of-work) section.
4. Generate `n_queries` queries in the `eval_domain_size` according to the [Generating Queries](#generating-the-first-queries) section.
5. Convert the queries to evaluation points following the [Converting A Query To An Evaluation Point](#converting-a-query-to-an-evaluation-point) section, producing `points`.
6. Evaluate the first layer at the queried `points` using the external dependency (see [External Dependencies](#external-dependencies) section), producing `values`.
7. Produce the fri_decommitment as `FriDecommitment { values, points }`.

**`fri_verify_initial(queries, fri_commitment, decommitment)`**. Takes the FRI queries, the FRI commitments (each layer's committed polynomial), as well as the evaluation points and their associated evaluations of the first layer (in `decommitment`).

* Enforce that each query has a matching evaluation point, and an evaluation of the first layer at that point, in the given `decommitment`.
* Enforce that the last layer has the right number of coefficients, as expected by the FRI configuration (see the [FRI Configuration](#fri-configuration) section).
* Compute the first layer of queries as `FriLayerQuery { index, y_value, x_inv_value: 3 / x_value }` for each `x_value` and `y_value` given in the `decommitment`.
* Initialize and return the two state objects.
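As a toy illustration of the `x_inv_value: 3 / x_value` computation above, the following sketch builds the first layer of queries over a small prime field (p = 97). The field, the tuple encoding of the decommitment, and the function names are illustrative assumptions, not part of the specification.

```rust
// Illustrative sketch over a toy prime field (p = 97); the real protocol
// works over the Starknet prime field. `FriLayerQuery` mirrors the structure
// named in the specification; the constant 3 comes from the formula
// `x_inv_value: 3 / x_value` above.
const P: u64 = 97;

/// Modular exponentiation, used for inversion via Fermat's little theorem.
fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % P;
        }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}

/// Inverse of `x` modulo the prime P (x must be nonzero).
fn inv_mod(x: u64) -> u64 {
    pow_mod(x, P - 2)
}

#[derive(Debug, PartialEq)]
struct FriLayerQuery {
    index: u64,
    y_value: u64,
    x_inv_value: u64,
}

/// Build the first layer of queries from decommitted (index, x, y) triples.
fn gather_first_layer_queries(queries: &[(u64, u64, u64)]) -> Vec<FriLayerQuery> {
    queries
        .iter()
        .map(|&(index, x_value, y_value)| FriLayerQuery {
            index,
            y_value,
            // 3 / x_value in the field
            x_inv_value: 3 * inv_mod(x_value) % P,
        })
        .collect()
}

fn main() {
    let layer = gather_first_layer_queries(&[(0, 5, 42), (1, 7, 13)]);
    for (q, x) in layer.iter().zip([5u64, 7]) {
        // sanity check: x_inv_value * x == 3 (mod P)
        assert_eq!(q.x_inv_value * x % P, 3);
    }
}
```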

```rust
(
66 changes: 66 additions & 0 deletions source/starknet/polynomial_commitment.md
@@ -0,0 +1,66 @@
---
title: "Starknet Merkle Tree Polynomial Commitments"
abstract: "TKTK"
sotd: "none"
shortName: "starknet-commit"
editor: "David Wong"
tags: ["starknet", "PCS", "Merkle tree", "hash-based commitments"]
---

## Overview

Commitments of polynomials are done using [Merkle trees](). The Merkle trees can be configured to hash some parameterized number of the lower layers using a circuit-friendly hash function (Poseidon).

* TODO: why montgomery form?

### Table commitments

A table commitment in this context is a vector commitment where leaves are potentially hashes of several values (tables of multiple columns and a single row).

### Vector commitments

A vector commitment is simply a Merkle tree.

TODO: diagram.

![vector commit](/img/starknet/fri/vector_commit.png)

### Vector membership proofs

A vector decommitment/membership proof must provide a witness (the sibling nodes missing to compute the root of the Merkle tree) ordered in a specific way. The following algorithm dictates in which order the node hash values provided in the proof are consumed:

![vector decommit](/img/starknet/fri/vector_decommit.png)
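A minimal sketch of such a membership check follows, assuming (for illustration only) a binary Merkle tree and a toy 64-bit hash from the Rust standard library; the actual commitments hash field elements with a circuit-friendly function such as Poseidon.

```rust
// Sketch of a vector membership proof check. The tree shape, the index
// convention, and the toy hash are all assumptions for illustration.
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

fn hash_pair(left: u64, right: u64) -> u64 {
    let mut h = DefaultHasher::new();
    h.write_u64(left);
    h.write_u64(right);
    h.finish()
}

/// Recompute the root from a leaf, its index, and the ordered witness nodes.
/// The witness is consumed bottom-up: at each level the index's parity tells
/// us whether the missing sibling sits on the left or on the right.
fn verify_membership(root: u64, mut index: usize, leaf: u64, witness: &[u64]) -> bool {
    let mut acc = leaf;
    for &sibling in witness {
        acc = if index % 2 == 0 {
            hash_pair(acc, sibling)
        } else {
            hash_pair(sibling, acc)
        };
        index /= 2;
    }
    acc == root
}

fn main() {
    // Commit to 4 leaves.
    let leaves = [10u64, 20, 30, 40];
    let l01 = hash_pair(leaves[0], leaves[1]);
    let l23 = hash_pair(leaves[2], leaves[3]);
    let root = hash_pair(l01, l23);

    // Open leaf 2: the witness is [sibling leaf 3, sibling node l01].
    assert!(verify_membership(root, 2, leaves[2], &[leaves[3], l01]));
    assert!(!verify_membership(root, 2, 99, &[leaves[3], l01]));
}
```

Note how the ordering of the witness matters: the verifier consumes one sibling per level, so the prover must serialize them in exactly the order the algorithm expects.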

### Note on committing multiple evaluations under the same leaf

* the following array contains all the 16th roots of unity, handily ordered
* that is, the first value represents the subgroup of order 1, the first two values represent the subgroup of order 2, the first four values represent the subgroup of order 4, and so on
* furthermore, these values are chosen in relation to how evaluations are ordered in a leaf of a commitment
* each value tells you exactly what to multiply 1/(something * x) by in order to obtain 1/x
* TODO: but wait, how is inv_x obtained... that doesn't make sense no?
* it seems like the following values are used to "correct" the x value depending on where x pointed at

```
array![
0x1,
0x800000000000011000000000000000000000000000000000000000000000000,
0x625023929a2995b533120664329f8c7c5268e56ac8320da2a616626f41337e3,
0x1dafdc6d65d66b5accedf99bcd607383ad971a9537cdf25d59e99d90becc81e,
0x63365fe0de874d9c90adb1e2f9c676e98c62155e4412e873ada5e1dee6feebb,
0x1cc9a01f2178b3736f524e1d06398916739deaa1bbed178c525a1e211901146,
0x3b912c31d6a226e4a15988c6b7ec1915474043aac68553537192090b43635cd,
0x446ed3ce295dda2b5ea677394813e6eab8bfbc55397aacac8e6df6f4bc9ca34,
0x5ec467b88826aba4537602d514425f3b0bdf467bbf302458337c45f6021e539,
0x213b984777d9556bac89fd2aebbda0c4f420b98440cfdba7cc83ba09fde1ac8,
0x5ce3fa16c35cb4da537753675ca3276ead24059dddea2ca47c36587e5a538d1,
0x231c05e93ca34c35ac88ac98a35cd89152dbfa622215d35b83c9a781a5ac730,
0x00b54759e8c46e1258dc80f091e6f3be387888015452ce5f0ca09ce9e571f52,
0x7f4ab8a6173b92fda7237f0f6e190c41c78777feabad31a0f35f63161a8e0af,
0x23c12f3909539339b83645c1b8de3e14ebfee15c2e8b3ad2867e3a47eba558c,
0x5c3ed0c6f6ac6dd647c9ba3e4721c1eb14011ea3d174c52d7981c5b8145aa75,
]
```

* that is, if x pointed at the beginning of a coset, then we don't need to correct it (the first evaluation committed to contains x)
* but if x pointed at the first value, it actually points to an evaluation of -x, so we need to correct the -x we have by multiplying by -1 again so that we get x (or so that -1/x becomes 1/x, same thing)
* if x points to the 2 value, then
38 changes: 12 additions & 26 deletions source/starknet/stark.md
Expand Up @@ -4,6 +4,7 @@ abstract: "In this document we specify the STARK verifier used in Starknet."
sotd: "none"
shortName: "starknet-stark"
editor: "David Wong"
tags: ["starknet", "stark", "ethSTARK"]
---

## Overview
@@ -26,7 +27,7 @@ This protocol is instantiated in several places to our knowledge:

TKTK

### Interactive Arithemtization
### Interactive Arithmetization

TKTK

@@ -109,8 +110,6 @@ To validate:
}
```



## Main STARK functions / Building blocks

```rust
@@ -139,6 +138,8 @@ TODO: StarkDomainsImpl::new()

### STARK commit

The goal of the STARK commit is to process all of the commitments produced by the prover during the protocol (including the FRI commitments), as well as produce the verifier challenges:

1. Absorb the original table with the channel.
2. Sample the interaction challenges (e.g. z and alpha for the memory check argument; the latter is called memory_alpha to distinguish it from the alpha used to aggregate the different constraints into the composition polynomial).
3. Absorb the interaction table with the channel.
@@ -152,32 +153,18 @@ TODO: StarkDomainsImpl::new()

### STARK verify (TODO: consolidate with above)

in `src/stark/stark_verify.cairo`:

stark_verify takes these inputs:
The goal of STARK verify is to verify evaluation queries (by checking that evaluations exist in the committed polynomials) and the FRI queries (by running the FRI verification).

* queries (array of FE)
* commitment
* witness
* stark_domains
To do this, we call the `fri_verify_initial` function defined in the FRI specification, passing it the following oracle:

algorithm:
The oracle should provide the evaluations, under the same set of FRI queries (and specifically the point they are requesting the evaluations at) of the following polynomials:

1. traces_decommit()
2. table_decommit() (different depending on layout)
3. points = queries_to_points(queries, stark_domains)
4. eval_oods_boundary_poly_at_points()
5. fri_verify()
* the trace polynomials, which include both the original trace polynomial and the interaction trace polynomial
* the composition column polynomials

actually, this is wrapped into StarKProofImpl::verify:
In addition, the oracle should verify decommitment proofs (Merkle membership proofs) for each of these evaluations. We refer to the [Merkle Tree Polynomial Commitments specification](polynomial_commitment.html) for how to verify evaluation proofs.

1. cfg.validate(security_bits)
2. cfg.public_input.validate(stark_domains)
3. digest = get_public_input_hash(public_input) <-- what is the public input exactly? (should be program + inputs (+outputs?))
4. channel = ChannelImpl::new(digest) <-- statement is a digest of the public_input
5. stark_commitment = stark_commit()
6. queries = generate_queries()
7. stark_verify()
<aside class="warning">The logic of the oracle must be implemented as part of the verification. The term "oracle" simply refers to an opaque callback function from the FRI protocol's perspective.</aside>

## Full Protocol

@@ -206,6 +193,5 @@ The verify initial function is defined as:
1. Validate the public input (TODO: specify an external function for that?).
1. Compute the initial digest as `get_public_input_hash(public_input, cfg.n_verifier_friendly_commitment_layers, settings)` (TODO: define external function for that).
1. Initialize the channel using the digest as defined in the [Channel](#channel) section.
1. Call stark commit as defined in the [STARK commit](#stark-commit) section.
1. Call fri_commit as defined in the [FRI](#fri) section.
1. Call STARK commit as defined in the [STARK commit](#stark-commit) section.
1. Call STARK verify as defined in the [STARK verify](#stark-verify) section.
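The steps above can be outlined as follows; every function in this sketch is a stub standing in for a building block defined elsewhere in this specification, and all names and types are illustrative assumptions.

```rust
// Hypothetical outline of the verify-initial flow; nothing here is the real
// implementation, only the call order matters.
struct Channel {
    digest: u64,
}

fn validate_public_input(_public_input: &[u64]) -> bool {
    true // stub: real validation is an external dependency
}

fn get_public_input_hash(public_input: &[u64]) -> u64 {
    // stub: the real hash is an external dependency of the specification
    public_input
        .iter()
        .fold(0u64, |acc, x| acc.wrapping_mul(31).wrapping_add(*x))
}

fn stark_commit(_channel: &mut Channel) { /* see the STARK commit section */ }

fn stark_verify(_channel: &Channel) -> bool {
    true // see the STARK verify section
}

fn verify_initial(public_input: &[u64]) -> bool {
    // 1. validate the public input
    if !validate_public_input(public_input) {
        return false;
    }
    // 2. compute the initial digest
    let digest = get_public_input_hash(public_input);
    // 3. initialize the channel with the digest
    let mut channel = Channel { digest };
    // 4. run STARK commit, then STARK verify
    stark_commit(&mut channel);
    stark_verify(&channel)
}

fn main() {
    assert!(verify_initial(&[1, 2, 3]));
}
```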
