overlay transition (#244)
* overlay transition

Fix some bugs identified in the code review

Co-authored-by: Ignacio Hagopian <[email protected]>

Include base -> overlay key-values migration logic (#199)

* mod: add go-verkle version with key-value migration new apis

Signed-off-by: Ignacio Hagopian <[email protected]>

* core/stateprocessor: use constant for max number of migrated key-values

Signed-off-by: Ignacio Hagopian <[email protected]>

* core: add base->overlay key-values migration logic

Signed-off-by: Ignacio Hagopian <[email protected]>

* core: fix some compiler errors

Signed-off-by: Ignacio Hagopian <[email protected]>

* trie: consider removing transition trie api in the future

Signed-off-by: Ignacio Hagopian <[email protected]>

* mod: use latest go-verkle

Signed-off-by: Ignacio Hagopian <[email protected]>

---------

Signed-off-by: Ignacio Hagopian <[email protected]>

fix some unit test errors

get conversion block from file

fix compilation issues

fix initialization issue in migrator

fix: changes needed to run the first 28 blocks

important stuff: fix the banner

fix: use nonce instead of balance in nonce leaf (#202)

fixes for performing the overlay transition (#203)

* fixes for performing the overlay transition

* fixes for the full replay

* fix: deletion-and-recreation of EoA

* fixes to replay 2M+ blocks

* upgrade to go-verkle@master

* fix: proper number of chunk evals

* rewrite conversion loop to fix known issues

changes to make replay work with the overlay method (#216)

* fixes for performing the overlay transition

fixes for the full replay

fix: deletion-and-recreation of EoA

fixes to replay 2M+ blocks

upgrade to go-verkle@master

fix: proper number of chunk evals

rewrite conversion loop to fix known issues

changes to make replay work with the overlay method

fixes to replay 2M+ blocks

update to latest go-verkle@master

* use a PBSS-like scheme for internal nodes (#221)

* use a PBSS-like scheme for internal nodes

* a couple of fixes coming from debugging replay

* fix: use an error to notify the transition tree that a deleted account was found in the overlay tree (#222)

* fixes for pbss replay (#227)

* fixes for pbss replay

* trie/verkle: use capped batch size (#229)

* trie/verkle: use capped batch size

Signed-off-by: Ignacio Hagopian <[email protected]>

* trie/verkle: avoid path variable allocation per db.Put

Signed-off-by: Ignacio Hagopian <[email protected]>

* don't keep more than 32 state root conversions in RAM (#230)

---------

Signed-off-by: Ignacio Hagopian <[email protected]>
Co-authored-by: Guillaume Ballet <[email protected]>

* cleanup some code

* mod: update go-verkle

Signed-off-by: Ignacio Hagopian <[email protected]>

* re-enable snapshot (#231)

* re-enable cancun block / snapshot (#226)

* clear storage conversion key upon translating account (#234)

* clear storage conversion key upon translating account

* mod: use latest go-verkle

Signed-off-by: Ignacio Hagopian <[email protected]>

---------

Signed-off-by: Ignacio Hagopian <[email protected]>
Co-authored-by: Ignacio Hagopian <[email protected]>

* fix: self-deadlock with translated root map mutex (#236)

* return compressed commitment as root commitment (#237)

---------

Signed-off-by: Ignacio Hagopian <[email protected]>
Co-authored-by: Ignacio Hagopian <[email protected]>

---------

Signed-off-by: Ignacio Hagopian <[email protected]>
Co-authored-by: Ignacio Hagopian <[email protected]>

---------

Signed-off-by: Ignacio Hagopian <[email protected]>
Co-authored-by: Ignacio Hagopian <[email protected]>

fix first panic in *TransitionTrie.Copy()

upgrade go-verkle to latest master

mod: update go-verkle (#239)

Signed-off-by: Ignacio Hagopian <[email protected]>

core: print state root every 100 blocks (#240)

Signed-off-by: Ignacio Hagopian <[email protected]>

fix: only Commit the account trie (#242)

fixes to get TestProcessVerkle to work with the overlay branch (#238)

* fixes to get TestProcessVerkle to work with the overlay branch

* fix all panics in verkle state processor test

* fix proof verification

move transition management to cachingDB

* fix: mark the verkle transition as started if it's ended without being started

* fix the verkle state processing test

* fix linter errors

* Add a function to clear verkle params for replay

* fix: handle TransitionTrie in OpenStorageTrie

* fix linter issue

* fix the deleted account error (#247)

* code cleanup (#248)

* fix: don't error on a missing conversion.txt (#249)

* Overlay Tree preimages exporting and usage (#246)

* export overlay preimages tool

Signed-off-by: Ignacio Hagopian <[email protected]>

* use preimages flat file in overlay tree migration logic

Signed-off-by: Ignacio Hagopian <[email protected]>

* cmd/geth: add --roothash to overlay tree preimage exporting command

Signed-off-by: Ignacio Hagopian <[email protected]>

* cleanup

Signed-off-by: Ignacio Hagopian <[email protected]>

* review feedback

Signed-off-by: Ignacio Hagopian <[email protected]>

---------

Signed-off-by: Ignacio Hagopian <[email protected]>

* fix: reduce the PR footprint (#250)

* fix: don't fail when preimages.bin is missing (#251)

* fix: don't fail when preimages.bin is missing

* fix: don't open the preimages file when outside of transition

---------

Signed-off-by: Ignacio Hagopian <[email protected]>
Co-authored-by: Ignacio Hagopian <[email protected]>
gballet and jsign authored Aug 2, 2023
1 parent 0a7efe1 commit 09494c6
Showing 23 changed files with 1,210 additions and 115 deletions.
38 changes: 38 additions & 0 deletions cmd/geth/chaincmd.go
@@ -143,6 +143,17 @@ It's deprecated, please use "geth db import" instead.
Description: `
The export-preimages command exports hash preimages to an RLP encoded stream.
It's deprecated, please use "geth db export" instead.
`,
}
exportOverlayPreimagesCommand = &cli.Command{
Action: exportOverlayPreimages,
Name: "export-overlay-preimages",
Usage: "Export the preimages in overlay tree migration order",
ArgsUsage: "<dumpfile>",
Flags: flags.Merge([]cli.Flag{utils.TreeRootFlag}, utils.DatabasePathFlags),
Description: `
The export-overlay-preimages command exports hash preimages to a flat file, in exactly
the expected order for the overlay tree migration.
`,
}
dumpCommand = &cli.Command{
@@ -394,6 +405,33 @@ func exportPreimages(ctx *cli.Context) error {
return nil
}

// exportOverlayPreimages dumps the preimage data to a flat file.
func exportOverlayPreimages(ctx *cli.Context) error {
if ctx.Args().Len() < 1 {
utils.Fatalf("This command requires an argument.")
}
stack, _ := makeConfigNode(ctx)
defer stack.Close()

chain, _ := utils.MakeChain(ctx, stack)

var root common.Hash
if ctx.String(utils.TreeRootFlag.Name) != "" {
rootBytes := common.FromHex(ctx.String(utils.TreeRootFlag.Name))
if len(rootBytes) != common.HashLength {
return fmt.Errorf("invalid root hash length")
}
root = common.BytesToHash(rootBytes)
}

start := time.Now()
if err := utils.ExportOverlayPreimages(chain, ctx.Args().First(), root); err != nil {
utils.Fatalf("Export error: %v\n", err)
}
fmt.Printf("Export done in %v\n", time.Since(start))
return nil
}

func parseDumpConfig(ctx *cli.Context, stack *node.Node) (*state.DumpConfig, ethdb.Database, common.Hash, error) {
db := utils.MakeChainDatabase(ctx, stack, true)
var header *types.Header
1 change: 1 addition & 0 deletions cmd/geth/main.go
@@ -213,6 +213,7 @@ func init() {
exportCommand,
importPreimagesCommand,
exportPreimagesCommand,
exportOverlayPreimagesCommand,
removedbCommand,
dumpCommand,
dumpGenesisCommand,
13 changes: 6 additions & 7 deletions cmd/geth/verkle.go
@@ -130,14 +130,13 @@ func convertToVerkle(ctx *cli.Context) error {
vRoot = verkle.New().(*verkle.InternalNode)
)

saveverkle := func(node verkle.VerkleNode) {
comm := node.Commit()
saveverkle := func(path []byte, node verkle.VerkleNode) {
node.Commit()
s, err := node.Serialize()
if err != nil {
panic(err)
}
commB := comm.Bytes()
if err := chaindb.Put(commB[:], s); err != nil {
if err := chaindb.Put(path, s); err != nil {
panic(err)
}
}
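
Note: the new saveverkle signature matches go-verkle's updated flush callback, which now hands the flusher each node's tree path so the path can be used directly as the database key — the PBSS-like layout referenced in #221 in the commit message above. A minimal usage sketch, assuming go-verkle's NodeFlushFn is func(path []byte, node VerkleNode):

// Flush the converted tree, persisting every node under its path rather than
// under its commitment.
vRoot.Flush(saveverkle)
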
@@ -330,7 +329,7 @@ func checkChildren(root verkle.VerkleNode, resolver verkle.NodeResolverFn) error
return fmt.Errorf("could not find child %x in db: %w", childC, err)
}
// depth is set to 0, the tree isn't rebuilt so it's not a problem
childN, err := verkle.ParseNode(childS, 0, childC[:])
childN, err := verkle.ParseNode(childS, 0)
if err != nil {
return fmt.Errorf("decode error child %x in db: %w", child.Commitment().Bytes(), err)
}
@@ -390,7 +389,7 @@ func verifyVerkle(ctx *cli.Context) error {
if err != nil {
return err
}
root, err := verkle.ParseNode(serializedRoot, 0, rootC[:])
root, err := verkle.ParseNode(serializedRoot, 0)
if err != nil {
return err
}
@@ -439,7 +438,7 @@ func expandVerkle(ctx *cli.Context) error {
if err != nil {
return err
}
root, err := verkle.ParseNode(serializedRoot, 0, rootC[:])
root, err := verkle.ParseNode(serializedRoot, 0)
if err != nil {
return err
}
84 changes: 84 additions & 0 deletions cmd/utils/cmd.go
@@ -26,20 +26,23 @@ import (
"os"
"os/signal"
"runtime"
"runtime/pprof"
"strings"
"syscall"
"time"

"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/state/snapshot"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/eth/ethconfig"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/internal/debug"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rlp"
"github.com/urfave/cli/v2"
)
@@ -173,6 +176,18 @@ func ImportChain(chain *core.BlockChain, fn string) error {
return err
}
}
cpuProfile, err := os.Create("cpu.out")
if err != nil {
return fmt.Errorf("Error creating CPU profile: %v", err)
}
defer cpuProfile.Close()
err = pprof.StartCPUProfile(cpuProfile)
if err != nil {
return fmt.Errorf("Error starting CPU profile: %v", err)
}
defer pprof.StopCPUProfile()
params.ClearVerkleWitnessCosts()

stream := rlp.NewStream(reader, 0)

// Run the actual import.
@@ -365,6 +380,75 @@ func ExportPreimages(db ethdb.Database, fn string) error {
return nil
}

// ExportOverlayPreimages exports all known hash preimages into the specified file,
// in the same order as expected by the overlay tree migration.
func ExportOverlayPreimages(chain *core.BlockChain, fn string, root common.Hash) error {
log.Info("Exporting preimages", "file", fn)

fh, err := os.OpenFile(fn, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.ModePerm)
if err != nil {
return err
}
defer fh.Close()

writer := bufio.NewWriter(fh)
defer writer.Flush()

statedb, err := chain.State()
if err != nil {
return fmt.Errorf("failed to open statedb: %w", err)
}

if root == (common.Hash{}) {
root = chain.CurrentBlock().Root()
}

accIt, err := statedb.Snaps().AccountIterator(root, common.Hash{})
if err != nil {
return err
}
defer accIt.Release()

count := 0
for accIt.Next() {
acc, err := snapshot.FullAccount(accIt.Account())
if err != nil {
return fmt.Errorf("invalid account encountered during traversal: %s", err)
}
addr := rawdb.ReadPreimage(statedb.Database().DiskDB(), accIt.Hash())
if len(addr) != 20 {
return fmt.Errorf("invalid address preimage length: got %d, want 20", len(addr))
}
if _, err := writer.Write(addr); err != nil {
return fmt.Errorf("failed to write addr preimage: %w", err)
}

if acc.HasStorage() {
stIt, err := statedb.Snaps().StorageIterator(root, accIt.Hash(), common.Hash{})
if err != nil {
return fmt.Errorf("failed to create storage iterator: %w", err)
}
for stIt.Next() {
slotnr := rawdb.ReadPreimage(statedb.Database().DiskDB(), stIt.Hash())
if len(slotnr) != 32 {
return fmt.Errorf("invalid slot preimage length: got %d, want 32", len(slotnr))
}
if _, err := writer.Write(slotnr); err != nil {
return fmt.Errorf("failed to write slotnr preimage: %w", err)
}
}
stIt.Release()
}
count++
if count%100000 == 0 {
log.Info("Last exported account", "account", accIt.Hash())
}
}

log.Info("Exported preimages", "file", fn)
return nil
}

// exportHeader is used in the export/import flow. When we do an export,
// the first element we output is the exportHeader.
// Whenever a backwards-incompatible change is made, the Version header
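
Note: the flat file written by ExportOverlayPreimages carries no framing — each account contributes its 20-byte address followed by one 32-byte preimage per storage slot — so a consumer has to replay the same snapshot iteration to know how many slot records follow each address. A hedged reading sketch under that assumption; readOverlayPreimages is illustrative and not part of this commit:

package migration // illustrative placement, not part of this commit

import (
	"fmt"
	"io"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/state/snapshot"
)

// readOverlayPreimages mirrors ExportOverlayPreimages: for every account in
// snapshot order it reads a 20-byte address, then one 32-byte slot key per
// storage slot reported by the storage iterator.
func readOverlayPreimages(snaps *snapshot.Tree, root common.Hash, r io.Reader) error {
	accIt, err := snaps.AccountIterator(root, common.Hash{})
	if err != nil {
		return err
	}
	defer accIt.Release()

	addr := make([]byte, 20)
	slot := make([]byte, 32)
	for accIt.Next() {
		if _, err := io.ReadFull(r, addr); err != nil {
			return fmt.Errorf("reading address preimage: %w", err)
		}
		acc, err := snapshot.FullAccount(accIt.Account())
		if err != nil {
			return err
		}
		if !acc.HasStorage() {
			continue
		}
		stIt, err := snaps.StorageIterator(root, accIt.Hash(), common.Hash{})
		if err != nil {
			return err
		}
		for stIt.Next() {
			if _, err := io.ReadFull(r, slot); err != nil {
				stIt.Release()
				return fmt.Errorf("reading slot preimage: %w", err)
			}
			// addr and slot now identify the next (account, slot) pair in migration order.
		}
		stIt.Release()
	}
	return nil
}
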
5 changes: 5 additions & 0 deletions cmd/utils/flags.go
@@ -219,6 +219,11 @@ var (
Usage: "Max number of elements (0 = no limit)",
Value: 0,
}
TreeRootFlag = &cli.StringFlag{
Name: "roothash",
Usage: "Root hash of the tree (if empty, use the latest)",
Value: "",
}

defaultSyncMode = ethconfig.Defaults.SyncMode
SyncModeFlag = &flags.TextMarshalerFlag{
31 changes: 18 additions & 13 deletions core/block_validator.go
@@ -65,12 +65,17 @@ func (v *BlockValidator) ValidateBody(block *types.Block) error {
if hash := types.DeriveSha(block.Transactions(), trie.NewStackTrie(nil)); hash != header.TxHash {
return fmt.Errorf("transaction root hash mismatch: have %x, want %x", hash, header.TxHash)
}
if !v.bc.HasBlockAndState(block.ParentHash(), block.NumberU64()-1) {
if !v.bc.HasBlock(block.ParentHash(), block.NumberU64()-1) {
return consensus.ErrUnknownAncestor
}
return consensus.ErrPrunedAncestor
}
// XXX This check had to be deactivated for replay to work: the block state
// root is that of the overlay tree, but in replay mode the base tree's hash
// takes precedence, since the chain would otherwise not be recognized.
// if !v.bc.HasBlockAndState(block.ParentHash(), block.NumberU64()-1) {
// if !v.bc.HasBlock(block.ParentHash(), block.NumberU64()-1) {
// return consensus.ErrUnknownAncestor
// }
// fmt.Println("failure here")
// return consensus.ErrPrunedAncestor
// }
return nil
}

@@ -90,15 +95,15 @@ func (v *BlockValidator) ValidateState(block *types.Block, statedb *state.StateD
return fmt.Errorf("invalid bloom (remote: %x local: %x)", header.Bloom, rbloom)
}
// The receipt Trie's root (R = (Tr [[H1, R1], ... [Hn, Rn]]))
receiptSha := types.DeriveSha(receipts, trie.NewStackTrie(nil))
if receiptSha != header.ReceiptHash {
return fmt.Errorf("invalid receipt root hash (remote: %x local: %x)", header.ReceiptHash, receiptSha)
}
// receiptSha := types.DeriveSha(receipts, trie.NewStackTrie(nil))
// if receiptSha != header.ReceiptHash {
// return fmt.Errorf("invalid receipt root hash (remote: %x local: %x)", header.ReceiptHash, receiptSha)
// }
// Validate the state root against the received state root and throw
// an error if they don't match.
if root := statedb.IntermediateRoot(v.config.IsEIP158(header.Number)); header.Root != root {
return fmt.Errorf("invalid merkle root (remote: %x local: %x)", header.Root, root)
}
// if root := statedb.IntermediateRoot(v.config.IsEIP158(header.Number)); header.Root != root {
// return fmt.Errorf("invalid merkle root (remote: %x local: %x)", header.Root, root)
// }
return nil
}

55 changes: 55 additions & 0 deletions core/blockchain.go
@@ -18,12 +18,16 @@
package core

import (
"bufio"
"errors"
"fmt"
"io"
"math"
"math/big"
"os"
"runtime"
"sort"
"strconv"
"strings"
"sync"
"sync/atomic"
@@ -1472,6 +1476,30 @@ func (bc *BlockChain) InsertChain(chain types.Blocks) (int, error) {
return bc.insertChain(chain, true, true)
}

func findVerkleConversionBlock() (uint64, error) {
if _, err := os.Stat("conversion.txt"); os.IsNotExist(err) {
return math.MaxUint64, nil
}

f, err := os.Open("conversion.txt")
if err != nil {
log.Error("Failed to open conversion.txt", "err", err)
return 0, err
}
defer f.Close()

scanner := bufio.NewScanner(f)
scanner.Scan()
conversionBlock, err := strconv.ParseUint(scanner.Text(), 10, 64)
if err != nil {
log.Error("Failed to parse conversionBlock", "err", err)
return 0, err
}
log.Info("Found conversion block info", "conversionBlock", conversionBlock)

return conversionBlock, nil
}

// insertChain is the internal implementation of InsertChain, which assumes that
// 1) chains are contiguous, and 2) The chain mutex is held.
//
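
Note: findVerkleConversionBlock expects conversion.txt in the node's working directory with a single base-10 block number on its first line; if the file is missing, the conversion never triggers (math.MaxUint64). A hedged companion helper for producing that file — writeConversionBlockFile is illustrative and relies on the os and strconv imports this file already adds:

// writeConversionBlockFile emits the format findVerkleConversionBlock parses:
// one decimal block number on the first line of conversion.txt.
func writeConversionBlockFile(block uint64) error {
	return os.WriteFile("conversion.txt", []byte(strconv.FormatUint(block, 10)+"\n"), 0o644)
}
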
@@ -1486,6 +1514,11 @@ func (bc *BlockChain) insertChain(chain types.Blocks, verifySeals, setHead bool)
return 0, nil
}

conversionBlock, err := findVerkleConversionBlock()
if err != nil {
return 0, err
}

// Start a parallel signature recovery (signer will fluke on fork transition, minimal perf loss)
senderCacher.recoverFromBlocks(types.MakeSigner(bc.chainConfig, chain[0].Number()), chain)

@@ -1670,6 +1703,10 @@ func (bc *BlockChain) insertChain(chain types.Blocks, verifySeals, setHead bool)
if parent == nil {
parent = bc.GetHeader(block.ParentHash(), block.NumberU64()-1)
}

if parent.Number.Uint64() == conversionBlock {
bc.StartVerkleTransition(parent.Root, emptyVerkleRoot, bc.Config(), parent.Number)
}
statedb, err := state.New(parent.Root, bc.stateCache, bc.snaps)
if err != nil {
return it.index, err
@@ -1706,6 +1743,10 @@ func (bc *BlockChain) insertChain(chain types.Blocks, verifySeals, setHead bool)
return it.index, err
}

if statedb.Database().InTransition() || statedb.Database().Transitioned() {
bc.AddRootTranslation(block.Root(), statedb.IntermediateRoot(false))
}

// Update the metrics touched during block processing
accountReadTimer.Update(statedb.AccountReads) // Account reads are complete, we can mark them
storageReadTimer.Update(statedb.StorageReads) // Storage reads are complete, we can mark them
@@ -2287,6 +2328,8 @@ func (bc *BlockChain) skipBlock(err error, it *insertIterator) bool {
return false
}

var emptyVerkleRoot common.Hash

// indexBlocks reindexes or unindexes transactions depending on user configuration
func (bc *BlockChain) indexBlocks(tail *uint64, head uint64, done chan struct{}) {
defer func() { close(done) }()
@@ -2431,3 +2474,15 @@ func (bc *BlockChain) SetBlockValidatorAndProcessorForTesting(v Validator, p Pro
bc.validator = v
bc.processor = p
}

func (bc *BlockChain) StartVerkleTransition(originalRoot, translatedRoot common.Hash, chainConfig *params.ChainConfig, cancunBlock *big.Int) {
bc.stateCache.StartVerkleTransition(originalRoot, translatedRoot, chainConfig, cancunBlock)
}

func (bc *BlockChain) EndVerkleTransition() {
bc.stateCache.EndVerkleTransition()
}

func (bc *BlockChain) AddRootTranslation(originalRoot, translatedRoot common.Hash) {
bc.stateCache.AddRootTranslation(originalRoot, translatedRoot)
}
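
Note: these wrappers delegate to bc.stateCache, and insertChain additionally calls statedb.Database().InTransition() and Transitioned(); taken together they outline the transition-management surface moved onto the caching state database ("move transition management to cachingDB" in the commit message above). A hedged sketch of the assumed method set — the interface name is illustrative and the concrete implementation in core/state is not part of this excerpt:

// Transition-related methods that the blockchain changes in this commit call
// on the state database.
type verkleTransitionDatabase interface {
	StartVerkleTransition(originalRoot, translatedRoot common.Hash, chainConfig *params.ChainConfig, cancunBlock *big.Int)
	EndVerkleTransition()
	AddRootTranslation(originalRoot, translatedRoot common.Hash)
	InTransition() bool
	Transitioned() bool
}
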
(Diffs for the remaining 16 changed files are not shown.)