Describe the bug
Reth panics while syncing a full node from scratch. I've tried to re-sync from scratch twice, wiping the data between attempts, and both times it panics with the same "Data corruption detected" error, which leads me to believe it's a software issue rather than a hardware one.
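For context on the panic itself, here is a minimal, self-contained sketch of the failure mode. It uses the `zstd` crate directly rather than reth's actual decompressor in crates/primitives/src/compression/mod.rs, and the payload, compression level, and corruption pattern are illustrative assumptions; the point is that a corrupt compressed table value makes zstd return an error, and an unwrap/expect on that result produces an abort like the one in the log below.

```rust
// Minimal sketch (not reth's code): reth stores zstd-compressed values in some
// tables and panics if a value fails to decompress. Corrupting a zstd frame and
// decompressing it through the `zstd` crate reproduces the same class of error;
// the exact error string depends on where the corruption lands, but a garbled
// compressed block commonly maps to zstd's ZSTD_error_corruption_detected,
// whose error name is "Data corruption detected".
use zstd::bulk::{Compressor, Decompressor};

fn main() -> std::io::Result<()> {
    // A compressible payload so zstd emits a real compressed block.
    let payload = "the quick brown fox jumps over the lazy dog. ".repeat(200);
    let mut frame = Compressor::new(3)?.compress(payload.as_bytes())?;

    // Garble the tail of the frame, leaving the frame header intact.
    let tail = frame.len().saturating_sub(16);
    for byte in &mut frame[tail..] {
        *byte ^= 0xFF;
    }

    // Unwrapping the decompression result turns a corrupt value into an abort
    // with a message like the one in the node logs.
    match Decompressor::new()?.decompress(&frame, payload.len()) {
        Ok(bytes) => println!("unexpectedly decompressed {} bytes", bytes.len()),
        Err(err) => panic!("Failed to decompress {} bytes: {}", frame.len(), err),
    }
    Ok(())
}
```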
Steps to reproduce
Run the attached docker-compose file (docker-compose.txt). On my machine it takes about 8 hours to reach the panic.
Node logs
2024-12-13T16:54:27.391513Z INFO Initialized tracing, debug log directory: /data/logs/bsc
2024-12-13T16:54:27.393250Z INFO Starting reth version="1.1.0 (14f63081)"
2024-12-13T16:54:27.393263Z INFO Opening database path="/data/db"
2024-12-13T16:54:27.412179Z INFO Configuration loaded path="/data/reth.toml"
2024-12-13T16:54:27.429347Z INFO Verifying storage consistency.
2024-12-13T16:54:27.473271Z INFO Database opened
2024-12-13T16:54:27.473280Z INFO Starting metrics endpoint at 0.0.0.0:9001
2024-12-13T16:54:27.473455Z INFO
Pre-merge hard forks (block based):
- Frontier @0
- Homestead @0
- Tangerine @0
- SpuriousDragon @0
- Byzantium @0
- Constantinople @0
- Petersburg @0
- Istanbul @0
- MuirGlacier @0
- Ramanujan @0
- Niels @0
- MirrorSync @5184000
- Bruno @13082000
- Euler @18907621
- Nano @21962149
- Moran @22107423
- Gibbs @23846001
- Planck @27281024
- Luban @29020050
- Plato @30720096
- Berlin @31302048
- London @31302048
- Hertz @31302048
- HertzFix @34140700
Post-merge hard forks (timestamp based):
- Shanghai @1705996800
- Kepler @1705996800
- Feynman @1713419340
- FeynmanFix @1713419340
- Cancun @1718863500
- Haber @1718863500
- HaberFix @1727316120
- Bohr @1727317200
2024-12-13T16:54:27.473768Z INFO Transaction pool initialized
2024-12-13T16:54:27.473935Z INFO Loading saved peers file=/data/known-peers.json
2024-12-13T16:54:27.476885Z INFO StaticFileProducer initialized
2024-12-13T16:54:27.477187Z INFO Pruner initialized prune_config=PruneConfig { block_interval: 5, recent_sidecars_kept_blocks: 0, segments: PruneModes { sender_recovery: Some(Full), transaction_lookup: None, receipts: Some(Distance(10064)), account_history: Some(Distance(10064)), storage_history: Some(Distance(10064)), receipts_log_filter: ReceiptsLogPruneConfig({}) } }
2024-12-13T16:54:27.477484Z INFO started listening to network block event
2024-12-13T16:54:27.477490Z INFO started fork choice notifier
2024-12-13T16:54:27.477497Z INFO started chain tracker notifier
2024-12-13T16:54:27.477841Z INFO Consensus engine initialized
2024-12-13T16:54:27.478194Z INFO Engine API handler initialized
2024-12-13T16:54:27.481009Z INFO RPC auth server started url=127.0.0.1:8551
2024-12-13T16:54:27.481322Z INFO RPC IPC server started path=/tmp/reth.ipc
2024-12-13T16:54:27.481330Z INFO RPC HTTP server started url=0.0.0.0:8545
2024-12-13T16:54:27.481761Z INFO Starting consensus engine
2024-12-13T16:54:27.481977Z INFO Preparing stage pipeline_stages=1/14 stage=Headers checkpoint=44817679 target=None
2024-12-13T16:54:27.481985Z INFO Target block already reached checkpoint=44817679 target=Hash(0x329ee78ce27bb5360b3185e86835d9beb73dc56a39469bca040158d96cd57c06)
2024-12-13T16:54:27.482020Z INFO Executing stage pipeline_stages=1/14 stage=Headers checkpoint=44817679 target=None
2024-12-13T16:54:27.482029Z INFO Finished stage pipeline_stages=1/14 stage=Headers checkpoint=44817679 target=None stage_progress=100.00%
2024-12-13T16:54:27.482052Z INFO Preparing stage pipeline_stages=2/14 stage=Bodies checkpoint=44817679 target=44817679
2024-12-13T16:54:27.482059Z INFO Executing stage pipeline_stages=2/14 stage=Bodies checkpoint=44817679 target=44817679
2024-12-13T16:54:27.482065Z INFO Finished stage pipeline_stages=2/14 stage=Bodies checkpoint=44817679 target=44817679 stage_progress=100.00%
2024-12-13T16:54:27.482089Z INFO Preparing stage pipeline_stages=3/14 stage=SenderRecovery checkpoint=29101898 target=44817679
2024-12-13T16:54:27.482095Z INFO Executing stage pipeline_stages=3/14 stage=SenderRecovery checkpoint=29101898 target=44817679
2024-12-13T16:54:27.482128Z INFO Recovering senders tx_range=4375123909..4380124073
thread 'reth-rayon-2' panicked at crates/primitives/src/compression/mod.rs:89:13:
Failed to decompress 131 bytes: Data corruption detected
stack backtrace:
0: 0x56140c36bb0b - <unknown>
1: 0x56140b85c60b - <unknown>
2: 0x56140c32ebc2 - <unknown>
3: 0x56140c36ce20 - <unknown>
4: 0x56140c36dd4f - <unknown>
5: 0x56140c36d815 - <unknown>
6: 0x56140c36d779 - <unknown>
7: 0x56140c36d764 - <unknown>
8: 0x561409cc3f42 - <unknown>
9: 0x56140c043e7c - <unknown>
10: 0x56140bcf4f44 - <unknown>
11: 0x56140afe7a27 - <unknown>
12: 0x561409ce3b05 - <unknown>
13: 0x56140ba35efc - <unknown>
14: 0x56140ba35cc8 - <unknown>
15: 0x56140c37006b - <unknown>
16: 0x7f82dbe81ac3 - <unknown>
17: 0x7f82dbf12a04 - clone
18: 0x0 - <unknown>
Rayon: detected unexpected panic; aborting
error: reth interrupted by SIGSEGV, printing backtrace
/usr/local/bin/bsc-reth(+0x2496677)[0x56140bc96677]
/lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7f82dbe2f520]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x178)[0x7f82dbe15898]
/usr/local/bin/bsc-reth(+0x2b2ed0a)[0x56140c32ed0a]
/usr/local/bin/bsc-reth(+0x54cd49)[0x561409d4cd49]
/usr/local/bin/bsc-reth(+0x2233fd9)[0x56140ba33fd9]
/usr/local/bin/bsc-reth(+0x17e8a61)[0x56140afe8a61]
/usr/local/bin/bsc-reth(+0x4e3b05)[0x561409ce3b05]
/usr/local/bin/bsc-reth(+0x2235efc)[0x56140ba35efc]
/usr/local/bin/bsc-reth(+0x2235cc8)[0x56140ba35cc8]
/usr/local/bin/bsc-reth(+0x2b7006b)[0x56140c37006b]
/lib/x86_64-linux-gnu/libc.so.6(+0x94ac3)[0x7f82dbe81ac3]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x44)[0x7f82dbf12a04]
note: we would appreciate a report at https://github.com/paradigmxyz/reth
Platform(s)
Linux (x86)
What version/commit are you on?
ghcr.io/bnb-chain/bsc-reth:v1.1.0
What database version are you on?
Docker container version
Which chain / network are you on?
bsc
What type of node are you running?
Full via --full flag
What prune config do you use, if any?
No response
If you've built Reth from source, provide the full command you used
No response
Code of Conduct
I agree to follow the Code of Conduct
Hi @Robert-MacWha Reach out to the official Support Portal to report your request and get more details by initiating a live chat with an agent through the chat button on the platform. Access the portal here: BNB Support Page. Use this Ticket ID B7124x9 to submit your request.
Note: Click the chat button to start a conversation with an agent for assistance.
This is a malicious comment: that is not a real support page, and it is not a real support account.
I've reported the monitor-hub user. I'm not sure who's in charge of this repo (@HSG88?), but you should probably either maintain it or stop linking to it from BSC's official docs. This repo is currently one of the recommended ways of running a BSC full node, and since both issues filed here so far have attracted scam comments, vulnerable users might end up falling for one.