Release v1.0.8
Risk Events
A critical bug introduced in v1.0.0 has been fixed in v1.0.8.
If the user scales in TiKV nodes with the command `tiup cluster scale-in`
from tiup-cluster, TiUP may delete TiKV nodes by mistake, causing data loss in the TiDB cluster.
The root cause:
- TiUP mistakenly treats these TiKV nodes' state as `tombstone` and reports an error that confuses the user.
- The user then executes `tiup cluster display` to confirm the real state of the cluster, but the `display` command also shows these TiKV nodes in the `tombstone` state.
- What's worse, the `display` command destroys tombstone nodes automatically, with no user confirmation required. So these TiKV nodes were destroyed by mistake.
To prevent this, this release introduces a safer, manual way to clean up tombstone nodes.
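As a rough sketch of the safer workflow, the cleanup of tombstone nodes now happens as an explicit, separate step rather than as a side effect of `display`. The exact cleanup subcommand name below (`tiup cluster prune`) is an assumption; consult the TiUP documentation for your version.

```shell
# Scale in a TiKV node; the node goes through the normal offline
# process before reaching the tombstone state.
tiup cluster scale-in <cluster-name> --node <tikv-host>:20160

# Inspect the cluster state; tombstone nodes are now only reported,
# not destroyed automatically.
tiup cluster display <cluster-name>

# Explicitly clean up nodes that have genuinely reached the tombstone
# state, as a deliberate user action instead of an automatic one.
tiup cluster prune <cluster-name>
```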
- Fix the bug that the `ctl` working directory differs from TiUP's (#589)
- Introduce a more general way to configure the profile (#578)
- cluster: properly pass `--wait-timeout` to systemd operations (#585)
- Always match the newest store when matching by address (#579)
- Fix init config with check config (#583)
- Fix the bug that `patch` cannot overwrite twice (#558)
- Request the remote manifest when the local manifest has expired (#560)
- Encapsulate operations on the meta file (#567)
- Playground: fix panic when TiFlash fails to start (#543)
- Cluster: show a message when a fix is impossible (#550)
- Fix scale-in of TiFlash in playground (#541)
- Fix the Grafana configuration (#535)