QPS drops for more than 2 minutes (also affecting PITR and CDC lag) when injecting PD leader IO delay of 500ms/1s or IO hang, due to a circuit breaker mechanism which is by design #8594
According to the log, the original PD leader pd-1 stepped down from leadership at 12:12:19 because it failed to renew its lease.
During this period, the io hang was not severe enough to make the etcd leader step down, so the etcd leader stayed on pd-1, and the PD leader was repeatedly elected on it.
After "PD leader elected on the same etcd leader in a short period" had occurred 3 consecutive times, the circuit breaker mechanism was triggered, forcibly transferring the etcd leader away.
It was then not until 12:14:12 that pd-2 became the etcd leader, and it subsequently became the PD leader at 12:14:13.
In summary, the above case is actually an expected scenario. Because of previous issues where the etcd leader remained unchanged while the PD leader was continuously unavailable, a circuit breaker mechanism was introduced; it triggers when PD leader elections fire three times in a row on the same etcd leader within a short period. This case can therefore be considered as hitting that optimization. Without it, the unavailability would only last longer. Related PR: #7301
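For illustration only, here is a minimal sketch (in Go, since PD is written in Go) of how such a frequent-election circuit breaker can work. All names and fields below are hypothetical, not PD's actual code, which lives in the PR linked above:

```go
package sketch

import (
	"errors"
	"time"
)

// errLeaderFrequentlyChange mirrors the PD:server:ErrLeaderFrequentlyChange
// error seen in the logs in this issue; this definition is hypothetical.
var errLeaderFrequentlyChange = errors.New("leader frequently changed on the same etcd leader")

// electionBreaker trips when the PD leader is elected on the same etcd
// leader too many times within a short window, so that the etcd leader can
// be forcibly transferred instead of being re-elected forever.
type electionBreaker struct {
	lastEtcdLeaderID uint64
	elections        []time.Time
	maxElections     int           // 3 in the mechanism described above
	window           time.Duration // "a short period"
}

// recordAndCheck is called on every successful PD leader election.
func (b *electionBreaker) recordAndCheck(etcdLeaderID uint64) error {
	now := time.Now()
	if etcdLeaderID != b.lastEtcdLeaderID {
		// A different etcd leader resets the counter.
		b.lastEtcdLeaderID = etcdLeaderID
		b.elections = b.elections[:0]
	}
	b.elections = append(b.elections, now)
	// Keep only elections inside the observation window.
	for len(b.elections) > 0 && now.Sub(b.elections[0]) > b.window {
		b.elections = b.elections[1:]
	}
	if len(b.elections) >= b.maxElections {
		return errLeaderFrequentlyChange // caller should move the etcd leader
	}
	return nil
}
```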
Bug Report
What did you do?
1. run tpcc
2. inject pd leader io delay 500ms
What did you expect to see?
QPS can recover within 2 minutes
What did you see instead?
QPS drop lasts 4 minutes when injecting PD leader IO delay of 500ms
clinic: https://clinic.pingcap.com.cn/portal/#/orgs/31/clusters/7370231614967615066?from=1716078044&to=1716079583
2024-05-19 08:31:01
{"container":"pd","level":"INFO","namespace":"endless-ha-test-oltp-pitr-tps-7539921-1-525","pod":"tc-pd-0","log":"[server.go:1816] ["no longer a leader because lease has expired, pd leader will step down"]"}
PD-0 lost PD leadership at 08:31:01.
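For context, PD leadership is tied to an etcd lease that the leader must keep renewing; when IO stalls the renewal, the lease expires and the leader steps down. Below is a minimal, hedged sketch of that pattern using etcd's clientv3 lease API (the endpoint and TTL are placeholders, not PD's actual configuration):

```go
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Grant a short-TTL lease; leadership is valid only while it is renewed.
	lease, err := cli.Grant(context.Background(), 3 /* seconds */)
	if err != nil {
		log.Fatal(err)
	}

	// KeepAlive renews the lease in the background. If disk IO stalls the
	// renewals (as the injected 500ms delay does), the lease expires and
	// the channel is closed.
	ch, err := cli.KeepAlive(context.Background(), lease.ID)
	if err != nil {
		log.Fatal(err)
	}
	for range ch {
		// Lease renewed; still the leader.
	}
	// Channel closed: analogous to "no longer a leader because lease has
	// expired, pd leader will step down" in the log above.
	log.Println("lease expired, stepping down from leadership")
}
```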
2024-05-19 08:31:13
{"container":"pd","level":"INFO","namespace":"endless-ha-test-oltp-pitr-tps-7539921-1-525","pod":"tc-pd-0","log":"[server.go:1733] ["campaign PD leader ok"] [campaign-leader-name=tc-pd-0]"}
At 08:31:13, since PD-0 was still the etcd leader, it was re-elected as the PD leader.
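This happens because the PD leader is essentially whoever manages to write the leader key in etcd first, and PD-0, still hosting the etcd leader, won that race again. A rough sketch of such a campaign as a clientv3 transaction (the function and setup are illustrative; the leader key is the one that appears in the error log further down):

```go
package sketch

import (
	"context"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// campaign claims the PD leader key if and only if nobody holds it yet.
// The key is bound to the caller's lease, so it vanishes when the lease
// expires, which is what lets another member win the next campaign.
func campaign(cli *clientv3.Client, leaseID clientv3.LeaseID, name string) (bool, error) {
	const leaderKey = "/pd/7370231614967615066/leader" // key from the logs in this issue
	resp, err := cli.Txn(context.Background()).
		If(clientv3.Compare(clientv3.CreateRevision(leaderKey), "=", 0)). // key must not exist
		Then(clientv3.OpPut(leaderKey, name, clientv3.WithLease(leaseID))).
		Commit()
	if err != nil {
		return false, err
	}
	return resp.Succeeded, nil // true corresponds to "campaign PD leader ok"
}
```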
2024-05-19 08:31:28
{"container":"pd","level":"INFO","namespace":"endless-ha-test-oltp-pitr-tps-7539921-1-525","pod":"tc-pd-0","log":"[server.go:1816] ["no longer a leader because lease has expired, pd leader will step down"]"}
However, because the io chaos continued, PD-0 lost the PD leader again at 08:31:28; after this had repeated three times, the etcd leader eviction mechanism was triggered:
2024-05-19 08:33:22
{"container":"pd","level":"ERROR","namespace":"endless-ha-test-oltp-pitr-tps-7539921-1-525","pod":"tc-pd-0","log":"[server.go:1713] ["campaign PD leader meets error due to etcd error"] [campaign-leader-name=tc-pd-0] [error="[PD:server:ErrLeaderFrequentlyChange]leader tc-pd-0 frequently changed, leader-key is [/pd/7370231614967615066/leader]"]"}
2024-05-19 08:33:20
{"namespace":"endless-ha-test-oltp-pitr-tps-7539921-1-525","pod":"tc-pd-1","log":"[server.go:1733] ["campaign PD leader ok"] [campaign-leader-name=tc-pd-1]","level":"INFO","container":"pd"}
At 08:33:22, PD took the initiative to oust the etcd leader, and PD-1 was elected as both the etcd leader and the PD leader.
If the etcd leader does not actively switch, PD can only passively switch the etcd leader after three consecutive PD leader election failures.
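For reference, the "active switch" corresponds to etcd's leadership-transfer API. Below is a hedged sketch of handing etcd leadership to another member with clientv3; the target selection is simplified to the first non-leader member, whereas PD's real logic is in PR #7301:

```go
package sketch

import (
	"context"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// moveEtcdLeader asks the current etcd leader to hand leadership to some
// other member. The request must be served by the leader's own endpoint.
func moveEtcdLeader(cli *clientv3.Client) error {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Find out who the current etcd leader is.
	status, err := cli.Status(ctx, cli.Endpoints()[0])
	if err != nil {
		return err
	}
	members, err := cli.MemberList(ctx)
	if err != nil {
		return err
	}
	// Pick the first member that is not the leader (simplified selection).
	for _, m := range members.Members {
		if m.ID != status.Leader {
			_, err = cli.MoveLeader(ctx, m.ID)
			return err
		}
	}
	return nil
}
```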
What version of PD are you using (pd-server -V)?
v8.1.0
githash: fca469c