Date:	Wed, 16 Aug 2023 10:12:52 +0530
From:	"Gautham R. Shenoy" <>
Subject:	Re: [RFC PATCH 1/1] sched/fair: ratelimit update to tg->load_avg
Hello Aaron,

(Adding David Vernet)
On Wed, Aug 16, 2023 at 10:48:31AM +0800, Aaron Lu wrote:
> When using sysbench to benchmark Postgres in a single docker instance
> with sysbench's nr_threads set to nr_cpu, it is observed that at times
> update_cfs_group() and update_load_avg() show noticeable overhead on
> a 2 sockets/112 cores/224 CPUs Intel Sapphire Rapids (SPR):
>
>     13.75%  13.74%  [kernel.vmlinux]  [k] update_cfs_group
>     10.63%  10.04%  [kernel.vmlinux]  [k] update_load_avg
>
> Annotation shows the cycles are mostly spent on accessing tg->load_avg,
> with update_load_avg() being the write side and update_cfs_group() being
> the read side. tg->load_avg is per task group, and when different tasks
> of the same task group running on different CPUs frequently access
> tg->load_avg, it can be heavily contended.
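For reference, the two sides of this contention look roughly like the
following in kernel/sched/fair.c (a simplified sketch, not the exact
upstream code):

    /*
     * Write side (simplified): when a cfs_rq's load contribution has
     * changed enough, the delta is folded into the shared per-task-group
     * counter with an atomic RMW on a cacheline visible to all CPUs.
     */
    static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
    {
            long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;

            if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
                    atomic_long_add(delta, &cfs_rq->tg->load_avg);
                    cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
            }
    }

    /*
     * Read side (simplified): calc_group_shares(), called from
     * update_cfs_group(), reads the same counter to recompute the
     * group entity's weight:
     */
    tg_weight = atomic_long_read(&tg->load_avg);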
Interestingly, I observed this contention on two-socket EPYC servers
(Zen3 and Zen4) while running tbench and netperf with David Vernet's
shared-runqueue v3 patches. The contention appeared only with the
shared runqueue enabled, not otherwise.
  Overhead  Command  Shared Object     Symbol
+   20.54%  tbench   [kernel.vmlinux]  [k] update_cfs_group
+   15.78%  tbench   [kernel.vmlinux]  [k] update_load_avg
This was causing tbench (and netperf) to stop scaling beyond 32 clients
when the shared runqueue was enabled.
> The frequent access to tg->load_avg is due to task migration on the
> wakeup path, e.g. when running postgres_sysbench on a 2 sockets/112
> cores/224 CPUs Intel Sapphire Rapids, during a 5s window the wakeup
> count is 14 million and the migration count is 11 million, and with
> each migration the task's load is transferred from the src cfs_rq to
> the target cfs_rq, with each transfer involving an update to
> tg->load_avg.
With the shared-runqueue patches, we see far more task migrations,
since the newidle_balance() path pulls tasks from the shared runqueue.
And while the read of tg->load_avg boils down to a READ_ONCE() on x86,
the write is an atomic add: a lock-prefixed read-modify-write that
takes the cacheline exclusive on the writing CPU, so with frequent
writers on many CPUs both readers and writers keep missing on that
line.
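To make the write amplification concrete, here is a rough sketch of how
each migration ends up touching tg->load_avg (function names are from
kernel/sched/fair.c; the exact call chain is simplified):

    /*
     * Wakeup migration (simplified): both ends of the migration update
     * their cfs_rq's load, and each update can propagate into the
     * shared tg->load_avg via an atomic RMW:
     *
     *   src CPU: remove_entity_load_avg()
     *              -> delta recorded in cfs_rq->removed, folded in
     *                 later by update_cfs_rq_load_avg()
     *              -> update_tg_load_avg()    (atomic_long_add)
     *
     *   dst CPU: attach_entity_load_avg()
     *              -> update_tg_load_avg()    (atomic_long_add)
     */

With up to two atomic RMWs per migration and millions of migrations in
a few seconds, the tg->load_avg cacheline keeps bouncing between
sockets.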
> Since the workload can trigger as many wakeups and migrations, the
> access (both read and write) to tg->load_avg can be unbound. As a
> result, the two mentioned functions showed noticeable overhead. With
> netperf/nr_client=nr_cpu/UDP_RR, the problem is worse: during a 5s
> window, the wakeup count is 21 million and the migration count is
> 14 million; update_cfs_group() costs ~25% and update_load_avg()
> costs ~16%.
>
> Reduce the overhead by limiting updates to tg->load_avg to at most
> once per ms. After this change, the cost of accessing tg->load_avg
> is greatly reduced and performance improved. Detailed test results
> below.
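A minimal sketch of the ratelimiting idea, assuming a new u64 timestamp
field in struct cfs_rq (the field name last_update_tg_load_avg below is
illustrative, not necessarily what the patch uses):

    static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
    {
            u64 now = sched_clock_cpu(cpu_of(rq_of(cfs_rq)));
            long delta;

            /* At most one tg->load_avg update per ms per cfs_rq.
             * last_update_tg_load_avg is an illustrative new field. */
            if (now - cfs_rq->last_update_tg_load_avg < NSEC_PER_MSEC)
                    return;

            delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
            if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
                    atomic_long_add(delta, &cfs_rq->tg->load_avg);
                    cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
                    cfs_rq->last_update_tg_load_avg = now;
            }
    }

The tradeoff is that tg->load_avg can lag a cfs_rq's actual
contribution by up to 1ms, in exchange for bounding the atomic traffic
on the shared cacheline.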
I will try this patch on top of David's series today.
--
Thanks and Regards
gautham.