Subject: [PATCH -next] sched/cputime: Fix the time backward issue about /proc/stat
From: Zheng Zucheng <zhengzucheng@huawei.com>

The cputime of cpuN read from /proc/stat can go backwards. For example, the
user field of cpu1 drops from 319 on the first read to 318 on the second:
first:
cat /proc/stat | grep cpu1
cpu1 319 0 496 41665 0 0 0 0 0 0
again:
cat /proc/stat | grep cpu1
cpu1 318 0 497 41674 0 0 0 0 0 0

The values read from /proc/stat should be monotonically increasing; otherwise,
userspace may compute incorrect CPU usage.
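To illustrate why monotonicity matters, here is a minimal user-space sketch
(illustrative only, not part of this patch; read_cpu1_user() is an invented
helper) of how a monitoring tool derives CPU usage from two /proc/stat
samples. If the counter moves backwards, the unsigned subtraction underflows
and the computed usage is garbage:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Read the "user" field of the cpu1 line from /proc/stat. */
static unsigned long long read_cpu1_user(void)
{
	unsigned long long user = 0;
	char name[16];
	FILE *f = fopen("/proc/stat", "r");

	if (!f)
		return 0;
	/* Each line starts with a name followed by numbers; keep the
	 * first number and skip the rest of the line. */
	while (fscanf(f, "%15s %llu%*[^\n]", name, &user) == 2) {
		if (!strcmp(name, "cpu1"))
			break;
	}
	fclose(f);
	return user;
}

int main(void)
{
	unsigned long long first, second;

	first = read_cpu1_user();
	sleep(1);
	second = read_cpu1_user();

	/* With a monotonic counter this is a small non-negative number.
	 * If the kernel reports 319 then 318, the unsigned subtraction
	 * underflows to a huge bogus value. */
	printf("cpu1 user delta: %llu ticks\n", second - first);
	return 0;
}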

The root cause is that kcpustat_cpu_fetch_vtime() adds the current task's
not-yet-accounted vtime (vtime->utime + delta) to the on-stack snapshot
cpustat at fetch time. If the CPU switches tasks between two fetches, the
vtime added on the second fetch can be smaller than the vtime added on the
first, so the reported total moves backwards:

            CPU0                                        CPU1

First:
show_stat()
  ->kcpustat_cpu_fetch()
    ->kcpustat_cpu_fetch_vtime()
      ->cpustat[CPUTIME_USER] =
          kcpustat_cpu(cpu) + vtime->utime + delta      rq->curr is task A

                                                        A switches to B, and
                                                        A->vtime->utime is less
                                                        than 1 tick
Then:
show_stat()
  ->kcpustat_cpu_fetch()
    ->kcpustat_cpu_fetch_vtime()
      ->cpustat[CPUTIME_USER] =
          kcpustat_cpu(cpu) + vtime->utime + delta      rq->curr is task B
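
To make the race concrete, the following is a simplified user-space model
(invented names, not kernel code): base stands in for kcpustat_cpu(cpu), and
each task's utime for its not-yet-folded vtime. Because A's sub-tick utime
has not been folded into base when B starts running, the second fetch returns
a smaller value:

#include <stdio.h>

struct task { unsigned long long utime; };	/* unflushed vtime, in ns */

static unsigned long long base = 3190000000ULL;	/* kcpustat_cpu(cpu) */
static struct task a = { .utime = 900000 };	/* less than 1 tick */
static struct task b = { .utime = 0 };
static struct task *running = &a;

static unsigned long long fetch_user(void)
{
	/* Mirrors cpustat[CPUTIME_USER] = kcpustat + vtime->utime + delta */
	return base + running->utime;
}

int main(void)
{
	unsigned long long first = fetch_user();

	/* A switches to B: A's utime is below one tick, so it is not yet
	 * folded into base, and the next fetch sees B's vtime instead. */
	running = &b;

	{
		unsigned long long second = fetch_user();

		printf("first=%llu second=%llu backwards=%s\n",
		       first, second, second < first ? "yes" : "no");
	}
	return 0;
}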

Fixes: 74722bb223d0 ("sched/vtime: Bring up complete kcpustat accessor")
Signed-off-by: Li Hua <hucool.lihua@huawei.com>
Signed-off-by: Zheng Zucheng <zhengzucheng@huawei.com>
---
kernel/sched/cputime.c | 24 +++++++++++++++++++++++-
1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 95fc77853743..c7a812ff1fb7 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -1060,9 +1060,17 @@ static int kcpustat_cpu_fetch_vtime(struct kernel_cpustat *dst,
 	return 0;
 }
 
+DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat_reverse);
+DEFINE_PER_CPU(raw_spinlock_t, kernel_cpustat_reverse_lock);
+
 void kcpustat_cpu_fetch(struct kernel_cpustat *dst, int cpu)
 {
 	const struct kernel_cpustat *src = &kcpustat_cpu(cpu);
+	struct kernel_cpustat *reverse = &per_cpu(kernel_cpustat_reverse, cpu);
+	raw_spinlock_t *cpustat_lock = &per_cpu(kernel_cpustat_reverse_lock, cpu);
+	u64 *dstat = dst->cpustat;
+	u64 *restat = reverse->cpustat;
+	unsigned long flags;
 	struct rq *rq;
 	int err;
 
@@ -1087,8 +1095,22 @@ void kcpustat_cpu_fetch(struct kernel_cpustat *dst, int cpu)
 		err = kcpustat_cpu_fetch_vtime(dst, src, curr, cpu);
 		rcu_read_unlock();
 
-		if (!err)
+		if (!err) {
+			raw_spin_lock_irqsave(cpustat_lock, flags);
+			if (dstat[CPUTIME_USER] < restat[CPUTIME_USER])
+				dstat[CPUTIME_USER] = restat[CPUTIME_USER];
+			if (dstat[CPUTIME_SYSTEM] < restat[CPUTIME_SYSTEM])
+				dstat[CPUTIME_SYSTEM] = restat[CPUTIME_SYSTEM];
+			if (dstat[CPUTIME_NICE] < restat[CPUTIME_NICE])
+				dstat[CPUTIME_NICE] = restat[CPUTIME_NICE];
+			if (dstat[CPUTIME_GUEST] < restat[CPUTIME_GUEST])
+				dstat[CPUTIME_GUEST] = restat[CPUTIME_GUEST];
+			if (dstat[CPUTIME_GUEST_NICE] < restat[CPUTIME_GUEST_NICE])
+				dstat[CPUTIME_GUEST_NICE] = restat[CPUTIME_GUEST_NICE];
+			*reverse = *dst;
+			raw_spin_unlock_irqrestore(cpustat_lock, flags);
 			return;
+		}
 
 		cpu_relax();
 	}
--
2.18.0.huawei.25