From: Oleg Nesterov <oleg@redhat.com>
Date: 2020-05-15
Subject: Re: [PATCH v2] sched/cputime: make scale_stime() more precise
ping...

Peter, could you comment?

On 01/27, Oleg Nesterov wrote:
>
> People report that utime and stime from /proc/<pid>/stat become very
> wrong when the numbers are big enough, especially if you watch these
> counters incrementally.
>
> Say, if the monitored process has run for 100 days, split 50/50 between
> user and kernel mode, it can look as if it ran entirely in kernel mode
> for 20 minutes, then entirely in user mode for the next 20 minutes. See
> the test-case which tries to demonstrate this behaviour:
>
> https://lore.kernel.org/lkml/20200124154215.GA14714@redhat.com/
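>
> As a quick illustration (a sketch of my own, not the test-case linked
> above; the file and function names are arbitrary), the pre-patch loop
> can be lifted into userspace and compared against exact 128-bit
> arithmetic around the 100-day mark:
>
>	/* old-scale.c: build with gcc -O2 old-scale.c */
>	#include <stdio.h>
>	#include <stdint.h>
>
>	#define SWAP(a, b) do { uint64_t t_ = (a); (a) = (b); (b) = t_; } while (0)
>
>	/* the pre-patch scale_stime(), with swap()/div_u64() open-coded */
>	static uint64_t old_scale_stime(uint64_t stime, uint64_t rtime, uint64_t total)
>	{
>		for (;;) {
>			/* make sure "rtime" is the bigger of stime/rtime */
>			if (stime > rtime)
>				SWAP(rtime, stime);
>			if (total >> 32)
>				goto drop_precision;
>			if (!(rtime >> 32))
>				break;
>			if (stime >> 31)
>				goto drop_precision;
>			stime <<= 1;
>			rtime >>= 1;
>			continue;
>	drop_precision:
>			rtime >>= 1;
>			total >>= 1;
>		}
>		return (uint64_t)(uint32_t)stime * (uint32_t)rtime / (uint32_t)total;
>	}
>
>	int main(void)
>	{
>		const uint64_t NSEC_PER_MIN = 60ULL * 1000000000;
>		uint64_t rtime = 100ULL * 24 * 3600 * 1000000000; /* ~100 days */
>		uint64_t max_err = 0;
>		int i;
>
>		/* sample once a minute for a day, 50/50 user/kernel split */
>		for (i = 0; i < 24 * 60; i++, rtime += NSEC_PER_MIN) {
>			uint64_t stime = rtime / 2, total = rtime;
>			uint64_t exact = ((__uint128_t)stime * rtime) / total;
>			uint64_t old = old_scale_stime(stime, rtime, total);
>			uint64_t err = old > exact ? old - exact : exact - old;
>			if (err > max_err)
>				max_err = err;
>		}
>		printf("worst error: %llu ns (~%llu min)\n",
>		       (unsigned long long)max_err,
>		       (unsigned long long)(max_err / 60000000000ULL));
>		return 0;
>	}
>
> The damage comes from the drop_precision path: by the time "total" fits
> in 32 bits it has lost roughly its 40 low bits, and at this scale those
> dropped bits are worth minutes of CPU time.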
>
> The new implementation does an additional div64_u64_rem(), but according
> to my naive measurements it is faster on x86_64, and much faster when
> rtime etc. are big enough. See
>
> https://lore.kernel.org/lkml/20200123130541.GA30620@redhat.com/
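>
> (A crude userspace harness along these lines is enough to redo that
> kind of comparison; it assumes old_scale_stime() from the sketch above
> and new_scale_stime() from the sketch after the patch, with "static"
> and their main() functions dropped so the files link together:)
>
>	/* bench.c: gcc -O2 bench.c old-scale.c new-scale.c */
>	#include <stdio.h>
>	#include <stdint.h>
>	#include <time.h>
>
>	uint64_t old_scale_stime(uint64_t, uint64_t, uint64_t);
>	uint64_t new_scale_stime(uint64_t, uint64_t, uint64_t);
>
>	static double bench(uint64_t (*fn)(uint64_t, uint64_t, uint64_t))
>	{
>		uint64_t rtime = 100ULL * 24 * 3600 * 1000000000;
>		volatile uint64_t sink = 0;	/* keep the calls alive */
>		struct timespec a, b;
>		int i;
>
>		clock_gettime(CLOCK_MONOTONIC, &a);
>		for (i = 0; i < 10000000; i++)	/* 10M calls, big args */
>			sink += fn(rtime / 2 + i, rtime + 2 * i, rtime + i);
>		clock_gettime(CLOCK_MONOTONIC, &b);
>		(void)sink;
>		return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
>	}
>
>	int main(void)
>	{
>		printf("old: %.3fs new: %.3fs\n",
>		       bench(old_scale_stime), bench(new_scale_stime));
>		return 0;
>	}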
>
> Signed-off-by: Oleg Nesterov <oleg@redhat.com>
> ---
> kernel/sched/cputime.c | 65 +++++++++++++++++++++++++-------------------------
> 1 file changed, 32 insertions(+), 33 deletions(-)
>
> diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
> index d43318a..ae1ea09 100644
> --- a/kernel/sched/cputime.c
> +++ b/kernel/sched/cputime.c
> @@ -528,42 +528,41 @@ void account_idle_ticks(unsigned long ticks)
> */
> static u64 scale_stime(u64 stime, u64 rtime, u64 total)
> {
> - u64 scaled;
> + u64 res = 0, div, rem;
> + int shift;
>
> - for (;;) {
> - /* Make sure "rtime" is the bigger of stime/rtime */
> - if (stime > rtime)
> - swap(rtime, stime);
> -
> - /* Make sure 'total' fits in 32 bits */
> - if (total >> 32)
> - goto drop_precision;
> -
> - /* Does rtime (and thus stime) fit in 32 bits? */
> - if (!(rtime >> 32))
> - break;
> -
> - /* Can we just balance rtime/stime rather than dropping bits? */
> - if (stime >> 31)
> - goto drop_precision;
> -
> - /* We can grow stime and shrink rtime and try to make them both fit */
> - stime <<= 1;
> - rtime >>= 1;
> - continue;
> -
> -drop_precision:
> - /* We drop from rtime, it has more bits than stime */
> - rtime >>= 1;
> - total >>= 1;
> + /* can stime * rtime overflow ? */
> + if (ilog2(stime) + ilog2(rtime) > 62) {
> + /*
> + * (rtime * stime) / total is equal to
> + *
> + * (rtime / total) * stime +
> + * (rtime % total) * stime / total
> + *
> + * if nothing overflows. Can the 1st multiplication
> + * overflow? Yes, but we do not care: this can only
> + * happen if the end result can't fit in u64 anyway.
> + *
> + * So the code below does
> + *
> + * res = (rtime / total) * stime;
> + * rtime = rtime % total;
> + */
> + div = div64_u64_rem(rtime, total, &rem);
> + res = div * stime;
> + rtime = rem;
> +
> + shift = ilog2(stime) + ilog2(rtime) - 62;
> + if (shift > 0) {
> + /* drop precision */
> + rtime >>= shift;
> + total >>= shift;
> + if (!total)
> + return res;
> + }
> }
>
> - /*
> - * Make sure gcc understands that this is a 32x32->64 multiply,
> - * followed by a 64/32->64 divide.
> - */
> - scaled = div_u64((u64) (u32) stime * (u64) (u32) rtime, (u32)total);
> - return scaled;
> + return res + div64_u64(stime * rtime, total);
> }
>
> /*
> --
> 2.5.0
>
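
For anyone who wants to poke at the new math outside the kernel, here is
a userspace model of the patched function (div64_u64_rem(), div64_u64()
and ilog2() are open-coded; the rem == 0 early return is an addition of
mine, to avoid calling ilog2(0) in userspace), checked against exact
128-bit arithmetic:

	/* new-scale.c: userspace model of the patched scale_stime() */
	#include <stdio.h>
	#include <stdint.h>

	static int ilog2_u64(uint64_t v)
	{
		return 63 - __builtin_clzll(v);	/* caller ensures v != 0 */
	}

	static uint64_t new_scale_stime(uint64_t stime, uint64_t rtime, uint64_t total)
	{
		uint64_t res = 0, div, rem;
		int shift;

		/* can stime * rtime overflow? */
		if (ilog2_u64(stime) + ilog2_u64(rtime) > 62) {
			/*
			 * (rtime * stime) / total ==
			 *     (rtime / total) * stime +
			 *     (rtime % total) * stime / total
			 */
			div = rtime / total;	/* div64_u64_rem() */
			rem = rtime % total;
			res = div * stime;
			rtime = rem;

			if (!rtime)		/* remainder term is 0 */
				return res;

			shift = ilog2_u64(stime) + ilog2_u64(rtime) - 62;
			if (shift > 0) {
				/* drop precision */
				rtime >>= shift;
				total >>= shift;
				if (!total)
					return res;
			}
		}
		return res + stime * rtime / total;	/* div64_u64() */
	}

	int main(void)
	{
		const uint64_t DAY = 24ULL * 3600 * 1000000000;	/* ns */
		/* 100 days of runtime, 40 of them in kernel mode; the
		   sampled utime + stime add up to only 99 days */
		uint64_t rtime = 100 * DAY, stime = 40 * DAY, total = 99 * DAY;
		uint64_t exact = ((__uint128_t)stime * rtime) / total;
		uint64_t got = new_scale_stime(stime, rtime, total);

		printf("exact %llu got %llu diff %lld ns\n",
		       (unsigned long long)exact, (unsigned long long)got,
		       (long long)(got - exact));
		return 0;
	}

With these inputs the (rtime / total) * stime term is exact and only the
(rtime % total) * stime / total term is truncated, so the diff comes out
on the order of seconds at the 100-day scale, where the old loop could be
off by the ~20 minutes described above.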
