Subject: Re: [tip:sched/core] sched: Lower chances of cputime scaling overflow
From: Peter Zijlstra <peterz@infradead.org>
Date: 2013-04-11
On Thu, 2013-04-11 at 08:38 -0700, Linus Torvalds wrote:
> On Thu, Apr 11, 2013 at 6:45 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> > On Tue, 2013-03-26 at 15:01 +0100, Stanislaw Gruszka wrote:
> >> Thoughts?
> >
> > Would something like the below work?
>
> Ugh, this is hard to think about, and it's also fairly inefficient.
>
> > static cputime_t scale_stime(u64 stime, u64 rtime, u64 total)
> > {
> > - u64 rem, res, scaled;
> > + int stime_fls = fls64(stime);
> > + int total_fls = fls64(total);
> > + int rtime_fls = fls64(rtime);
>
> Doing "fls64()" unconditionally is quite expensive on some
> architectures,

Oh, I (wrongly it appears) assumed that fls was something cheap :/

> and if I am not mistaken, the *common* case (by far) is
> that all these values fit in 32 bits, no?

It depends on whether we use cputime_jiffies.h or cputime_nsec.h and I'm
completely lost as to which we default to atm. But we sure can reduce
to 32 bits in most cases without too many problems.
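
As an illustration (a rough sketch only, not the patch quoted above; the
function names are made up, and u64/u32 are the usual linux/types.h
types), the all-32-bit common case can be caught with a single
OR-and-shift test, so that path does no fls64() at all:

static u64 scale_stime_fast(u64 stime, u64 rtime, u64 total)
{
	/*
	 * Common case: all three values fit in 32 bits, so the
	 * 32x32->64 multiply cannot overflow and the divide is a
	 * cheap 64/32.  One test, no fls64().
	 */
	if (!((stime | rtime | total) >> 32))
		return (u64)(u32)stime * (u32)rtime / (u32)total;

	return scale_stime_slow(stime, rtime, total);	/* sketch below */
}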

But that would mean fls() and shifting again for nsec-based cputime.
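
Something like this for the slow path, maybe (again just a sketch):
stime * rtime is symmetric, so keep the bigger value in rtime, then drop
bits from rtime and total together -- that preserves the rtime/total
ratio, so we only lose low-order precision -- until the product can no
longer overflow. A real version would compute the shift count once with
fls64() instead of looping:

static u64 scale_stime_slow(u64 stime, u64 rtime, u64 total)
{
	u64 tmp;

	/* the product is symmetric; make rtime the bigger of the two */
	if (stime > rtime) {
		tmp = stime;
		stime = rtime;
		rtime = tmp;
	}

	/*
	 * Halving rtime and total together preserves rtime/total, and
	 * hence the result, up to rounding.  Once rtime fits in 32
	 * bits, stime <= rtime guarantees stime * rtime fits in 64.
	 */
	while (rtime >> 32) {
		rtime >>= 1;
		total >>= 1;
	}
	if (!total)	/* total was way below rtime; avoid div by 0 */
		total = 1;

	return stime * rtime / total;
}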

I'll have a better read and think about the rest of your email but
that'll have to be tomorrow :/


