Subject: Re: [PATCH v5 09/14] sched: Add over-utilization/tipping point indicator
    On Fri, 3 Aug 2018 at 15:49, Vincent Guittot <vincent.guittot@linaro.org> wrote:
    >
    > On Fri, 3 Aug 2018 at 10:18, Quentin Perret <quentin.perret@arm.com> wrote:
    > >
    > > On Friday 03 Aug 2018 at 09:48:47 (+0200), Vincent Guittot wrote:
    > > > On Thu, 2 Aug 2018 at 18:59, Quentin Perret <quentin.perret@arm.com> wrote:
    > > > I'm not really concerned about re-enabling load balance, but rather
    > > > that the packing of tasks onto a few cpus/clusters that EAS tries to
    > > > achieve can be broken by every new task.
    > >
    > > Well, re-enabling load balance immediately would break the nice placement
    > > that EAS found, because it would shuffle all tasks around and break the
    > > packing strategy. Letting that sole new task go through find_idlest_cpu()
    >
    > Sorry, I was not clear in my explanation. Re-enabling load balance
    > would be a problem, of course. I wanted to say that there is little
    > chance that this will re-enable load balance immediately and break
    > EAS, so I'm not worried by that case. I'm only concerned by the new
    > task being placed outside the EAS policy.
    >
    > For example, if you run on hikey960 the simple script below, which
    > can't really be seen as a fork bomb IMHO, you will see threads
    > scheduled on big cores every 0.5 seconds whereas everything should be
    > packed on the little cores:
    > for i in {0..10}; do
    >     echo "t"$i
    >     sleep 0.5
    > done
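
    To make this concrete, here is a toy, self-contained C sketch of the
    tension being discussed: an EAS-style pick needs a utilization estimate,
    whereas a brand-new task has none and is treated as heavy, so a
    load-based "idlest cpu" fallback lands it on an idle big core. Untested,
    and the capacities, utilization numbers and selection logic are made up
    for illustration; only the function names echo find_energy_efficient_cpu()
    and find_idlest_cpu() from the series:

    #include <stdio.h>

    #define NR_CPUS 8

    /* hikey960-like capacities: 4 LITTLE cores, 4 big cores */
    static const int capacity[NR_CPUS] = { 462, 462, 462, 462, 1024, 1024, 1024, 1024 };
    static int cpu_util[NR_CPUS]       = { 100, 120,  90, 110,    0,    0,    0,    0 };

    /* EAS-style pick: the smallest-capacity cpu that still fits the task */
    static int toy_find_energy_efficient_cpu(int task_util)
    {
            int cpu, best = -1;

            for (cpu = 0; cpu < NR_CPUS; cpu++) {
                    if (cpu_util[cpu] + task_util > capacity[cpu])
                            continue;       /* would over-utilize this cpu */
                    if (best < 0 || capacity[cpu] < capacity[best])
                            best = cpu;
            }
            return best;
    }

    /* load-based fallback pick: the emptiest cpu, capacity ignored */
    static int toy_find_idlest_cpu(void)
    {
            int cpu, best = 0;

            for (cpu = 1; cpu < NR_CPUS; cpu++)
                    if (cpu_util[cpu] < cpu_util[best])
                            best = cpu;
            return best;
    }

    int main(void)
    {
            /* a stabilized light task (util ~50) fits on a LITTLE core */
            printf("light task -> cpu %d\n", toy_find_energy_efficient_cpu(50));

            /* a new task has no usable utilization, so the fallback picks
             * the emptiest cpu: an idle big core, as the script shows */
            printf("new task   -> cpu %d\n", toy_find_idlest_cpu());
            return 0;
    }

    With these made-up numbers the EAS-style pick keeps the light task on a
    LITTLE core, while the fallback sends the new task to an idle big core,
    which is the behaviour the script above exposes.
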
    >
    > > shouldn't impact the placement of existing tasks. That might have an energy
    > > cost for that one task, yes, but it's really hard to do anything smarter
    > > with new tasks IMO ... EAS simply can't work without a utilization value.
    > >
    > > > So I wonder what is better for EAS: make sure to efficiently spread
    > > > newly created tasks in case of a fork bomb, or try not to break EAS
    > > > task placement with every newly created task.
    > >
    > > That shouldn't break the placement per se, we're just making one
    > > temporary exception for new tasks. What do you think 'the right thing'
    > > to do is? To just put new tasks on prev_cpu or something like that?
    >
    > I think that EAS, which is about saving power, could be a bit more
    > power-friendly when it has to make assumptions about new tasks.
    >
    > >
    > > That might help some use-cases I suppose, but will probably harm others ...
    > > I'm just not too keen on making assumptions about the size of new tasks,
    >
    > But you are already making some assumptions by letting the default
    > mode, which uses load_avg, select the cpu for you. The comment in
    > the init function of load_avg states:
    >
    > void init_entity_runnable_average(struct sched_entity *se)
    > {
    >         ...
    >         /*
    >          * Tasks are initialized with full load to be seen as heavy tasks until
    >          * they get a chance to stabilize to their real load level.
    >          * Group entities are initialized with zero load to reflect the fact that
    >          * nothing has been attached to the task group yet.
    >          */
    >
    > So it means that EAS makes the assumption that new tasks are heavy
    > tasks until they get a chance to stabilize.
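
    As a rough illustration of how long that "heavy until stabilized" window
    lasts, assuming the usual PELT constants (1024 as full scale, 32ms
    half-life): a freshly initialized full-load signal needs on the order of
    a hundred milliseconds of sleep to decay toward a mostly-idle task's real
    level. The closed-form decay below is untested and a simplification of
    the real PELT update, which works in ~1ms segments:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
            const double scale = 1024.0;            /* full-scale load/util */
            const double y = pow(0.5, 1.0 / 32.0);  /* per-ms decay, 32ms half-life */
            int ms;

            /* a brand-new task starts at full scale and, if it mostly
             * sleeps like the tasks in the script above, decays as y^t */
            for (ms = 32; ms <= 128; ms += 32)
                    printf("after %3d ms asleep: %7.2f\n", ms, scale * pow(y, ms));

            return 0;
    }

    Each 0.5s iteration of the script spawns a task that restarts at the 1024
    end of this curve, which is why placement keeps drifting to the big cores.
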
    >
    > Regards,
    > Vincent
    >
    > > that's all. But I'm definitely open to ideas if there is something
    > > smarter we can do.
    > >
    > > Thanks,
    > > Quentin
