Subject: Re: [RFC PATCH 09/23] sched/fair: Use task-class performance score to pick the busiest group
On Fri, Sep 09, 2022 at 04:11:51PM -0700, Ricardo Neri wrote:
> update_sd_pick_busiest() keeps selecting scheduling groups of identical
> priority as the busiest group. Since both groups have the same priority,
> either group is a good choice. The classes of tasks in the scheduling
> groups can break this tie.
>
> Pick as busiest the scheduling group that yields a higher task-class
> performance score after load balancing.

> +/**
> + * sched_asym_class_pick - Select a sched group based on classes of tasks
> + * @a: A scheduling group
> + * @b: A second scheduling group
> + * @a_stats: Load balancing statistics of @a
> + * @b_stats: Load balancing statistics of @b
> + *
> + * Returns: true if @a has the same priority as @b and the classes of tasks
> + * in @a yield higher overall throughput after load balancing. Returns false
> + * otherwise.
> + */
> +static bool sched_asym_class_pick(struct sched_group *a,
> +				  struct sched_group *b,
> +				  struct sg_lb_stats *a_stats,
> +				  struct sg_lb_stats *b_stats)
> +{
> +	/*
> +	 * Only use the class-specific preference selection if both sched
> +	 * groups have the same priority.
> +	 */
> +	if (arch_asym_cpu_priority(a->asym_prefer_cpu) !=
> +	    arch_asym_cpu_priority(b->asym_prefer_cpu))
> +		return false;
> +
> +	return sched_asym_class_prefer(a_stats, b_stats);
> +}
> +
> #else /* CONFIG_SCHED_TASK_CLASSES */
> static void update_rq_task_classes_stats(struct sg_lb_task_class_stats *class_sgs,
> 					 struct rq *rq)

> @@ -9049,6 +9111,12 @@ static bool update_sd_pick_busiest(struct lb_env *env,
> 		/* Prefer to move from lowest priority CPU's work */
> 		if (sched_asym_prefer(sg->asym_prefer_cpu, sds->busiest->asym_prefer_cpu))
> 			return false;
> +
> +		/* @sg and @sds::busiest have the same priority. */
> +		if (sched_asym_class_pick(sds->busiest, sg, &sds->busiest_stat, sgs))
> +			return false;
> +
> +		/* @sg has lower priority than @sds::busiest. */
> 		break;
>
> case group_misfit_task:

So why does only this one instance of asym_prefer() require tie
breaking?
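
(For reference, sched_asym_prefer() is just a strict priority comparison. The
sketch below is written from memory of kernel/sched/sched.h around this time,
so treat it as illustrative rather than a verbatim quote:

	/* Prefer CPU @a over CPU @b iff @a has strictly higher asym priority. */
	static inline bool sched_asym_prefer(int a, int b)
	{
		return arch_asym_cpu_priority(a) > arch_asym_cpu_priority(b);
	}

With a strict '>', every caller that compares two CPUs of equal priority runs
into the same tie, which is what makes the "only this instance?" question
above worth answering.)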

I must also reiterate how much I hate having two different means of
dealing with big-little topologies.

And while looking through this, I must ask about the comment that goes
with sched_set_itmt_core_prio() vs the sg->asym_prefer_cpu assignment in
init_sched_groups_capacity(), what-up?!

