Subject: Re: [PATCH] sched/psi: Fix avgs_work re-arm in psi_avgs_work()
On Thu, Oct 13, 2022 at 07:06:55PM +0800, Chengming Zhou wrote:
> Do I still need to copy groupc->tasks[] out for the current_cpu, as you
> suggested before?

It'd be my preference as well. This way the resched logic can be
consolidated into a single block of comment + code at the end of the
function.
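
Something like this in the seqcount loop, i.e. copy the counts out and
defer the decision (untested sketch; tasks[] is sized by the existing
NR_PSI_TASK_COUNTS, and the loop body otherwise matches what's in
get_recent_times() today):

	unsigned int tasks[NR_PSI_TASK_COUNTS];

	do {
		seq = read_seqcount_begin(&groupc->seq);
		now = cpu_clock(cpu);
		memcpy(times, groupc->times, sizeof(groupc->times));
		state_mask = groupc->state_mask;
		state_start = groupc->state_start;
		/* Snapshot the local CPU's task counts under the seq */
		if (cpu == current_cpu)
			memcpy(tasks, groupc->tasks, sizeof(groupc->tasks));
	} while (read_seqcount_retry(&groupc->seq, seq));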

> @@ -242,6 +242,8 @@ static void get_recent_times(struct psi_group *group, int cpu,
>  			     u32 *pchanged_states)
>  {
>  	struct psi_group_cpu *groupc = per_cpu_ptr(group->pcpu, cpu);
> +	int current_cpu = raw_smp_processor_id();
> +	bool reschedule;
>  	u64 now, state_start;
>  	enum psi_states s;
>  	unsigned int seq;
> @@ -256,6 +258,10 @@ static void get_recent_times(struct psi_group *group, int cpu,
>  		memcpy(times, groupc->times, sizeof(groupc->times));
>  		state_mask = groupc->state_mask;
>  		state_start = groupc->state_start;
> +		if (cpu == current_cpu)
> +			reschedule = groupc->tasks[NR_RUNNING] +
> +				     groupc->tasks[NR_IOWAIT] +
> +				     groupc->tasks[NR_MEMSTALL] > 1;
>  	} while (read_seqcount_retry(&groupc->seq, seq));

This path is also hit from psi_show() and the poll worker. They don't
currently use the flag, but computing it for them anyway is somewhat
fragile and confusing. Add a test for
current_work() == &group->avgs_work?
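
Then the whole re-arm decision could live in one commented block at the
end of get_recent_times(). Rough, untested sketch of what I mean --
PSI_STATE_RESCHEDULE is just a placeholder name for a new bit in
*pchanged_states, and since avgs_work is a delayed_work the comparison
is against its embedded work_struct:

	/*
	 * Only the avgs_work worker needs to decide about re-arming
	 * itself. On the local CPU the worker is one running task, so
	 * require more than one; any other CPU counts as soon as it
	 * saw any activity at all.
	 */
	if (current_work() == &group->avgs_work.work) {
		bool reschedule;

		if (cpu == current_cpu)
			reschedule = tasks[NR_RUNNING] +
				     tasks[NR_IOWAIT] +
				     tasks[NR_MEMSTALL] > 1;
		else
			reschedule = *pchanged_states & (1 << PSI_NONIDLE);

		if (reschedule)
			*pchanged_states |= PSI_STATE_RESCHEDULE;
	}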
