Subject: Re: [PATCH] sched/psi: Fix avgs_work re-arm in psi_avgs_work()
From: Chengming Zhou <zhouchengming@bytedance.com>
On 2022/10/12 01:00, Suren Baghdasaryan wrote:
> On Mon, Oct 10, 2022 at 5:07 PM Chengming Zhou
> <zhouchengming@bytedance.com> wrote:
>>
>> Hello,
>>
>> On 2022/10/11 05:21, Suren Baghdasaryan wrote:
>>> On Mon, Oct 10, 2022 at 3:42 AM Chengming Zhou
>>> <zhouchengming@bytedance.com> wrote:
>>>>
>>>> Pavan reported a problem that PSI avgs_work idle shutoff is not
>>>> working at all, because the PSI_NONIDLE condition would be observed
>>>> in psi_avgs_work()->collect_percpu_times()->get_recent_times() even
>>>> if the kworker running avgs_work is the only task on the CPU.
>>>>
>>>> Although commit 1b69ac6b40eb ("psi: fix aggregation idle shut-off")
>>>> avoided the ping-pong wake problem when the worker sleeps,
>>>> psi_avgs_work() will still always re-arm the avgs_work, so shutoff
>>>> is not working.
>>>>
>>>> This patch changes the code to consider the current CPU's groupc as
>>>> IDLE if the kworker running avgs_work is the only task running and
>>>> there are no IOWAIT or MEMSTALL sleeping tasks, in which case we
>>>> shut off the avgs_work if the other CPUs' groupc are also IDLE.
>>>>
>>>> One potential problem is that the brief period of non-idle time
>>>> incurred between the aggregation run and the kworker's dequeue will
>>>> be stranded in the per-cpu buckets until avgs_work runs next time.
>>>> The buckets can hold 4s worth of time, and future activity will wake
>>>> the avgs_work with a 2s delay, giving us 2s worth of data we can
>>>> leave behind when shutting off the avgs_work. If the kworker runs
>>>> other work after avgs_work is shut off and there is no scheduler
>>>> activity for 2s, this may be a problem.
>>>>
>>>> Reported-by: Pavan Kondeti <quic_pkondeti@quicinc.com>
>>>> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
>>>
>>> Copying my comments from
>>> https://lore.kernel.org/all/CAJuCfpHyYMak-mfVmtEN9Z-hGYQ6Wko57TLjukz9HaN26EDAuA@mail.gmail.com/
>>> in case you want to continue the discussion here...
>>>
>>>> ---
>>>> kernel/sched/psi.c | 15 +++++++++++++++
>>>> 1 file changed, 15 insertions(+)
>>>>
>>>> diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
>>>> index ee2ecc081422..f4cdf6f184ba 100644
>>>> --- a/kernel/sched/psi.c
>>>> +++ b/kernel/sched/psi.c
>>>> @@ -242,6 +242,8 @@ static void get_recent_times(struct psi_group *group, int cpu,
>>>>  			     u32 *pchanged_states)
>>>>  {
>>>>  	struct psi_group_cpu *groupc = per_cpu_ptr(group->pcpu, cpu);
>>>> +	int current_cpu = raw_smp_processor_id();
>>>> +	bool only_avgs_work = false;
>>>>  	u64 now, state_start;
>>>>  	enum psi_states s;
>>>>  	unsigned int seq;
>>>> @@ -256,6 +258,15 @@ static void get_recent_times(struct psi_group *group, int cpu,
>>>>  		memcpy(times, groupc->times, sizeof(groupc->times));
>>>>  		state_mask = groupc->state_mask;
>>>>  		state_start = groupc->state_start;
>>>> +		/*
>>>> +		 * This CPU has only the avgs_work kworker running; snapshot
>>>> +		 * the newest times, then there is no need to re-arm for this
>>>> +		 * groupc. Normally this kworker will sleep soon and won't
>>>> +		 * wake avgs_work back up in psi_group_change().
>>>> +		 */
>>>> +		if (current_cpu == cpu && groupc->tasks[NR_RUNNING] == 1 &&
>>>> +		    !groupc->tasks[NR_IOWAIT] && !groupc->tasks[NR_MEMSTALL])
>>>> +			only_avgs_work = true;
>>>
>>> Why do you determine only_avgs_work while taking a snapshot? The
>>> read_seqcount_retry() might fail and the loop gets retried, which
>>> might lead to a wrong only_avgs_work value if the state changes
>>> between retries. I think it's safer to do this after the snapshot is
>>> taken, and to use the snapshotted tasks[NR_RUNNING] instead of
>>> groupc->tasks.
>>
>> Ah, you are right. Copying groupc->tasks[NR_RUNNING], tasks[NR_IOWAIT]
>> and tasks[NR_MEMSTALL] is ok for me. (Maybe we only need to copy
>> groupc->tasks[NR_RUNNING]?)
>>
>> Another way is to add an else branch:
>>
>> if (current_cpu == cpu && groupc->tasks[NR_RUNNING] == 1 &&
>>     !groupc->tasks[NR_IOWAIT] && !groupc->tasks[NR_MEMSTALL])
>> 	only_avgs_work = true;
>> else
>> 	only_avgs_work = false;
>>
>> Both are ok for me.
>
> Let's use the simple way and use the tasks[] after the snapshot is taken.

Ok, I changed it like this:

 	struct psi_group_cpu *groupc = per_cpu_ptr(group->pcpu, cpu);
+	int current_cpu = raw_smp_processor_id();
+	unsigned int tasks[NR_PSI_TASK_COUNTS];
 	u64 now, state_start;
 	enum psi_states s;
 	unsigned int seq;
@@ -256,6 +258,8 @@ static void get_recent_times(struct psi_group *group, int cpu,
 		memcpy(times, groupc->times, sizeof(groupc->times));
 		state_mask = groupc->state_mask;
 		state_start = groupc->state_start;
+		if (cpu == current_cpu)
+			memcpy(tasks, groupc->tasks, sizeof(groupc->tasks));
 	} while (read_seqcount_retry(&groupc->seq, seq));
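
and then after the retry loop we can test the snapshotted counts, roughly
like this (just a sketch; it still clears PSI_NONIDLE directly here, which
touches on the separate-flag question below):

	/*
	 * Only the avgs_work kworker was running on this CPU when the
	 * consistent snapshot was taken: no other runnable tasks and no
	 * IOWAIT or MEMSTALL sleepers, so don't re-arm for this groupc.
	 */
	if (cpu == current_cpu && tasks[NR_RUNNING] == 1 &&
	    !tasks[NR_IOWAIT] && !tasks[NR_MEMSTALL])
		*pchanged_states &= ~(1 << PSI_NONIDLE);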

>
>>
>>>
>>>>  	} while (read_seqcount_retry(&groupc->seq, seq));
>>>>
>>>>  	/* Calculate state time deltas against the previous snapshot */
>>>> @@ -280,6 +291,10 @@ static void get_recent_times(struct psi_group *group, int cpu,
>>>>  		if (delta)
>>>>  			*pchanged_states |= (1 << s);
>>>>  	}
>>>> +
>>>> +	/* Clear PSI_NONIDLE so avgs_work won't be re-armed for this groupc */
>>>> +	if (only_avgs_work)
>>>> +		*pchanged_states &= ~(1 << PSI_NONIDLE);
>>>
>>> This seems to be safe because changed_states & (1 << PSI_NONIDLE) is
>>> used only for re-arming psi_avgs_work(); however, semantically this
>>> is incorrect. The CPU was not idle when it was executing
>>> psi_avgs_work. IMO a separate flag would avoid this confusion.
>>
>> Yes, it's safe, but it has this confusion. Using a separate flag looks
>> better, like PSI_ONLY_AVGS_WORK. But then in collect_percpu_times() we
>> still have to clear PSI_NONIDLE for this CPU if PSI_ONLY_AVGS_WORK has
>> been set.
>>
>> for_each_possible_cpu(cpu) {
>> 	u32 times[NR_PSI_STATES];
>> 	u32 nonidle;
>> 	u32 cpu_changed_states;
>>
>> 	get_recent_times(group, cpu, aggregator, times,
>> 			 &cpu_changed_states);
>> 	changed_states |= cpu_changed_states;
>>
>> cpu_changed_states should clear PSI_NONIDLE if PSI_ONLY_AVGS_WORK is
>> already set.
>
> No, PSI_NONIDLE should not be affected by PSI_ONLY_AVGS_WORK. These
> flags should be independent and aggregated into changed_states without
> affecting each other. Something similar to how I suggested with
> PSI_STATE_WAKE_CLOCK in
> https://lore.kernel.org/all/CAJuCfpFr3JfwkWbDqkU=NUJbCYuCWGySwNusMCdmS3z95WD2AQ@mail.gmail.com/#t.

If we don't clear PSI_NONIDLE for this CPU, then
"changed_states |= cpu_changed_states" sets PSI_NONIDLE in
changed_states, and in psi_avgs_work() we won't know whether the other
CPUs are IDLE or not.
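
To be concrete, here is roughly the path in question (paraphrasing
collect_percpu_times() and psi_avgs_work() from kernel/sched/psi.c,
details trimmed): changed_states is an OR over all CPUs, and the re-arm
decision only sees the aggregated PSI_NONIDLE bit:

	/* collect_percpu_times(): changed_states is an OR across CPUs */
	for_each_possible_cpu(cpu) {
		get_recent_times(group, cpu, aggregator, times,
				 &cpu_changed_states);
		changed_states |= cpu_changed_states;
		...
	}

	/* psi_avgs_work(): any one non-idle CPU keeps the work re-armed */
	if (changed_states & (1 << PSI_NONIDLE))
		schedule_delayed_work(dwork, nsecs_to_jiffies(
				group->avg_next_update - now) + 1);

So if this CPU's PSI_NONIDLE bit, set only because the kworker itself was
running, leaks into changed_states, the work re-arms even when every
other CPU is idle.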

Thanks.
