    Subject: Re: [RFC][PATCH] sched: Use lightweight hazard pointers to grab lazy mms
    On Thu, Jun 17, 2021 at 11:08:03AM +0200, Peter Zijlstra wrote:
    > On Wed, Jun 16, 2021 at 10:32:15PM -0700, Andy Lutomirski wrote:

    > --- a/arch/x86/include/asm/mmu.h
    > +++ b/arch/x86/include/asm/mmu.h
    > @@ -66,4 +66,9 @@ typedef struct {
    > void leave_mm(int cpu);
    > #define leave_mm leave_mm
    >
    > +/* On x86, mm_cpumask(mm) contains all CPUs that might be lazily using mm */
    > +#define for_each_possible_lazymm_cpu(cpu, mm) \
    > + for_each_cpu((cpu), mm_cpumask((mm)))
    > +
    > +
    > #endif /* _ASM_X86_MMU_H */

    > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
    > index 8ac693d542f6..e102ec53c2f6 100644
    > --- a/kernel/sched/core.c
    > +++ b/kernel/sched/core.c
    > @@ -19,6 +19,7 @@
    >

    > +
    > +#ifndef for_each_possible_lazymm_cpu
    > +#define for_each_possible_lazymm_cpu(cpu, mm) for_each_online_cpu((cpu))
    > +#endif
    > +

    Why can't the x86 implementation be the default? IIRC the problem with
    mm_cpumask() is that some architectures never clear bits, but they
    should all at least be setting bits -- or were there archs that didn't
    even do that?
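
    For illustration, a minimal sketch of what promoting the x86 variant to
    the generic fallback might look like. This is hypothetical: it assumes
    every architecture at least sets a CPU's bit in mm_cpumask() before
    running the mm lazily, which is exactly the open question above. Arches
    that set bits but never clear them would merely over-approximate the
    set of possible lazy users, which is harmless for this walk.

    /*
     * Hypothetical generic default: walk mm_cpumask(mm) instead of
     * every online CPU. A stale (never-cleared) bit only adds a
     * spurious CPU to the walk; a bit that was never set at all would
     * be a correctness bug, hence the question above.
     */
    #ifndef for_each_possible_lazymm_cpu
    #define for_each_possible_lazymm_cpu(cpu, mm) \
    	for_each_cpu((cpu), mm_cpumask((mm)))
    #endif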
