From: Ankur Arora <ankur.a.arora@oracle.com>
Subject: [RFC PATCH 45/86] preempt: ARCH_NO_PREEMPT only preempts lazily
Date: 7 Nov 2023
Note: this commit is badly broken. Only here for discussion.

Configurations with ARCH_NO_PREEMPT support preempt_count, but might
not have been tested well enough under PREEMPTION to trust it: they
might not be demarcating all the necessary non-preemptible sections.

One way to handle this is to limit such configurations to PREEMPT_NONE
mode: do no tick-based enforcement, and allow preemption only at the
boundary to userspace.
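
(A sketch of that model, loosely patterned on the generic entry code's
exit_to_user_mode_loop() -- heavily simplified, and not code from this
series:

	/* returning to userspace: the only place need-resched is acted on */
	if (ti_work & _TIF_NEED_RESCHED)
		schedule();

In-kernel code paths would never preempt on their own, so unmarked
critical sections stay safe under this model.)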

Unfortunately, this is only a partial solution, because eager
rescheduling could still happen (say, due to RCU wanting an
expedited quiescent period). And, since we do not trust the
preempt_count accounting, that would mean preemption inside an
unmarked critical section.

I suppose we could disable that (say by selecting PREEMPTION=n),
but then the only avenue for driving scheduling between kernel
contexts (when there is no ongoing userspace work) would be
explicit calls to schedule().
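
(Today's PREEMPT_NONE kernels handle this with explicit scheduling
points in long-running loops; a sketch, with made-up names:

	for (i = 0; i < nr_items; i++) {
		process_item(&items[i]);	/* hypothetical unit of work */
		cond_resched();			/* voluntary scheduling point */
	}

where cond_resched() boils down to a should_resched() check plus a
call into the scheduler.)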

Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---
 kernel/sched/core.c     | 12 ++++++++++--
 kernel/sched/features.h |  7 +++++++
 2 files changed, 17 insertions(+), 2 deletions(-)
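
(Aside on the core.c hunk below: with ARCH_NO_PREEMPT the helpers
become compile-time false, so the compiler can drop the eager branches
entirely instead of doing a runtime sched_feat() lookup. The same
idiom, with made-up names:

	#ifdef CONFIG_HAVE_FOO
	#define use_foo()	sched_feat(FOO)	/* runtime-toggleable */
	#else
	#define use_foo()	false		/* constant: branch folds away */
	#endif

	if (use_foo())		/* dead code when constant false */
		take_foo_path();

so ARCH_NO_PREEMPT=y builds never see the FORCE_PREEMPT or
PREEMPT_PRIORITY features at all.)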

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3fa78e8afb7d..bf5df2b866df 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1059,6 +1059,14 @@ void __resched_curr(struct rq *rq, resched_t rs)
 		trace_sched_wake_idle_without_ipi(cpu);
 }
 
+#ifndef CONFIG_ARCH_NO_PREEMPT
+#define force_preempt()		sched_feat(FORCE_PREEMPT)
+#define preempt_priority()	sched_feat(PREEMPT_PRIORITY)
+#else
+#define force_preempt()		false
+#define preempt_priority()	false
+#endif
+
 /*
  * resched_curr - mark rq's current task 'to be rescheduled' eagerly
  * or lazily according to the current policy.
@@ -1084,7 +1092,7 @@ void resched_curr(struct rq *rq, bool above)
 	resched_t rs = RESCHED_lazy;
 	int context;
 
-	if (sched_feat(FORCE_PREEMPT) ||
+	if (force_preempt() ||
 	    (rq->curr->sched_class == &idle_sched_class)) {
 		rs = RESCHED_eager;
 		goto resched;
@@ -1115,7 +1123,7 @@ void resched_curr(struct rq *rq, bool above)
 		goto resched;
 	}
 
-	if (sched_feat(PREEMPT_PRIORITY) && above)
+	if (preempt_priority() && above)
 		rs = RESCHED_eager;
 
 resched:
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 9bf30732b03f..2575d018b181 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -90,6 +90,12 @@ SCHED_FEAT(LATENCY_WARN, false)
 
 SCHED_FEAT(HZ_BW, true)
 
+#ifndef CONFIG_ARCH_NO_PREEMPT
+/*
+ * Architectures with CONFIG_ARCH_NO_PREEMPT cannot safely preempt.
+ * So even though they enable CONFIG_PREEMPTION, they never have the
+ * option to dynamically switch preemption models.
+ */
 #if defined(CONFIG_PREEMPT)
 SCHED_FEAT(FORCE_PREEMPT, true)
 SCHED_FEAT(PREEMPT_PRIORITY, true)
@@ -100,3 +106,4 @@ SCHED_FEAT(PREEMPT_PRIORITY, true)
 SCHED_FEAT(FORCE_PREEMPT, false)
 SCHED_FEAT(PREEMPT_PRIORITY, false)
 #endif
+#endif
--
2.31.1