Date: 2022-05-25
Subject: [ANNOUNCE] v5.18-rt11
Dear RT folks!

I'm pleased to announce the v5.18-rt11 patch set.

Changes since v5.18-rt10:

- Dropping the preempt_check_resched_rt() checks. The checks were added
  to ensure that a wakeup is not missed when it happens on the same CPU
  with interrupts disabled. This concern has since been reduced to the
  ksoftirqd wakeup, and the check is no longer needed because raising a
  softirq does not wake ksoftirqd if the caller has BH disabled. The
  remaining two callers (based on an audit: htb_work_func() and
  dev_cpu_dead()) acquire/release a lock "soon" afterwards, which
  provides the needed scheduling point. A short sketch of the old
  pattern follows below.
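
  For illustration, here is a minimal sketch of the pattern the dropped
  check used to guard, loosely modeled on __netif_reschedule() in
  net/core/dev.c. This is kernel-context C, not verbatim tree code, and
  the helper example_raise_tx_softirq() is purely hypothetical:

  /* Kernel context: needs <linux/interrupt.h> and <linux/irqflags.h>. */
  static void example_raise_tx_softirq(void)
  {
  	unsigned long flags;

  	local_irq_save(flags);
  	/* ... queue work for the NET_TX softirq handler ... */
  	raise_softirq_irqoff(NET_TX_SOFTIRQ);	/* mark the softirq pending */
  	local_irq_restore(flags);
  	/*
  	 * Previously on PREEMPT_RT this was followed by
  	 *	preempt_check_resched_rt();
  	 * so that a ksoftirqd wakeup requested while interrupts were off
  	 * could not be lost. The call is gone now: the softirq raise does
  	 * not wake ksoftirqd when the caller has BH disabled, and the
  	 * remaining callers acquire a lock shortly afterwards, which on
  	 * RT provides the scheduling point.
  	 */
  }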

Known issues
- Valentin Schneider reported a few splats on ARM64, see
https://lkml.kernel.org/r/20210810134127.1394269-1-valentin.schneider@arm.com

The delta patch against v5.18-rt10 is appended below and can be found here:

https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.18/incr/patch-5.18-rt10-rt11.patch.xz

You can get this release via the git tree at:

git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v5.18-rt11

The RT patch against v5.18 can be found here:

https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.18/older/patch-5.18-rt11.patch.xz

The split quilt queue is available at:

https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.18/older/patches-5.18-rt11.tar.xz

Sebastian

diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index c3cb3fcbee8c3..873a5dac54e0e 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -232,12 +232,6 @@ do { \

#define preempt_enable_no_resched() sched_preempt_enable_no_resched()

-#ifndef CONFIG_PREEMPT_RT
-# define preempt_check_resched_rt() barrier();
-#else
-# define preempt_check_resched_rt() preempt_check_resched()
-#endif
-
#define preemptible() (preempt_count() == 0 && !irqs_disabled())

#ifdef CONFIG_PREEMPTION
@@ -324,7 +318,6 @@ do { \
#define preempt_disable_notrace() barrier()
#define preempt_enable_no_resched_notrace() barrier()
#define preempt_enable_notrace() barrier()
-#define preempt_check_resched_rt() barrier()
#define preemptible() 0

#define preempt_lazy_disable() barrier()
diff --git a/localversion-rt b/localversion-rt
index d79dde624aaac..05c35cb580779 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt10
+-rt11
diff --git a/net/core/dev.c b/net/core/dev.c
index 0b81439394b07..2771fd22dc6ae 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3024,7 +3024,6 @@ static void __netif_reschedule(struct Qdisc *q)
sd->output_queue_tailp = &q->next_sched;
raise_softirq_irqoff(NET_TX_SOFTIRQ);
local_irq_restore(flags);
- preempt_check_resched_rt();
}

void __netif_schedule(struct Qdisc *q)
@@ -3087,7 +3086,6 @@ void __dev_kfree_skb_irq(struct sk_buff *skb, enum skb_free_reason reason)
__this_cpu_write(softnet_data.completion_queue, skb);
raise_softirq_irqoff(NET_TX_SOFTIRQ);
local_irq_restore(flags);
- preempt_check_resched_rt();
}
EXPORT_SYMBOL(__dev_kfree_skb_irq);

@@ -5809,14 +5807,12 @@ static void net_rps_action_and_irq_enable(struct softnet_data *sd)
sd->rps_ipi_list = NULL;

local_irq_enable();
- preempt_check_resched_rt();

/* Send pending IPI's to kick RPS processing on remote cpus. */
net_rps_send_ipi(remsd);
} else
#endif
local_irq_enable();
- preempt_check_resched_rt();
}

static bool sd_has_rps_ipi_waiting(struct softnet_data *sd)
@@ -5892,7 +5888,6 @@ void __napi_schedule(struct napi_struct *n)
local_irq_save(flags);
____napi_schedule(this_cpu_ptr(&softnet_data), n);
local_irq_restore(flags);
- preempt_check_resched_rt();
}
EXPORT_SYMBOL(__napi_schedule);

@@ -11001,7 +10996,6 @@ static int dev_cpu_dead(unsigned int oldcpu)

raise_softirq_irqoff(NET_TX_SOFTIRQ);
local_irq_enable();
- preempt_check_resched_rt();

#ifdef CONFIG_RPS
remsd = oldsd->rps_ipi_list;