Subject: Re: rcu: Avoid irq disable in rcu_cpu_kthread
On Thu, Dec 05, 2013 at 09:06:55PM +0000, Christoph Lameter wrote:
> Once we have the per-cpu patchset merged we could do the following [it
> even works without that patchset, but the __this_cpu ops will not do
> preemption checks]. Would this work?

Looks plausible at first glance. But are you really seeing performance
issues with this code? It is only compiled into the kernel when you build
with CONFIG_RCU_BOOST=y -- are you actually using that for your workloads?

Thanx, Paul
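
[Editorial aside on the bracketed caveat above: this_cpu_*() ops are safe
to use while preemption is enabled, whereas the raw __this_cpu_*() ops
trust the caller to have already disabled preemption, and the pending
per-cpu patchset adds debug checks that verify that assumption. A minimal
user-space sketch of that contract -- an analogue with made-up names, not
kernel code:

/*
 * "pinned" stands in for preemption being disabled, check_pinned() for
 * the preemption check the per-cpu patchset adds, and counter for a
 * per-CPU variable.  Illustrative only.
 */
#include <assert.h>
#include <stdio.h>

static long counter;
static int pinned;

static void check_pinned(void)		/* the added debug check */
{
	assert(pinned && "raw per-cpu op used while preemptible");
}

static void raw_inc(void)		/* __this_cpu_inc() analogue */
{
	check_pinned();
	counter++;
}

static void safe_inc(void)		/* this_cpu_inc() analogue */
{
	pinned = 1;			/* preempt_disable() */
	counter++;
	pinned = 0;			/* preempt_enable() */
}

int main(void)
{
	safe_inc();
	pinned = 1;
	raw_inc();			/* fine: caller "pinned" first */
	pinned = 0;
	printf("counter = %ld\n", counter);
	return 0;
}

With the patchset applied, a raw op used outside such a section trips the
check; without it, the mistake goes unnoticed -- which is the caveat.]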

> Subject: rcu: Avoid irq disable in rcu_cpu_kthread
>
> The use of this_cpu ops avoids numerous address calculations
> and makes it possible to avoid the irq enable/disable sequence through a
> low-latency, non-locking this_cpu_xchg.
>
> Signed-off-by: Christoph Lameter <cl@linux.com>
>
> Index: linux/kernel/rcu/tree_plugin.h
> ===================================================================
> --- linux.orig/kernel/rcu/tree_plugin.h 2013-12-03 11:32:23.322999660 -0600
> +++ linux/kernel/rcu/tree_plugin.h 2013-12-03 11:32:23.312999941 -0600
> @@ -1417,33 +1417,29 @@ static int rcu_cpu_kthread_should_run(un
>   */
>  static void rcu_cpu_kthread(unsigned int cpu)
>  {
> -	unsigned int *statusp = this_cpu_ptr(&rcu_cpu_kthread_status);
> -	char work, *workp = this_cpu_ptr(&rcu_cpu_has_work);
> +	char work;
>  	int spincnt;
> 
>  	for (spincnt = 0; spincnt < 10; spincnt++) {
>  		trace_rcu_utilization(TPS("Start CPU kthread@rcu_wait"));
>  		local_bh_disable();
> -		*statusp = RCU_KTHREAD_RUNNING;
> -		this_cpu_inc(rcu_cpu_kthread_loops);
> -		local_irq_disable();
> -		work = *workp;
> -		*workp = 0;
> -		local_irq_enable();
> +		__this_cpu_write(rcu_cpu_kthread_status, RCU_KTHREAD_RUNNING);
> +		__this_cpu_inc(rcu_cpu_kthread_loops);
> +		work = this_cpu_xchg(rcu_cpu_has_work, 0);
>  		if (work)
>  			rcu_kthread_do_work();
>  		local_bh_enable();
> -		if (*workp == 0) {
> +		if (__this_cpu_read(rcu_cpu_has_work) == 0) {
>  			trace_rcu_utilization(TPS("End CPU kthread@rcu_wait"));
> -			*statusp = RCU_KTHREAD_WAITING;
> +			__this_cpu_write(rcu_cpu_kthread_status, RCU_KTHREAD_WAITING);
>  			return;
>  		}
>  	}
> -	*statusp = RCU_KTHREAD_YIELDING;
> +	__this_cpu_write(rcu_cpu_kthread_status, RCU_KTHREAD_YIELDING);
>  	trace_rcu_utilization(TPS("Start CPU kthread@rcu_yield"));
>  	schedule_timeout_interruptible(2);
>  	trace_rcu_utilization(TPS("End CPU kthread@rcu_yield"));
> -	*statusp = RCU_KTHREAD_WAITING;
> +	__this_cpu_write(rcu_cpu_kthread_status, RCU_KTHREAD_WAITING);
>  }
> 
>  /*
> 
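
[Editorial aside on the changelog's claim: the old code brackets a
read-and-clear of the per-CPU work flag with local_irq_disable()/
local_irq_enable() so an interrupt cannot slip in between the read and
the clear, while the new code collapses the pair into a single exchange,
which this_cpu_xchg() supplies without touching the interrupt state.
A minimal user-space analogue, assuming GCC atomics and made-up names
(not kernel code):

/*
 * A __thread variable stands in for the per-CPU rcu_cpu_has_work flag
 * and __atomic_exchange_n() stands in for this_cpu_xchg(); the
 * local_irq_*() calls are modeled as comments.  Illustrative only.
 */
#include <stdio.h>

static __thread char has_work;

static char fetch_work_old(void)	/* irq-disable pattern */
{
	char work;

	/* local_irq_disable(); */
	work = has_work;
	has_work = 0;
	/* local_irq_enable(); */
	return work;
}

static char fetch_work_new(void)	/* single-exchange pattern */
{
	return __atomic_exchange_n(&has_work, 0, __ATOMIC_RELAXED);
}

int main(void)
{
	has_work = 1;
	printf("old pattern fetched %d\n", fetch_work_old());
	has_work = 1;
	printf("new pattern fetched %d\n", fetch_work_new());
	return 0;
}

On x86 the kernel's this_cpu ops are also emitted with a gs-segment
prefix, which is where the changelog's "avoids numerous address
calculations" comes from.]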


