Subject: Re: [PATCH v2 tip/core/rcu 03/10] rcu: Add synchronous grace-period waiting for RCU-tasks

On Wed, Jul 30, 2014 at 05:39:35PM -0700, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
>
> It turns out to be easier to add the synchronous grace-period waiting
> functions to RCU-tasks than to work around their absence in rcutorture,
> so this commit adds them. The key point is that the existence of
> call_rcu_tasks() means that rcutorture needs an rcu_barrier_tasks().
>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

With rcu_barrier_tasks() being a trivial wrapper, why not just let
rcutorture call synchronize_rcu_tasks() directly?
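
Roughly, instead of adding the wrapper below, the tasks flavor of
rcutorture's ops table could point its barrier hook straight at the
synchronous primitive. A sketch only (the field names follow
rcutorture's struct rcu_torture_ops; the real entries belong to the
later rcutorture patch in this series, so treat the details as guesses):

	static struct rcu_torture_ops tasks_ops = {
		/* ... read-side and deferred-free hooks elided ... */
		.call		= call_rcu_tasks,
		.sync		= synchronize_rcu_tasks,
		.cb_barrier	= synchronize_rcu_tasks, /* instead of rcu_barrier_tasks() */
		.name		= "tasks"
	};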

> include/linux/rcupdate.h | 2 ++
> kernel/rcu/update.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 57 insertions(+)
>
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index 3299ff98ad03..17c7e25c38be 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -216,6 +216,8 @@ void synchronize_sched(void);
> * memory ordering guarantees.
> */
> void call_rcu_tasks(struct rcu_head *head, void (*func)(struct rcu_head *head));
> +void synchronize_rcu_tasks(void);
> +void rcu_barrier_tasks(void);
>
> #ifdef CONFIG_PREEMPT_RCU
>
> diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> index b92268647a01..c8d304dc6d8a 100644
> --- a/kernel/rcu/update.c
> +++ b/kernel/rcu/update.c
> @@ -387,6 +387,61 @@ void call_rcu_tasks(struct rcu_head *rhp, void (*func)(struct rcu_head *rhp))
> }
> EXPORT_SYMBOL_GPL(call_rcu_tasks);
>
> +/**
> + * synchronize_rcu_tasks - wait until an rcu-tasks grace period has elapsed.
> + *
> + * Control will return to the caller some time after a full rcu-tasks
> + * grace period has elapsed, in other words after all currently
> * executing rcu-tasks read-side critical sections have completed. These
> + * read-side critical sections are delimited by calls to schedule(),
> + * cond_resched_rcu_qs(), idle execution, userspace execution, calls
> + * to synchronize_rcu_tasks(), and (in theory, anyway) cond_resched().
> + *
> + * This is a very specialized primitive, intended only for a few uses in
> + * tracing and other situations requiring manipulation of function
> + * preambles and profiling hooks. The synchronize_rcu_tasks() function
> + * is not (yet) intended for heavy use from multiple CPUs.
> + *
> + * Note that this guarantee implies further memory-ordering guarantees.
> + * On systems with more than one CPU, when synchronize_rcu_tasks() returns,
> + * each CPU is guaranteed to have executed a full memory barrier since the
> + * end of its last RCU-tasks read-side critical section whose beginning
> + * preceded the call to synchronize_rcu_tasks(). In addition, each CPU
> + * having an RCU-tasks read-side critical section that extends beyond
> + * the return from synchronize_rcu_tasks() is guaranteed to have executed
> + * a full memory barrier after the beginning of synchronize_rcu_tasks()
> + * and before the beginning of that RCU-tasks read-side critical section.
> + * Note that these guarantees include CPUs that are offline, idle, or
> + * executing in user mode, as well as CPUs that are executing in the kernel.
> + *
> + * Furthermore, if CPU A invoked synchronize_rcu_tasks(), which returned
> + * to its caller on CPU B, then both CPU A and CPU B are guaranteed
> + * to have executed a full memory barrier during the execution of
> + * synchronize_rcu_tasks() -- even if CPU A and CPU B are the same CPU
> + * (but again only if the system has more than one CPU).
> + */
> +void synchronize_rcu_tasks(void)
> +{
> + /* Complain if the scheduler has not started. */
> + rcu_lockdep_assert(rcu_scheduler_active,
> + "synchronize_rcu_tasks called too soon");
> +
> + /* Wait for the grace period. */
> + wait_rcu_gp(call_rcu_tasks);
> +}
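
As a concrete (hypothetical) picture of the use case the comment block
describes, with names made up here rather than taken from this series:
a tracer that has unhooked a function-entry trampoline would use this
to make sure no preempted task can still be executing inside it before
the trampoline memory is released.

	static void my_tracer_remove_trampoline(struct my_trampoline *tramp)
	{
		unregister_my_trampoline(tramp); /* no new tasks can enter tramp */
		synchronize_rcu_tasks();         /* wait out tasks preempted inside it */
		free_my_trampoline(tramp);       /* now safe to release the code */
	}
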
> +
> +/**
> + * rcu_barrier_tasks - Wait for in-flight call_rcu_tasks() callbacks.
> + *
> + * Although the current implementation is guaranteed to wait, it is
> + * not obligated to do so, for example, if there are no pending callbacks.
> + */
> +void rcu_barrier_tasks(void)
> +{
> + /* There is only one callback queue, so this is easy. ;-) */
> + synchronize_rcu_tasks();
> +}
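
And the barrier form would be used (again with made-up names, purely
for illustration) by code that posts callbacks via call_rcu_tasks() and
must not let any of them run after its data structures are torn down:

	static void my_facility_exit(void)
	{
		stop_posting_my_callbacks(); /* no further call_rcu_tasks() after this */
		rcu_barrier_tasks();         /* wait for callbacks already queued */
		free_my_facility_state();    /* nothing can reference it now */
	}
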
> +
> /* RCU-tasks kthread that detects grace periods and invokes callbacks. */
> static int __noreturn rcu_tasks_kthread(void *arg)
> {
> --
> 1.8.1.5
>

