Subject: Re: [PATCH] sched: rename __prepare_to_swait() to add_swait_queue_locked()
On Tue, 2021-03-16 at 19:59 +0800, Wang Qing wrote:
> This function merely adds the wait entry to the queue; it does not do the
> equivalent of prepare_to_wait() in wait.c.
> The caller must hold the queue lock across the operation for protection.

I see zero benefit to churn like this. You're taking a dinky little
file that's perfectly clear (and pretty), and restating the obvious.

>
> Signed-off-by: Wang Qing <wangqing@vivo.com>
> ---
> kernel/sched/completion.c | 2 +-
> kernel/sched/sched.h | 2 +-
> kernel/sched/swait.c | 6 +++---
> 3 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/sched/completion.c b/kernel/sched/completion.c
> index a778554..3d28a5a
> --- a/kernel/sched/completion.c
> +++ b/kernel/sched/completion.c
> @@ -79,7 +79,7 @@ do_wait_for_common(struct completion *x,
>  				timeout = -ERESTARTSYS;
>  				break;
>  			}
> -			__prepare_to_swait(&x->wait, &wait);
> +			add_swait_queue_locked(&x->wait, &wait);
>  			__set_current_state(state);
>  			raw_spin_unlock_irq(&x->wait.lock);
>  			timeout = action(timeout);
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 10a1522..0516f50
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -2719,4 +2719,4 @@ static inline bool is_per_cpu_kthread(struct task_struct *p)
>  #endif
> 
>  void swake_up_all_locked(struct swait_queue_head *q);
> -void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
> +void add_swait_queue_locked(struct swait_queue_head *q, struct swait_queue *wait);
> diff --git a/kernel/sched/swait.c b/kernel/sched/swait.c
> index 7a24925..f48a544
> --- a/kernel/sched/swait.c
> +++ b/kernel/sched/swait.c
> @@ -82,7 +82,7 @@ void swake_up_all(struct swait_queue_head *q)
>  }
>  EXPORT_SYMBOL(swake_up_all);
> 
> -void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait)
> +void add_swait_queue_locked(struct swait_queue_head *q, struct swait_queue *wait)
>  {
>  	wait->task = current;
>  	if (list_empty(&wait->task_list))
> @@ -94,7 +94,7 @@ void prepare_to_swait_exclusive(struct swait_queue_head *q, struct swait_queue *
>  	unsigned long flags;
> 
>  	raw_spin_lock_irqsave(&q->lock, flags);
> -	__prepare_to_swait(q, wait);
> +	add_swait_queue_locked(q, wait);
>  	set_current_state(state);
>  	raw_spin_unlock_irqrestore(&q->lock, flags);
>  }
> @@ -114,7 +114,7 @@ long prepare_to_swait_event(struct swait_queue_head *q, struct swait_queue *wait
>  		list_del_init(&wait->task_list);
>  		ret = -ERESTARTSYS;
>  	} else {
> -		__prepare_to_swait(q, wait);
> +		add_swait_queue_locked(q, wait);
>  		set_current_state(state);
>  	}
>  	raw_spin_unlock_irqrestore(&q->lock, flags);
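[Editor's note: for reference, the helper being renamed only links the waiter
into q->task_list; the caller is expected to hold q->lock across the call and
to set the task state itself, as the quoted do_wait_for_common() hunk shows.
A minimal sketch of that call pattern follows. The function
example_wait_locked() is hypothetical, not part of the patch; it simply
mirrors what prepare_to_swait_exclusive() in swait.c already does internally.]

	#include <linux/sched.h>
	#include <linux/swait.h>

	/*
	 * Hypothetical illustration of the usual pattern around
	 * __prepare_to_swait() (add_swait_queue_locked() after the rename).
	 * Note the helper is declared in kernel/sched/sched.h, so only
	 * scheduler-internal code can call it directly.
	 */
	static void example_wait_locked(struct swait_queue_head *q)
	{
		DECLARE_SWAITQUEUE(wait);
		unsigned long flags;

		raw_spin_lock_irqsave(&q->lock, flags);
		/* Only queues the waiter; does not touch the task state. */
		__prepare_to_swait(q, &wait);
		set_current_state(TASK_UNINTERRUPTIBLE);
		raw_spin_unlock_irqrestore(&q->lock, flags);

		schedule();

		/* Sets TASK_RUNNING and unlinks the waiter if still queued. */
		finish_swait(q, &wait);
	}

[Queuing and setting the task state under q->lock is what makes the pattern
safe: swake_up_* takes the same lock, so a wakeup cannot slip in between the
enqueue and the state change.]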
