Subject: Re: [PATCH RFC 1/3] sched: introduce distinct per-cpu load average
On Thu, 2012-10-04 at 01:05 +0200, Andrea Righi wrote:
> +++ b/kernel/sched/core.c
> @@ -727,15 +727,17 @@ static void dequeue_task(struct rq *rq, struct task_struct *p, int flags)
>  void activate_task(struct rq *rq, struct task_struct *p, int flags)
>  {
>  	if (task_contributes_to_load(p))
> -		rq->nr_uninterruptible--;
> +		cpu_rq(p->on_cpu_uninterruptible)->nr_uninterruptible--;
>
>  	enqueue_task(rq, p, flags);
>  }

That's completely broken: you cannot do non-atomic cross-CPU
modifications like that. Also, adding an atomic op to the wakeup/sleep
paths isn't going to be popular at all.
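
To make the failure mode concrete, here's a minimal userspace sketch
(plain C + pthreads, nothing kernel-specific; the counter name merely
mirrors rq->nr_uninterruptible and the iteration count is arbitrary).
Two threads doing a plain ++/-- on one shared counter lose updates,
which is exactly what two CPUs doing non-atomic RMWs on the same rq
field would do:

#include <pthread.h>
#include <stdio.h>

/* Stand-in for rq->nr_uninterruptible; volatile only so the compiler
 * doesn't collapse the loop into a single add. */
static volatile long nr_uninterruptible;

#define ITERS	1000000

static void *hammer(void *arg)
{
	long delta = (long)arg;
	int i;

	for (i = 0; i < ITERS; i++)
		nr_uninterruptible += delta;	/* load, add, store: not atomic */

	return NULL;
}

int main(void)
{
	pthread_t inc, dec;

	pthread_create(&inc, NULL, hammer, (void *)1L);
	pthread_create(&dec, NULL, hammer, (void *)-1L);
	pthread_join(inc, NULL);
	pthread_join(dec, NULL);

	/* Would print 0 if the updates were atomic; on SMP it almost
	 * never does, because concurrent RMWs clobber each other. */
	printf("nr_uninterruptible = %ld\n", (long)nr_uninterruptible);
	return 0;
}

Build with "gcc -O2 -pthread race.c" and run it a few times; the
nonzero results are the lost updates. Making the kernel counter safe
would require an atomic op on every update, which is exactly the cost
nobody wants in those paths.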

>  void deactivate_task(struct rq *rq, struct task_struct *p, int flags)
>  {
> -	if (task_contributes_to_load(p))
> -		rq->nr_uninterruptible++;
> +	if (task_contributes_to_load(p)) {
> +		task_rq(p)->nr_uninterruptible++;
> +		p->on_cpu_uninterruptible = task_cpu(p);
> +	}
>
>  	dequeue_task(rq, p, flags);
>  }

This looks pointless; at deactivate time task_rq(p) had better be rq,
or something is terribly broken.
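
For illustration (a rewrite of the hunk above under that invariant,
not a proposed patch; cpu_of() is the usual kernel/sched helper):
substituting rq for task_rq(p) shows the indirection buys nothing:

	if (task_contributes_to_load(p)) {
		rq->nr_uninterruptible++;
		p->on_cpu_uninterruptible = cpu_of(rq);	/* == task_cpu(p) here */
	}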

