Subject: Re: [PATCH] sched: Do not re-read h_load_next during hierarchical load calculation
On Tue, Mar 19, 2019 at 09:35:18AM +0000, Mel Gorman wrote:
> A NULL pointer dereference bug was reported on a distribution kernel but
> the same issue should be present in mainline kernels. It occurred on s390
> but should not be arch-specific. A partial oops looks like:
>
> [775277.408564] Unable to handle kernel pointer dereference in virtual kernel address space
> ...
> [775277.408759] Call Trace:
> [775277.408763] ([<0002c11c56899c61>] 0x2c11c56899c61)
> [775277.408766] [<0000000000177bb4>] try_to_wake_up+0xfc/0x450
> [775277.408773] [<000003ff81ede872>] vhost_poll_wakeup+0x3a/0x50 [vhost]
> [775277.408777] [<0000000000194ae4>] __wake_up_common+0xbc/0x178
> [775277.408779] [<0000000000194f86>] __wake_up_common_lock+0x9e/0x160
> [775277.408780] [<00000000001950de>] __wake_up_sync_key+0x4e/0x60
> [775277.408785] [<00000000005d911e>] sock_def_readable+0x5e/0x98
>
> The bug hits any time between 1 hour and 3 days. The dereference occurs
> in update_cfs_rq_h_load when accumulating h_load. The problem is that
> cfs_rq->h_load_next is not protected by any locking and can be updated
> by parallel calls to task_h_load.

Hurpmh, right.

> Depending on the compiler, code may be
> generated that re-reads cfs_rq->h_load_next after the check for NULL and
> then oopses when reading se->avg.load_avg. The disassembly confirmed that
> h_load_next could indeed be re-read after the NULL check.
>
> While this does not appear to be an issue for later compilers, it is still
> only by accident that the correct code is generated. Full locking in this
> path would have high overhead, so this patch uses READ_ONCE to read
> h_load_next only once and checks it for NULL before dereferencing. No
> further oopses were observed after 10 days of testing.
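
FWIW, the hazard boils down to something like this; a simplified userspace
sketch, not the actual fair.c code, and READ_ONCE() here is just the usual
volatile-cast stand-in for the kernel macro (the *_like names are made up):

#include <stddef.h>

/* Userspace stand-in for the kernel's READ_ONCE() on word-sized objects. */
#define READ_ONCE(x)	(*(volatile typeof(x) *)&(x))

struct sched_entity_like {
	unsigned long load_avg;
};

struct cfs_rq_like {
	struct sched_entity_like *h_load_next;	/* cleared/rewritten by other CPUs */
	unsigned long h_load;
};

unsigned long accumulate_h_load(struct cfs_rq_like *cfs_rq)
{
	unsigned long load = cfs_rq->h_load;
	struct sched_entity_like *se;

	/*
	 * With a plain "se = cfs_rq->h_load_next" the compiler may reload
	 * h_load_next between the NULL check and the load_avg dereference;
	 * a concurrent writer clearing the pointer in that window turns the
	 * second load into a NULL dereference.  READ_ONCE() forces a single
	 * load, so the value checked is the value dereferenced.
	 */
	se = READ_ONCE(cfs_rq->h_load_next);
	if (se != NULL)
		load += se->load_avg;

	return load;
}
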
>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> ---
> kernel/sched/fair.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 310d0637fe4b..34aeb40e69d2 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7726,7 +7726,7 @@ static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
>  		cfs_rq->last_h_load_update = now;
>  	}
>
> -	while ((se = cfs_rq->h_load_next) != NULL) {
> +	while ((se = READ_ONCE(cfs_rq->h_load_next)) != NULL) {
>  		load = cfs_rq->h_load;
>  		load = div64_ul(load * se->avg.load_avg,
>  			cfs_rq_load_avg(cfs_rq) + 1);

Where there is a READ_ONCE there should also be a corresponding
WRITE_ONCE(). Otherwise the compiler can still screw us over by doing
store-tearing.
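
The intent of the pairing, as a trivial (hypothetical, userspace) sketch with
stand-in macros and made-up helper names; both sides of the h_load_next
handoff go through ONCE accessors so neither load- nor store-tearing can bite:

/* Userspace stand-ins for the kernel macros, word-sized accesses only. */
#define READ_ONCE(x)		(*(volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, val)	(*(volatile typeof(x) *)&(x) = (val))

struct sched_entity_like;

struct cfs_rq_like {
	struct sched_entity_like *h_load_next;
};

/* Writer side: publish the pointer as a single store (no store-tearing). */
static inline void set_h_load_next(struct cfs_rq_like *cfs_rq,
				   struct sched_entity_like *se)
{
	WRITE_ONCE(cfs_rq->h_load_next, se);
}

/* Reader side: consume it as a single load, pairing with the writer. */
static inline struct sched_entity_like *get_h_load_next(struct cfs_rq_like *cfs_rq)
{
	return READ_ONCE(cfs_rq->h_load_next);
}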

So something like the below. But looking at this, we probably also want
ONCE treatment on cfs_rq->h_load itself, but that's another patch.

And I think we can do something with cfs_rq->last_h_load_update.

---
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fdab7eb6f351..40bd1e27b1b7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7784,10 +7784,10 @@ static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
 	if (cfs_rq->last_h_load_update == now)
 		return;

-	cfs_rq->h_load_next = NULL;
+	WRITE_ONCE(cfs_rq->h_load_next, NULL);
 	for_each_sched_entity(se) {
 		cfs_rq = cfs_rq_of(se);
-		cfs_rq->h_load_next = se;
+		WRITE_ONCE(cfs_rq->h_load_next, se);
 		if (cfs_rq->last_h_load_update == now)
 			break;
 	}
@@ -7797,7 +7797,7 @@ static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
 		cfs_rq->last_h_load_update = now;
 	}

-	while ((se = cfs_rq->h_load_next) != NULL) {
+	while ((se = READ_ONCE(cfs_rq->h_load_next)) != NULL) {
 		load = cfs_rq->h_load;
 		load = div64_ul(load * se->avg.load_avg,
 			cfs_rq_load_avg(cfs_rq) + 1);