Subject: Re: [PATCH] sched/fair: fix mul overflow on 32-bit systems
On Fri, Dec 11, 2015 at 03:55:18PM +0300, Andrey Ryabinin wrote:
> Make 'r' a 64-bit type to avoid overflow in 'r * LOAD_AVG_MAX'
> on 32-bit systems:
> UBSAN: Undefined behaviour in kernel/sched/fair.c:2785:18
> signed integer overflow:
> 87950 * 47742 cannot be represented in type 'int'
>
> Fixes: 9d89c257dfb9 ("sched/fair: Rewrite runnable load and utilization average tracking")
> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
> ---
> kernel/sched/fair.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e3266eb..733f0b8 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2780,14 +2780,14 @@ static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
> int decayed, removed = 0;
>
> if (atomic_long_read(&cfs_rq->removed_load_avg)) {
> - long r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
> + s64 r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
> sa->load_avg = max_t(long, sa->load_avg - r, 0);
> sa->load_sum = max_t(s64, sa->load_sum - r * LOAD_AVG_MAX, 0);

This makes sense, because sched_avg::load_sum is u64.
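
For anyone following along, a minimal userspace sketch (not kernel code;
LOAD_AVG_MAX is hard-coded to the multiplier from the UBSAN report and the
load_sum value is made up) of why widening 'r' is enough on this path:

/* Standalone illustration, not kernel code. */
#include <stdio.h>
#include <stdint.h>

#define LOAD_AVG_MAX	47742	/* multiplier from the UBSAN report */

int main(void)
{
	/*
	 * On a 32-bit system 'long' is 32 bits, so with the old 'long r'
	 * the product 87950 * 47742 (= 4198908900) cannot be represented
	 * and the multiplication is undefined behaviour.  Widening 'r'
	 * forces a 64-bit multiply, and the result is then subtracted
	 * from a 64-bit load_sum, so nothing is lost.
	 */
	int64_t r = 87950;			/* value from the report */
	uint64_t load_sum = 5000000000ULL;	/* made-up u64 accumulator */

	uint64_t new_sum = load_sum - (uint64_t)(r * LOAD_AVG_MAX);

	printf("product = %lld, new load_sum = %llu\n",
	       (long long)(r * LOAD_AVG_MAX), (unsigned long long)new_sum);
	return 0;
}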

> removed = 1;
> }
>
> if (atomic_long_read(&cfs_rq->removed_util_avg)) {
> - long r = atomic_long_xchg(&cfs_rq->removed_util_avg, 0);
> + s64 r = atomic_long_xchg(&cfs_rq->removed_util_avg, 0);
> sa->util_avg = max_t(long, sa->util_avg - r, 0);
> sa->util_sum = max_t(s32, sa->util_sum - r * LOAD_AVG_MAX, 0);
> }

However, sched_avg::util_sum is u32, so this is still wrecked.
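
To make that concrete, a second userspace sketch (illustrative values only;
max_s32() stands in for the max_t(s32, ...) narrowing, and util_sum is
simulated with a uint32_t):

/* Standalone illustration, not kernel code. */
#include <stdio.h>
#include <stdint.h>

#define LOAD_AVG_MAX	47742	/* multiplier from the UBSAN report */

/* Stand-in for max_t(s32, ...): both arguments are evaluated as s32. */
static int32_t max_s32(int32_t a, int32_t b)
{
	return a > b ? a : b;
}

int main(void)
{
	uint32_t util_sum = 1000;	/* made-up u32 accumulator */
	int64_t r = 87950;		/* illustrative value reused from the report */

	/* The 64-bit intermediate is fine: large and negative. */
	int64_t wide = (int64_t)util_sum - r * LOAD_AVG_MAX;

	/*
	 * But max_t(s32, ...) narrows it back to 32 bits before the
	 * comparison (implementation-defined; wraps on the usual
	 * two's-complement targets), so the negative value shows up as a
	 * large positive one, the clamp to 0 never fires, and the u32
	 * util_sum ends up holding garbage.
	 */
	int32_t clamped = max_s32((int32_t)wide, 0);

	printf("wide = %lld, clamped = %d\n", (long long)wide, clamped);

	util_sum = (uint32_t)clamped;
	printf("util_sum = %u\n", util_sum);
	return 0;
}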

