Subject: Re: [RFC patch 1/4] sched: change cfs_rq load avg to unsigned long
On 06/07/2013 05:07 PM, Vincent Guittot wrote:
> On 7 June 2013 09:29, Alex Shi <alex.shi@intel.com> wrote:
>> Since 'u64 runnable_load_avg, blocked_load_avg' in struct cfs_rq are
>> smaller than the 'unsigned long' cfs_rq->load.weight, we don't need u64
>> variables to describe them. unsigned long is more efficient and convenient.
>>
> Hi Alex,
>
> I just want to point out that we can't have more than 48388 tasks at
> the highest priority on a runqueue with an unsigned long on a 32-bit
> system. I don't know whether we can actually reach that kind of limit
> on a 32-bit machine? Certainly not on an embedded system.
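
(For context, 48388 presumably comes from the largest per-task weight:
the nice -20 entry in the prio_to_weight[] table is 88761, and a 32-bit
unsigned long holds at most 2^32 - 1 = 4294967295, so

    4294967295 / 88761 ~= 48388.9

tasks at the highest priority would be enough to overflow a sum of
weights.)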

Thanks for the question!
This has been discussed before. As I remember, the conclusion was that by
the time you reach that upper-bound task number, you have already run out
of memory space on 32-bit.

Just counting the kernel resources for a process: it needs 2 pages of
stack, and the mm_struct, task_struct, task_stats, vm_area_struct, page
tables, etc. already push it beyond 4 pages. So 4 * 4KB * 48388 ~= 774MB,
plus user-level resources on top of that.

So the task limit on Linux is usually set far lower than this number;
see '$ ulimit -u'.
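
For illustration, a minimal user-space sketch that reads the same limit
ulimit -u reports, via getrlimit(RLIMIT_NPROC) (a hypothetical example,
not part of the patch):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
	struct rlimit rl;

	/* RLIMIT_NPROC is the per-user process limit shown by ulimit -u */
	if (getrlimit(RLIMIT_NPROC, &rl) != 0) {
		perror("getrlimit");
		return 1;
	}
	/* rlim_cur may be RLIM_INFINITY, which prints as a huge number */
	printf("max user processes: %lu\n", (unsigned long)rl.rlim_cur);
	return 0;
}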

Anyway, runnable_load_avg is never larger than load.weight, so if
load.weight can use a long type, there is no reason runnable_load_avg
can't.
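
To make that concrete, a simplified sketch of the fields in question and
why the bound holds (condensed from my reading of the per-entity
load-tracking code; these are not the literal kernel definitions):

/* Simplified stand-ins, not the real struct cfs_rq */
struct load_weight {
	unsigned long weight;
};

struct cfs_rq_sketch {
	struct load_weight load;	/* sum of queued entities' weights */
	/*
	 * After this patch these are unsigned long rather than u64.
	 * Each entity contributes at most its own weight (the PELT
	 * fraction runnable_avg_sum / runnable_avg_period is <= 1),
	 * so the averages below can never exceed load.weight.
	 */
	unsigned long runnable_load_avg;
	unsigned long blocked_load_avg;
};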

--
Thanks
Alex

