From:    Namhyung Kim <>
Subject: Re: [PATCH 06/18] sched: set initial load avg of new forked task as its load weight
Date:    Fri, 21 Dec 2012 13:33:15 +0900
On Mon, 10 Dec 2012 16:22:22 +0800, Alex Shi wrote:
> New task has no runnable sum at its first runnable time, that make
> burst forking just select few idle cpus to put tasks.
> Set initial load avg of new forked task as its load weight to resolve
> this issue.
>
> Signed-off-by: Alex Shi <alex.shi@intel.com>
> ---
>  include/linux/sched.h |  1 +
>  kernel/sched/core.c   |  2 +-
>  kernel/sched/fair.c   | 13 +++++++++++--
>  3 files changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 5dafac3..093f9cd 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1058,6 +1058,7 @@ struct sched_domain;
>  #else
>  #define ENQUEUE_WAKING		0
>  #endif
> +#define ENQUEUE_NEWTASK		8
>
>  #define DEQUEUE_SLEEP		1
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index e6533e1..96fa5f1 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -1648,7 +1648,7 @@ void wake_up_new_task(struct task_struct *p)
>  #endif
>
>  	rq = __task_rq_lock(p);
> -	activate_task(rq, p, 0);
> +	activate_task(rq, p, ENQUEUE_NEWTASK);
>  	p->on_rq = 1;
>  	trace_sched_wakeup_new(p, true);
>  	check_preempt_curr(rq, p, WF_FORK);
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1faf89f..61c8d24 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1277,8 +1277,9 @@ static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
>  /* Add the load generated by se into cfs_rq's child load-average */
>  static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
>  					   struct sched_entity *se,
> -					   int wakeup)
> +					   int flags)
>  {
> +	int wakeup = flags & ENQUEUE_WAKEUP;
>  	/*
>  	 * We track migrations using entity decay_count <= 0, on a wake-up
>  	 * migration we use a negative decay count to track the remote decays
> @@ -1312,6 +1313,12 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
>  		update_entity_load_avg(se, 0);
>  	}
>
> +	/*
> +	 * set the initial load avg of new task same as its load
> +	 * in order to avoid brust fork make few cpu too heavier
> +	 */
> +	if (flags & ENQUEUE_NEWTASK)
> +		se->avg.load_avg_contrib = se->load.weight;
>  	cfs_rq->runnable_load_avg += se->avg.load_avg_contrib;
>  	/* we force update consideration on load-balancer moves */
>  	update_cfs_rq_blocked_load(cfs_rq, !wakeup);
> @@ -1476,7 +1483,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>  	 */
>  	update_curr(cfs_rq);
>  	account_entity_enqueue(cfs_rq, se);
> -	enqueue_entity_load_avg(cfs_rq, se, flags & ENQUEUE_WAKEUP);
> +	enqueue_entity_load_avg(cfs_rq, se, flags &
> +			(ENQUEUE_WAKEUP | ENQUEUE_NEWTASK));
It seems that just passing 'flags' is enough.
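A sketch of what I mean, assuming enqueue_entity_load_avg() keeps
masking out the bits it tests internally (as the new
'int wakeup = flags & ENQUEUE_WAKEUP;' line already does):

	/* let the callee pick out ENQUEUE_WAKEUP / ENQUEUE_NEWTASK itself */
	enqueue_entity_load_avg(cfs_rq, se, flags);

That would also keep the call site unchanged if yet another ENQUEUE_*
bit becomes relevant to the load-avg code later.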
>
>  	if (flags & ENQUEUE_WAKEUP) {
>  		place_entity(cfs_rq, se, 0);
> @@ -2586,6 +2594,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>  		cfs_rq->h_nr_running++;
>
>  		flags = ENQUEUE_WAKEUP;
> +		flags &= ~ENQUEUE_NEWTASK;
Why is this needed?
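As quoted, the line right above is 'flags = ENQUEUE_WAKEUP;', a plain
assignment that overwrites the whole word, so the ENQUEUE_NEWTASK bit
(8) should already be clear by the time the new statement runs:

	flags = ENQUEUE_WAKEUP;		/* flags is now exactly ENQUEUE_WAKEUP; bit 8 is 0 */
	flags &= ~ENQUEUE_NEWTASK;	/* clears a bit that is already clear, a no-op */

Unless a later patch in the series reorders or changes the assignment
above, the added line looks redundant.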
Thanks,
Namhyung
>  	}
>
>  	for_each_sched_entity(se) {