Date: 2012-04-08
From: Juri Lelli
Subject: Re: [PATCH 06/16] sched: SCHED_DEADLINE push and pull logic
On 04/07/2012 04:32 AM, Hillf Danton wrote:
> On Sat, Apr 7, 2012 at 1:31 AM, Juri Lelli <juri.lelli@gmail.com> wrote:
>>>>
>>>> kernel/sched_dl.c | 912
>>>> kernel/sched_rt.c | 2 +-
>
> You are working on 2.6.3x, x <= 8?
> If so, what is the reason (just curious)?
> Is it already planned to be added in 3.3 and above?
>

Dario answered on this :-).

>>>> + if (!dl_entity_preempt(&entry->dl, &p->dl))
>>>
>>> if (dl_entity_preempt(&p->dl, &entry->dl))
>>>
>>
>> Any specific reason to reverse the condition?
>>
> Just for easing readers.
>

Ok, reasonable. Here and below.
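
For readers following the thread, a minimal sketch of what the predicate
checks (assuming dl_entity_preempt() is the strict "earlier absolute
deadline wins" comparison used throughout the series; the real definition
is in the patch):

/*
 * Sketch only: "a preempts b" iff a's absolute deadline is earlier,
 * compared on a wrapping 64-bit timeline.
 */
static inline int dl_time_before(u64 a, u64 b)
{
        return (s64)(a - b) < 0;
}

static inline int dl_entity_preempt(struct sched_dl_entity *a,
                                    struct sched_dl_entity *b)
{
        return dl_time_before(a->deadline, b->deadline);
}

With that, dl_entity_preempt(&p->dl, &entry->dl) reads directly as "does p
preempt entry?", which is what the reversed form buys in readability.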

>>>> +select_task_rq_dl(struct task_struct *p, int sd_flag, int flags)
>>>> +{
>>>> + struct task_struct *curr;
>>>> + struct rq *rq;
>>>> + int cpu;
>>>> +
>>>> + if (sd_flag != SD_BALANCE_WAKE)
>>>
>>> why is task_cpu(p) not eligible?
>>>
>>
>> Right, I'll change this.
>>
> No, you will first IMO sort out clear answer to the question.
>

task_cpu(p) is eligible and will be returned if sd_flag != SD_BALANCE_WAKE
&& sd_flag != SD_BALANCE_FORK as in sched_rt. I changed the code accordingly.
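
Concretely, the entry of select_task_rq_dl() now looks roughly like this
(a sketch modelled on select_task_rq_rt(); the exact code is in the
updated patch):

static int
select_task_rq_dl(struct task_struct *p, int sd_flag, int flags)
{
        int cpu = task_cpu(p);

        /* For anything but wake ups, just return the task_cpu */
        if (sd_flag != SD_BALANCE_WAKE && sd_flag != SD_BALANCE_FORK)
                goto out;

        /* wakeup-time placement decision goes here */

out:
        return cpu;
}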

>>>> + (rq->curr->dl.nr_cpus_allowed < 2 ||
>>>> + dl_entity_preempt(&rq->curr->dl, &p->dl)) &&
>>>
>>> !dl_entity_preempt(&p->dl, &rq->curr->dl)) &&
>>
>> As above?
>>
> Just for easing readers.
>
>>>> +#ifdef CONFIG_SMP
>>>> + /*
>>>> + * In the unlikely case current and p have the same deadline
>>>> + * let us try to decide what's the best thing to do...
>>>> + */
>>>> + if ((s64)(p->dl.deadline - rq->curr->dl.deadline) == 0 &&
>>>> + !need_resched())
>>>
>>> please recheck !need_resched(); say, what if rq->curr needs a reschedule?
>>
>> Sorry, I don't get this..
>>
> Perhaps smp_processor_id() != rq->cpu
>

need_resched() is actually checked...

>>>
>>> if (task_running(rq, p))
>>> return 0;
>>> return cpumask_test_cpu(cpu, &p->cpus_allowed);
>>
>> We use this inside pull_dl_task. Since we are searching for a task to
>> pull, we must be sure that the found task can actually migrate, by
>> checking nr_cpus_allowed > 1.
>>
> If the cpu is certainly allowed for the task to run on, but nr_cpus_allowed
> is no more than one, then which of the two is corrupted?
>
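
For reference, the pull-side eligibility test being discussed looks
roughly like this (a sketch modelled on pick_rt_task() in sched_rt.c; the
names in the posted series may differ):

/*
 * A task is a candidate for pulling only if it is not currently running,
 * the destination cpu is in its affinity mask, and it is allowed to run
 * on more than one cpu (otherwise it cannot migrate at all).
 */
static int pick_dl_task(struct rq *rq, struct task_struct *p, int cpu)
{
        if (!task_running(rq, p) &&
            cpumask_test_cpu(cpu, &p->cpus_allowed) &&
            p->dl.nr_cpus_allowed > 1)
                return 1;

        return 0;
}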
>>
>> Well, ok with this and above. Anyway this code is completely removed in
>> 15/16.
>>
> Yup, another reason for a monolithic patch.
>

The monolithic patch is linked below. Anyway, please check the github repo
for bug fixes/new features. ;-)

>>>> +
>>>> +static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask_dl);
>>>> +
>>>> +static int find_later_rq(struct task_struct *task)
>>>> +{
>>>> + struct sched_domain *sd;
>>>> + struct cpumask *later_mask = __get_cpu_var(local_cpu_mask_dl);
>>>
>>> please check whether local_cpu_mask_dl is valid
>>>
>>
>> Could you explain a bit more why I should check for validity?
>>
> Only for the case where something comes in before it is initialized,
> which IIRC was encountered by Steven.
>

Do you mean at kernel_init time?
Could you be more precise about the problem Steven encountered?
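
If it is the early-boot ordering problem, the usual defensive check (the
one find_lowest_rq() in sched_rt.c applies to local_cpu_mask, IIRC) would
be something like:

        struct cpumask *later_mask = __get_cpu_var(local_cpu_mask_dl);

        /* Make sure the mask is initialized first */
        if (unlikely(!later_mask))
                return -1;

If that is indeed the concern, a guard like this at the top of
find_later_rq() would cover it.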

>>
>> Ok, I'll prepare the monolithic patch and probably store it somewhere so
>> that it can be downloaded also by others.
>>
> Inform Hillf once it is ready, thanks.
>

Here we go:
https://github.com/downloads/jlelli/sched-deadline/sched-dl-V4.patch

I noticed that the Cc list has changed... did something go wrong?
Anyway, I restored it to the original one. :-)

Thanks and Regards,

- Juri

