From: Vineeth Remanan Pillai <>
Date: Mon, 6 Jul 2020 10:38:27 -0400
Subject: Re: [RFC PATCH 06/16] sched: Add core wide task selection and scheduling.
On Mon, Jul 6, 2020 at 10:09 AM Joel Fernandes <joel@joelfernandes.org> wrote:
>
> > I am not sure if this can happen. If the other sibling sets core_pick, it
> > will be under the core wide lock and it should set the core_sched_seq also
> > before releasing the lock. So when this cpu tries, it would see the core_pick
> > before resetting it. Is this the same case you were mentioning? Sorry if I
> > misunderstood the case you mentioned..
>
> Consider a case where you have 3 siblings all trying to enter the schedule
> loop. Call them A, B and C.
>
> A picks something for B in core_pick. Now C comes and resets B's core_pick
> before running the mega-loop, hoping to select something for it shortly.
> However, C then does an unconstrained pick and forgets to set B's pick to
> something.
>
> I don't know if this can really happen - but this is why I added the warning
> at the end of the patch. I think we should make the code more robust and
> handle these kinds of cases.
>
I don't think this can happen. Each of the siblings takes the core-wide lock
before calling into pick_next_task(), so this scenario should not occur.
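For reference, the serialization in question is the rq_lockp() wrapper from
earlier in the series; roughly (a sketch from the earlier patches, not
quoted verbatim):

	/*
	 * When core scheduling is enabled, every sibling's rq lock
	 * resolves to one per-core lock, so pick_next_task() on siblings
	 * A, B and C is fully serialized: C cannot reset B's core_pick
	 * while A is still inside the selection loop.
	 */
	static inline raw_spinlock_t *rq_lockp(struct rq *rq)
	{
		if (sched_core_enabled(rq))
			return &rq->core->__lock;

		return &rq->__lock;
	}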
> Again, it is about making the code more robust. Why should we not set
> rq->core_pick when we pick something? As we discussed in the private
> discussion - we should make the code robust and consistent. Correctness is
> not enough; the code has to be robust and maintainable.
>
> I think in our private discussion, you agreed with me that there is no harm
> in setting core_pick in this case.
>
I agreed there was no harm, because we wanted to use that in the last check
after the 'done' label. But now I see that adding that check after the done
label causes the WARN_ON to fire even in a valid case, and firing the
WARN_ON in a valid case is not good. So, if that WARN_ON check can be
removed, adding this is not necessary, IMHO.
> >	cpumask_copy(&select_mask, cpu_smt_mask(cpu));
> >	if (unlikely(!cpumask_test_cpu(cpu, &select_mask))) {
> >		cpumask_set_cpu(cpu, &select_mask);
> >		need_sync = false;
> >	}
>
> Nah, more lines of code for no good reason, plus another branch right? I'd
> rather leave my one-liner alone than add 4 more lines :-)
>
Remember, this is the fast path. Every schedule() except for our sync IPI
reaches here. And we are sure that the smt_cpumask can be missing the cpu
only in hotplug cases, which are very rare. Adding a little more code makes
it clear that this setting is not always needed, and it also optimizes the
fast path, which is what I was looking for.
> > By setting need_sync to false, we will do an unconstrained pick and will
> > not sync with other siblings. I guess we need to reset need_sync after
> > or in the following for_each_cpu loop, because the loop may set it.
>
> I don't know if we want to add more conditions really and make it more
> confusing. If anything, I believe we should simplify the existing code more,
> TBH.
>
I don't think it is more confusing. This hotplug situation is really a rare
case, and we should wrap it in an unlikely() conditional IMHO; comments can
make the reasoning clearer. We are saving two things here: the unconditional
setting of the cpu in the mask, and the unnecessary syncing of the siblings
during hotplug. A sketch of what I mean is below.
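Concretely, with the unlikely() hint and a comment, the suggestion would
look something like this (a sketch of the snippet quoted above, nothing
more):

	cpumask_copy(&select_mask, cpu_smt_mask(cpu));
	/*
	 * cpu can be missing from its own cpu_smt_mask() only transiently
	 * during CPU hotplug, so this branch is the rare slow path.
	 */
	if (unlikely(!cpumask_test_cpu(cpu, &select_mask))) {
		/* Pick for this cpu alone; don't sync with departing siblings. */
		cpumask_set_cpu(cpu, &select_mask);
		need_sync = false;
	}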
> > I think we would not need these here. core_pick needs to be set only
> > for siblings if we are picking a task for them. For an unconstrained pick,
> > we pick only for ourselves. Also, core_sched_seq need not be synced here.
> > We might already be synced with the existing core->core_pick_seq. Even
> > if it is not synced, I don't think it will cause an issue in subsequent
> > schedule events.
>
> As discussed both privately and above, there is no harm and it is good to
> keep the code consistent. I'd rather have any task picking set core_pick and
> core_sched_seq to prevent confusion.
>
> And if anything is resetting an existing ->core_pick of a sibling in the
> selection loop, it had better set it to something sane.
>
As I mentioned, I was okay with this while you were using it down in the
WARN_ON check. But that WARN_ON check triggers even in valid cases, which is
bad. I don't think setting this here makes the code more robust, IMHO.
core_pick is already NULL, and I would like it to stay that way unless there
is a compelling reason to set it. The reason is that with core_pick left
NULL, any bad case that wrongly enters the pick condition above will crash
immediately instead of silently running a stale pick.
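For context, this is roughly the fast path being referred to (paraphrased
from the patch, simplified): a stale entry into it with a NULL core_pick
fails loudly right away instead of silently running a bogus task.

	if (rq->core->core_pick_seq == rq->core->core_task_seq &&
	    rq->core->core_pick_seq != rq->core_sched_seq) {
		WRITE_ONCE(rq->core_sched_seq, rq->core->core_pick_seq);

		next = rq->core_pick;	/* NULL here would crash below */
		if (next != prev) {
			put_prev_task(rq, prev);
			set_next_task(rq, next);
		}

		return next;
	}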
> > > done:
> > > +	/*
> > > +	 * If we reset a sibling's core_pick, make sure that we picked a task
> > > +	 * for it, this is because we might have reset it though it was set to
> > > +	 * something by another selector. In this case we cannot leave it as
> > > +	 * NULL and should have found something for it.
> > > +	 */
> > > +	for_each_cpu(i, &select_mask) {
> > > +		WARN_ON_ONCE(!cpu_rq(i)->core_pick);
> > > +	}
> > > +
> > I think this check will not always be true. For an unconstrained pick, we
> > do not pick tasks for siblings and hence do not set core_pick for them.
> > So this WARN_ON will fire for an unconstrained pick. It is easily
> > reproducible by creating an empty cgroup and tagging it; then only
> > unconstrained picks happen and this WARN_ON fires. I guess this check
> > after the done label does not hold and could be removed.
>
> As discussed above, in the >2 SMT case we don't really know if the warning
> will fire or not. I would rather keep the warning just in case for the
> future.
>
I think I was not clear last time. This WARN_ON will fire on valid cases if
you keep this check here. As I mentioned, an unconstrained pick picks only
for this cpu and not for any of the other siblings; this is by design. So
for an unconstrained pick, core_pick of all siblings will be NULL. We jump
to the done label on an unconstrained pick, this for loop goes through all
the siblings and finds that their core_pick is not set, and the WARN_ON
fires. I have reproduced this. We do not want it to fire, since not setting
core_pick for an unconstrained pick is the correct logic. Please let me
know if this is not clear.
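To make the failing path concrete (a schematic, simplified from the series'
selection loop):

	/* All picks are cleared before the selection loop runs: */
	for_each_cpu(i, smt_mask)
		cpu_rq(i)->core_pick = NULL;

	/*
	 * Unconstrained pick: no cookies and nothing to sync with, so we
	 * take the task for this cpu only and jump to 'done' without ever
	 * writing the siblings' core_pick. The proposed
	 * WARN_ON_ONCE(!cpu_rq(i)->core_pick) after 'done' then fires on
	 * this perfectly valid path.
	 */
	if (i == cpu && !need_sync && !p->core_cookie) {
		next = p;
		goto done;
	}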
Thanks,
Vineeth