Subject: Re: [PATCHSET v6] sched: Implement BPF extensible scheduler class
On Sun, May 05, 2024 at 01:31:26PM -1000, Tejun Heo wrote:

> > You Google/Facebook are touting collaboration, collaborate on fixing it.
> > Instead of re-posting this over and over. After all, your main
> > motivation for starting this was the cpu-cgroup overhead.
>
> The hierarchical scheduling overhead isn't the main motivation for us. We
> can't use the CPU controller for all workloads and while it'd be nice to
> improve that,

Hurmph, I had the impression from the earlier threads that this ~5%
cgroup overhead was most definitely a problem and a motivator for all
this.

The overhead was prohibitive, it was claimed, and you needed a solution.
Did not previous versions use this very argument in order to push for
all this?

By improving the cgroup mess -- and I very much agree that the cgroup
thing is not very nice -- this whole argument goes away and we all get a
better cgroup implementation.

> This view works only if you assume that the entire world contains only a
> handful of developers who can work on schedulers. The only way that would be
> the case is if the barrier of entry is raised unreasonably high. Sometimes a
> high barrier of entry can't be avoided or is beneficial. However, if it's
> pushed up high enough to leave only a handful of people to work on an area
> as large as scheduling, something probably is wrong.

I've never really felt there were too few sched patches to stare at on
any one day (quite the opposite on many days in fact).

There have also always been plenty of out-of-tree scheduler patches --
although I rarely if ever have time to look at them.

Writing a custom scheduler isn't that hard; simply ripping out
fair_sched_class and replacing it with something simple really isn't
*that* hard.

The only really hard requirement is respecting affinities; you'll crash
and burn real hard if you get that wrong (think of all the per-cpu
kthreads that hard-rely on their per-cpu-ness).

But you can easily ignore cgroups, uclamp and a ton of other stuff and
still boot and play around.
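
To make that concrete, here is a rough sketch of the shape such a minimal
replacement takes, assuming the usual struct sched_class hooks; the exact
member signatures and the set of mandatory callbacks differ between kernel
versions, and the per-rq FIFO fields (toy_queue, toy_node) are purely
illustrative, not existing rq/task_struct members:

/*
 * Hedged sketch only -- not a drop-in patch.  Hook names follow
 * struct sched_class in kernel/sched/, but exact signatures vary
 * between kernel versions.  toy_queue (in struct rq) and toy_node
 * (in struct task_struct) are illustrative fields you'd have to add.
 */
#include <linux/sched.h>
#include <linux/cpumask.h>
#include <linux/list.h>
#include "sched.h"	/* kernel/sched/sched.h: struct rq, struct sched_class */

/* Dead-simple per-rq FIFO: no fairness, no cgroups, no uclamp. */
static void toy_enqueue_task(struct rq *rq, struct task_struct *p, int flags)
{
	list_add_tail(&p->toy_node, &rq->toy_queue);
}

static void toy_dequeue_task(struct rq *rq, struct task_struct *p, int flags)
{
	list_del_init(&p->toy_node);
}

static struct task_struct *toy_pick_next_task(struct rq *rq)
{
	return list_first_entry_or_null(&rq->toy_queue,
					struct task_struct, toy_node);
}

/*
 * The one thing you really must get right: never place a task on a CPU
 * outside p->cpus_ptr -- per-cpu kthreads hard-rely on staying put.
 */
static int toy_select_task_rq(struct task_struct *p, int cpu, int flags)
{
	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
		cpu = cpumask_any(p->cpus_ptr);
	return cpu;
}

const struct sched_class toy_sched_class = {
	.enqueue_task	= toy_enqueue_task,
	.dequeue_task	= toy_dequeue_task,
	.pick_next_task	= toy_pick_next_task,
	.select_task_rq	= toy_select_task_rq,
	/* a real replacement needs many more callbacks wired up */
};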

> I believe we agree that we want more people contributing to the scheduling
> area.

I think therein lies the rub -- contribution. If we were to do this
thing, random loadable BPF schedulers, then how do we ensure people will
contribute back?

That is, from where I am sitting I see $vendor mandate their $enterprise
product needs their $BPF scheduler. At which point $vendor will have no
incentive to ever contribute back.

And customers of $vendor that want to run additional workloads on
their machine are then stuck with that scheduler, irrespective of it
being suitable for them or not. This is not a good experience.

So I don't at all mind people playing around with schedulers -- they can
do so today; there are a ton of out-of-tree patches to start from or
learn from, or, like I said, it really isn't all that hard to just rip
out fair and write something new.

Open source, you get to do your own thing. Have at.

But part of what made Linux work so well is, in my opinion, the GPL. The
GPL forces people to contribute back -- to work on the shared project.
And I see the whole BPF thing as a run-around on that.

Even the large cloud vendors and service providers (Amazon, Google,
Facebook etc.) contribute back because of rebase pain -- as you well
know. The rebase pain offsets the 'TiVo hole'.

But with the BPF muck; where is the motivation to help improve things?

Keeping a rando github repo with BPF schedulers is not contributing.
That's just a repo with multiple out-of-tree schedulers to be ignored.
Who will put in the effort of upstreaming things if they can hack up a
BPF scheduler and throw it over the wall?

So yeah, I'm very much NOT supportive of this effort. From where I'm
sitting there is simply not a single benefit. You're not making my life
better, so why would I care?

How does this BPF muck translate into better quality patches for me?
