Subject: Re: [PATCH v4 11/15] drm/shmem-helper: Add generic memory shrinker
On Tue, May 10, 2022 at 04:47:52PM +0300, Dmitry Osipenko wrote:
> On 5/9/22 16:49, Daniel Vetter wrote:
> > On Fri, May 06, 2022 at 03:10:43AM +0300, Dmitry Osipenko wrote:
> >> On 5/5/22 11:34, Thomas Zimmermann wrote:
> >>> Hi
> >>>
> >>> Am 18.04.22 um 00:37 schrieb Dmitry Osipenko:
> >>>> Introduce a common DRM SHMEM shrinker. It reduces code duplication
> >>>> among DRM drivers that implement their own shrinkers. This is an
> >>>> initial version of the shrinker that covers the basic needs of GPU
> >>>> drivers; both purging and eviction of shmem objects are supported.
> >>>>
> >>>> This patch is based on a couple of ideas borrowed from Rob Clark's
> >>>> MSM shrinker and Thomas Zimmermann's variant of the SHMEM shrinker.
> >>>>
> >>>> In order to start using the DRM SHMEM shrinker, drivers should:
> >>>>
> >>>> 1. Implement the new purge(), evict() and swap_in() GEM callbacks.
> >>>> 2. Register the shrinker using drm_gem_shmem_shrinker_register(drm_device).
> >>>> 3. Use drm_gem_shmem_set_purgeable_and_evictable(shmem) and similar
> >>>>    API functions to activate shrinking of GEMs.
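[ For illustration, a rough sketch of the hookup those three steps
describe. The purge/evict/swap_in callbacks and the register/set helpers
are named in the patch; all my_* driver pieces are hypothetical. ]

    /* Sketch, not the patch itself: my_* names are made up. */
    static int my_gem_purge(struct drm_gem_shmem_object *shmem)
    {
            /* 1. Callback: drop the backing pages for good. */
            return 0;
    }

    static int my_gem_evict(struct drm_gem_shmem_object *shmem)
    {
            /* 1. Callback: unmap, but keep enough state to swap back in. */
            return 0;
    }

    static int my_gem_swap_in(struct drm_gem_shmem_object *shmem)
    {
            /* 1. Callback: restore an evicted buffer before reuse. */
            return 0;
    }

    static int my_driver_load(struct drm_device *drm)
    {
            /* 2. Register the common shrinker for this device. */
            return drm_gem_shmem_shrinker_register(drm);
    }

    static void my_bo_mark_unneeded(struct drm_gem_shmem_object *shmem)
    {
            /* 3. Make the GEM visible to the shrinker, e.g. from a
             *    madvise-style ioctl. */
            drm_gem_shmem_set_purgeable_and_evictable(shmem);
    }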
> >>>
> >>> Honestly speaking, after reading the patch and the discussion here I
> >>> really don't like where all this is going. The interfaces and
> >>> implementation are overengineered. Decisions about evicting and
> >>> purging should be made by the memory manager. For the most part, it's
> >>> none of the driver's business.
> >>
> >> Daniel is mostly suggesting to make the interface more flexible for
> >> future drivers, so we won't need to re-do it later on. My version of
> >> the interface is based on what drivers need today.
> >>
> >> Why do you think it's a problem to turn the shmem helper into a simple
> >> generic memory manager? I don't see how it's better to have drivers
> >> duplicating exactly the same efforts and making different mistakes.
> >>
> >> The shmem shrinker implementation is mostly based on the freedreno
> >> shrinker, and it's very easy to enable the generic shrinker for the
> >> VirtIO and Panfrost drivers. I think in the future freedreno and other
> >> drivers could switch to using drm-shmem instead of open-coding the
> >> memory management.
> >
> > Yeah I think we have enough shrinkers all over drm to actually design
> > something solid here.
> >
> > There's also the i915 shrinker and some kind of shrinker in ttm too. So
> > we are definitely past the "have 3 examples to make sure you design
> > something solid" rule of thumb.
> >
> > I also have a bit of an idea that we could try to glue the shmem
> > shrinker into ttm; at least at a very high level that's something that
> > would make some sense.
>
> Before gluing the shmem shrinker into ttm, would the drivers have to be
> switched to ttm? Or do you mean something else by the gluing?

No, drivers which don't need ttm shouldn't be forced to use it.

> Perhaps it should be possible to have a common drm-shrinker helper that
> does the basic common things, like tracking the eviction size and
> checking whether a BO is exported or locked, but we shouldn't consider
> doing this for now. For starters, it would be more reasonable to create
> a common shrinker base for drivers that use drm-shmem, IMO.

Yeah that might be the more practical approach. But really this was just
an aside, absolutely no need to worry about this for now. I just wanted to
point out that there really is a lot of use for this.

> >>> I'd like to ask you to reduce the scope of the patchset and build the
> >>> shrinker only for virtio-gpu. I know that I first suggested to build
> >>> upon shmem helpers, but it seems that it's easier to do that in a later
> >>> patchset.
> >>
> >> The first version of the VirtIO shrinker didn't support memory
> >> eviction. Memory eviction support requires the page fault handler to
> >> be aware of the evicted pages; what should we do about it? The page
> >> fault handling is a part of memory management, hence to me drm-shmem
> >> is already kind of a MM.
> >
> > Hm I still don't get that part, why does that also not go through the
> > shmem helpers?
>
> The drm_gem_shmem_vm_ops includes the page fault handling; it's a helper
> by itself that is used by DRM drivers.
>
> I could try to move all the shrinker logic into VirtIO and re-invent
> virtio_gem_shmem_vm_ops, but what is the point of doing this for each
> driver if we could have it once and for all in the common drm-shmem
> code?
>
> Maybe I should try to factor out all the shrinker logic from drm-shmem
> into a new drm-shmem-shrinker that could be shared by drivers? Will you
> be okay with this option?
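[ To make the fault-handling point concrete, a rough sketch of the kind
of shared fault handler being discussed: swap_in is named in the patch,
while the evicted flag and the my_* helpers are assumptions. ]

    /* Sketch only: illustrates why the fault path needs to know about
     * eviction. The evicted flag and my_* helpers are hypothetical. */
    static vm_fault_t my_shmem_fault(struct vm_fault *vmf)
    {
            struct drm_gem_object *obj = vmf->vma->vm_private_data;
            struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

            /* If the shrinker evicted the pages, bring them back first;
             * otherwise we would map stale or missing pages. */
            if (shmem->evicted && my_gem_swap_in(shmem))
                    return VM_FAULT_SIGBUS;

            /* Continue with the usual shmem page insertion. */
            return my_shmem_insert_page(vmf, shmem);
    }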

I think we're talking past each other a bit. I'm only bringing up the
purge vs eviction topic we discussed in the other subthread again.

> > I'm still confused why drivers need to know the difference
> > between eviction and purging. Or maybe I'm confused again.
>
> Example:
>
> If userspace uses IOV addresses, then these addresses must be kept
> reserved while the buffer is evicted.
>
> If a BO is purged, then we don't need to retain the IOV space allocated
> for the purged BO.
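[ Putting that example in driver terms, sketch only: every name below is
hypothetical. Eviction keeps the IOV reservation, purging releases it. ]

    static int my_gem_evict(struct drm_gem_shmem_object *shmem)
    {
            struct my_bo *bo = to_my_bo(&shmem->base);

            my_gpu_unmap(bo);
            /* Userspace still holds this address, and the buffer may be
             * swapped back in later, so bo->iova stays reserved. */
            return 0;
    }

    static int my_gem_purge(struct drm_gem_shmem_object *shmem)
    {
            struct my_bo *bo = to_my_bo(&shmem->base);

            my_gpu_unmap(bo);
            /* The contents are gone for good, so the IOV range can be
             * handed back to the address-space allocator. */
            my_iova_free(bo->iova);
            return 0;
    }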

Yeah, but is that actually needed by anyone? If userspace fails to
allocate another bo because of a lack of gpu address space, then it's
very easy to handle that:

1. Make a rule that "out of gpu address space" gives you a special errno
code like ENOSPC

2. If userspace gets that it walks the list of all buffers it marked as
purgeable and nukes them (whether they have been evicted or not). Then it
retries the bo allocation.

Alternatively you can do step 2 directly from the bo alloc ioctl in step
1. Either way you clean up va space, and actually a lot more (you
potentially nuke all buffers marked as purgeable, not just the ones that
have been purged already), and only when va cleanup is actually needed.
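[ A userspace-side sketch of that recipe; the my_ioctl_* wrappers and the
purgeable-buffer list are assumptions, not a real driver API. ]

    #include <errno.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Whatever bookkeeping userspace already keeps for buffers it has
     * marked purgeable. */
    struct bo { uint32_t handle; struct bo *next; };
    static struct bo *purgeable_list;

    /* Stand-ins for the driver's real alloc and purge ioctl wrappers. */
    int my_ioctl_bo_alloc(int fd, size_t size, uint32_t *handle);
    int my_ioctl_bo_purge(int fd, uint32_t handle);

    static int alloc_bo_with_retry(int fd, size_t size, uint32_t *handle)
    {
            int ret = my_ioctl_bo_alloc(fd, size, handle);

            if (ret != -ENOSPC)
                    return ret;

            /* Step 2: out of gpu address space, so nuke every buffer we
             * marked purgeable, evicted or not, then retry once. */
            for (struct bo *bo = purgeable_list; bo; bo = bo->next)
                    my_ioctl_bo_purge(fd, bo->handle);

            return my_ioctl_bo_alloc(fd, size, handle);
    }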

Trying to solve this problem at eviction time otoh means:
- we have this difference between eviction and purging
- it's still not complete: you still need to glue step 2 above into your
driver somehow, and once step 2 is glued in, doing additional cleanup in
the purge function is just duplicated logic

So at least in my opinion this isn't the justification we need. And we
should definitely not add that complication just "in case, for the
future" if we don't have a real need right now. Adding it later on is
easy; removing it later on, because it just gets in the way and confuses
people, is much harder.

> The drm-shmem only handles shmem pages, not the mappings of these pages.

Yeah that's why you need an evict callback into the driver. That part is
clear.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
