From: Rob Clark <robdclark@chromium.org>
Subject: [PATCH 0/4] drm/msm: Shrinker (and related) fixes

I've been spending some time looking into how things behave under high
memory pressure. The first patch is a random cleanup I noticed along
the way. The second improves the situation significantly when the
shrinker is called from many threads in parallel. And the last two are
$debugfs/gem fixes I needed so I could monitor the state of GEM objects
(i.e. how many are active/purgeable/purged) while triggering high
memory pressure.
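
To give a rough feel for the shrinker_count() change, a minimal sketch
follows. It is only an illustration of the idea (keep an approximate
count of purgeable objects, updated with atomic ops where an object's
madvise state changes, so count_objects() never takes a lock); the
shrinkable_count field is an assumption made for the example, not
necessarily what the patch itself does:

#include <linux/atomic.h>
#include <linux/shrinker.h>

#include "msm_drv.h"

/*
 * Sketch only: 'shrinkable_count' is a hypothetical counter bumped
 * wherever an object becomes purgeable and decremented when it is
 * pinned, purged, or freed, so the count callback is lock-free.
 */
static unsigned long
msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
{
        struct msm_drm_private *priv =
                container_of(shrinker, struct msm_drm_private, shrinker);
        unsigned long count = atomic_long_read(&priv->shrinkable_count);

        return count ?: SHRINK_EMPTY;
}

The counter only needs to be approximately right, since count_objects()
is treated as a hint by the core MM and scan_objects() re-checks object
state under the proper locks before purging anything.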

We could probably go a bit further with dropping the mm_lock in the
shrinker->scan() loop, but this is already a pretty big improvement.
The next step is probably to add support for unpinning/evicting
inactive objects. (We are part way there, since we have already
decoupled the iova lifetime from the pages lifetime, but there are a
few sharp corners to work through.)
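
For the "go a bit further" part, one possible shape would be to detach
a single object under mm_lock and then purge it with the lock dropped,
so other threads are not serialized behind the whole scan. This is
purely a sketch; the list/lock/field names and the purge helper below
are assumptions for illustration, not the actual driver code:

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/shrinker.h>

#include "msm_drv.h"
#include "msm_gem.h"

/* Assumed helper: drop one object's backing pages, return pages freed. */
static unsigned long purge_one_object(struct msm_gem_object *msm_obj);

static unsigned long
msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
{
        struct msm_drm_private *priv =
                container_of(shrinker, struct msm_drm_private, shrinker);
        unsigned long freed = 0;

        while (freed < sc->nr_to_scan) {
                struct msm_gem_object *msm_obj;

                mutex_lock(&priv->mm_lock);
                msm_obj = list_first_entry_or_null(&priv->inactive_dontneed,
                                struct msm_gem_object, mm_list);
                if (msm_obj)
                        list_del_init(&msm_obj->mm_list);
                mutex_unlock(&priv->mm_lock);

                if (!msm_obj)
                        break;

                /* mm_lock is dropped here, so a slow purge blocks no one */
                freed += purge_one_object(msm_obj);
        }

        return freed ? freed : SHRINK_STOP;
}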

Rob Clark (4):
  drm/msm: Remove unused freed llist node
  drm/msm: Avoid mutex in shrinker_count()
  drm/msm: Fix debugfs deadlock
  drm/msm: Improved debugfs gem stats

 drivers/gpu/drm/msm/msm_debugfs.c      | 14 ++----
 drivers/gpu/drm/msm/msm_drv.c          |  4 ++
 drivers/gpu/drm/msm/msm_drv.h          | 10 ++++-
 drivers/gpu/drm/msm/msm_fb.c           |  3 +-
 drivers/gpu/drm/msm/msm_gem.c          | 61 +++++++++++++++++++++-----
 drivers/gpu/drm/msm/msm_gem.h          | 58 +++++++++++++++++++++---
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 17 +------
 7 files changed, 122 insertions(+), 45 deletions(-)

--
2.30.2
