Subject: Re: [PATCH] drm/prime: Ensure mmap offset is initialized
On Mon, May 30, 2022 at 10:20 AM Thomas Zimmermann <tzimmermann@suse.de> wrote:
>
> Hi
>
> > On 30.05.22 at 17:41, Rob Clark wrote:
> > On Mon, May 30, 2022 at 7:49 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> >>
> >> On Mon, 30 May 2022 at 15:54, Rob Clark <robdclark@gmail.com> wrote:
> >>>
> >>> On Mon, May 30, 2022 at 12:26 AM Thomas Zimmermann <tzimmermann@suse.de> wrote:
> >>>>
> >>>> Hi
> >>>>
> >>>> On 29.05.22 at 18:29, Rob Clark wrote:
> >>>>> From: Rob Clark <robdclark@chromium.org>
> >>>>>
> >>>>> If a GEM object is allocated, and then exported as a dma-buf fd which is
> >>>>> mmap'd before or without the GEM buffer being directly mmap'd, the
> >>>>> vma_node could be uninitialized. This leads to a situation where the CPU
> >>>>> mapping is not correctly torn down in drm_vma_node_unmap().
> >>>>
> >>>> Which drivers are affected by this problem?
> >>>>
> >>>> I checked several drivers and most appear to be initializing the offset
> >>>> during object construction, such as GEM SHMEM. [1] TTM-based drivers
> >>>> also seem unaffected. [2]
> >>>>
> >>>> From a quick grep, only etnaviv, msm and omapdrm appear to be affected?
> >>>> They only seem to run drm_gem_create_mmap_offset() from their
> >>>> ioctl-handling code.
> >>>>
> >>>> If so, I'd say it's preferable to fix these drivers and put a
> >>>> drm_WARN_ONCE() into drm_gem_prime_mmap().
> >>>
> >>> That is good if fewer drivers are affected; however, I disagree with
> >>> your proposal. At least for freedreno userspace, a lot of bo's never
> >>> get mmap'd (either directly or via dmabuf), so we should not be
> >>> allocating an mmap offset unnecessarily.
> >>
> >> Does this actually matter in the grand scheme of things? We originally
> >> allocated the mmap offset only on demand because userspace only had 32bit
> >> loff_t support and so simply couldn't mmap anything if the offset
> >> ended up above 32bit (even if there was still va space available).
> >>
> >> But those days are long gone (about 10 years or so) and the allocation
> >> overhead for an mmap offset is tiny. So I think, unless you can
> >> benchmark an impact, allocating it at bo alloc seems like the simplest
> >> design overall, and hence what we should be doing. And if the vma
> >> offset allocation ever gets too slow due to fragmentation, we can lift
> >> the hole tree from i915 into drm_mm and the job should be done. At
> >> that point we could also allocate the offset unconditionally in the
> >> gem_init function and be done with it.
> >>
> >> Iow I concur with Thomas here; unless there's hard data to the
> >> contrary, simplicity imo trumps here.
> >
> > 32b userspace is still alive and well, at least on arm chromebooks ;-)
>
> I mostly dislike the inconsistency among drivers. If we want to create
> the offset on-demand in the DRM helpers, we should do so for all
> drivers. At least our generic GEM helpers and TTM should implement this
> pattern.

Possibly we should have a drm_gem_get_mmap_offset() which combines the
drm_gem_create_mmap_offset() and drm_vma_node_start() calls, and use
that everywhere.
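
Roughly, as an untested sketch (the helper name and exact signature are
just a strawman; the only real pieces are the two existing calls it wraps):

#include <drm/drm_gem.h>
#include <drm/drm_vma_manager.h>

/* Create the fake mmap offset on demand and return it in pages. */
static inline int drm_gem_get_mmap_offset(struct drm_gem_object *obj,
					  unsigned long *pgoff)
{
	int ret;

	/* Allocates the vma_node offset if it doesn't exist yet, no-op otherwise. */
	ret = drm_gem_create_mmap_offset(obj);
	if (ret)
		return ret;

	/* Fake offset in pages, i.e. what ends up in vma->vm_pgoff. */
	*pgoff = drm_vma_node_start(&obj->vma_node);
	return 0;
}

drm_gem_prime_mmap() and the driver ioctl paths could then use that
instead of open-coding the two steps, roughly:

	unsigned long pgoff;

	ret = drm_gem_get_mmap_offset(obj, &pgoff);
	if (ret)
		return ret;

	/* Add the fake offset */
	vma->vm_pgoff += pgoff;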

But I think we should fix this issue first, and then refactor on top,
so that a fix can be backported to stable kernels ;-)

BR,
-R

> Best regards
> Thomas
>
> >
> > BR,
> > -R
> >
> >> -Daniel
> >>
> >>>
> >>> BR,
> >>> -R
> >>>
> >>>> Best regards
> >>>> Thomas
> >>>>
> >>>> [1]
> >>>> https://elixir.bootlin.com/linux/v5.18/source/drivers/gpu/drm/drm_gem_shmem_helper.c#L85
> >>>> [2]
> >>>> https://elixir.bootlin.com/linux/v5.18/source/drivers/gpu/drm/ttm/ttm_bo.c#L1002
> >>>>
> >>>>>
> >>>>> Fixes: e5516553999f ("drm: call drm_gem_object_funcs.mmap with fake offset")
> >>>>> Signed-off-by: Rob Clark <robdclark@chromium.org>
> >>>>> ---
> >>>>> Note, it's possible the issue existed in some related form prior to the
> >>>>> commit tagged with Fixes.
> >>>>>
> >>>>> drivers/gpu/drm/drm_prime.c | 5 +++++
> >>>>> 1 file changed, 5 insertions(+)
> >>>>>
> >>>>> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> >>>>> index e3f09f18110c..849eea154dfc 100644
> >>>>> --- a/drivers/gpu/drm/drm_prime.c
> >>>>> +++ b/drivers/gpu/drm/drm_prime.c
> >>>>> @@ -716,6 +716,11 @@ int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> >>>>>  	struct file *fil;
> >>>>>  	int ret;
> >>>>>
> >>>>> +	/* Ensure that the vma_node is initialized: */
> >>>>> +	ret = drm_gem_create_mmap_offset(obj);
> >>>>> +	if (ret)
> >>>>> +		return ret;
> >>>>> +
> >>>>>  	/* Add the fake offset */
> >>>>>  	vma->vm_pgoff += drm_vma_node_start(&obj->vma_node);
> >>>>>
> >>>>
> >>>> --
> >>>> Thomas Zimmermann
> >>>> Graphics Driver Developer
> >>>> SUSE Software Solutions Germany GmbH
> >>>> Maxfeldstr. 5, 90409 Nürnberg, Germany
> >>>> (HRB 36809, AG Nürnberg)
> >>>> Geschäftsführer: Ivo Totev
> >>
> >>
> >>
> >> --
> >> Daniel Vetter
> >> Software Engineer, Intel Corporation
> >> http://blog.ffwll.ch
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Ivo Totev
