Date: 2022-06-08
Subject: Re: [PATCH 1/1] drm/radeon: Initialize fences array entries in radeon_sa_bo_next_hole
From: Christian König
On 07.06.22 at 17:19, Xiaohui Zhang wrote:
> Similar to the handling of amdgpu_sa_bo_next_hole in commit 6a15f3ff19a8
> ("drm/amdgpu: Initialize fences array entries in amdgpu_sa_bo_next_hole"),
> we thought a patch might be needed here as well.
>
> The fences array entries were initialized only once, in radeon_sa_bo_new.
> If a fence wasn't signalled yet during the first radeon_sa_bo_next_hole
> call, but then got signalled before a later radeon_sa_bo_next_hole call,
> that later call could destroy the fence while leaving its stale pointer
> in the array, resulting in a use-after-free in radeon_sa_bo_new.

I would rather see the sub allocator moved into a common drm helper.

Regards,
Christian.

>
> Signed-off-by: Xiaohui Zhang <xiaohuizhang@ruc.edu.cn>
> ---
> drivers/gpu/drm/radeon/radeon_sa.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/radeon/radeon_sa.c b/drivers/gpu/drm/radeon/radeon_sa.c
> index 310c322c7112..0981948bd9ed 100644
> --- a/drivers/gpu/drm/radeon/radeon_sa.c
> +++ b/drivers/gpu/drm/radeon/radeon_sa.c
> @@ -267,6 +267,8 @@ static bool radeon_sa_bo_next_hole(struct radeon_sa_manager *sa_manager,
>  	for (i = 0; i < RADEON_NUM_RINGS; ++i) {
>  		struct radeon_sa_bo *sa_bo;
>
> +		fences[i] = NULL;
> +
>  		if (list_empty(&sa_manager->flist[i])) {
>  			continue;
>  		}
> @@ -332,10 +334,8 @@ int radeon_sa_bo_new(struct radeon_device *rdev,
>
>  	spin_lock(&sa_manager->wq.lock);
>  	do {
> -		for (i = 0; i < RADEON_NUM_RINGS; ++i) {
> -			fences[i] = NULL;
> +		for (i = 0; i < RADEON_NUM_RINGS; ++i)
>  			tries[i] = 0;
> -		}
>
>  		do {
>  			radeon_sa_bo_try_free(sa_manager);
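
To make the failure mode concrete, here is a minimal standalone userspace
sketch of the pattern the patch addresses; fake_fence, scan_holes and
NUM_RINGS are hypothetical stand-ins, not actual radeon code. It models a
pointer array that is scanned repeatedly: if the slots are not cleared at
the start of every scan, a fence freed between two scans leaves a dangling
pointer behind, which is the use-after-free described above.

/*
 * Standalone userspace sketch (hypothetical names, not radeon code) of the
 * pattern the patch addresses: a pointer array scanned in a retry loop must
 * have its slots cleared at the start of every scan, otherwise a fence freed
 * between two scans leaves a dangling pointer behind.
 */
#include <stdio.h>
#include <stdlib.h>

#define NUM_RINGS 4

struct fake_fence {
	int ring;
	int signalled;
};

/*
 * Models the role of radeon_sa_bo_next_hole(): report, per ring, the fence
 * that still has to be waited on.  Clearing fences[i] on every scan (what
 * the patch moves into the loop) guarantees that a slot whose fence was
 * freed earlier does not keep pointing at freed memory.
 */
static void scan_holes(struct fake_fence **fences, struct fake_fence **pending)
{
	for (int i = 0; i < NUM_RINGS; ++i) {
		fences[i] = NULL;		/* the fix: reset on every scan */

		if (pending[i] && !pending[i]->signalled)
			fences[i] = pending[i];	/* still busy, caller must wait */
	}
}

int main(void)
{
	struct fake_fence *pending[NUM_RINGS] = { NULL };
	struct fake_fence *fences[NUM_RINGS];
	int i;

	/* Ring 1 has an unsignalled fence during the first scan. */
	pending[1] = calloc(1, sizeof(*pending[1]));
	pending[1]->ring = 1;
	scan_holes(fences, pending);

	/* The fence signals and is destroyed between the two scans. */
	pending[1]->signalled = 1;
	free(pending[1]);
	pending[1] = NULL;

	/*
	 * Without the per-scan reset, fences[1] would still hold the freed
	 * pointer here and waiting on it would be a use-after-free; with the
	 * reset, the second scan leaves every slot NULL.
	 */
	scan_holes(fences, pending);

	for (i = 0; i < NUM_RINGS; ++i)
		printf("ring %d: %s\n", i, fences[i] ? "wait" : "idle");

	return 0;
}

Compiled and run, the second scan reports every ring as idle; without the
per-scan reset, slot 1 would still carry the freed pointer when the caller
goes on to wait on the collected fences.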
