Subject: linux-next: manual merge of the drm tree with Linus' tree
Hi all,

Today's linux-next merge of the drm tree got a conflict in:

drivers/gpu/drm/amd/amdgpu/amdgpu_job.c

between commits:

3cb93f390453 ("drm/amdgpu: fix use-after-free during gpu recovery")
b09d6acba1d9 ("drm/amdgpu: handle gang submit before VMID")

from Linus' tree and commits:

1b2d5eda5ad7 ("drm/amdgpu: move explicit sync check into the CS")
1728baa7e4e6 ("drm/amdgpu: use scheduler dependencies for CS")
c5093cddf56b ("drm/amdgpu: drop the fence argument from amdgpu_vmid_grab")
940ca22b7ea9 ("drm/amdgpu: drop amdgpu_sync from amdgpu_vmid_grab v2")

from the drm tree.

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

--
Cheers,
Stephen Rothwell

diff --cc drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index adac650cf544,032651a655f0..000000000000
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@@ -166,14 -173,8 +173,12 @@@ static void amdgpu_job_free_cb(struct d

drm_sched_job_cleanup(s_job);

- amdgpu_sync_free(&job->sync);
- amdgpu_sync_free(&job->sched_sync);
+ amdgpu_sync_free(&job->explicit_sync);
- dma_fence_put(&job->hw_fence);
+ /* only put the hw fence if has embedded fence */
+ if (!job->hw_fence.ops)
+ kfree(job);
+ else
+ dma_fence_put(&job->hw_fence);
}

void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
@@@ -247,30 -242,18 +246,18 @@@ amdgpu_job_prepare_job(struct drm_sched
{
struct amdgpu_ring *ring = to_amdgpu_ring(s_entity->rq->sched);
struct amdgpu_job *job = to_amdgpu_job(sched_job);
- struct amdgpu_vm *vm = job->vm;
- struct dma_fence *fence;
+ struct dma_fence *fence = NULL;
int r;

- fence = amdgpu_sync_get_fence(&job->sync);
- if (fence && drm_sched_dependency_optimized(fence, s_entity)) {
- r = amdgpu_sync_fence(&job->sched_sync, fence);
- if (r)
- DRM_ERROR("Error adding fence (%d)\n", r);
- }
- if (!fence && job->gang_submit)
++ if (job->gang_submit)
+ fence = amdgpu_device_switch_gang(ring->adev, job->gang_submit);
+
- while (fence == NULL && vm && !job->vmid) {
- r = amdgpu_vmid_grab(vm, ring, &job->sync,
- &job->base.s_fence->finished,
- job);
+ while (!fence && job->vm && !job->vmid) {
+ r = amdgpu_vmid_grab(job->vm, ring, job, &fence);
if (r)
DRM_ERROR("Error getting VM ID (%d)\n", r);
- fence = amdgpu_sync_get_fence(&job->sync);
}

- if (!fence && job->gang_submit)
- fence = amdgpu_device_switch_gang(ring->adev, job->gang_submit);
-
return fence;
}
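
For readability, here is roughly how the resolved amdgpu_job_prepare_job()
reads after the fix-up above. This is reconstructed from the hunk rather than
copied from the tree: the return type, the static qualifier and the type of
s_entity are inferred from the hunk header and from how s_entity is used, and
the usual amdgpu_job.c declarations are assumed, so treat it as a sketch of
the resolution, not the authoritative code.

static struct dma_fence *
amdgpu_job_prepare_job(struct drm_sched_job *sched_job,
		       struct drm_sched_entity *s_entity)
{
	struct amdgpu_ring *ring = to_amdgpu_ring(s_entity->rq->sched);
	struct amdgpu_job *job = to_amdgpu_job(sched_job);
	struct dma_fence *fence = NULL;
	int r;

	/*
	 * The gang switch is handled before grabbing a VMID, as in the
	 * commit from Linus' tree.
	 */
	if (job->gang_submit)
		fence = amdgpu_device_switch_gang(ring->adev, job->gang_submit);

	/*
	 * The drm tree changed amdgpu_vmid_grab() to take the job and to
	 * return the fence to wait for through the last argument instead
	 * of going through job->sync.
	 */
	while (!fence && job->vm && !job->vmid) {
		r = amdgpu_vmid_grab(job->vm, ring, job, &fence);
		if (r)
			DRM_ERROR("Error getting VM ID (%d)\n", r);
	}

	return fence;
}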
