From: Chris Wilson
Date: Fri, 1 May 2020 19:29:45 +0000 (+0100)
Subject: drm/i915/gem: Try an alternate engine for relocations
X-Git-Tag: baikal/mips/sdk5.9~13559^2~7^2~88
X-Git-Url: https://git.baikalelectronics.ru/sdk/?a=commitdiff_plain;h=53c1a8032160e16a9b0a1299e911f60bbdd30c13;p=kernel.git

drm/i915/gem: Try an alternate engine for relocations

If at first we don't succeed, try, try again. Not all engines may support
the MI ops we need to perform asynchronous relocation patching, and so we
end up falling back to a synchronous operation that has a liability of
blocking. However, Tvrtko pointed out we don't need to use the same engine
to perform the relocations as we are planning to execute the execbuf on,
and so if we switch over to a working engine, we can perform the
relocation asynchronously. The user execbuf will be queued after the
relocations by virtue of fencing.

This patch creates a new context per execbuf requiring asynchronous
relocations on an unusable engine. This is perhaps a bit excessive and can
be ameliorated by a small context cache, but for the moment we only need
it for working around a little-used engine on Sandybridge, and only if
relocations are actually required to an active batch buffer.

Now we just need to teach the relocation code to handle physical
addressing for gen2/3, and we should then have universal support!
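The engine-fallback decision described above can be sketched in userspace C. This is an illustrative model, not the i915 API: `struct engine` and `pick_reloc_engine()` are hypothetical stand-ins for `intel_engine_cs` and the check added to `reloc_gpu()`, which prefers the execbuf's own engine when it can emit the required MI store, falls back to the first copy engine otherwise, and fails (the `-ENODEV` path) when neither can.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for intel_engine_cs; the flag models
 * intel_engine_can_store_dword(). */
struct engine {
	bool can_store_dword;
};

/* Prefer the execbuf engine; otherwise fall back to the copy engine.
 * Returning NULL corresponds to reloc_gpu() returning ERR_PTR(-ENODEV). */
static struct engine *pick_reloc_engine(struct engine *execbuf_engine,
					struct engine *copy_engine)
{
	if (execbuf_engine && execbuf_engine->can_store_dword)
		return execbuf_engine;
	if (copy_engine && copy_engine->can_store_dword)
		return copy_engine;
	return NULL;
}
```

The key property, as the commit message notes, is that correctness does not depend on which engine patches the relocations: the user batch is fenced to run after the relocation request regardless.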
Suggested-by: Tvrtko Ursulin
Testcase: igt/gem_exec_reloc/basic-spin # snb
Signed-off-by: Chris Wilson
Cc: Tvrtko Ursulin
Reviewed-by: Tvrtko Ursulin
Link: https://patchwork.freedesktop.org/patch/msgid/20200501192945.22215-3-chris@chris-wilson.co.uk
---

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 3c01c4193c5f4..cce7df231cb91 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1285,6 +1285,7 @@ static int reloc_move_to_gpu(struct i915_request *rq, struct i915_vma *vma)
 }
 
 static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
+			     struct intel_engine_cs *engine,
 			     unsigned int len)
 {
 	struct reloc_cache *cache = &eb->reloc_cache;
@@ -1294,7 +1295,7 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
 	u32 *cmd;
 	int err;
 
-	pool = intel_gt_get_buffer_pool(eb->engine->gt, PAGE_SIZE);
+	pool = intel_gt_get_buffer_pool(engine->gt, PAGE_SIZE);
 	if (IS_ERR(pool))
 		return PTR_ERR(pool);
 
@@ -1317,7 +1318,23 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
 	if (err)
 		goto err_unmap;
 
-	rq = i915_request_create(eb->context);
+	if (engine == eb->context->engine) {
+		rq = i915_request_create(eb->context);
+	} else {
+		struct intel_context *ce;
+
+		ce = intel_context_create(engine);
+		if (IS_ERR(ce)) {
+			err = PTR_ERR(ce);
+			goto err_unpin;
+		}
+
+		i915_vm_put(ce->vm);
+		ce->vm = i915_vm_get(eb->context->vm);
+
+		rq = intel_context_create_request(ce);
+		intel_context_put(ce);
+	}
 	if (IS_ERR(rq)) {
 		err = PTR_ERR(rq);
 		goto err_unpin;
@@ -1368,10 +1385,15 @@ static u32 *reloc_gpu(struct i915_execbuffer *eb,
 	int err;
 
 	if (unlikely(!cache->rq)) {
-		if (!intel_engine_can_store_dword(eb->engine))
-			return ERR_PTR(-ENODEV);
+		struct intel_engine_cs *engine = eb->engine;
+
+		if (!intel_engine_can_store_dword(engine)) {
+			engine = engine->gt->engine_class[COPY_ENGINE_CLASS][0];
+			if (!engine || !intel_engine_can_store_dword(engine))
+				return ERR_PTR(-ENODEV);
+		}
 
-		err = __reloc_gpu_alloc(eb, len);
+		err = __reloc_gpu_alloc(eb, engine, len);
 		if (unlikely(err))
 			return ERR_PTR(err);
 	}
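One subtle step in the hunk above is the reference swap on `ce->vm`: the freshly created context comes with its own VM reference, which is dropped and replaced by a reference on the execbuf context's VM, so the relocation request runs in the same address space as the batch it is patching. The pattern can be modelled in plain C; `struct vm`, `vm_get()` and `vm_put()` here are hypothetical stand-ins for `i915_address_space`, `i915_vm_get()` and `i915_vm_put()`.

```c
/* Hypothetical stand-in for i915_address_space with a plain counter
 * in place of the kernel's kref. */
struct vm {
	int refcount;
};

static struct vm *vm_get(struct vm *vm)
{
	vm->refcount++;	/* models i915_vm_get() */
	return vm;
}

static void vm_put(struct vm *vm)
{
	vm->refcount--;	/* models i915_vm_put() */
}

/* Models: i915_vm_put(ce->vm); ce->vm = i915_vm_get(eb->context->vm);
 * Drop the reference held in *slot, then point it at target with a
 * new reference taken. */
static void adopt_vm(struct vm **slot, struct vm *target)
{
	vm_put(*slot);
	*slot = vm_get(target);
}
```

Doing the put before the get is safe here because the caller (`__reloc_gpu_alloc`) still holds its own reference on the execbuf's VM, so `target` cannot disappear mid-swap.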