Subject: [PATCH v6 0/3] crypto: engine - support for parallel and batch requests
Added support for executing multiple requests, independent or not,
in the crypto engine, based on a retry mechanism. If the hardware is
unable to execute a backlog request, it is enqueued back at the front
of the crypto-engine queue, to preserve the order of requests.

do_one_request() now returns:
>= 0: the hardware executed the request successfully;
< 0: this is the old error path. If the hardware supports the retry
mechanism, the backlog request is put back at the front of the
crypto-engine queue. For backwards compatibility, if retry support is
not available, the crypto-engine works as before.
If the hardware queue is full (-ENOSPC), the request is requeued
regardless of the MAY_BACKLOG flag.
If the hardware returns any other error code (like -EIO, -EINVAL,
-ENOMEM, etc.), only MAY_BACKLOG requests are enqueued back into the
crypto-engine's queue, since the others can be dropped.
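
For illustration, below is a minimal driver-side sketch of these return
conventions (not code from the patches; struct my_device,
my_engine_to_dev() and my_hw_enqueue() are hypothetical driver helpers):

#include <linux/kernel.h>
#include <linux/errno.h>
#include <crypto/engine.h>
#include <crypto/skcipher.h>

static int my_do_one_request(struct crypto_engine *engine, void *areq)
{
	struct skcipher_request *req =
		container_of(areq, struct skcipher_request, base);
	struct my_device *mydev = my_engine_to_dev(engine);
	int ret;

	ret = my_hw_enqueue(mydev, req);
	if (ret == -ENOSPC)
		/* hardware queue full: crypto-engine requeues the
		 * request at the head of its queue, regardless of
		 * the MAY_BACKLOG flag */
		return ret;
	if (ret < 0)
		/* any other error: only MAY_BACKLOG requests are
		 * requeued, the rest are dropped by crypto-engine */
		return ret;

	/* >= 0: request accepted; completion is reported later via
	 * crypto_finalize_skcipher_request() */
	return 0;
}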

If the hardware supports batch requests, crypto-engine can handle this
use case through the do_batch_requests callback.
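
As a rough sketch of what such a callback could look like (again an
assumption, not code from the patches; it reuses the hypothetical
helpers from the previous sketch and assumes the callback takes only
the crypto_engine pointer):

static int my_do_batch(struct crypto_engine *engine)
{
	struct my_device *mydev = my_engine_to_dev(engine);

	/* issue a single doorbell/flush for all requests previously
	 * handed to the hardware by do_one_request() */
	my_hw_flush_job_ring(mydev);

	return 0;
}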

Since these new features cannot be supported by all hardware, the
crypto-engine framework remains backward compatible:
- when the crypto-engine is initialized with the crypto_engine_alloc_init
function, the new callback is NULL and the retry mechanism is disabled,
so the crypto-engine works as before these changes;
- to support multiple requests in parallel, the driver must set the
retry_support variable to true (see the setup sketch after this list);
- to support batch requests, the driver must implement the
do_batch_requests callback to execute a batch of requests. Linking
the requests together is expected to be done by the driver, in
do_one_request().
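
A minimal setup sketch, assuming the engine exposes the fields named
above (illustrative only: the exact way a driver opts in may differ in
the actual patches, and my_probe_engine()/my_do_batch() are
hypothetical):

static int my_probe_engine(struct device *dev)
{
	struct crypto_engine *engine;

	/* default: the batch callback is NULL and retry is disabled,
	 * so crypto-engine behaves exactly as before these changes */
	engine = crypto_engine_alloc_init(dev, true);
	if (!engine)
		return -ENOMEM;

	/* opt in to the new features (illustrative field setup) */
	engine->retry_support = true;			/* parallel requests */
	engine->do_batch_requests = my_do_batch;	/* batch requests */

	return crypto_engine_start(engine);
}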

---
Changes since V5:
- updated the crypto_enqueue_request_head function in algapi to enqueue
requests regardless of the backlog flag;
- requeue a request based on the error code returned by do_one_request().

Changes since V4:
- added a function in algapi to add a request at the front of the queue;
- added a retry mechanism: if the hardware is unable to execute a backlog
request, it is enqueued back at the front of the crypto-engine queue,
to preserve the order of requests.

Changes since V3:
- removed the can_enqueue_hardware callback and added a start-stop
mechanism based on the return value of do_one_request().

Changes since V2:
- re-added cur_req in crypto-engine to keep the exact behavior as before
these changes when can_enqueue_more is not implemented: requests are
sent to hardware _one-by-one_ from crypto_pump_requests and completed
in crypto_finalize_request, and so on;
- do_batch_requests is available only together with can_enqueue_more.

Changes since V1:
- renamed the can_enqueue_hardware callback to can_enqueue_more and changed
its argument to the crypto_engine structure (for cases where more than
one crypto-engine is used);
- added a new patch with support for batch requests.

Changes since V0 (RFC):
- removed max_no_req and no_req, which tracked the number of requests
that can be processed in parallel;
- added a new callback, can_enqueue_more, to check whether the hardware
can process a new request.


Iuliana Prodan (3):
crypto: algapi - create function to add request in front of queue
crypto: engine - support for parallel requests based on retry
mechanism
crypto: engine - support for batch requests

crypto/algapi.c | 8 +++
crypto/crypto_engine.c | 171 +++++++++++++++++++++++++++++++++++++++---------
include/crypto/algapi.h | 2 +
include/crypto/engine.h | 15 ++++-
4 files changed, 164 insertions(+), 32 deletions(-)

--
2.1.0
