Subject: Re: [PATCH v3 2/4] scatterlist: add sgl_copy_sgl() function
From: Bodo Stroesser <bostroesser@gmail.com>
Date: 2020-11-03
On 19.10.20 21:19, Douglas Gilbert wrote:
> Both the SCSI and NVMe subsystems receive user data from the block
> layer in scatterlists (aka scatter-gather lists, sgls), which are
> often arrays. If drivers in those subsystems represent storage
> (e.g. a ramdisk) or cache "hot" user data, then they may also
> choose to use scatterlists. Currently there are no sgl-to-sgl
> operations in the kernel. Start with an sgl-to-sgl copy. Copying
> stops when the first of these is exhausted: the requested number of
> bytes, the source sgl, or the destination sgl. So the destination
> sgl will _not_ grow.
>
> Signed-off-by: Douglas Gilbert <dgilbert@interlog.com>
> ---
>  include/linux/scatterlist.h |  4 ++
>  lib/scatterlist.c           | 75 +++++++++++++++++++++++++++++++++++++
>  2 files changed, 79 insertions(+)
>
> diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> index 80178afc2a4a..6649414c0749 100644
> --- a/include/linux/scatterlist.h
> +++ b/include/linux/scatterlist.h
> @@ -321,6 +321,10 @@ size_t sg_pcopy_to_buffer(struct scatterlist *sgl, unsigned int nents,
>  size_t sg_zero_buffer(struct scatterlist *sgl, unsigned int nents,
>  		      size_t buflen, off_t skip);
>  
> +size_t sgl_copy_sgl(struct scatterlist *d_sgl, unsigned int d_nents, off_t d_skip,
> +		    struct scatterlist *s_sgl, unsigned int s_nents, off_t s_skip,
> +		    size_t n_bytes);
> +
>  /*
>   * Maximum number of entries that will be allocated in one piece, if
>   * a list larger than this is required then chaining will be utilized.
> diff --git a/lib/scatterlist.c b/lib/scatterlist.c
> index d5770e7f1030..1f9e093ad7da 100644
> --- a/lib/scatterlist.c
> +++ b/lib/scatterlist.c
> @@ -974,3 +974,78 @@ size_t sg_zero_buffer(struct scatterlist *sgl, unsigned int nents,
>  	return offset;
>  }
>  EXPORT_SYMBOL(sg_zero_buffer);
> +
> +/**
> + * sgl_copy_sgl - Copy over a destination sgl from a source sgl
> + * @d_sgl: Destination sgl
> + * @d_nents: Number of SG entries in destination sgl
> + * @d_skip: Number of bytes to skip in destination before starting
> + * @s_sgl: Source sgl
> + * @s_nents: Number of SG entries in source sgl
> + * @s_skip: Number of bytes to skip in source before starting
> + * @n_bytes: The (maximum) number of bytes to copy
> + *
> + * Returns:
> + * The number of copied bytes.
> + *
> + * Notes:
> + * Destination arguments appear before the source arguments, as with memcpy().
> + *
> + * Stops copying if any of d_sgl, s_sgl or n_bytes is exhausted.
> + *
> + * Since memcpy() is used, overlapping copies (where d_sgl and s_sgl belong
> + * to the same sgl and the copy regions overlap) are not supported.
> + *
> + * Large copies are broken into copy segments whose sizes may vary. Those
> + * copy segment sizes are chosen by the min3() statement in the code below.
> + * Since SG_MITER_ATOMIC is used for both sides, each copy segment is started
> + * with kmap_atomic() [in sg_miter_next()] and completed with kunmap_atomic()
> + * [in sg_miter_stop()]. This means pre-emption is inhibited for relatively
> + * short periods even in very large copies.
> + *
> + * If d_skip is large, potentially spanning multiple entries, then some
> + * integer arithmetic to adjust d_sgl may improve performance. For example,
> + * if d_sgl is built using sgl_alloc_order(chainable=false), then the sgl
> + * will be an array with equally sized segments, facilitating that
> + * arithmetic. The suggestion applies to s_skip, s_sgl and s_nents as well.
> + *
> + **/
> +size_t sgl_copy_sgl(struct scatterlist *d_sgl, unsigned int d_nents, off_t d_skip,
> +		    struct scatterlist *s_sgl, unsigned int s_nents, off_t s_skip,
> +		    size_t n_bytes)
> +{
> +	size_t len;
> +	size_t offset = 0;
> +	struct sg_mapping_iter d_iter, s_iter;
> +
> +	if (n_bytes == 0)
> +		return 0;
> +	sg_miter_start(&s_iter, s_sgl, s_nents, SG_MITER_ATOMIC | SG_MITER_FROM_SG);
> +	sg_miter_start(&d_iter, d_sgl, d_nents, SG_MITER_ATOMIC | SG_MITER_TO_SG);
> +	if (!sg_miter_skip(&s_iter, s_skip))
> +		goto fini;
> +	if (!sg_miter_skip(&d_iter, d_skip))
> +		goto fini;
> +
> +	while (offset < n_bytes) {
> +		if (!sg_miter_next(&s_iter))
> +			break;
> +		if (!sg_miter_next(&d_iter))
> +			break;
> +		len = min3(d_iter.length, s_iter.length, n_bytes - offset);
> +
> +		memcpy(d_iter.addr, s_iter.addr, len);
> +		offset += len;
> +		/* LIFO order (stop d_iter before s_iter) needed with SG_MITER_ATOMIC */
> +		d_iter.consumed = len;
> +		sg_miter_stop(&d_iter);
> +		s_iter.consumed = len;
> +		sg_miter_stop(&s_iter);
> +	}
> +fini:
> +	sg_miter_stop(&d_iter);
> +	sg_miter_stop(&s_iter);
> +	return offset;
> +}
> +EXPORT_SYMBOL(sgl_copy_sgl);
> +
>
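
For illustration, here is a minimal, hypothetical caller of the new
helper (demo_sgl_copy is an invented name; the sketch assumes the
existing sgl_alloc_order()/sgl_free() helpers from lib/scatterlist.c,
i.e. CONFIG_SGL_ALLOC, plus this patch applied):

#include <linux/scatterlist.h>
#include <linux/sizes.h>

/* Hypothetical sketch: duplicate the first 8 KiB of one sgl into
 * another. Both lists come from sgl_alloc_order(), so every entry
 * is a single order-0 page.
 */
static int demo_sgl_copy(void)
{
	struct scatterlist *src, *dst;
	unsigned int s_nents, d_nents;
	size_t copied;

	src = sgl_alloc_order(SZ_8K, 0, false, GFP_KERNEL, &s_nents);
	if (!src)
		return -ENOMEM;
	dst = sgl_alloc_order(SZ_8K, 0, false, GFP_KERNEL, &d_nents);
	if (!dst) {
		sgl_free(src);
		return -ENOMEM;
	}

	/* ... fill src, e.g. with sg_copy_from_buffer() ... */

	/* copy everything; no skip on either side */
	copied = sgl_copy_sgl(dst, d_nents, 0, src, s_nents, 0, SZ_8K);
	/* copied < SZ_8K would mean one of the lists was exhausted */

	sgl_free(dst);
	sgl_free(src);
	return 0;
}

Since the destination never grows, the return value is the only way to
detect a short copy.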

Second try, this time with the correct tag.

Reviewed-by: Bodo Stroesser <bostroesser@gmail.com>
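
Regarding the kernel-doc note about large skip values: a minimal,
hypothetical sketch of that integer arithmetic (sgl_entry_for_skip is
an invented name; it assumes an sgl built by sgl_alloc_order() with
chainable=false, i.e. an array of equally sized entries):

#include <linux/scatterlist.h>

/* Hypothetical: find the entry holding byte offset 'skip' and the
 * residual offset within it, instead of letting sg_miter_skip()
 * walk the list entry by entry.
 */
static inline struct scatterlist *
sgl_entry_for_skip(struct scatterlist *sgl, unsigned int order,
		   off_t skip, off_t *rem)
{
	off_t elen = PAGE_SIZE << order;	/* every entry is this long */

	*rem = skip % elen;
	return sgl + skip / elen;	/* valid: chainable=false yields an array */
}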
