Subject: Re: block: DMA alignment of IO buffer allocated from slab
Hi Vitaly,

On Wed, Sep 19, 2018 at 11:41:07AM +0200, Vitaly Kuznetsov wrote:
> Ming Lei <tom.leiming@gmail.com> writes:
>
> > Hi Guys,
> >
> > Some storage controllers have a DMA alignment limit, which is often set
> > via blk_queue_dma_alignment(), e.g. 512-byte alignment for the IO buffer.
>
> While most drivers use 512-byte alignment, it is not a universal rule;
> 'git grep' tells me we have:
> ide-cd.c with 32-byte alignment,
> ps3disk.c and rsxx/dev.c with variable alignment.
>
> What if our block configuration consists of several devices (in a RAID
> array, for example) with different requirements, e.g. one requiring
> 512-byte alignment and the other requiring 256-byte alignment?

A 512-byte aligned buffer is also 256-byte aligned, and the sector size is
512 bytes.
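
To make the mask semantics concrete, here is a rough sketch (the function
names example_set_dma_alignment() and example_buf_is_aligned() are made up
for illustration; only blk_queue_dma_alignment() and blk_rq_aligned() are
real): the limit is stored as a mask, so a driver passes 511 to request
512-byte alignment, and a buffer that satisfies a 512-byte mask trivially
satisfies a 256-byte one as well.

#include <linux/blkdev.h>

/* illustrative only: declare a 512-byte DMA alignment limit for a queue */
static void example_set_dma_alignment(struct request_queue *q)
{
	/* the argument is a mask: alignment - 1, so 511 == 512-byte alignment */
	blk_queue_dma_alignment(q, 511);
}

/* illustrative only: test a buffer the way the pass-through path does */
static int example_buf_is_aligned(struct request_queue *q, void *buf,
				  unsigned int len)
{
	/*
	 * blk_rq_aligned() checks both the address and the length against
	 * the queue's alignment mask; since 256 divides 512, a buffer that
	 * passes a 512-byte mask also passes a 256-byte one.
	 */
	return blk_rq_aligned(q, (unsigned long)buf, len);
}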

>
> >
> > The block layer currently only checks whether this limit is respected
> > for the buffer of a pass-through request, see blk_rq_map_user_iov()
> > and bio_map_user_iov().
> >
> > The userspace buffer for direct IO is checked in the dio path, see
> > do_blockdev_direct_IO(). An IO buffer from the page cache should be
> > fine wrt. this limit too.
> >
> > However, some file systems, such as XFS, may allocate a single-sector
> > IO buffer via slab. Usually, I would guess, kmalloc-512 should be fine
> > and return a 512-byte aligned buffer. But once KASAN or other slab
> > debug options are enabled, it looks like this is no longer true:
> > kmalloc-512 may not return a 512-byte aligned buffer. Then data
> > corruption can be observed, because the IO buffer from the fs layer no
> > longer respects the DMA alignment limit.
> >
> > Several related questions follow:
> >
> > 1) Does the kmalloc-N slab guarantee to return an N-byte aligned
> > buffer? If yes, is it a stable rule?
> >
> > 2) If it is a rule for the kmalloc-N slab to return an N-byte aligned
> > buffer, it seems KASAN violates this rule?
>
> (as I was kinda involved in the debugging): the issue was observed with
> the SLUB allocator. KASAN is not to blame; anything which requires
> additional metadata space will break this, see e.g. calculate_sizes()
> in slub.c.

A buffer allocated via kmalloc() should be aligned to at least the L1 HW
cache line size.

I have raised the question of whether the kmalloc-512 slab guarantees to
return a 512-byte aligned buffer; let's see what the answer from the MM
guys is :-)
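
If it turns out that kmalloc-512 does not guarantee 512-byte alignment, one
possible direction (just a sketch of mine; the cache name and init function
are made up) would be for the fs to allocate such sector buffers from a
dedicated cache created with an explicit 'align' argument instead of
relying on kmalloc():

#include <linux/slab.h>

static struct kmem_cache *sector_buf_cache;

static int __init sector_buf_cache_init(void)
{
	/*
	 * Object size 512 with an explicit 512-byte alignment request, so
	 * the alignment is asked for directly rather than implied by the
	 * kmalloc-512 object size.
	 */
	sector_buf_cache = kmem_cache_create("sector_buf_example", 512, 512,
					     0, NULL);
	return sector_buf_cache ? 0 : -ENOMEM;
}

Buffers would then come from kmem_cache_alloc(sector_buf_cache, GFP_NOFS)
and go back via kmem_cache_free().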

From the Red Hat BZ, my understanding is that this issue is only triggered
when KASAN is enabled; or have you figured out how to reproduce it without
KASAN involved?

>
> >
> > 3) If slab can't guarantee to return a 512-byte aligned buffer, how do
> > we fix this data corruption issue?
>
> I'm no expert in the block layer, but in the case of complex block device
> configurations, where the bio submitter can't know all the requirements,
> I see no other choice than bouncing.

I guess that might be the last resort, given that the current approach
without bouncing has worked for decades, and it seems no one has complained
before.
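
Just to make the bouncing idea concrete, a rough sketch (the helper name
and the page-allocation choice are mine, not an existing kernel API): copy
an unaligned buffer into freshly allocated, page-aligned memory before
submission, since page-aligned memory satisfies any sector-level alignment
mask.

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/string.h>

/* illustrative helper, not an existing kernel API */
static void *bounce_if_unaligned(void *buf, size_t len,
				 unsigned long align_mask, gfp_t gfp)
{
	void *aligned;

	if (!((unsigned long)buf & align_mask))
		return buf;		/* already satisfies the mask */

	/* page-aligned memory satisfies any sector-level alignment */
	aligned = (void *)__get_free_pages(gfp, get_order(len));
	if (aligned)
		memcpy(aligned, buf, len);
	return aligned;
}

The caller would have to remember whether it bounced, copy data back for
reads, and free the pages afterwards, which is exactly the overhead we have
avoided so far.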

Thanks,
Ming
