From: Catalin Marinas <catalin.marinas@arm.com>
Subject: Re: [PATCH 3/3] btrfs: Avoid live-lock in search_ioctl() on hardware with sub-page faults
On Fri, Nov 26, 2021 at 11:29:45PM +0100, Andreas Gruenbacher wrote:
> On Thu, Nov 25, 2021 at 11:42 PM Catalin Marinas <catalin.marinas@arm.com> wrote:
> > As per Linus' reply, we can work around this by doing
> > a sub-page fault_in_writable(point_of_failure, align) where 'align'
> > should cover the copy_to_user() impreciseness.
> >
> > (of course, fault_in_writable() takes the full size argument but behind
> > the scene it probes the 'align' prefix at sub-page fault granularity)
>
> That doesn't make sense; we don't want fault_in_writable() to fail or
> succeed depending on the alignment of the address range passed to it.

If we know that the arch copy_to_user() has an error of say maximum 16
bytes (or rather 15 on arm64), we can instead get fault_in_writeable()
to probe the first 16 bytes rather than 1.
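
Something along these lines (rough, untested sketch only;
SUBPAGE_FAULT_GRANULE is a name I'm making up here for an assumed
per-arch constant, e.g. 16 on arm64 to cover the MTE tag granule):

#define SUBPAGE_FAULT_GRANULE	16	/* made-up per-arch constant */

size_t fault_in_writeable(char __user *uaddr, size_t size)
{
	char __user *start = uaddr, *end = uaddr + size;
	char __user *probe_end;

	if (unlikely(size == 0))
		return 0;

	/*
	 * Probe the first SUBPAGE_FAULT_GRANULE bytes byte by byte, so
	 * that a sub-page (e.g. MTE tag check) fault at the start of
	 * the buffer is reported here rather than only by the uaccess
	 * routine later.
	 */
	probe_end = uaddr + min_t(size_t, size, SUBPAGE_FAULT_GRANULE);
	while (uaddr < probe_end) {
		if (unlikely(put_user(0, uaddr)))
			goto out;
		uaddr++;
	}

	/* Probe the rest of the range at page granularity, as before. */
	uaddr = (char __user *)PAGE_ALIGN((unsigned long)uaddr);
	while (uaddr < end) {
		if (unlikely(put_user(0, uaddr)))
			goto out;
		uaddr += PAGE_SIZE;
	}
	return 0;
out:
	return end - uaddr;	/* bytes not faulted in */
}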

> Have a look at the below code to see what I mean. Function
> copy_to_user_nofault_unaligned() should be further optimized, maybe as
> mm/maccess.c:copy_from_kernel_nofault() and/or per architecture
> depending on the actual alignment rules; I'm not sure.
[...]
> --- a/fs/btrfs/ioctl.c
> +++ b/fs/btrfs/ioctl.c
> @@ -2051,13 +2051,30 @@ static noinline int key_in_sk(struct btrfs_key *key,
>  	return 1;
>  }
>
> +size_t copy_to_user_nofault_unaligned(void __user *to, void *from, size_t size)
> +{
> +	size_t rest = copy_to_user_nofault(to, from, size);
> +
> +	if (rest) {
> +		size_t n;
> +
> +		for (n = size - rest; n < size; n++) {
> +			if (copy_to_user_nofault(to + n, from + n, 1))
> +				break;
> +		}
> +		rest = size - n;
> +	}
> +	return rest;
> +}

That's what I was trying to avoid. That's basically a fall-back to a
byte-at-a-time copy (we do this in copy_mount_options(); at some point
we even had a copy_from_user_exact() IIRC).

Linus' idea (if I got it correctly) was instead to slightly extend the
probing in fault_in_writeable() for the beginning of the buffer from 1
byte to some per-arch range.
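
To illustrate on the btrfs side (again only a rough sketch, with
do_search_copy() as a made-up stand-in for the btrfs_search_forward() +
copy_to_sk() part of search_ioctl()):

	/*
	 * Without the sub-page probing, fault_in_writeable() can keep
	 * succeeding at page granularity while the copy keeps hitting
	 * the same sub-page fault, hence the live-lock.  With the
	 * probing, the fault_in_writeable() call after a failed copy
	 * trips over the same sub-page fault and returns non-zero, so
	 * we bail out with -EFAULT instead of spinning forever.
	 */
	while (1) {
		if (fault_in_writeable(ubuf + sk_offset,
				       *buf_size - sk_offset)) {
			ret = -EFAULT;
			break;
		}

		ret = do_search_copy(ubuf, &sk_offset);
		if (ret != -EFAULT)
			break;
		/* -EFAULT from the copy: fault in (or detect) and retry */
	}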

I attempted the above here and it works ok:

https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git/log/?h=devel/btrfs-live-lock-fix

but it's too late to post it this evening; I'll send it out in the next
day or so as an alternative to this series.

--
Catalin
