 
From: Andrey Ryabinin
Subject: [PATCH v6 00/11] Kernel address sanitizer - runtime memory debugger.
Date: 2014-11-05
    KASan is a runtime memory debugger designed to find use-after-free
    and out-of-bounds bugs.

    Currently KASAN is supported only for the x86_64 architecture and requires the kernel
    to be built with the SLUB allocator.
    KASAN uses compile-time instrumentation to check every memory access, so you
    will need a fresh GCC >= v4.9.2.

    Patches are based on the mmotm-2014-10-23-16-26 tree and are also available in git:

    git://github.com/aryabinin/linux --branch=kasan/kasan_v6

    Changes since v5:
    - Added __printf(3, 4) to slab_err to catch format mismatches (Joe Perches)

    - Updated Documentation/kasan.txt per Jonathan's comments.

    - Patch for inline instrumentation support merged into the first patch.
    GCC 5.0 finally has support for this.
    - Patch 'kasan: Add support for upcoming GCC 5.0 asan ABI changes' also merged into the first.
    Those GCC ABI changes are in GCC's master branch now.

    - Added information about instrumentation types to documentation.

    - Added -fno-conserve-stack to CFLAGS for the mm/kasan/kasan.c file, because -fconserve-stack is bogus
    and causes an unnecessary split in __asan_load1/__asan_store1. Because of this split,
    kasan_report() is not actually inlined (even though it is __always_inline) and _RET_IP_ gives
    an unexpected value. GCC bugzilla entry: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533

    Changes since v4:
    - rebased on top of mmotm-2014-10-23-16-26

    - Merged the patch 'efi: libstub: disable KASAN for efistub' into the first patch.
    No reason to keep it separate.

    - Added support for the upcoming asan ABI changes in GCC 5.0 (second patch).
    The GCC patch has not been published/upstreamed yet, but it will be soon. I'm adding this in advance
    to avoid breaking kasan with a future GCC update.
    Details about the GCC ABI changes are in this thread: https://gcc.gnu.org/ml/gcc-patches/2014-10/msg02510.html

    - Updated GCC version requirements in the documentation (the GCC kasan patches were backported to the 4.9 branch)

    - Dropped the last patch with inline instrumentation support. Let's first wait for the GCC patches to be merged.

    Changes since v3:

    - rebased on the latest mm tree
    - Added a comment about rcu slabs.
    - Removed useless kasan_free_slab_pages().
    - Removed the __asan_init_v*() stub. GCC doesn't generate this call anymore:
    https://gcc.gnu.org/ml/gcc-patches/2014-10/msg00269.html
    - Replaced the CALL_KASAN_REPORT define with an inline function

    Changes since v2:

    - Shadow moved to the vmalloc area.
    - Added a poison page. This page is mapped to the shadow corresponding to
    the shadow region itself:
    [kasan_mem_to_shadow(KASAN_SHADOW_START) - kasan_mem_to_shadow(KASAN_SHADOW_END)]
    It is used to catch memory accesses to the shadow from outside mm/kasan/.

    - Fixed boot with CONFIG_DEBUG_VIRTUAL=y
    - Fixed boot with KASan and stack protector enabled
    (patch "x86_64: load_percpu_segment: read irq_stack_union.gs_base before load_segment")

    - Fixed build with CONFIG_EFI_STUB=y
    - Some slub-specific stuff moved from mm/slab.h to include/linux/slub_def.h
    - Fixed Kconfig dependency. CONFIG_KASAN depends on CONFIG_SLUB_DEBUG.
    - Optimizations of __asan_load/__asan_store.
    - Spelling fixes from Randy.
    - Misc minor cleanups in different places.


    - Added inline instrumentation in the last patch. This will require two
    not-yet-in-trunk patches for GCC:
    https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00452.html
    https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00605.html

    Changes since v1:

    - The main change is in the shadow memory layout.
    We now reserve 1/8 of all virtual addresses available for the kernel for shadow memory:
    16TB on x86_64, to cover all 128TB of the kernel's address space.
    At an early stage we map the whole shadow region with the zero page.
    Later, after physical memory is mapped into the direct mapping address range,
    we unmap the zero pages from the corresponding shadow and allocate and map real
    memory.

    - Since the per-arch work is much bigger now, support for the arm/x86_32 platforms was dropped.

    - CFLAGS were changed from -fsanitize=address with various --params to -fsanitize=kernel-address

    - If the compiler doesn't support -fsanitize=kernel-address, a warning is printed and the build continues without -fsanitize

    - Removed the kasan_memset/kasan_memcpy/kasan_memmove hooks. It turned out that these hooks are not needed.
    The compiler already instruments memset/memcpy/memmove (it inserts __asan_load/__asan_store calls before the mem*() calls).

    - branch profiling disabled for mm/kasan/kasan.c to avoid recursion (__asan_load -> ftrace_likely_update -> __asan_load -> ...)

    - kasan hooks for the buddy allocator moved to the right places


    Comparison with other debugging features:
    =========================================

    KMEMCHECK:
    - KASan can do almost everything that kmemcheck can. KASan uses compile-time
    instrumentation, which makes it significantly faster than kmemcheck.
    The only advantage of kmemcheck over KASan is detection of uninitialized
    memory reads.

    DEBUG_PAGEALLOC:
    - KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
    granularity, so it is able to find more bugs.

    SLUB_DEBUG (poisoning, redzones):
    - SLUB_DEBUG has lower overhead than KASan.

    - SLUB_DEBUG is in most cases not able to detect bad reads;
    KASan is able to detect both bad reads and writes.

    - In some cases (e.g. a redzone overwrite) SLUB_DEBUG detects
    bugs only on allocation/freeing of an object. KASan catches
    bugs right before they happen, so we always know the exact
    place of the first bad read/write.


    Basic idea:
    ===========

    The main idea of KASAN is to use shadow memory to record whether each byte of memory
    is safe to access or not, and to use the compiler's instrumentation to check the shadow
    memory on each memory access.

    Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow memory
    (on x86_64, 16TB of virtual address space is reserved for shadow to cover all 128TB)
    and uses a direct mapping with a scale and offset to translate a memory
    address to its corresponding shadow address.

    Here is the function that translates an address to its corresponding shadow address:

    unsigned long kasan_mem_to_shadow(unsigned long addr)
    {
            return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
    }

    where KASAN_SHADOW_SCALE_SHIFT = 3.
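
    For example, since every shadow byte covers an aligned 8-byte granule, an
    8-byte-aligned address addr and addr + 7 map to the same shadow byte, while
    addr + 8 maps to the next one.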

    So for every 8 bytes of memory there is one corresponding byte of shadow memory.
    The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
    corresponding memory region are valid for access; k (1 <= k <= 7) means that
    the first k bytes are valid for access, and the other (8 - k) bytes are not;
    any negative value indicates that the entire 8-byte region is inaccessible.
    Different negative values are used to distinguish between different kinds of
    inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
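
    To make the encoding concrete, here is a minimal sketch of the shadow check
    for a 1-byte access. It follows the encoding above and is close in spirit to
    the checks in mm/kasan/kasan.c, but it is not the exact code from this series
    (s8 and bool come from <linux/types.h>):

    static bool memory_is_poisoned_1(unsigned long addr)
    {
            s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);

            if (shadow_value) {
                    /* offset of the accessed byte within its 8-byte granule */
                    s8 last_byte = addr & ((1UL << KASAN_SHADOW_SCALE_SHIFT) - 1);

                    /*
                     * Poisoned if the offset is past the last valid byte, or
                     * if the whole granule is inaccessible (negative value).
                     */
                    return last_byte >= shadow_value;
            }
            return false;
    }

    A shadow value of 0 takes the fast path; only granules that are partially
    valid or fully poisoned need the second comparison.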

    To be able to detect accesses to bad memory we need a special compiler.
    Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
    before each memory access of size 1, 2, 4, 8 or 16.

    These functions check whether the memory region is valid to access by checking the
    corresponding shadow memory. If the access is not valid, an error is printed.
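
    For illustration, the transformation looks roughly like this (an assumed
    shape of the instrumentation, not actual GCC output; __asan_store8 is one
    of the hooks implemented in mm/kasan/kasan.c):

    void __asan_store8(unsigned long addr);

    void store_val(unsigned long *p, unsigned long x)
    {
            /* inserted by the compiler: check shadow before the access */
            __asan_store8((unsigned long)p);
            *p = x;
    }

    The same applies to reads, with the matching __asan_load* call inserted
    before the load.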

    Andrey Ryabinin (11):
    Add kernel address sanitizer infrastructure.
    x86_64: load_percpu_segment: read irq_stack_union.gs_base before
    load_segment
    x86_64: add KASan support
    mm: page_alloc: add kasan hooks on alloc and free paths
    mm: slub: introduce virt_to_obj function.
    mm: slub: share slab_err and object_err functions
    mm: slub: introduce metadata_access_enable()/metadata_access_disable()
    mm: slub: add kernel address sanitizer support for slub allocator
    fs: dcache: manually unpoison dname after allocation to shut up
    kasan's reports
    kmemleak: disable kasan instrumentation for kmemleak
    lib: add kasan test module

    Documentation/kasan.txt | 169 ++++++++++++
    Makefile | 23 +-
    arch/x86/Kconfig | 1 +
    arch/x86/boot/Makefile | 2 +
    arch/x86/boot/compressed/Makefile | 2 +
    arch/x86/include/asm/kasan.h | 27 ++
    arch/x86/kernel/Makefile | 2 +
    arch/x86/kernel/cpu/common.c | 4 +-
    arch/x86/kernel/dumpstack.c | 5 +-
    arch/x86/kernel/head64.c | 9 +-
    arch/x86/kernel/head_64.S | 28 ++
    arch/x86/mm/Makefile | 3 +
    arch/x86/mm/init.c | 3 +
    arch/x86/mm/kasan_init_64.c | 87 +++++++
    arch/x86/realmode/Makefile | 2 +-
    arch/x86/realmode/rm/Makefile | 1 +
    arch/x86/vdso/Makefile | 1 +
    drivers/firmware/efi/libstub/Makefile | 1 +
    fs/dcache.c | 6 +
    include/linux/kasan.h | 69 +++++
    include/linux/sched.h | 3 +
    include/linux/slab.h | 11 +-
    include/linux/slub_def.h | 10 +
    lib/Kconfig.debug | 2 +
    lib/Kconfig.kasan | 54 ++++
    lib/Makefile | 1 +
    lib/test_kasan.c | 254 ++++++++++++++++++
    mm/Makefile | 4 +
    mm/compaction.c | 2 +
    mm/kasan/Makefile | 7 +
    mm/kasan/kasan.c | 468 ++++++++++++++++++++++++++++++++++
    mm/kasan/kasan.h | 54 ++++
    mm/kasan/report.c | 238 +++++++++++++++++
    mm/kmemleak.c | 6 +
    mm/page_alloc.c | 3 +
    mm/slab_common.c | 5 +-
    mm/slub.c | 55 +++-
    scripts/Makefile.lib | 10 +
    38 files changed, 1617 insertions(+), 15 deletions(-)
    create mode 100644 Documentation/kasan.txt
    create mode 100644 arch/x86/include/asm/kasan.h
    create mode 100644 arch/x86/mm/kasan_init_64.c
    create mode 100644 include/linux/kasan.h
    create mode 100644 lib/Kconfig.kasan
    create mode 100644 lib/test_kasan.c
    create mode 100644 mm/kasan/Makefile
    create mode 100644 mm/kasan/kasan.c
    create mode 100644 mm/kasan/kasan.h
    create mode 100644 mm/kasan/report.c

    --
    Cc: Dmitry Vyukov <dvyukov@google.com>
    Cc: Konstantin Serebryany <kcc@google.com>
    Cc: Dmitry Chernenkov <dmitryc@google.com>
    Cc: Andrey Konovalov <adech.fo@gmail.com>
    Cc: Yuri Gribov <tetra2005@gmail.com>
    Cc: Konstantin Khlebnikov <koct9i@gmail.com>
    Cc: Sasha Levin <sasha.levin@oracle.com>
    Cc: Michal Marek <mmarek@suse.cz>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Christoph Lameter <cl@linux.com>
    Cc: Pekka Enberg <penberg@kernel.org>
    Cc: David Rientjes <rientjes@google.com>
    Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Andi Kleen <andi@firstfloor.org>
    Cc: Vegard Nossum <vegard.nossum@gmail.com>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: <x86@kernel.org>
    Cc: <linux-mm@kvack.org>
    Cc: Randy Dunlap <rdunlap@infradead.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Alexander Viro <viro@zeniv.linux.org.uk>
    Cc: Dave Jones <davej@redhat.com>
    Cc: Jonathan Corbet <corbet@lwn.net>
    Cc: Joe Perches <joe@perches.com>
    --
    2.1.3


