    Subject: [ 018/221] mm: bootmem: fix free_all_bootmem_core() with odd bitmap alignment
    3.7-stable review patch.  If anyone has any objections, please let me know.

    ------------------

    From: Max Filippov <jcmvbkbc@gmail.com>

    commit 10d73e655cef6e86ea8589dca3df4e495e4900b0 upstream.

    Currently free_all_bootmem_core ignores that node_min_pfn may not be a
    multiple of BITS_PER_LONG.  E.g. commit 6dccdcbe2c3e ("mm: bootmem: fix
    checking the bitmap when finally freeing bootmem") shifts vec by the lower
    bits of start instead of the lower bits of idx.  Also

    if (IS_ALIGNED(start, BITS_PER_LONG) && vec == ~0UL)

    assumes that vec bit 0 corresponds to the start pfn, which is only true
    when node_min_pfn is a multiple of BITS_PER_LONG.  Also, the loop in the
    else clause can double-free pages (e.g. with node_min_pfn == start == 1 and
    map[0] == ~0, page 32 will be double-freed on a 32-bit machine).
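
    For illustration only, here is a minimal userspace sketch (not part of the
    patch; the bitmap value is made up and the "frees" are just printed) of the
    shift-by-start vs. shift-by-idx mismatch when node_min_pfn == 1:

        #include <stdio.h>

        #define BITS_PER_LONG (8 * (int)sizeof(unsigned long))

        int main(void)
        {
                unsigned long node_min_pfn = 1;          /* not BITS_PER_LONG aligned */
                unsigned long map[1] = { ~0UL & ~1UL };  /* only bit 0 clear: pfn 1 is free */
                unsigned long start = node_min_pfn;
                unsigned long idx = start - node_min_pfn;       /* == 0 */
                unsigned long vec = ~map[idx / BITS_PER_LONG];  /* == 1: pfn 1 looks free */

                /* pre-fix shift: uses start, drops bit 0, so pfn 1 is never freed */
                printf("shift by start: vec = %lx\n", vec >> (start & (BITS_PER_LONG - 1)));
                /* post-fix shift: uses idx, bit 0 still describes pfn 1 */
                printf("shift by idx:   vec = %lx\n", vec >> (idx & (BITS_PER_LONG - 1)));
                return 0;
        }

    With the old shift the bit describing pfn 1 is discarded, so the only
    genuinely free page is never released; in this node_min_pfn == 1 layout
    each remaining bit ends up describing the page one position above the
    one being freed.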

    This bug causes the following message during xtensa kernel boot:

    bootmem::free_all_bootmem_core nid=0 start=1 end=8000
    BUG: Bad page state in process swapper pfn:00001
    page:d04bd020 count:0 mapcount:-127 mapping: (null) index:0x2
    page flags: 0x0()
    Call Trace:
    bad_page+0x8c/0x9c
    free_pages_prepare+0x5e/0x88
    free_hot_cold_page+0xc/0xa0
    __free_pages+0x24/0x38
    __free_pages_bootmem+0x54/0x56
    free_all_bootmem_core$part$11+0xeb/0x138
    free_all_bootmem+0x46/0x58
    mem_init+0x25/0xa4
    start_kernel+0x11e/0x25c
    should_never_return+0x0/0x3be7

    The fix is as follows:
    - always align vec so that its bit 0 corresponds to start
    - provide BITS_PER_LONG bits in vec, if those bits are available in the
    map
    - don't free pages past the next start position in the else clause.
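
    A standalone sketch of the new vec construction (gather_vec() is an
    invented helper name and the bitmap values are illustrative; in the patch
    this logic sits inline in free_all_bootmem_core()):

        #include <stdio.h>

        #define BITS_PER_LONG (8 * (int)sizeof(unsigned long))

        /* Build vec so that bit 0 always corresponds to the pfn 'start'. */
        static unsigned long gather_vec(const unsigned long *map,
                                        unsigned long node_min_pfn,
                                        unsigned long start, unsigned long end)
        {
                unsigned long idx = start - node_min_pfn;
                unsigned shift = idx & (BITS_PER_LONG - 1);
                unsigned long vec = ~map[idx / BITS_PER_LONG];

                if (shift) {
                        vec >>= shift;
                        /* borrow the missing high bits from the next map word */
                        if (end - start >= BITS_PER_LONG)
                                vec |= ~map[idx / BITS_PER_LONG + 1] <<
                                        (BITS_PER_LONG - shift);
                }
                return vec;
        }

        int main(void)
        {
                /* node_min_pfn == 1: map[0] covers pfns 1..BITS_PER_LONG */
                unsigned long map[2] = { ~0UL, 0UL };   /* first word reserved, second free */
                unsigned long start = BITS_PER_LONG;    /* start aligned, idx == BITS_PER_LONG - 1 */
                unsigned long vec = gather_vec(map, 1, start, 1 + 2 * BITS_PER_LONG);

                /* bit 0 (pfn BITS_PER_LONG, still in map[0]) reads as reserved,
                   bits 1.. (pfns covered by map[1]) read as free */
                printf("vec = %lx\n", vec);
                return 0;
        }

    The cur != start bound added to the else clause then keeps this widened
    vec from freeing pages beyond the next BITS_PER_LONG boundary, which the
    following iteration covers again.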

    Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
    Cc: Gavin Shan <shangw@linux.vnet.ibm.com>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Tejun Heo <tj@kernel.org>
    Cc: Yinghai Lu <yinghai@kernel.org>
    Cc: Joonsoo Kim <js1304@gmail.com>
    Cc: Prasad Koya <prasad.koya@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    ---
    mm/bootmem.c | 24 ++++++++++++++++++------
    1 file changed, 18 insertions(+), 6 deletions(-)

    --- a/mm/bootmem.c
    +++ b/mm/bootmem.c
    @@ -185,10 +185,23 @@ static unsigned long __init free_all_boo

     	while (start < end) {
     		unsigned long *map, idx, vec;
    +		unsigned shift;
     
     		map = bdata->node_bootmem_map;
     		idx = start - bdata->node_min_pfn;
    +		shift = idx & (BITS_PER_LONG - 1);
    +		/*
    +		 * vec holds at most BITS_PER_LONG map bits,
    +		 * bit 0 corresponds to start.
    +		 */
     		vec = ~map[idx / BITS_PER_LONG];
    +
    +		if (shift) {
    +			vec >>= shift;
    +			if (end - start >= BITS_PER_LONG)
    +				vec |= ~map[idx / BITS_PER_LONG + 1] <<
    +					(BITS_PER_LONG - shift);
    +		}
     		/*
     		 * If we have a properly aligned and fully unreserved
     		 * BITS_PER_LONG block of pages in front of us, free
    @@ -201,19 +214,18 @@ static unsigned long __init free_all_boo
     			count += BITS_PER_LONG;
     			start += BITS_PER_LONG;
     		} else {
    -			unsigned long off = 0;
    +			unsigned long cur = start;
     
    -			vec >>= start & (BITS_PER_LONG - 1);
    -			while (vec) {
    +			start = ALIGN(start + 1, BITS_PER_LONG);
    +			while (vec && cur != start) {
     				if (vec & 1) {
    -					page = pfn_to_page(start + off);
    +					page = pfn_to_page(cur);
     					__free_pages_bootmem(page, 0);
     					count++;
     				}
     				vec >>= 1;
    -				off++;
    +				++cur;
     			}
    -			start = ALIGN(start + 1, BITS_PER_LONG);
     		}
     	}



