Subject: [ 021/153] ext4: fix race in ext4_mb_add_n_trim()

3.2-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Niu Yawei <yawei.niu@gmail.com>

commit f1167009711032b0d747ec89a632a626c901a1ad upstream.

In ext4_mb_add_n_trim(), lg_prealloc_lock should be taken when
changing the lg_prealloc_list.

Signed-off-by: Niu Yawei <yawei.niu@intel.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 fs/ext4/mballoc.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -4178,7 +4178,7 @@ static void ext4_mb_add_n_trim(struct ex
 		/* The max size of hash table is PREALLOC_TB_SIZE */
 		order = PREALLOC_TB_SIZE - 1;
 	/* Add the prealloc space to lg */
-	rcu_read_lock();
+	spin_lock(&lg->lg_prealloc_lock);
 	list_for_each_entry_rcu(tmp_pa, &lg->lg_prealloc_list[order],
 						pa_inode_list) {
 		spin_lock(&tmp_pa->pa_lock);
@@ -4202,12 +4202,12 @@ static void ext4_mb_add_n_trim(struct ex
 	if (!added)
 		list_add_tail_rcu(&pa->pa_inode_list,
 					&lg->lg_prealloc_list[order]);
-	rcu_read_unlock();
+	spin_unlock(&lg->lg_prealloc_lock);
 
 	/* Now trim the list to be not more than 8 elements */
 	if (lg_prealloc_count > 8) {
 		ext4_mb_discard_lg_preallocations(sb, lg,
-						order, lg_prealloc_count);
+						  order, lg_prealloc_count);
 		return;
 	}
 	return ;
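
The race being fixed is a general one: rcu_read_lock() lets readers
traverse an RCU-protected list safely, but it does nothing to serialize
writers, so two CPUs adding to the same lg_prealloc_list concurrently can
corrupt it; writers must take the list's spinlock. As a rough illustration
only, here is a minimal userspace sketch of the same rule using pthreads
in place of the kernel's spinlock API. None of the names below (node,
head, list_lock, worker) come from mballoc.c.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Shared singly-linked list, standing in for lg_prealloc_list. */
struct node {
	int val;
	struct node *next;
};

static struct node *head;
/* Writer-side lock, playing the role of lg_prealloc_lock. */
static pthread_spinlock_t list_lock;

static void list_add(int val)
{
	struct node *n = malloc(sizeof(*n));

	n->val = val;
	/* Without this lock, two writers can both read the same old
	 * head and one insertion is silently lost: the userspace
	 * analogue of mutating the list under rcu_read_lock() alone. */
	pthread_spin_lock(&list_lock);
	n->next = head;
	head = n;
	pthread_spin_unlock(&list_lock);
}

static void *worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100000; i++)
		list_add(i);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;
	long count = 0;

	pthread_spin_init(&list_lock, PTHREAD_PROCESS_PRIVATE);
	pthread_create(&t1, NULL, worker, NULL);
	pthread_create(&t2, NULL, worker, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);

	for (struct node *n = head; n; n = n->next)
		count++;
	/* With the lock held around each insert, this prints 200000. */
	printf("nodes: %ld\n", count);
	return 0;
}

Build and run with: cc -O2 -pthread sketch.c && ./a.out. Dropping the
lock/unlock pair around the insert makes the final count come up short on
a multicore machine, which is the lost-update symptom the patch prevents.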


