Subject: [patch] buffer locks more fine-grained
Is there a subtle reason why we are holding these locks where it isn't
necessary? I can't see one, so IMO the patch below should be applied.

--- 2.3.16-bufferlock/fs/buffer.c.~1~	Wed Sep  1 09:30:03 1999
+++ 2.3.16-bufferlock/fs/buffer.c	Wed Sep  1 17:20:22 1999
@@ -520,19 +520,20 @@
 	write_lock(&hash_table_lock);
 	if (bh->b_pprev)
 		__hash_unlink(bh);
-	__remove_from_lru_list(bh, bh->b_list);
 	write_unlock(&hash_table_lock);
+	__remove_from_lru_list(bh, bh->b_list);
 }
 
 static void insert_into_queues(struct buffer_head *bh)
 {
 	struct buffer_head **head = &hash(bh->b_dev, bh->b_blocknr);
 
-	spin_lock(&lru_list_lock);
 	write_lock(&hash_table_lock);
 	__hash_link(bh, head);
-	__insert_into_lru_list(bh, bh->b_list);
 	write_unlock(&hash_table_lock);
+
+	spin_lock(&lru_list_lock);
+	__insert_into_lru_list(bh, bh->b_list);
 	spin_unlock(&lru_list_lock);
 }
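
For illustration only (this is not part of the patch): a minimal
user-space sketch of the same pattern, using pthreads in place of the
kernel spinlocks. The names hash_lock/lru_lock and the list fields are
hypothetical stand-ins for hash_table_lock, lru_list_lock and the
buffer_head lists; the point is just that each lock covers only the
structure it protects, and the two critical sections are no longer
nested.

#include <pthread.h>
#include <stddef.h>

struct buffer {
	struct buffer *hash_next;	/* stand-in for the hash chain */
	struct buffer *lru_next;	/* stand-in for the LRU list */
};

static struct buffer *hash_head;
static struct buffer *lru_head;

static pthread_rwlock_t hash_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t  lru_lock  = PTHREAD_MUTEX_INITIALIZER;

void insert(struct buffer *b)
{
	/* hash_lock covers only the hash-chain update... */
	pthread_rwlock_wrlock(&hash_lock);
	b->hash_next = hash_head;
	hash_head = b;
	pthread_rwlock_unlock(&hash_lock);

	/* ...and lru_lock covers only the LRU update, taken
	   afterwards instead of being held across both. */
	pthread_mutex_lock(&lru_lock);
	b->lru_next = lru_head;
	lru_head = b;
	pthread_mutex_unlock(&lru_lock);
}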

Andrea

