From: Tobin C. Harding <tobin@kernel.org>
Subject: [PATCH v4 2/7] slob: Respect list_head abstraction layer
Date: 17 Mar 2019

Currently we reach inside the list_head. This violates the layer of
abstraction provided by the list_head, and it makes the code fragile.
More importantly, it makes the code wicked hard to understand.

The code logic is based on the page in which an allocation was made: we
want to rotate the slob_list we are working on so that this page is at
the front. We already have a function to check if an entry is at the
front of the list, and a function was recently added to list.h to do
the list rotation. We can use these two functions to reduce line count,
reduce code fragility, and reduce the cognitive load required to read
the code.

Use list_head functions to interact with lists, thereby maintaining the
abstraction provided by the list_head structure.

Signed-off-by: Tobin C. Harding <tobin@kernel.org>
---
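A note for reviewers: since list_rotate_to_front() is new to list.h,
here is roughly how the two helpers used below behave (a simplified
sketch; see include/linux/list.h for the real definitions):

  /* True if @list is the first entry after the list head @head. */
  static inline int list_is_first(const struct list_head *list,
                                  const struct list_head *head)
  {
          return list->prev == head;
  }

  /*
   * Rotate the list so that @list becomes the first entry, by moving
   * the head @head to sit just before @list.
   */
  static inline void list_rotate_to_front(struct list_head *list,
                                          struct list_head *head)
  {
          list_move_tail(head, list);
  }

As an illustration (not from the patch): with pages A-B-C-D on
slob_list, if the allocation is satisfied from C and C stays on the
list, next == C and the list rotates to C-D-A-B; if slob_page_alloc()
dropped C from the list, next is D and we rotate to D-A-B instead.
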
mm/slob.c | 24 ++++++++++++++++--------
1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/mm/slob.c b/mm/slob.c
index 307c2c9feb44..39ad9217ffea 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -268,8 +268,7 @@ static void *slob_page_alloc(struct page *sp, size_t size, int align)
  */
 static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 {
-	struct page *sp;
-	struct list_head *prev;
+	struct page *sp, *prev, *next;
 	struct list_head *slob_list;
 	slob_t *b = NULL;
 	unsigned long flags;
@@ -296,18 +295,27 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 		if (sp->units < SLOB_UNITS(size))
 			continue;
 
+		/*
+		 * Cache previous entry because slob_page_alloc() may
+		 * remove sp from slob_list.
+		 */
+		prev = list_prev_entry(sp, lru);
+
 		/* Attempt to alloc */
-		prev = sp->lru.prev;
 		b = slob_page_alloc(sp, size, align);
 		if (!b)
 			continue;
 
-		/* Improve fragment distribution and reduce our average
+		next = list_next_entry(prev, lru); /* This may or may not be sp */
+
+		/*
+		 * Improve fragment distribution and reduce our average
 		 * search time by starting our next search here. (see
-		 * Knuth vol 1, sec 2.5, pg 449) */
-		if (prev != slob_list->prev &&
-				slob_list->next != prev->next)
-			list_move_tail(slob_list, prev->next);
+		 * Knuth vol 1, sec 2.5, pg 449)
+		 */
+		if (!list_is_first(&next->lru, slob_list))
+			list_rotate_to_front(&next->lru, slob_list);
+
 		break;
 	}
 	spin_unlock_irqrestore(&slob_lock, flags);
--
2.21.0