Subject: Re: [PATCH] mm/zsmalloc: strength reduce zspage_size calculation
Hi Joey,

On Mon, Feb 26, 2018 at 02:21:26AM -1000, Joey Pabalinas wrote:
> Replace the repeated multiplication in the main loop
> body calculation of zspage_size with an equivalent
> (and cheaper) addition operation.
>
> Signed-off-by: Joey Pabalinas <joeypabalinas@gmail.com>
>
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index c3013505c30527dc42..647a1a2728634b5194 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -821,15 +821,15 @@ static enum fullness_group fix_fullness_group(struct size_class *class,
> */
> static int get_pages_per_zspage(int class_size)
> {
> + int zspage_size = 0;
> int i, max_usedpc = 0;
> /* zspage order which gives maximum used size per KB */
> int max_usedpc_order = 1;
>
> for (i = 1; i <= ZS_MAX_PAGES_PER_ZSPAGE; i++) {
> - int zspage_size;
> int waste, usedpc;
>
> - zspage_size = i * PAGE_SIZE;
> + zspage_size += PAGE_SIZE;
> waste = zspage_size % class_size;
> usedpc = (zspage_size - waste) * 100 / zspage_size;
>

Thanks for the patch! However, this function is used only in zs_create_pool,
which is a really cold path, so I don't think it would bring any improvement
in practice.

Thanks.

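For reference, here is a minimal standalone C sketch of the loop with the
patch applied, so the strength reduction (a running sum in place of the
per-iteration multiply i * PAGE_SIZE) can be compiled and tested outside
the kernel. PAGE_SIZE and ZS_MAX_PAGES_PER_ZSPAGE are stand-in values,
and the loop tail (the max_usedpc bookkeeping and the return) is
reconstructed from mm/zsmalloc.c rather than quoted from the patch:

#include <stdio.h>

#define PAGE_SIZE 4096			/* stand-in; arch-dependent in the kernel */
#define ZS_MAX_PAGES_PER_ZSPAGE 4	/* stand-in; config-dependent in the kernel */

static int get_pages_per_zspage(int class_size)
{
	int zspage_size = 0;
	int i, max_usedpc = 0;
	/* zspage order which gives maximum used size per KB */
	int max_usedpc_order = 1;

	for (i = 1; i <= ZS_MAX_PAGES_PER_ZSPAGE; i++) {
		int waste, usedpc;

		/* running sum replaces zspage_size = i * PAGE_SIZE */
		zspage_size += PAGE_SIZE;
		waste = zspage_size % class_size;
		usedpc = (zspage_size - waste) * 100 / zspage_size;

		if (usedpc > max_usedpc) {
			max_usedpc = usedpc;
			max_usedpc_order = i;
		}
	}

	return max_usedpc_order;
}

int main(void)
{
	/* a 3072-byte class wastes nothing at 3 pages: prints 3 */
	printf("%d\n", get_pages_per_zspage(3072));
	return 0;
}

Note that an optimizing compiler will often perform this kind of
induction-variable strength reduction on its own, which, together with the
cold-path point above, is why the win here is mostly cosmetic.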