Subject: Re: Kernel compression benchmarks
Hello

I looked at the SVG graphs, and it appears that the formula used was
just T_decompress rather than T_load + T_decompress.

Without considering the time it takes to load the compressed data from
a storage device, the SVG graphs tell only half the story and can be
misleading.

There are three typical classes of device speed nowadays; the
sequential read speed of a large, non-fragmented compressed file is
roughly one of the following:

100 MB/s: rotational disks and USB flash drives
500 MB/s: SATA SSD
2 GB/s: NVMe SSD

The read speeds of USB flash devices vary a lot, but recent high-speed
USB flash drives fall into the same 100 MB/s category as rotational
disks. Taking USB flash read speed into consideration is important
when deciding which compression to use for the ISO image of a Linux
distribution.
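
To make the difference concrete, here is a minimal sketch (the 10 MB
image size is an assumption for illustration, not taken from the
benchmarks; the speeds are the three classes above):

    # T_load for a hypothetical 10 MB compressed image on each of the
    # three device classes listed above.
    size = 10e6                               # bytes, assumed image size
    for name, speed in [("HDD / USB flash", 100e6),
                        ("SATA SSD",        500e6),
                        ("NVMe SSD",       2000e6)]:
        t_load = size / speed                 # seconds of sequential read
        print(f"{name}: T_load = {t_load * 1000:.0f} ms")
    # -> 100 ms, 20 ms and 5 ms respectively

The same 10 MB image thus costs anywhere from 5 ms to 100 ms in T_load
alone, which is why the same compressor can be the right choice on one
device class and the wrong one on another.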

In summary: instead of the single kernel-decomp.svg file, there should
be three kernel-read-decomp.svg files, one per device class. The same
applies to the initramfs-decomp.svg file.

As a rule of thumb, if the kernel and initramfs are stored on an NVMe
SSD, simply select the fastest decompressor without considering the
compression ratio - or avoid compression entirely, in which case
T_decompress is zero.
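
A back-of-the-envelope sketch of that rule (all sizes and speeds below
are assumptions for illustration, not measured values):

    # Hypothetical: 40 MB kernel, 10 MB when compressed, NVMe SSD at
    # 2 GB/s, decompressor producing 400 MB/s of output.
    read_speed     = 2000e6                       # bytes/s
    t_uncompressed = 40e6 / read_speed            # 0.020 s
    t_compressed   = 10e6 / read_speed + 40e6 / 400e6
    #              = 0.005 s (load) + 0.100 s (decompress) = 0.105 s

The 15 ms saved on loading is dwarfed by the 100 ms spent
decompressing, so on NVMe the uncompressed image wins unless the
decompressor is far faster than the assumed 400 MB/s of output.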

The formula T_load + T_decompress assumes that loading and
decompression do not overlap. If they run in parallel, the formula
should be max(T_load, T_decompress).
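
A minimal sketch of the two models, with purely hypothetical timings:

    def total_time(t_load, t_decompress, pipelined=False):
        # Pipelined: decompression consumes data as it is read, so the
        # slower stage dominates.  Sequential: the stages add up.
        return max(t_load, t_decompress) if pipelined else t_load + t_decompress

    print(total_time(0.100, 0.060))                  # 0.16 s (sequential)
    print(total_time(0.100, 0.060, pipelined=True))  # 0.10 s (pipelined)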

Sincerely
Jan
