Subject: Re: [PATCH] iosched: Add i10 I/O Scheduler
From: Jens Axboe <>
Date: Fri, 13 Nov 2020 14:03:05 -0700
On 11/13/20 1:34 PM, Sagi Grimberg wrote:
>
>> I haven't taken a close look at the code yet so far, but one quick note
>> that patches like this should be against the branches for 5.11. In fact,
>> this one doesn't even compile against current -git, as
>> blk_mq_bio_list_merge is now called blk_bio_list_merge.
>
> Ugh, I guess that Jaehyun had this patch bottled up and didn't rebase
> before submitting.. Sorry about that.
>
>> In any case, I did run this through some quick peak testing as I was
>> curious, and I'm seeing about 20% drop in peak IOPS over none running
>> this. Perf diff:
>>
>>     10.71%    -2.44%  [kernel.vmlinux]  [k] read_tsc
>>      2.33%    -1.99%  [kernel.vmlinux]  [k] _raw_spin_lock
>
> You ran this with nvme? or null_blk? I guess neither would benefit
> from this because if the underlying device will not benefit from
> batching (at least enough for the extra cost of accounting for it) it
> will be counter productive to use this scheduler.
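(Side note on the compile failure: it's a straight rename, the signature
didn't change:

    bool blk_bio_list_merge(struct request_queue *q, struct list_head *list,
                            struct bio *bio, unsigned int nr_segs);

so the rebase should just be s/blk_mq_bio_list_merge/blk_bio_list_merge/ at
the call site. Untested, and going from memory on the prototype.)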
This is nvme, an actual device. The initial posting could be a bit more explicit on the use case; it says:
"For NVMe SSDs, the i10 I/O scheduler achieves ~60% improvements in terms of IOPS per core over "noop" I/O scheduler."
which made me very skeptical, as it sounds like a raw device claim.
Does beg the question of why this is a new scheduler, then. It's pretty basic stuff, something that could trivially be added as a side effect in the core (and in fact we have much of it already). Doesn't really seem to warrant a new scheduler at all; there isn't really much in there.
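Much of that batching already falls out of plugging, for one. Rough sketch
of what any submitter gets today (simplified, obviously not the i10 code,
and bio_a/bio_b are just placeholders):

    struct blk_plug plug;

    blk_start_plug(&plug);
    /* bios submitted here are held back on the plug list... */
    submit_bio(bio_a);
    submit_bio(bio_b);
    /* ...and flushed to the driver as a batch here */
    blk_finish_plug(&plug);

The point stands: batched dispatch is already a core mechanism, not
something that needs its own scheduler.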
>>> [5] https://github.com/i10-kernel/upstream-linux/blob/master/dss-evaluation.pdf
>>
>> Was curious and wanted to look it up, but it doesn't exist.
>
> I think this is the right one:
> https://github.com/i10-kernel/upstream-linux/blob/master/i10-evaluation.pdf
>
> We had some back and forth around the naming, hence this was probably
> omitted.
That works; my local results were a bit worse than what's listed in there, though. And what does this mean:
"We note that Linux I/O scheduler introduces an additional kernel worker thread at the I/O dispatching stage"
It most certainly does not for the common/hot case.
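The hot path runs the hardware queue straight from the submitting task's
context; kblockd only gets involved for the async/requeue cases. Simplified
sketch of the decision (mirroring what blk_mq_run_hw_queue() ends up doing,
not the verbatim code):

    static void run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
    {
    	if (!async) {
    		/* common case: dispatch inline, in the submitter's context */
    		__blk_mq_run_hw_queue(hctx);
    		return;
    	}
    	/* slow path: punt to the kblockd workqueue */
    	kblockd_mod_delayed_work_on(blk_mq_hctx_next_cpu(hctx),
    				    &hctx->run_work, 0);
    }

No extra worker thread unless we can't (or shouldn't) run the queue inline.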
--
Jens Axboe