Date: Fri, 20 Jul 2018 17:22:54 -0700
From: Davidlohr Bueso <>
Subject: Re: [PATCH -next 0/2] fs/epoll: loosen irq safety when possible
On Fri, 20 Jul 2018, Andrew Morton wrote:
>Did you try measuring it on bare hardware?
I did, and wasn't expecting much difference.

The numbers below are for a 2-socket, 40-core (HT) IvyBridge on a few workloads. Unfortunately I don't have a Xen environment, and for the Xen results I do have (the numbers in patch 1) I don't have the actual workload, so I cannot compare the two directly.
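(For anyone jumping into the thread: the pattern being benchmarked is dropping the flags save/restore in paths known to run with IRQs enabled. A rough before/after sketch, not the actual diff; 'ep_lock' here just stands in for the eventpoll spinlock:)

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(ep_lock);

static void irqsave_version(void)
{
	unsigned long flags;

	/* irqsave: correct even if the caller already disabled IRQs */
	spin_lock_irqsave(&ep_lock, flags);
	/* ... critical section ... */
	spin_unlock_irqrestore(&ep_lock, flags);
}

static void loosened_version(void)
{
	/* plain _irq: cheaper, but assumes IRQs are enabled on entry */
	spin_lock_irq(&ep_lock);
	/* ... critical section ... */
	spin_unlock_irq(&ep_lock);
}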
1) An epoll_wait() (pipe I/O) microbenchmark (http://linux-scalability.org/epoll/epoll-test.c), run in various configurations, shows around a 7-10% improvement in the total number of times the epoll_wait() loop completes, with both regular and nested epolls. Very raw numbers, but measurable nonetheless. (A simplified sketch of the measured loop follows the two test results below.)
# threads    vanilla      dirty
        1    1677717    1805587
        2    1660510    1854064
        4    1610184    1805484
        8    1577696    1751222
       16    1568837    1725299
       32    1291532    1378463
       64     752584     787368
Note that stddev is pretty small.
2) Another pipe test, which shows no real measurable improvement. (http://www.xmailserver.org/linux-patches/pipetest.c)
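For context, the core of such a pipe-driven benchmark is roughly the following; a simplified userspace sketch, not the actual epoll-test.c:

/*
 * Sketch only: count how many times epoll_wait() returns a ready
 * pipe fd over a fixed number of iterations.
 */
#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

int main(void)
{
	struct epoll_event ev = { .events = EPOLLIN }, out;
	long loops = 0;
	int pfd[2];
	char c;

	int epfd = epoll_create1(0);
	if (epfd < 0 || pipe(pfd) < 0)
		return 1;

	ev.data.fd = pfd[0];
	epoll_ctl(epfd, EPOLL_CTL_ADD, pfd[0], &ev);

	for (int i = 0; i < 1000000; i++) {
		write(pfd[1], "x", 1);	/* make the read end ready */
		if (epoll_wait(epfd, &out, 1, -1) == 1 &&
		    read(pfd[0], &c, 1) == 1)
			loops++;
	}
	printf("epoll_wait() loops: %ld\n", loops);
	return 0;
}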
>> >
>> >I'd have more confidence if we had some warning mechanism if we run
>> >spin_lock_irq() when IRQs are disabled, which is probably-a-bug. But
>> >afaict we don't have that. Probably for good reasons - I wonder what
>> >they are?
>
>Well ignored ;)
>
>We could open-code it locally. Add a couple of
>WARN_ON_ONCE(irqs_disabled())? That might need re-benchmarking with
>Xen but surely just reading the thing isn't too expensive?
I agree; I'll see what I can come up with, and also ask the customer to test it in his setup. Bare metal would also need some new numbers, I guess.
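Something along these lines is what I have in mind (a sketch only; the wrapper name is made up for illustration):

#include <linux/bug.h>
#include <linux/irqflags.h>
#include <linux/spinlock.h>

/*
 * Sketch, not a real helper: warn once if we reach a spin_lock_irq()
 * call site with interrupts already disabled, since the matching
 * spin_unlock_irq() would then re-enable IRQs behind the caller's back.
 */
static inline void ep_spin_lock_irq(spinlock_t *lock)
{
	WARN_ON_ONCE(irqs_disabled());
	spin_lock_irq(lock);
}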
Thanks,
Davidlohr