Date: Thu, 9 Mar 2000 13:39:51 +0100 (CET)
From: Andrea Arcangeli <>
Subject: Re: new IRQ scalability changes in 2.3.48
On Mon, 13 Mar 2000, Ingo Molnar wrote:
>no. First, this is definitely not going to happen before 2.5. And even in
>that case we do not care about copy_user latencies. Adding 'may preempt'
So why is the lowlatency patch doing this stuff, if you think current->need_resched is going to be zero during the copy_user business?
--- linux/include/asm-i386/uaccess.h.orig Sat Feb 5 03:13:21 2000
+++ linux/include/asm-i386/uaccess.h Sat Feb 5 03:22:15 2000
@@ -6,6 +6,7 @@
  */
 #include <linux/config.h>
 #include <linux/sched.h>
+#include <linux/condsched.h>
 #include <asm/page.h>
 
 #define VERIFY_READ 0
@@ -253,6 +254,7 @@
 #define __copy_user(to,from,size) \
 do { \
         int __d0, __d1; \
+        conditional_schedule(); \
         __asm__ __volatile__( \
                 "0: rep; movsl\n" \
                 " movl %3,%0\n" \
@@ -275,6 +277,7 @@
 #define __copy_user_zeroing(to,from,size) \
 do { \
         int __d0, __d1; \
+        conditional_schedule(); \
         __asm__ __volatile__( \
                 "0: rep; movsl\n" \
                 " movl %3,%0\n" \
@@ -324,6 +327,7 @@
         int __d0, __d1; \
         switch (size & 3) { \
         default: \
+                conditional_schedule(); \
                 __asm__ __volatile__( \
                         "0: rep; movsl\n" \
                         "1:\n" \
@@ -408,6 +412,7 @@
         int __d0, __d1; \
         switch (size & 3) { \
         default: \
+                conditional_schedule(); \
                 __asm__ __volatile__( \
                         "0: rep; movsl\n" \
                         "1:\n" \

>We care about generic unbound kernel-space routines. Quality RT-capable
>kernel code will only happen in the long run if _all_ (but a well
>specified) section can be preempted and spinlocks are held for a bound
                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>amount of time, and we do not need any explicit 'cpu_preemptable'
                                        ^^^^^^^^^^^^^^
Unless you also want to add a slowww conditional schedule within spin_unlock() (one that triggers when the last held lock is released and need_resched is 1), the preemptible kernel won't preempt anything if the timeslice expired while any spinlock was held. And you have still paid, for sure, the cost of avoiding preemption during spin_lock/unlock.
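
To make that concrete, here is roughly what such a slow unlock would have to look like. This is only an illustrative sketch, not code from any existing patch: the preempt_spin_lock/preempt_spin_unlock names and the per-CPU spinlock_depth counter are invented for the example. The point is that schedule() can only run once the outermost lock is dropped and need_resched got set in the meanwhile:

#include <linux/sched.h>
#include <linux/spinlock.h>
#include <linux/smp.h>
#include <linux/threads.h>

/* hypothetical per-CPU lock nesting counter, invented for this sketch */
static int spinlock_depth[NR_CPUS];

#define preempt_spin_lock(lock) \
do { \
        spinlock_depth[smp_processor_id()]++; \
        spin_lock(lock); \
} while (0)

#define preempt_spin_unlock(lock) \
do { \
        spin_unlock(lock); \
        /* reschedule only when the last held lock is released */ \
        if (!--spinlock_depth[smp_processor_id()] && current->need_resched) \
                schedule(); \
} while (0)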
So you pay and you get _nothing_ unless you pay even more.
IMHO most of the kernel code runs with at least some basic serialization lock held, so the preemptible kernel is not going to have many chances to preempt.
And the only places in the kernel that would really be worth making preemptible are the CPU bound parts that run with no locks held. copy_user is exactly that case. Making copy_user preemptible costs only a per-CPU cacheline touch at ret_from_interrupt time. Making the kernel fully preemptible means locking bloat, and you still need explicit reschedule points all over the place to provide low latency.
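
For reference, the conditional_schedule() called by the patch above is nothing more than a test of the need_resched flag on the current task. A minimal sketch of what a <linux/condsched.h> could contain, assuming that convention (the exact definition in the lowlatency patch may differ):

#ifndef _LINUX_CONDSCHED_H
#define _LINUX_CONDSCHED_H

#include <linux/sched.h>

/* Voluntary preemption point: reschedule only if the scheduler already
   asked for it, so the fast path is a single flag test on current. */
#define conditional_schedule() \
do { \
        if (current->need_resched) \
                schedule(); \
} while (0)

#endif /* _LINUX_CONDSCHED_H */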
For example in free_inodes the preemptible kernel won't buy you anything. The same goes for shrink_mmap and most other places where you correctly added the conditional schedule in the lowlatency patch. You'll have to keep the conditional schedule there even with the preemptible kernel. And you'll also pay the double cost in the lock fast path.
I currently don't believe the preemptible kernel is the way to go. Allowing the CPU bound parts that run with no locks held to be preempted looks instead like a nice latency optimization with no downside.
Andrea
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/