From: "Theodore Y. Ts'o" <>
Date: Mon, 8 Feb 1999 13:41:12 -0500 (EST)
Subject: Re: Linux-2.2.2-pre2..
   Date: Mon, 8 Feb 1999 12:32:05 +0100 (CET)
   From: Andrea Arcangeli <andrea@e-mind.com>
   These low-level tty drivers (just grep for tq_scheduler to see which) can queue a hangup task in tq_scheduler from inside an irq handler.
   _Then_ such a hangup task will eventually call our tty_hangup(), which will queue itself again in tq_scheduler. It's obvious that in many such cases do_tty_hangup() can be run again from the scheduler (and not from an aware piece of code like tty_io.c), and so do_tty_hangup() can race with the other CPU in the same way it did in the pty case in 2.2.1. Moving up the run_task_queue(&tq_scheduler) was fine for the pty case because a pty can hang up only when release_dev is called again, I think (and not when the hardware asks for a hangup, because there is no hardware for a pty).
   It seems to me that the _only_ reason we were queuing the hangup tasks in tq_scheduler was to get the lock_kernel() thing and to avoid running do_tty_hangup() from irq space (thus avoiding irq race issues). Is that the real reason?
Yes, the main reason why I did this was to avoid the irq race issues that had been reported to me on UP machines. I admit I didn't seriously consider the SMP issues, although I thought things would be safe.
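For concreteness, the pattern in question looks roughly like this in a 2.2-era driver (a minimal sketch only: the foo_* names are invented here, but struct tq_struct, queue_task(), tq_scheduler and tty_hangup() are the real 2.2 interfaces, and e.g. serial.c defers its hangups the same way):

    #include <linux/tqueue.h>     /* struct tq_struct, queue_task(), tq_scheduler */
    #include <linux/interrupt.h>
    #include <linux/tty.h>        /* struct tty_struct, tty_hangup() */

    /* Invented example driver state; a real driver keeps an
     * equivalent tq_struct in its per-port structure. */
    struct foo_port {
            struct tty_struct *tty;
            struct tq_struct tqueue_hangup;   /* must start zeroed */
    };

    /* Runs later from schedule() via tq_scheduler, in process context. */
    static void do_foo_hangup(void *arg)
    {
            struct foo_port *port = (struct foo_port *) arg;

            /* tty_hangup() queues tty->tq_hangup on tq_scheduler in
             * turn, so do_tty_hangup() eventually runs from schedule()
             * -- the second-level deferral described above. */
            tty_hangup(port->tty);
    }

    /* Irq handler: carrier drop detected, so defer the hangup. */
    static void foo_interrupt(int irq, void *dev_id, struct pt_regs *regs)
    {
            struct foo_port *port = (struct foo_port *) dev_id;

            port->tqueue_hangup.routine = do_foo_hangup;
            port->tqueue_hangup.data = port;
            queue_task(&port->tqueue_hangup, &tq_scheduler);
    }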
I used tq_scheduler since that seemed to be the easiest way to solve the IRQ race problems. If we don't want to make tq_scheduler SMP-lock safe (as per Andrea's patch), the other option is to create a new task queue dedicated to handling tty hangups, and then change tty_hangup() to use that new task queue. That may be a cleaner long-term solution.
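Such a dedicated queue might look something like this (again only a sketch: the tq_tty_hangup name and tty_run_hangup_queue() are assumptions about how it could be wired up, not settled design; DECLARE_TASK_QUEUE(), queue_task() and run_task_queue() are the existing 2.2 primitives):

    #include <linux/tqueue.h>
    #include <linux/smp_lock.h>   /* lock_kernel()/unlock_kernel() */
    #include <linux/tty.h>

    /* Hypothetical task queue reserved for tty hangups. */
    DECLARE_TASK_QUEUE(tq_tty_hangup);

    void tty_hangup(struct tty_struct *tty)
    {
            /* Queue on the dedicated queue instead of tq_scheduler. */
            queue_task(&tty->tq_hangup, &tq_tty_hangup);
    }

    /* Run from a single known-safe process context, under the big
     * kernel lock, so do_tty_hangup() never runs from irq space and
     * cannot race with itself on another CPU. */
    void tty_run_hangup_queue(void)
    {
            lock_kernel();
            run_task_queue(&tq_tty_hangup);
            unlock_kernel();
    }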
- Ted