Subject: Re: [PATCH -next v2 11/26] tty: Don't release tty locks for wait queue sanity check
On Wed, Nov 05, 2014 at 12:12:54PM -0500, Peter Hurley wrote:
> Releasing the tty locks while waiting for the tty wait queues to
> be empty is no longer necessary nor desirable. Prior to
> "tty: Don't take tty_mutex for tty count changes", dropping the
> tty locks was necessary to reestablish the correct lock order between
> tty_mutex and the tty locks. Dropping the global tty_mutex was necessary;
> otherwise new ttys could not have been opened while waiting.
>
> However, without needing the global tty_mutex held, the tty locks for
> the releasing tty can now be held through the sleep. The sanity check
> is for abnormal conditions caused by kernel bugs, not for recoverable
> errors caused by misbehaving userspace; dropping the tty locks only
> allows the tty state to get more sideways.
>
> Reviewed-by: Alan Cox <alan@linux.intel.com>
> Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
> ---
> drivers/tty/tty_io.c | 8 ++------
> 1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
> index e59de81c39a9..b008e2b38d54 100644
> --- a/drivers/tty/tty_io.c
> +++ b/drivers/tty/tty_io.c
> @@ -1798,13 +1798,10 @@ int tty_release(struct inode *inode, struct file *filp)
>           * first, its count will be one, since the master side holds an open.
>           * Thus this test wouldn't be triggered at the time the slave closes,
>           * so we do it now.
> -         *
> -         * Note that it's possible for the tty to be opened again while we're
> -         * flushing out waiters. By recalculating the closing flags before
> -         * each iteration we avoid any problems.
>           */
> +        tty_lock_pair(tty, o_tty);
> +
>          while (1) {
> -                tty_lock_pair(tty, o_tty);
>                  tty_closing = tty->count <= 1;
>                  o_tty_closing = o_tty &&
>                          (o_tty->count <= (pty_master ? 1 : 0));
> @@ -1835,7 +1832,6 @@ int tty_release(struct inode *inode, struct file *filp)
>
>                  printk(KERN_WARNING "%s: %s: read/write wait queue active!\n",
>                         __func__, tty_name(tty, buf));
> -                tty_unlock_pair(tty, o_tty);
>                  schedule();
>          }
>

This patch had the same type of fuzz as the previous one; the version I
used was:


diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
index e59de81c39a9..b008e2b38d54 100644
--- a/drivers/tty/tty_io.c
+++ b/drivers/tty/tty_io.c
@@ -1798,13 +1798,10 @@ int tty_release(struct inode *inode, struct file *filp)
          * first, its count will be one, since the master side holds an open.
          * Thus this test wouldn't be triggered at the time the slave closes,
          * so we do it now.
-         *
-         * Note that it's possible for the tty to be opened again while we're
-         * flushing out waiters. By recalculating the closing flags before
-         * each iteration we avoid any problems.
          */
+        tty_lock_pair(tty, o_tty);
+
         while (1) {
-                tty_lock_pair(tty, o_tty);
                 tty_closing = tty->count <= 1;
                 o_tty_closing = o_tty &&
                         (o_tty->count <= (pty_master ? 1 : 0));
@@ -1835,7 +1832,6 @@ int tty_release(struct inode *inode, struct file *filp)

                 printk(KERN_WARNING "%s: %s: read/write wait queue active!\n",
                        __func__, tty_name(tty, buf));
-                tty_unlock_pair(tty, o_tty);
                 schedule();
         }
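
For anyone reading along without the tree handy, the sanity-check loop in
tty_release() ends up looking roughly like this once the above is applied.
This is only a sketch reconstructed from the hunks; the middle of the loop
(the wait-queue test and the break out of the loop, which this patch does
not touch) is elided and indicated by a comment:

        /*
         * Sketch of the post-patch shape of the loop in tty_release():
         * the tty locks are taken once, before the loop, and are now held
         * across schedule() instead of being dropped and re-taken on
         * every iteration.
         */
        tty_lock_pair(tty, o_tty);

        while (1) {
                tty_closing = tty->count <= 1;
                o_tty_closing = o_tty &&
                        (o_tty->count <= (pty_master ? 1 : 0));

                /* ... unchanged code elided: test the read/write wait
                 * queues and break out of the loop once they are empty ... */

                printk(KERN_WARNING "%s: %s: read/write wait queue active!\n",
                       __func__, tty_name(tty, buf));
                schedule();     /* previously preceded by tty_unlock_pair() */
        }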

