Subject: Re: race leading to held mutexes, inode_cache corruption
On Wed, 2 Apr 2008 00:13:04 -0400 "Sapan Bhatia" <sapan.bhatia@gmail.com> wrote:

> >
> >
> > That's the only way in which I can interpret your second paragraph, but as
> > far as I can tell the code cannot do that.
> >
> > Can you provide more detail?
> >
>
> On running the example again, it seems that attributing the problem to a
> generic locking bug was a misdiagnosis. I apologize for the misinformation.
> The error is more likely in a code path that takes a mutex_lock() without
> releasing it, or something else entirely. I'll investigate further and try
> to provide a more detailed
> description of the problem when I have something concrete.
>

OK, thanks.

Recent kernels have this:

config DEBUG_LOCK_ALLOC
	bool "Lock debugging: detect incorrect freeing of live locks"
	depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
	select DEBUG_SPINLOCK
	select DEBUG_MUTEXES
	select LOCKDEP
	help
	  This feature will check whether any held lock (spinlock, rwlock,
	  mutex or rwsem) is incorrectly freed by the kernel, via any of the
	  memory-freeing routines (kfree(), kmem_cache_free(), free_pages(),
	  vfree(), etc.), whether a live lock is incorrectly reinitialized via
	  spin_lock_init()/mutex_init()/etc., or whether there is any lock
	  held during task exit.

which seems rather relevant, no? ;)
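For illustration, here's a minimal sketch of the kind of bug that option
catches -- freeing an object whose embedded mutex is still held (struct foo
and buggy_teardown are made-up names, not anything from your code):

	#include <linux/mutex.h>
	#include <linux/slab.h>

	struct foo {
		struct mutex lock;
		int data;
	};

	static void buggy_teardown(struct foo *f)
	{
		mutex_lock(&f->lock);
		f->data = 0;
		/* BUG: f->lock is still held here; the matching
		 * mutex_unlock() never happens before the free. */
		kfree(f);
	}

With DEBUG_LOCK_ALLOC enabled, the memory-freeing routines check the freed
region for live locks, so the kfree() above produces a warning and a stack
trace at the point of the free rather than mysterious corruption later.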

