Date: Thu, 23 Oct 2003 21:37:45 +0800 (WST)
From: Ian Kent <>
Subject: Re: [NFS] RE: [autofs] multiple servers per automount
Please forgive my ignorance, Ingo, but ...
I suffer from race condition blindness. A terrible affliction when one is trying to understand the subtleties of the kernel, but I'm trying.
While I am not questioning your suggestion, I have thought about the code and fail to see the race you point out. Please help me along.
On Wed, 15 Oct 2003, Ingo Oeser wrote:
> On Wednesday 15 October 2003 01:12, Mike Waychison wrote:
> > The problem still remains in 2.6 that we limit the count to 256. I've
> > attached a quick patch that I've compiled and tested. I don't know if
> > there is a better way to handle dynamic assignment of minors (haven't
> > kept up to date in that realm), but if there is, then we should probably
> > use it instead.
>
> In your patch you allocate inside the spinlock.
Do you mean we don't want to sleep under the spin lock? Would a GFP_ATOMIC make a difference to the analysis?
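To make sure I am asking the right question, here is roughly the alternative I had in mind. This is an untested sketch; unnamed_dev_lock and unnamed_dev_inuse are only my guesses at the identifiers in Mike's patch:

#include <linux/mm.h>
#include <linux/spinlock.h>

/* Hypothetical names, standing in for whatever Mike's patch uses. */
static spinlock_t unnamed_dev_lock = SPIN_LOCK_UNLOCKED;
static void *unnamed_dev_inuse;

static int alloc_unnamed_atomic(void)
{
	spin_lock(&unnamed_dev_lock);
	if (!unnamed_dev_inuse) {
		/*
		 * GFP_ATOMIC never sleeps, so the call is legal with
		 * the spinlock held, but it draws on the reserve
		 * pools and fails more readily under memory pressure.
		 */
		unnamed_dev_inuse = (void *) get_zeroed_page(GFP_ATOMIC);
		if (!unnamed_dev_inuse) {
			spin_unlock(&unnamed_dev_lock);
			return -ENOMEM;
		}
	}
	/* ... lookup and minor allocation would go here ... */
	spin_unlock(&unnamed_dev_lock);
	return 0;
}

If the objection is only to sleeping, something like this would seem to sidestep it, at the cost of a less reliable allocation.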
> I would suggest to do sth. like the following:
>
> void *local;
> if (!unamed_dev_inuse) {
> 	local = get_zeroed_page(GFP_KERNEL);
>
> 	if (!local)
> 		return -ENOMEM;
> }
>
> spinlock(&unamed_dev_lock);
> mb();
> if (!unamed_dev_inuse) {
> 	unamed_dev_inuse = local;
>
> 	/* Used globally, don't free now */
> 	local = NULL;
> }
>
> /*
>  Do the lookup and alloc
> */
>
> spinunlock(&unamed_dev_lock);
>
> /* Free page, because of race on allocation. */
> if (local)
> 	free_page(local);
>
> Which will swap the pointers atomically and still alloc outside the
> non-sleeping locking.
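So that I am sure we are reading the same thing, I fleshed your fragment out into a self-contained sketch (untested; the function name and the spelling of the identifiers are mine):

#include <linux/mm.h>
#include <linux/spinlock.h>

static spinlock_t unnamed_dev_lock = SPIN_LOCK_UNLOCKED;
static void *unnamed_dev_inuse;

static int alloc_unnamed_prealloc(void)
{
	void *local = NULL;

	/* Allocate outside the lock, where GFP_KERNEL is free to sleep. */
	if (!unnamed_dev_inuse) {
		local = (void *) get_zeroed_page(GFP_KERNEL);
		if (!local)
			return -ENOMEM;
	}

	spin_lock(&unnamed_dev_lock);
	/*
	 * Your fragment has an mb() here; spin_lock() already acts
	 * as a barrier, which is part of what I am asking about.
	 *
	 * Re-check under the lock: another CPU may have installed
	 * its own page while we were allocating.
	 */
	if (!unnamed_dev_inuse) {
		unnamed_dev_inuse = local;
		local = NULL;	/* now owned globally, don't free */
	}

	/* ... do the lookup and minor allocation here ... */

	spin_unlock(&unnamed_dev_lock);

	/* We lost the race on allocation; drop the spare page. */
	if (local)
		free_page((unsigned long) local);

	return 0;
}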
As I said, please give me a hint about your thinking here. And the use of a memory barrier as well ... umm?
--
,-._|\    Ian Kent
/     \   Perth, Western Australia
*_.--._/  E-mail: raven@themaw.net
      v   Web: http://themaw.net/