Subject: Bug report (fork/expand_files)
From: Amnon Shiloh <>
Date: Wed, 08 Mar 2000 15:28:13 +0200
Hello,
Can someone tell me who is responsible for fixing this bug? I already posted the following bug report here, but there was no reply and no fix in 2.3.[48-50]:
Looking at Linux version 2.3.47, there is a discrepancy in the semantics of "expand_files", "expand_fdset" and "expand_fd_array" between their use by "fcntl" and "open" on the one hand, and their use in "copy_files" (kernel/fork.c) on the other:
In the first case, the second argument ("nr") is the number of the highest desired file descriptor, counting from 0, while in "copy_files" the argument is the requested total number of files, counting from 1 and therefore one higher.
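To make the off-by-one concrete, here is a small userspace toy model. The function grow_to_hold() is hypothetical; it only mimics the "double the table until descriptor nr fits" behaviour described above, not the actual 2.3.47 code:

	#include <stdio.h>

	/* Toy stand-in for expand_fdset()/expand_fd_array(): double the
	 * table until it can hold file descriptor number "nr", i.e.
	 * until it has at least nr+1 slots. */
	static int grow_to_hold(int cur_size, int nr)
	{
		while (cur_size <= nr)
			cur_size *= 2;
		return cur_size;
	}

	int main(void)
	{
		int size = 1024;	/* __FD_SETSIZE */

		/* fcntl/open semantics: pass the highest desired fd.
		 * A table of 1024 slots already holds fd 1023: */
		printf("index semantics: %d\n", grow_to_hold(size, size - 1));

		/* copy_files semantics: pass the number of files. The
		 * same table must now "hold" fd 1024, so it doubles: */
		printf("count semantics: %d\n", grow_to_hold(size, size));
		return 0;
	}

This prints 1024 for the first call and 2048 for the second: the same table, described in the two different conventions, yields two different allocations.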
As a result, once a parent process that has more than __FD_SETSIZE files (currently an overwhelming 1024, which is probably why nobody ever noticed) calls "fork", the child is allocated 2048 files, the grandchild 4096, and the 10th generation NR_OPEN (1024*1024) files. Further "fork"s down the line then fail for exceeding even this absolute limit -- unless an earlier fork already failed for exceeding RLIMIT_NOFILE (which defaults to INR_OPEN==1024).
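Iterating the same toy model shows the generational blow-up (again, grow_to_hold() is only a stand-in for the real expand functions):

	#include <stdio.h>

	#define NR_OPEN (1024*1024)

	static int grow_to_hold(int cur_size, int nr)
	{
		while (cur_size <= nr)
			cur_size *= 2;
		return cur_size;
	}

	int main(void)
	{
		int size = 1024;	/* the parent's allocation */
		int gen;

		/* Each fork passes the current count where an index is
		 * expected, so every generation doubles: */
		for (gen = 1; size < NR_OPEN; gen++) {
			size = grow_to_hold(size, size);
			printf("generation %2d: %7d files\n", gen, size);
		}
		return 0;
	}

The output doubles from 2048 at generation 1 up to 1048576 (NR_OPEN) at generation 10, after which a real fork could grow no further.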
Fixing:
-------

There are a hundred ways to fix this, but probably the easiest is to change "copy_files" so that:

	error = expand_fdset(newf, size);

becomes

	error = expand_fdset(newf, size-1);

and

	error = expand_fd_array(newf, open_files);

becomes

	error = expand_fd_array(newf, open_files-1);
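Under the same toy model, the corrected calls leave the allocation untouched across any number of forks (grow_to_hold() as in the sketches above, still hypothetical):

	#include <stdio.h>

	static int grow_to_hold(int cur_size, int nr)
	{
		while (cur_size <= nr)
			cur_size *= 2;
		return cur_size;
	}

	int main(void)
	{
		int size = 1024;
		int gen;

		/* With the off-by-one removed, each fork asks for the
		 * highest existing descriptor (size-1), which already
		 * fits, so the table never grows: */
		for (gen = 0; gen < 10; gen++)
			size = grow_to_hold(size, size - 1);
		printf("after 10 forks: %d files\n", size);
		return 0;
	}

This prints 1024: ten generations later the allocation is unchanged, as one would expect.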
Amnon Shiloh.