Good insights. Would the solution therefore be to synchronize on all the FileChannel methods (not just the ones you think need it), or is there another way to get around the "too many open files" error?
> is there another way to get around the "too many open files" error?
Since this isn't an actual leak, raising the limit should be fine. The default soft limit on Linux is 1024 mainly for the benefit of the select() system call, whose fd_set is capped at FD_SETSIZE (1024), but you can raise it much higher (e.g. with ulimit -n, or persistently in /etc/security/limits.conf) if nothing in the process relies on select().
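If you want to confirm that diagnosis from inside the JVM before touching limits, here's a quick sketch (Unix only; com.sun.management is a JDK-specific API, but these methods do exist on HotSpot/OpenJDK):

```java
import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

// Sanity-check that descriptors are merely hitting the limit
// rather than leaking.
public class FdWatch {
    public static void main(String[] args) {
        UnixOperatingSystemMXBean os = (UnixOperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        System.out.printf("open fds: %d / max %d%n",
                os.getOpenFileDescriptorCount(),
                os.getMaxFileDescriptorCount());
    }
}
```

If the open count climbs without bound even when the workload is steady, you do have a leak and raising the limit only delays the failure.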
You can't just lock around the file operations, since the problem comes from thread interruption: the channel is closed as a side effect of interrupting a thread that is blocked in it (see the repro sketch below). No interrupt, no problem. So instead you would need to make file operations and _any_ thread interrupt mutually exclusive.
Finding and patching every location that could interrupt a thread while it is doing file I/O is probably a fool's errand.
So raising the limit, or load balancing (depending on the type of application), is probably the best solution.
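For the curious, here is a minimal repro of the mechanism. It assumes /tmp/demo.dat exists, and it's timing-dependent (a very fast read may complete before the interrupt lands):

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Interrupting a thread that is in (or about to enter) a FileChannel
// operation closes the channel for every thread sharing it.
public class InterruptRepro {
    public static void main(String[] args) throws Exception {
        FileChannel ch = FileChannel.open(Path.of("/tmp/demo.dat"),
                StandardOpenOption.READ);
        Thread reader = new Thread(() -> {
            try {
                ch.read(ByteBuffer.allocate(4096), 0);
            } catch (Exception e) {
                // Typically ClosedByInterruptException
                System.out.println("reader failed: " + e);
            }
        });
        reader.start();
        reader.interrupt(); // the interrupt, not the read itself, is the problem
        reader.join();
        System.out.println("channel still open? " + ch.isOpen());
    }
}
```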
No, the solution would be to handle the unexpected close explicitly and gracefully, for example by reopening the channel and retrying the higher-level operation once, roughly like the sketch below.
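A minimal sketch of that idea, assuming a long-lived read-only channel; the class and method names are made up:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical wrapper: reopen the channel and retry the read once
// if an interrupt closed it underneath us.
final class ReopeningReader implements AutoCloseable {
    private final Path path;
    private FileChannel channel;

    ReopeningReader(Path path) throws IOException {
        this.path = path;
        this.channel = FileChannel.open(path, StandardOpenOption.READ);
    }

    int read(ByteBuffer dst, long position) throws IOException {
        try {
            return channel.read(dst, position);
        } catch (ClosedByInterruptException e) {
            // The interrupt machinery already closed the channel. Clear the
            // thread's interrupt status first, or the reopened channel will
            // be closed again the moment the retry blocks.
            Thread.interrupted();
            channel = FileChannel.open(path, StandardOpenOption.READ);
            return channel.read(dst, position); // retry exactly once
        }
    }

    @Override
    public void close() throws IOException {
        channel.close();
    }
}
```

Note that clearing the interrupt status is a deliberate policy choice here; if callers rely on interruption for cancellation, restore the flag after the retry with Thread.currentThread().interrupt().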
The other question is why the threads are receiving interrupts in the first place. Depending on the reason, a different course of action might be appropriate.
Cache the file contents, perhaps? Isolate actual file I/O to dedicated threads and vend reads and writes from them? Buffer writes in memory, only flushing at some interval or when the buffer fills up? Use a DB server rather than raw files?
Lots of ways to skin this cat, but it really depends on what the application is doing and why.
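Of those, the dedicated-I/O-thread idea maps most directly to code. A sketch under assumed names (FileIoService is hypothetical):

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Only the executor's single thread ever touches the channel, and
// nothing interrupts that thread, so ClosedByInterruptException
// cannot occur no matter what happens to the calling threads.
final class FileIoService {
    private final ExecutorService ioExecutor = Executors.newSingleThreadExecutor();
    private final FileChannel channel;

    FileIoService(FileChannel channel) {
        this.channel = channel;
    }

    Future<Integer> read(ByteBuffer dst, long position) {
        return ioExecutor.submit(() -> channel.read(dst, position));
    }
}
```

The one caveat is that callers must never call Future.cancel(true) on the returned future, since interrupting the I/O thread reintroduces exactly the problem being avoided.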