io-threads (was Re: [Gluster-devel] write-behind mtime workaround)

Anand Avati avati at
Fri Apr 27 23:30:14 UTC 2007

  this is a valuable observation. my initial suspicion is that NFS's
open+write+close for every write (to stay stateless) creates a separate
fd for each write chunk, and those fds end up in separate I/O
threads. io-threads currently assigns fds to threads with a
'least-used-at-the-moment' policy. changing this to a static policy
based on inode number might help.
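
the idea could be sketched roughly like this (names here are
illustrative, not the actual io-threads translator API): instead of
picking the least-loaded thread per fd, hash the file's inode number,
so every fd that refers to the same inode always lands on the same
thread and its writes stay ordered.

```c
#include <assert.h>
#include <stdint.h>

#define NUM_IO_THREADS 8

/* hypothetical sketch of an inode-number-based static policy:
 * all fds on the same inode map to the same I/O thread, so
 * operations on one file are serialized within one queue. */
static int
thread_for_inode (uint64_t ino)
{
  return (int) (ino % NUM_IO_THREADS);
}
```

with this, NFS's per-write open/close would no longer scatter one
file's writes across threads, since the inode (not the fd) picks the
queue.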

thanks again for the observation!

On Fri, Apr 27, 2007 at 07:06:17PM -0400, Brent A Nelson wrote:
> Hmm, it looks like io-threads is responsible for more than just mtime 
> glitches when used with write-behind.  I just found that the problems I 
> had with NFS re-export go away when I get rid of io-threads (plus, now 
> that I can enable write-behind, the NFS write performance is far better, 
> by at least a factor of 5)!
> It looks like I'll be switching off io-threads for now, and turning on all 
> the other performance enhancements.
> Thanks,
> Brent
> On Fri, 27 Apr 2007, Brent A Nelson wrote:
> >On Thu, 26 Apr 2007, Anand Avati wrote:
> >
> >>Brent,
> >>I understand what is happening. It is because I/O threads lets the
> >>mtime overtake the write call. I assume you have loaded io-threads on
> >>server side (or below write-behind on client side).
> >
> >Yes, I have io-threads loaded on the server.  This occurs when I load 
> >write-behind on the client.
> >
> >>I could provide you a temporary 'ugly' fix just for you if the issue is 
> >>critical (until the proper framework comes in 1.4)
> >
> >It would be worthwhile if the temporary fix is acceptable for the 1.3 
> >release (otherwise, you'll need a warning included with the release, so 
> >that people enabling io-threads and write-behind know what to expect), but 
> >don't waste your time if it's just for me.  Push on to 1.4 and the real 
> >fix; I'll just leave write-behind disabled for now.
> >
> >Many Thanks,
> >
> >Brent
> >

deep_thought (void)
{
  sleep (years2secs (7500000));
  return 42;
}