[Gluster-devel] Too many open files


Thu Apr 12 06:13:29 UTC 2007


Brent,
  this time the fd leak was in posix-locks. posix-locks is not yet
completely ready; please use it only after we announce on the list
that the feature is available. I have fixed the leak in posix-locks.
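  Until then, one workaround is to drop the features/posix-locks layer
from the spec and let io-threads sit directly on the posix volume. A
rough sketch using the volume names from Brent's spec (untested here,
just to show the wiring):

  volume disk0
    type storage/posix
    option directory /share0
  end-volume

  volume share0
    type performance/io-threads
    option thread-count 8
    subvolumes disk0        # lock0 layer removed until posix-locks is ready
  end-volume
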
  About the mtime bug: I had a wrong understanding of how direct_io
enabled files are handled by the kernel during close(). The write()s
issued before close() and the utime() call that cp makes afterwards
were racing in different threads, so sometimes utime() would run
before the last write(), and that later write() would then clobber
the effect of the utime() call. The appropriate changes are in now
and it should hopefully work correctly.
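
  If you want a quick way to check that the fix holds, the sequence
below mimics what cp does (write, close, then utime to restore the
timestamp) and verifies that the mtime set by utime() survives. It is
just a standalone sketch, not glusterfs code; the mount path and the
timestamp are made up:

  /* write, close, utime, then stat to see whether utime()'s mtime stuck */
  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <utime.h>
  #include <sys/stat.h>

  int
  main (void)
  {
    const char *path = "/mnt/glusterfs/mtime-test";   /* made-up path */
    int fd = open (path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror ("open"); return 1; }

    const char buf[] = "hello";
    if (write (fd, buf, sizeof (buf)) < 0) { perror ("write"); return 1; }
    close (fd);

    /* restore a fixed mtime, the way cp -a restores timestamps */
    struct utimbuf times = { .actime = 1000000000, .modtime = 1000000000 };
    if (utime (path, &times) != 0) { perror ("utime"); return 1; }

    struct stat st;
    if (stat (path, &st) != 0) { perror ("stat"); return 1; }
    printf ("mtime is %ld (expected 1000000000)\n", (long) st.st_mtime);
    return st.st_mtime == 1000000000 ? 0 : 1;
  }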

avati

On Wed, Apr 11, 2007 at 03:35:34PM -0400, Brent A Nelson wrote:
> On Wed, 11 Apr 2007, Krishna Srinivas wrote:
> 
> >Hi Brent,
> >
> >I tried to reproduce it again but was not successful. I am not using
> >unify. Can you send me your spec files again, just so that I don't
> >miss anything? Also, can you tell me everything you did to trigger the
> >problem? Any other info that might help me reproduce it?
> >
> >Also, regarding the write-behind+mtime bug, can you check out the
> >latest code and see whether rsync or "cp -a" still hits the problem?
> >Avati has made some changes.
> >
> >Thanks
> >Krishna
> >
> 
> Spec files are attached.  The two nodes use the same server and client 
> spec files (both servers are also the clients).
> 
> Thanks,
> 
> Brent

> volume disk0
>   type storage/posix                   # POSIX FS translator
>   option directory /share0             # Export this directory
> end-volume
> 
> volume lock0
>   type features/posix-locks
>   subvolumes disk0
> end-volume
> 
> volume share0
>   type performance/io-threads
>   option thread-count 8
>   subvolumes lock0
> end-volume
> 
> volume server
>   type protocol/server
>   option transport-type tcp/server     # For TCP/IP transport
> # option transport-type ib-sdp/server  # For Infiniband transport
> # option bind-address 192.168.1.10     # Default is to listen on all interfaces
> # option listen-port 6996               # Default is 6996
>   option client-volume-filename /etc/glusterfs/glusterfs-client.vol
>   subvolumes share0
> # NOTE: Access to any volume through protocol/server is denied by
> # default. You need to explicitly grant access through the "auth"
> # option.
>   option auth.ip.share0.allow 128.227.64.*,128.227.89.*
> end-volume

> volume share0-0
>   type protocol/client
>   option transport-type tcp/client     # for TCP/IP transport
> # option transport-type ib-sdp/client  # for Infiniband transport
>   option remote-host 128.227.64.163    # IP address of the remote brick
> # option remote-port 6996              # default server port is 6996
>   option remote-subvolume share0       # name of the remote volume
> end-volume
> volume share0-1
>   type protocol/client
>   option transport-type tcp/client     # for TCP/IP transport
> # option transport-type ib-sdp/client  # for Infiniband transport
>   option remote-host 128.227.64.164    # IP address of the remote brick
> # option remote-port 6996              # default server port is 6996
>   option remote-subvolume share0       # name of the remote volume
> end-volume
> 
> volume mirror0
>   type cluster/afr
>   subvolumes share0-0 share0-1
>   option replicate *:2     # Do not leave space before or after "," and ":"
> end-volume
> 
> #volume mirrors
> #  type cluster/unify
> #  subvolumes mirror0
> ##  option scheduler rr
> #  option scheduler alu
> #  option alu.order disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
> #  option alu.disk-usage.entry-threshold 2GB
> #  option alu.disk-usage.exit-threshold 60MB
> #  option alu.limits.min-free-disk 5GB
> #  option alu.stat-refresh.interval 10sec
> #end-volume
> 
> volume statprefetch
>   type performance/stat-prefetch
>   option cache-seconds 2
>   subvolumes mirror0
> end-volume
> 
> #volume writebehind
> #  type performance/write-behind
> #  option aggregate-size 131072 # in bytes
> #  subvolumes statprefetch
> #end-volume
> 
> volume readahead
>   type performance/read-ahead
>   option page-size 65536 ### in bytes
>   option page-count 16 ### memory cache size is page-count x page-size per file
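> ### e.g. 16 x 65536 bytes = 1 MiB of read-ahead cache per open file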
>   subvolumes statprefetch
> end-volume


-- 
ultimate_answer_t
deep_thought (void)
{ 
  sleep (years2secs (7500000)); 
  return 42;
}




