[Gluster-devel] What would cause this slowdown?

Anand Avati avati at zresearch.com
Mon Jun 4 12:21:31 UTC 2007


Please try it with the write-behind translator on the client side.
This is likely happening because each metadata operation delays the
turnaround time of every write operation, which can drastically
affect the overall throughput. The write-behind translator was
written specifically to handle this case: it makes the application's
write turnaround time independent of the server's response time.
Loading it should also improve your regular write throughput. If the
11MB/s you got was over Gig/E, then loading write-behind on the
client will surely improve that.
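
For reference, a client-side stanza could look like this (a sketch
only, not tested: it stacks write-behind over the 'bricks' unify
volume from your client config and reuses the aggregate-size value
you already had on the server side; stacking it over your existing
'iothreads' volume would work the same way):

volume writebehind
  type performance/write-behind
  option aggregate-size 131072  # in bytes; buffer writes up to this size before sending them out
  subvolumes bricks             # stack over the cluster/unify volume
end-volume

Note that in your current setup write-behind sits on the server side,
where it can only hide disk latency; every application write still
waits for the full network round trip. Loaded on the client, it
returns to the application immediately, which is what decouples the
two. That would explain why loading it on the server made no
difference.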

thanks,
avati

2007/6/4, Dale Dude <dale at oc3networks.com>:
> Adding to my own email: this only happens when I run du/rsync on the
> glusterfs mount on one of the glusterfsd servers. If I do a du/rsync
> on a machine that is just a client, there is no slowdown.
>
> Dale Dude wrote:
> > using 2007-06-01 gluster tla 2.4mainline, linux 2.6.15 and 2.6.20,
> > fuse 2.6.5
> >
> > If I have a single large copy going to the glusterfs volume, the
> > speed can peak at 11MB/s, which is OK. When I do just a 'du -sh
> > /mnt/glusterfs' while the transfer is happening, the transfer falls
> > to about 300kB/s until the du is done. The same happens when I run
> > an rsync while it is only comparing files (before it has started
> > transferring anything).
> >
> > I'm curious why a copy would slow down so much.
> >
> > The gluster-*.conf files are below. For the client conf I was using
> > the unify example from the wiki, with the same results. I also tried
> > writebehind and iothreads, again with the same results.
> >
> > Thanks in advance,
> > Dale
> >
> >
> > *glusterfs-server.vol:*
> > volume volume1
> >  type storage/posix           # POSIX FS translator
> >  option directory /volumes/clusterfs   # Export this directory
> > end-volume
> >
> > volume locks
> >  type features/posix-locks
> >  subvolumes volume1
> > end-volume
> >
> > volume iothreads    #iothreads can give performance a boost
> >   type performance/io-threads
> >   option thread-count 8
> >   subvolumes locks
> > end-volume
> >
> > volume writebehind
> >  type performance/write-behind
> >  option aggregate-size 131072 # in bytes
> >  subvolumes iothreads
> > end-volume
> >
> > ### Add network serving capability to above brick.
> > volume clusterfs
> >  type protocol/server
> >  option transport-type tcp/server  # For TCP/IP transport
> >  subvolumes writebehind
> >  option auth.ip.clusterfs.allow 192.168.*
> > end-volume
> >
> > --------------------------------------------------------------------
> >
> >
> > *glusterfs-client.vol:*
> > volume client1
> >         type protocol/client
> >         option transport-type tcp/client     # for TCP/IP transport
> >         option remote-host 192.168.10.10     # IP address of the remote brick
> >         option remote-subvolume clusterfs
> > end-volume
> >
> > #volume client2
> >         #type protocol/client
> >         #option transport-type tcp/client     # for TCP/IP transport
> >         #option remote-host 192.168.10.11     # IP address of the remote brick
> >         #option remote-subvolume clusterfs
> > #end-volume
> >
> > volume client3
> >         type protocol/client
> >         option transport-type tcp/client     # for TCP/IP transport
> >         option remote-host 192.168.10.16     # IP address of the remote brick
> >         option remote-subvolume clusterfs
> > end-volume
> >
> > volume client4
> >         type protocol/client
> >         option transport-type tcp/client     # for TCP/IP transport
> >         option remote-host 192.168.10.17     # IP address of the remote brick
> >         option remote-subvolume clusterfs
> > end-volume
> >
> >
> > volume bricks
> >        type cluster/unify
> >        subvolumes client1 client3 client4
> >        option scheduler alu
> >        option alu.limits.min-free-disk  6GB     # Don't create files on a volume with less than 6GB free disk space
> >        option alu.limits.max-open-files 10000   # Don't create files on a volume with more than 10000 open files
> >        option alu.order read-usage:write-usage
> >        option alu.read-usage.entry-threshold 20%    # Kick in when the read-usage discrepancy reaches 20%
> >        option alu.read-usage.exit-threshold 4%      # Don't stop until the discrepancy is down to 4%
> >        option alu.write-usage.entry-threshold 20%   # Kick in when the write-usage discrepancy reaches 20%
> >        option alu.write-usage.exit-threshold 4%     # Don't stop until the discrepancy is down to 4%
> >        option alu.stat-refresh.interval 10sec       # Refresh the statistics used for decision-making every 10 seconds
> > end-volume
> >
> > volume iothreads
> >        type performance/io-threads
> >        option thread-count 10
> >        subvolumes bricks
> > end-volume
> >


-- 
Anand V. Avati