[Gluster-users] Gluster-users Digest, Vol 86, Issue 1 - Message 5: client load high using FUSE mount

Ben England bengland at redhat.com
Mon Jun 1 12:31:43 UTC 2015



----- Original Message -----
> From: gluster-users-request at gluster.org
> To: gluster-users at gluster.org
> Sent: Monday, June 1, 2015 8:00:01 AM
> Subject: Gluster-users Digest, Vol 86, Issue 1
> 
> Message: 5
> Date: Mon, 01 Jun 2015 13:11:13 +0200
> From: Mitja Mihelič <mitja.mihelic at arnes.si>
> To: gluster-users at gluster.org
> Subject: [Gluster-users] Client load high (300) using fuse mount
> Message-ID: <556C3DD1.1080100 at arnes.si>
> Content-Type: text/plain; charset=utf-8; format=flowed
> 
> Hi!
> 
> I am trying to set up a Wordpress cluster using GlusterFS for
> storage. The web nodes will access the same Wordpress install on a volume
> mounted via FUSE from a 3-peer GlusterFS trusted storage pool (TSP).
> 
> I started with one web node and Wordpress on local storage. The load
> average was consistently around 5 and stayed below 6; iotop showed about
> 300 kB/s of disk reads or less.
> 
> When I mounted the GlusterFS volume on the web node, the 1-minute load
> average went over 300. Each of the 3 peers is transmitting about 10 MB/s
> to my web node, regardless of load.
> The TSP peers are on 10-Gbit NICs and the web node is on a 1-Gbit NIC.

30 MB/s is about a quarter of line rate for a 1-Gbps NIC port.  It sounds like network latency and the lack of client-side caching may be your bottleneck; you might want to put a 10-Gbps NIC port in your client.  You did disable client-side caching (the md-cache and io-cache translators) below; was that your intent?  Also, the defaults for these translators are very conservative.  With only 1 client, you may want to increase the time that data is cached (in the client) using the FUSE mount options "entry-timeout=30" and "attribute-timeout=30".  Unlike non-distributed Linux filesystems, Gluster is very conservative about client-side caching in order to avoid cache-coherency issues.
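For example, a remount along these lines would apply the longer caching timeouts.  This is just a sketch: the server name "gluster1", volume name "wpvol", and mount point "/var/www/wordpress" are placeholders for your own values.

    # unmount the existing FUSE mount first
    umount /var/www/wordpress

    # remount with longer metadata-cache timeouts (in seconds)
    mount -t glusterfs -o entry-timeout=30,attribute-timeout=30 \
        gluster1:/wpvol /var/www/wordpress

The equivalent /etc/fstab entry would be:

    gluster1:/wpvol  /var/www/wordpress  glusterfs  defaults,entry-timeout=30,attribute-timeout=30  0 0

These longer timeouts are safe with a single client; with multiple clients writing the same files, shorter timeouts reduce the window during which a client can see stale metadata.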

> 
> I'm out of ideas here... Could it be the network?
> What should I look at for optimizing the network stack on the client?
> 
> Options set on TSP:
> Options Reconfigured:
> performance.cache-size: 4GB
> network.ping-timeout: 15
> cluster.quorum-type: auto
> network.remote-dio: on
> cluster.eager-lock: on
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> performance.cache-refresh-timeout: 4
> performance.io-thread-count: 32
> nfs.disable: on
> 

That is a lot of tunings; what is each of them intended to do?  The "gluster volume reset" command allows you to undo them.  In Gluster 3.7, the "gluster volume get your-volume all" command lets you see what the defaults are.
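For instance (again a sketch; "wpvol" is a placeholder for your volume name):

    # list every option with its current value, defaults included (Gluster 3.7+)
    gluster volume get wpvol all

    # undo one tuning, e.g. put performance.io-cache back to its default
    gluster volume reset wpvol performance.io-cache

    # or reset all reconfigured options at once
    gluster volume reset wpvol

After resetting, re-apply only the options you have actually measured a benefit from for this workload.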

> Regards, Mitja
> 
> --
> Mitja Mihelič
> ARNES, Tehnološki park 18, p.p. 7, SI-1001 Ljubljana, Slovenia
> tel: +386 1 479 8877, fax: +386 1 479 88 78

