[Gluster-users] Optimizing Gluster (gfapi) for high IOPS

Carlos Capriotti capriotti.carlos at gmail.com
Sat Mar 22 12:24:09 UTC 2014


Are you using the native glusterfs client or NFS? You mentioned something
about gfapi.

You also mentioned that processes reach 100% and the machine stalls; which
process is it?

This seems to ring a bell, but as something related to a client-side issue.
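
For reference, a quick way to check which client a guest is actually using
(a rough sketch; the grep patterns are generic and any paths shown are only
illustrative):

# a gfapi-attached disk shows up as a gluster:// URL on the qemu command line
ps -ef | grep qemu | grep -o 'gluster://[^, ]*'

# a FUSE-mounted volume shows up in the mount table instead
mount -t fuse.glusterfs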




On Fri, Mar 21, 2014 at 5:20 PM, Josh Boon <gluster at joshboon.com> wrote:

> Hardware RAID 5 on SSDs, using LVM, formatted with XFS default options and
> mounted with noatime.
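>
> For illustration only, a sketch of what that might look like as an fstab
> entry (the LV name below is a placeholder, not taken from the actual setup;
> /mnt/xfs matches the brick path further down):
>
> /dev/mapper/vg_ssd-bricks  /mnt/xfs  xfs  defaults,noatime  0 0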
>
> Also, I don't have a lot of history for this current troubled machine, but
> the sysctl additions don't appear to have made a significant difference.
>
>
> ------------------------------
> *From: *"Nick Majeran" <nmajeran at gmail.com>
> *To: *"Josh Boon" <gluster at joshboon.com>
> *Cc: *"Carlos Capriotti" <capriotti.carlos at gmail.com>, "
> Gluster-users at gluster.org List" <gluster-users at gluster.org>
> *Sent: *Thursday, March 20, 2014 8:31:11 PM
>
> *Subject: *Re: [Gluster-users] Optimizing Gluster (gfapi) for high IOPS
>
> Just curious, what is your disk layout for the bricks?
>
> On Mar 20, 2014, at 6:27 PM, Josh Boon <gluster at joshboon.com> wrote:
>
> Stuck those in as is.  Will look at optimizing based on my system's config
> too.
>
> ------------------------------
> *From: *"Carlos Capriotti" <capriotti.carlos at gmail.com>
> *To: *"Josh Boon" <gluster at joshboon.com>
> *Cc: *"Gluster-users at gluster.org List" <gluster-users at gluster.org>
> *Sent: *Thursday, March 20, 2014 7:21:08 PM
> *Subject: *Re: [Gluster-users] Optimizing Gluster (gfapi) for high IOPS
>
> Well, if you want to join my tests, here are a couple of sysctl options:
>
> net.core.wmem_max = 12582912
> net.core.rmem_max = 12582912
> net.ipv4.tcp_rmem = 10240 87380 12582912
> net.ipv4.tcp_wmem = 10240 87380 12582912
> net.ipv4.tcp_window_scaling = 1
> net.ipv4.tcp_timestamps = 1
> net.ipv4.tcp_sack = 1
> vm.swappiness = 10
> vm.dirty_background_ratio = 1
> net.ipv4.neigh.default.gc_thresh2 = 2048
> net.ipv4.neigh.default.gc_thresh3 = 4096
> net.core.netdev_max_backlog = 2500
> net.ipv4.tcp_mem = 12582912 12582912 12582912
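>
> To try these, a minimal sketch (the file name is arbitrary): drop the lines
> above into /etc/sysctl.d/90-gluster-net.conf and load them with
>
> sysctl --system
> # or, on older procps versions:
> sysctl -p /etc/sysctl.d/90-gluster-net.conf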
>
>
> On Fri, Mar 21, 2014 at 12:05 AM, Josh Boon <gluster at joshboon.com> wrote:
>
>> Hey folks,
>>
>> We've been running VMs on qemu using a replicated gluster volume,
>> connecting via gfapi, and things have been going well for the most part.
>> Something we've noticed, though, is that we have problems with many
>> concurrent disk operations and disk latency. The latency gets bad enough
>> that the process eats the CPU and the entire machine stalls. The place
>> where we've seen it the worst is an apache2 server under very high load,
>> which had to be converted to a raw disk image due to performance issues.
>> The hypervisors are connected directly to each other over a bonded pair of
>> 10Gb fiber modules and host the only bricks in the volume. Volume info is:
>>
>> Volume Name: VMARRAY
>> Type: Replicate
>> Volume ID: 67b3ad79-4b48-4597-9433-47063f90a7a0
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: 10.9.1.1:/mnt/xfs/VMARRAY
>> Brick2: 10.9.1.2:/mnt/xfs/VMARRAY
>> Options Reconfigured:
>> nfs.disable: on
>> network.ping-timeout: 7
>> cluster.eager-lock: on
>> performance.flush-behind: on
>> performance.write-behind: on
>> performance.write-behind-window-size: 4MB
>> performance.cache-size: 1GB
>> server.allow-insecure: on
>> diagnostics.client-log-level: ERROR
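>>
>> For reference, options like the ones above are set and checked with the
>> standard CLI, e.g.:
>>
>> gluster volume set VMARRAY performance.write-behind-window-size 4MB
>> gluster volume info VMARRAY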
>>
>>
>> Any advice on tuning for high-IOPS / low-bandwidth workloads would be
>> appreciated.
>>
>>
>> Thanks,
>>
>> Josh
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
>

