[Gluster-users] High I/O And Processor Utilization

Roman romeo.r at gmail.com
Sun Jan 10 16:08:27 UTC 2016


Hey,

healing across VMs is really not normal. I would look for causes on the
network side. I run 3.6.5 for my Proxmox setup (with the libgfapi
backend), which serves around 30 VMs at the moment (and I hope to run
even more), and I have no issues even when it starts its backups
(copying large VM snapshots from the GlusterFS volume to the Proxmox
local disk). Disks in the GlusterFS servers are all SAS 10K RPM,
running RAID 5 for distributed volumes and no RAID for replicated
volumes.
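
If you want to see what is actually pending heal, you can check from any
of the Gluster servers. A minimal sketch, assuming a replicated volume
named vmstore (the volume name here is just a placeholder):

  # List entries currently pending heal on each brick; a healthy
  # cluster should show zero (or quickly draining) entries.
  gluster volume heal vmstore info

  # Per-brick count of entries that still need healing.
  gluster volume heal vmstore statistics heal-count

If those counters keep growing, that usually points to flaky
connectivity between the clients and one of the bricks.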

My options are:

Options Reconfigured:
network.ping-timeout: 15
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
server.allow-insecure: on
performance.write-behind: off
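
If you want to try these, volume options are applied per volume with
gluster volume set. A minimal sketch, again assuming a volume named
vmstore (the name is just a placeholder):

  # Fail over quickly instead of letting VM I/O hang on a dead peer.
  gluster volume set vmstore network.ping-timeout 15

  # Turn off client-side caches that tend to hurt VM image workloads.
  gluster volume set vmstore performance.quick-read off
  gluster volume set vmstore performance.io-cache off

  # Allow direct I/O from libgfapi clients such as qemu.
  gluster volume set vmstore network.remote-dio enable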

2016-01-10 1:14 GMT+02:00 Lindsay Mathieson <lindsay.mathieson at gmail.com>:

> On 9/01/2016 12:34 PM, Ravishankar N wrote:
>
>> If you're trying arbiter, it would be good if you could compile the 3.7
>> branch and use it, since it has an important fix (
>> http://review.gluster.org/#/c/12479/) that will only make it into
>> glusterfs-3.7.7. That way you'd get this fix and the sharding ones too
>> right away.
>>
>
>
> Is 3.7.7 far off?
>
> --
> Lindsay Mathieson
>
>



-- 
Best regards,
Roman.

