[Gluster-users] poor performance during healing

Craig Yoshioka craigyk at nanoimagingservices.com
Mon Feb 23 23:30:42 UTC 2015

I’m using Gluster 3.6 to host a volume with some KVM images.  I had seen reports of other people getting terrible performance while Gluster was auto-healing, but also that a rewrite in 3.6 had potentially solved this problem.

Well, it hasn’t (for me).  If my gluster volume starts to auto-heal, performance can get so bad that some of the VMs essentially lock up.  In top I can see the glusterfsd process sometimes hitting 700% CPU.  Is there anything I can do to throttle the healing process to prevent this?
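For reference, these are the self-heal tuning options I’ve come across so far; I haven’t confirmed whether any of them actually reduce the CPU load during a heal (the values below are guesses, not recommendations):

```shell
# Limit how many files are healed in parallel in the background
# (default is 16; lowering it may reduce contention with VM I/O)
gluster volume set vm-images cluster.background-self-heal-count 4

# Use the "diff" algorithm so only changed blocks are copied,
# instead of rewriting whole (large) image files
gluster volume set vm-images cluster.data-self-heal-algorithm diff

# Shrink the number of blocks healed per cycle
gluster volume set vm-images cluster.self-heal-window-size 1
```

If anyone knows which of these (if any) is the right knob for 3.6, or whether the new self-heal daemon ignores them, I’d appreciate a pointer.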

Here are my volume options:

Volume Name: vm-images
Type: Replicate
Volume ID: 5b38ddbe-a1ae-4e10-b0ad-dcd785a44493
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: vmhost-1:/gfs/brick-0
Brick2: vmhost-2:/gfs/brick-0
Options Reconfigured:
nfs.disable: on
cluster.quorum-count: 1
network.frame-timeout: 1800
network.ping-timeout: 15
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 107
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: fixed
cluster.server-quorum-type: server
cluster.server-quorum-ratio: 51%
