[Gluster-users] High I/O And Processor Utilization

Lindsay Mathieson lindsay.mathieson at gmail.com
Tue Jan 12 03:22:25 UTC 2016

On 11/01/16 15:37, Krutika Dhananjay wrote:
> Kyle,
> Based on the testing we have done from our end, we've found that 512MB 
> is a good number that is neither too big nor too small,
> and provides good performance both on the IO side and with respect to 
> self-heal.

Hi Krutika, I experimented a lot with different chunk sizes and didn't 
find much difference between 4MB and 1GB.

But benchmarks are tricky things - I used Crystal Diskmark inside a VM, 
which is probably not the best assessment. And two of the bricks on my 
replica 3 are very slow, just test drives, not production, so I guess 
that would affect things :)

These are my current settings - what do you use?

Volume Name: datastore1
Type: Replicate
Volume ID: 1261175d-64e1-48b1-9158-c32802cc09f0
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Brick1: vnb.proxmox.softlog:/vmdata/datastore1
Brick2: vng.proxmox.softlog:/vmdata/datastore1
Brick3: vna.proxmox.softlog:/vmdata/datastore1
Options Reconfigured:
network.remote-dio: enable
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.stat-prefetch: off
performance.strict-write-ordering: on
performance.write-behind: off
nfs.enable-ino32: off
nfs.addr-namelookup: off
nfs.disable: on
performance.cache-refresh-timeout: 4
performance.io-thread-count: 32
cluster.server-quorum-type: server
cluster.quorum-type: auto
client.event-threads: 4
server.event-threads: 4
cluster.self-heal-window-size: 256
features.shard-block-size: 512MB
features.shard: on
performance.readdir-ahead: off
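
For anyone wanting to reproduce this, options like the shard block size 
above are set per volume with the gluster CLI. A minimal sketch (the 
volume name is taken from the output above; note that, as far as I 
understand it, changing the shard block size only affects newly created 
files, not shards of existing ones):

```shell
# Enable sharding and set the shard block size on the volume
gluster volume set datastore1 features.shard on
gluster volume set datastore1 features.shard-block-size 512MB

# Confirm the reconfigured options took effect
gluster volume info datastore1 | grep shard
```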

Lindsay Mathieson
