harsha at gluster.com
Fri Jun 11 18:34:20 UTC 2010
On 06/10/2010 10:03 PM, Todd Daugherty wrote:
> I agree it is a matter of tuning. So what do we tune? I am not stuck
> on 2.0.9, there just was no performance benefit for my data set.
> (Millions of 8-50 megabyte files) My test system has 12 Gigabytes of
> RAM which is why I used a size of 16 Gigabytes so that cache is not a
> factor in the test. But away what did you learn from these results?
> Read performance is the most important to me. (That is because the
> write performance is pretty good already)
> Thanks again.
Right now we can increase the "read-ahead" page count. Since GlusterFS
performs best when run from multiple clients, it's better to run iozone
in cluster mode across several different clients.
How many servers do you have, and how many clients?
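As a sketch of what cluster-mode iozone looks like (hostnames, mount point, and sizes below are hypothetical placeholders; adjust them to your setup, and note iozone uses rsh/ssh to reach each client):

```shell
# Hypothetical client list for iozone cluster mode: one line per client,
# giving hostname, working directory on the GlusterFS mount, and the
# path to the iozone binary on that client.
cat > clients.txt <<'EOF'
client1 /mnt/glusterfs /usr/bin/iozone
client2 /mnt/glusterfs /usr/bin/iozone
EOF

# Run iozone in distributed (cluster) mode across both clients:
#   -+m  client configuration file
#   -t 2 two parallel threads (one per client)
#   -s 16g -r 1m  16 GB file size, 1 MB record size
#   -i 0 -i 1     write and read tests
iozone -+m clients.txt -t 2 -s 16g -r 1m -i 0 -i 1
```

With a 16 GB file per thread, the test stays well above the 12 GB of RAM on the test box, so cache effects are avoided as in your earlier runs.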
Also remember to set "sysctl vm.swappiness=0" on the server side to make
sure the dirty cache does not fill up, since the increased write-behind
will make server-side RAM usage more aggressive as InfiniBand receives a
lot of data.
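For reference, the swappiness setting can be applied both immediately and persistently (standard Linux sysctl usage; run as root on each server):

```shell
# Apply immediately (note: no spaces around '=' when passing to sysctl -w):
sysctl -w vm.swappiness=0

# Persist across reboots by adding the setting to /etc/sysctl.conf:
echo "vm.swappiness = 0" >> /etc/sysctl.conf
```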
There are also other parameters for the ib-verbs transport on the server side:
option transport.ib-verbs.work-request-send-count 256
option transport.ib-verbs.work-request-recv-count 256
These values are 32 by default. Try increasing them and rerun the test.
If you still don't see enough performance benefit, we can apply
write-behind and read-ahead on the server side to utilize RAM optimally;
but since that is not a well-tested configuration, it is not recommended.
That said, I think you should get enough benefit from the details above.
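To show where those options live, here is a rough volfile sketch in the 2.x syntax. Volume names, the subvolume ("posix1"), and the page-count value are hypothetical placeholders; only the two work-request-count options come from the discussion above:

```shell
# Server-side volfile fragment (sketch, not a complete configuration).

# Hypothetical storage volume the server exports:
volume posix1
  type storage/posix
  option directory /data/export
end-volume

volume server
  type protocol/server
  option transport-type ib-verbs
  # The two ib-verbs parameters discussed above (default 32, raised to 256):
  option transport.ib-verbs.work-request-send-count 256
  option transport.ib-verbs.work-request-recv-count 256
  subvolumes posix1
  option auth.addr.posix1.allow *
end-volume
```

On the client side, the read-ahead page count mentioned earlier is set on the performance/read-ahead translator with "option page-count N" in the same volfile style.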
Gluster Inc - http://www.gluster.com