todd at fotokem.hu
Mon Jun 14 12:09:10 UTC 2010
ok I have done all of this. The numbers are the same. With all of the
tuning nothing gets better. What is the next step?
On Fri, Jun 11, 2010 at 8:34 PM, Harshavardhana <harsha at gluster.com> wrote:
> On 06/10/2010 10:03 PM, Todd Daugherty wrote:
>> I agree it is a matter of tuning. So what do we tune? I am not stuck
>> on 2.0.9, there just was no performance benefit for my data set.
>> (Millions of 8-50 megabyte files) My test system has 12 Gigabytes of
>> RAM which is why I used a size of 16 Gigabytes so that cache is not a
>> factor in the test. But away what did you learn from these results?
>> Read performance is the most important to me. (That is because the
>> write performance is pretty good already)
>> Thanks again.
> Right now we can increase the "read-ahead" page count. Since the best
> performance of glusterfs is seen when run from multiple clients, it is better
> to run iozone in cluster mode across multiple different clients.
> How many servers do you have, and how many clients?
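As a sketch of the cluster-mode run suggested above: iozone takes a client list file via `-+m`, where each line names a client host, its working directory on the mount, and the path to the iozone binary. The hostnames, mount point, and binary path below are placeholder assumptions:

```shell
# clients.txt: one line per client -- hostname, working directory on the
# gluster mount, path to iozone (all values here are hypothetical)
cat > clients.txt <<'EOF'
client1 /mnt/glusterfs /usr/bin/iozone
client2 /mnt/glusterfs /usr/bin/iozone
EOF

# 16 GB sequential write (-i 0) and read (-i 1) test, 1 MB records,
# driven from both clients at once so the servers see parallel load
iozone -+m clients.txt -t 2 -s 16g -r 1m -i 0 -i 1
```

With a 12 GB client, the 16 GB file size keeps the working set larger than RAM, so the read numbers reflect the network and servers rather than the page cache.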
> Also remember to set "sysctl vm.swappiness=0" on the server side to make sure
> the dirty cache does not fill up, since the increased write-behind will cause
> aggressive RAM usage on the server side as infiniband will receive lot of
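The swappiness change above can be applied at runtime and persisted across reboots; a minimal sketch:

```shell
# Apply immediately on each server
sysctl -w vm.swappiness=0

# Persist across reboots (path assumes a standard sysctl setup)
echo "vm.swappiness = 0" >> /etc/sysctl.conf
```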
> There are other parameters for the ib-verbs transport on the server side:
> option transport.ib-verbs.work-request-send-count 256
> option transport.ib-verbs.work-request-recv-count 256
> These values are 32 by default. Try increasing these and rerun the test.
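In the server volume file, these options would sit inside the protocol/server volume. A sketch of where they go, with placeholder volume and subvolume names:

```
volume server
  type protocol/server
  option transport-type ib-verbs
  option transport.ib-verbs.work-request-send-count 256
  option transport.ib-verbs.work-request-recv-count 256
  option auth.addr.brick.allow *
  subvolumes brick
end-volume
```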
> If you still don't see enough performance benefit, we can apply
> write-behind and read-ahead on the server side to utilize RAM optimally; since
> that is not a well-tested configuration, it is not recommended.
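Server-side write-behind and read-ahead would mean stacking the performance translators above the storage brick in the server volfile. As noted, this is untested territory, so the fragment below is a hypothetical sketch with placeholder names, paths, and sizes:

```
volume posix
  type storage/posix
  option directory /data/export    # placeholder export path
end-volume

volume wb
  type performance/write-behind
  option cache-size 4MB            # placeholder size
  subvolumes posix
end-volume

volume brick
  type performance/read-ahead
  option page-count 16             # placeholder page count
  subvolumes wb
end-volume
```

The protocol/server volume would then export "brick" so clients hit the server-side caches before the disk.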
> But I think you should get enough benefit from the details above.
> Gluster Inc - http://www.gluster.com
> +1(408)-770-1887, Ext-113
> Gluster-users mailing list
> Gluster-users at gluster.org