[Gluster-users] Description of performance.cache-size
shreyansh.shah at alpha-grep.com
Wed Sep 30 15:22:20 UTC 2020
Thanks for taking the time to help me.
This is not a hyperconverged setup. We have 7 nodes with 2 bricks on each
node, i.e. a 14-brick distributed setup.
The host on which I saw the increased RAM usage is a client machine running
the glusterfs client.
On Wed, Sep 30, 2020 at 8:42 PM Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
> Sadly I can't help much here.
> Is this a Hyperconverged setup (host is also a client) ?
> Best Regards,
> Strahil Nikolov
> On Tuesday, September 29, 2020, 18:29:20 GMT+3, Shreyansh Shah <
> shreyansh.shah at alpha-grep.com> wrote:
> Hi All,
> Can anyone help me out with this?
> On Tue, Sep 22, 2020 at 2:59 PM Shreyansh Shah <
> shreyansh.shah at alpha-grep.com> wrote:
> > Hi,
> > We are using distributed gluster version 5.10 (7 nodes with 2 bricks per
> > node, i.e. 14 bricks total).
> > We have set the performance.cache-size parameter to 8GB on the server. We
> > assumed that this config parameter indicates the amount of RAM that will be
> > used on the client machine (i.e. up to 8 GB of RAM to be used for data
> > caching on each client). But we observed that on one client machine the RAM
> > usage of the glusterfs process was around 17GB.
> > So we want to know whether our understanding of this parameter is correct,
> > or whether there is something else that we have missed.
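> >
> > (As a quick sanity check, assuming the volume is named VOLNAME here as a
> > placeholder and the mount uses the glusterfs FUSE client, the effective
> > option value and the actual resident memory of the client process can be
> > checked with something like:
> >   gluster volume get VOLNAME performance.cache-size   # run on a server node
> >   ps -o pid,rss,cmd -C glusterfs                       # client RSS, in KiB
> > )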
> > Below are the options configured on the glusterfs server (a sketch of how
> > they can be set and inspected follows the list); please advise if we can
> > add/tune some parameters to extract more performance.
> > storage.health-check-interval: 10
> > performance.client-io-threads: on
> > performance.cache-refresh-timeout: 60
> > performance.cache-size: 8GB
> > transport.address-family: inet
> > nfs.disable: on
> > server.keepalive-time: 60
> > client.keepalive-time: 60
> > network.ping-timeout: 90
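> >
> > (For reference, volume options like the above are presumably set per-volume
> > with the gluster CLI; VOLNAME is again a placeholder:
> >   gluster volume set VOLNAME performance.cache-size 8GB
> >   gluster volume set VOLNAME performance.cache-refresh-timeout 60
> > To see which client-side translator is actually holding the memory, a
> > statedump of the mount process can be taken by sending it SIGUSR1; by
> > default the dump should land under /var/run/gluster/:
> >   kill -USR1 $(pgrep -f 'glusterfs.*VOLNAME')
> > )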
> > --
> > Regards,
> > Shreyansh Shah
> Regards,
> Shreyansh Shah
> Community Meeting Calendar:
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
> Gluster-users mailing list
> Gluster-users at gluster.org