[Gluster-users] GlusterFS performance for big files...

Yaniv Kaul ykaul at redhat.com
Tue Aug 18 13:19:17 UTC 2020


On Tue, Aug 18, 2020 at 3:57 PM Gilberto Nunes <gilberto.nunes32 at gmail.com>
wrote:

> Hi friends...
>
> I have a 2-node GlusterFS volume, which has the following configuration:
> gluster vol info
>
> Volume Name: VMS
> Type: Replicate
> Volume ID: a4ec9cfb-1bba-405c-b249-8bd5467e0b91
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: server02:/DATA/vms
> Brick2: server01:/DATA/vms
> Options Reconfigured:
> performance.read-ahead: off
> performance.io-cache: on
> performance.cache-refresh-timeout: 1
> performance.cache-size: 1073741824
> performance.io-thread-count: 64
> performance.write-behind-window-size: 64MB
> cluster.granular-entry-heal: enable
> cluster.self-heal-daemon: enable
> performance.client-io-threads: on
> cluster.data-self-heal-algorithm: full
> cluster.favorite-child-policy: mtime
> network.ping-timeout: 2
> cluster.quorum-count: 1
> cluster.quorum-reads: false
> cluster.heal-timeout: 20
> storage.fips-mode-rchecksum: on
> transport.address-family: inet
> nfs.disable: on
>
> The disks are a mix of SSDs and SAS HDDs.
> The network connection between the servers is a dedicated 1 Gb link (no switch!).
>

You can't get good performance over a 1 Gb link.
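A back-of-the-envelope check (assuming your clients write over that same 1 Gb
link):

  1 Gbit/s                ≈ 125 MB/s raw
  replica 2 (client-side) ≈ 125 / 2 ≈ 60 MB/s effective write throughput

The client sends every write to both bricks, so the wire carries each block
twice. Even if one replica lands on a local brick, writes still cap near
~110 MB/s after protocol overhead, well below what the SSDs can deliver.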

> The files are 500 GB, 200 GB, 200 GB, 250 GB, 200 GB, and 100 GB in size.
>
> Performance so far is OK...
>

What's your workload? Read? Write? Sequential? Random? Many files?
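If you want hard numbers, a quick fio run against the mounted volume will tell
you; a minimal sketch (the mount point /mnt/vms and the sizes are placeholders,
adjust them to your setup):

  fio --name=seqwrite --filename=/mnt/vms/fio-test --ioengine=libaio \
      --rw=write --bs=1M --size=4G --direct=1 --iodepth=16
  fio --name=randwrite --filename=/mnt/vms/fio-test --ioengine=libaio \
      --rw=randwrite --bs=4k --size=4G --direct=1 --iodepth=16

Compare the reported bandwidth against the ~60 MB/s wire limit above.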
With more bricks and nodes, you should probably use sharding.
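Sharding is off by default. A sketch of how you would turn it on (64MB is the
default shard size, set here only for illustration):

  gluster volume set VMS features.shard on
  gluster volume set VMS features.shard-block-size 64MB

Note that only files created after enabling sharding are sharded, and you
should never disable sharding again on a volume that already holds sharded
files.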

What are your expectations, btw?
Y.


> If you have any other advice for me, please let me know!
>
> Thanks
>
>
>
> ---
> Gilberto Nunes Ferreira
>