[Gluster-users] GlusterFS Scale

Lindolfo Meira <meira at cesup.ufrgs.br>
Mon Feb 18 17:53:19 UTC 2019


We're running some write benchmarks on a striped GlusterFS volume.

We have 6 identical servers acting as bricks. The measured link speed 
between these servers is 3.36 GB/s, and the link speed between the clients 
of the parallel file system and its servers is also 3.36 GB/s. So we expect 
the system to deliver an aggregate write throughput of around 20.16 GB/s 
(6 times 3.36 GB/s), minus some write overhead.
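
For reference, the arithmetic behind that expectation, as a quick Python 
sketch (the 10% overhead figure is just our guess, not a measured value):

num_bricks = 6
link_speed = 3.36   # GB/s, measured per link
overhead = 0.10     # assumed fraction lost to striping/protocol

peak = num_bricks * link_speed
print(f"theoretical peak: {peak:.2f} GB/s")                   # 20.16 GB/s
print(f"minus overhead:   {peak * (1 - overhead):.2f} GB/s")  # ~18.14 GB/s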

If we write to the volume from a single client, we manage around 
3.36 GB/s. That's expected, since we're limited by the maximum throughput 
of that client's network adapter. But to rule out that limit, we write 
from 6 or more clients concurrently, and we can never get past 11 GB/s. 
Is that right? Is this really the overhead to be expected? We'd appreciate 
any input.
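
To make the test concrete, this is a minimal sketch of the kind of 
parallel streaming write we run on each client (the mount point, file 
size, and process count below are illustrative, not our exact setup):

#!/usr/bin/env python3
import os
import time
from multiprocessing import Pool

MOUNT = "/mnt/gfs0"              # assumed FUSE mount point
FILE_SIZE = 8 * 1024**3          # 8 GiB per process
BLOCK = 1024 * 1024              # 1 MiB per write() call
PROCS = 4

def write_file(i):
    # Each worker streams one large file to the volume.
    path = os.path.join(MOUNT, f"bench.{os.getpid()}.{i}")
    buf = os.urandom(BLOCK)
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE // BLOCK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())     # make sure data actually hit the bricks
    return FILE_SIZE

if __name__ == "__main__":
    start = time.time()
    with Pool(PROCS) as pool:
        total = sum(pool.map(write_file, range(PROCS)))
    elapsed = time.time() - start
    print(f"{total / elapsed / 1e9:.2f} GB/s on this client")

Started simultaneously on all 6 clients, the per-client figures sum to 
the ~11 GB/s aggregate quoted above.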

Output of gluster volume info:

Volume Name: gfs0
Type: Stripe
Volume ID: 2ca3dd45-6209-43ff-a164-7f2694097c64
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 6 = 6
Transport-type: tcp
Bricks:
Brick1: pfs01-ib:/mnt/data
Brick2: pfs02-ib:/mnt/data
Brick3: pfs03-ib:/mnt/data
Brick4: pfs04-ib:/mnt/data
Brick5: pfs05-ib:/mnt/data
Brick6: pfs06-ib:/mnt/data
Options Reconfigured:
cluster.stripe-block-size: 128KB
performance.cache-size: 32MB
performance.write-behind-window-size: 1MB
performance.strict-write-ordering: off
performance.strict-o-direct: off
performance.stat-prefetch: off
server.event-threads: 4
client.event-threads: 2
performance.io-thread-count: 16
transport.address-family: inet
nfs.disable: on
cluster.localtime-logging: enable
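
In case it's relevant to the question: our understanding is that the 
stripe translator places fixed-size blocks round-robin across the bricks, 
so every brick sees a share of each large write. A sketch of that layout 
(illustrative, not gluster's actual code):

STRIPE_BLOCK = 128 * 1024   # matches cluster.stripe-block-size above
NUM_BRICKS = 6

def brick_for_offset(offset):
    # Block i of a file lands on brick i mod NUM_BRICKS.
    return (offset // STRIPE_BLOCK) % NUM_BRICKS

# A 1 MiB write at offset 0 spans blocks 0..7, hitting bricks
# 0 1 2 3 4 5 0 1 -- i.e. all six bricks take part.
for block in range(8):
    print(block, brick_for_offset(block * STRIPE_BLOCK))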



Thanks,

Lindolfo Meira, MSc
Diretor Geral, Centro Nacional de Supercomputação
Universidade Federal do Rio Grande do Sul
+55 (51) 3308-3139

