<div dir="ltr">Hello users,<div><br></div><div>I have started using gluster just a few weeks ago and I am rocking a Replicated-Distributed setup with arbiters (A) and SATA Volumes (V). I have around 6 volumes and 3 arbiters in this setup: </div><div>V+V+A | V+V+A | V+V+A </div><div><br></div><div>All these volumes are spread across 3 different nodes, all of them being 1Gbit. Due to hardware limitations, SSD or 10Gbit network is not available. </div><div><br></div><div>But even then, testing via iperf and normal rsync of files between servers, I am easily able to achieve 700Mbps~ </div><div>[ ID] Interval Transfer Bandwidth Retr Cwnd<br>[ 4] 0.00-1.00 sec 49.9 MBytes 419 Mbits/sec 21 132 KBytes<br>[ 4] 1.00-2.00 sec 80.0 MBytes 671 Mbits/sec 0 214 KBytes<br>[ 4] 2.00-3.00 sec 87.0 MBytes 730 Mbits/sec 3 228 KBytes<br>[ 4] 3.00-4.00 sec 91.6 MBytes 769 Mbits/sec 15 215 KBytes<br></div><div><br></div><div>But when rsyncing data from same server to another node with mounted glusterVolume, I am getting measly 50Mbps (7MBps). </div><div><br></div><div>All servers have 64GB Ram and their memory usage is around 50% and CPU usage less than 10%. </div><div>All bricks are zfs volumes, no Raid setup or anything. All volumes are direct hard disks formatted as ZFS (JBOD setup).</div><div><br></div><div><br></div><div>My Gluster Vol Info</div><div><br></div><div>gluster vol info<br><br>Volume Name: glusterStore<br>Type: Distributed-Replicate<br>Volume ID: c7ac8094-f379-45fc-8cfd-f2937355e03d<br>Status: Started<br>Snapshot Count: 0<br>Number of Bricks: 3 x (2 + 1) = 9<br>Transport-type: tcp<br>Bricks:<br>Brick1: 62.0.0.1:/zpool1/proxmox<br>Brick2: 5.0.0.1:/zpool1/proxmox<br>Brick3:
My gluster vol info:

gluster vol info

Volume Name: glusterStore
Type: Distributed-Replicate
Volume ID: c7ac8094-f379-45fc-8cfd-f2937355e03d
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: 62.0.0.1:/zpool1/proxmox
Brick2: 5.0.0.1:/zpool1/proxmox
Brick3: 62.0.0.1:/home/glusterArbiter (arbiter)
Brick4: 62.0.0.1:/zpool2/proxmox
Brick5: 5.0.0.1:/zpool2/proxmox
Brick6: 62.0.0.2:/home/glusterArbiter2 (arbiter)
Brick7: 62.0.0.2:/zpool/proxmox
Brick8: 5.0.0.1:/zpool3/proxmox
Brick9: 62.0.0.2:/home/glusterArbiter (arbiter)
Options Reconfigured:
performance.readdir-ahead: enable
cluster.rsync-hash-regex: none
client.event-threads: 16
server.event-threads: 16
network.ping-timeout: 5
performance.normal-prio-threads: 64
performance.high-prio-threads: 64
performance.io-thread-count: 64
performance.cache-size: 1GB
performance.read-ahead: off
performance.io-cache: off
performance.flush-behind: off
performance.quick-read: on
network.frame-timeout: 60
storage.batch-fsync-delay-usec: 0
server.allow-insecure: on
performance.stat-prefetch: off
cluster.lookup-optimize: on
performance.write-behind: on
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Regards
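P.S. In case the exact commands matter: the non-default options listed under "Options Reconfigured" were applied with "gluster volume set" along these lines, for example:

gluster volume set glusterStore performance.io-thread-count 64
gluster volume set glusterStore network.ping-timeout 5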