[Gluster-users] Gluster Performance - 12 Gbps SSDs and 10 Gbps NIC
Danny
dbray925+gluster at gmail.com
Mon Dec 11 13:15:31 UTC 2023
Hello list, I'm hoping someone can let me know what setting I missed.
Hardware:
Dell R650 servers, Dual 24 Core Xeon 2.8 GHz, 1 TB RAM
8x SSDs, negotiated speed 12 Gbps
PERC H755 Controller - RAID 6
Created a virtual "data" disk from the above 8 SSDs, presented to the OS as a ~20 TB /dev/sdb
OS:
CentOS Stream
kernel-4.18.0-526.el8.x86_64
glusterfs-7.9-1.el8.x86_64
IPERF Test between nodes:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  11.5 GBytes  9.90 Gbits/sec    0             sender
[  5]   0.00-10.04  sec  11.5 GBytes  9.86 Gbits/sec                  receiver
All good there. ~10 Gbps, as expected.
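For reference, that was a stock iperf3 run between two of the nodes (server on one, client on the other), along these lines:
iperf3 -s
iperf3 -c 10.54.95.124 -t 10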
LVM Install:
export DISK="/dev/sdb"
sudo parted --script $DISK "mklabel gpt"
sudo parted --script $DISK "mkpart primary 0% 100%"
sudo parted --script $DISK "set 1 lvm on"
sudo pvcreate --dataalignment 128K /dev/sdb1
sudo vgcreate --physicalextentsize 128K gfs_vg /dev/sdb1
sudo lvcreate -L 16G -n gfs_pool_meta gfs_vg
sudo lvcreate -l 95%FREE -n gfs_pool gfs_vg
sudo lvconvert --chunksize 1280K --thinpool gfs_vg/gfs_pool --poolmetadata gfs_vg/gfs_pool_meta
sudo lvchange --zero n gfs_vg/gfs_pool
sudo lvcreate -V 19.5TiB --thinpool gfs_vg/gfs_pool -n gfs_lv
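Quick sanity check that the pool and the thin LV came out as intended:
sudo lvs -a gfs_vg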
sudo mkfs.xfs -f -i size=512 -n size=8192 -d su=128k,sw=10 /dev/mapper/gfs_vg-gfs_lv
sudo vim /etc/fstab
/dev/mapper/gfs_vg-gfs_lv /gluster/data/brick xfs rw,inode64,noatime,nouuid 0 0
sudo systemctl daemon-reload && sudo mount -a
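A couple of sanity checks after mounting; xfs_info should reflect the su/sw values passed to mkfs above:
df -h /gluster/data/brick
sudo xfs_info /gluster/data/brick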
fio --name=test --filename=/gluster/data/brick/wow --size=1G --readwrite=write
Run status group 0 (all jobs):
WRITE: bw=2081MiB/s (2182MB/s), 2081MiB/s-2081MiB/s (2182MB/s-2182MB/s), io=1024MiB (1074MB), run=492-492msec
All good there. 2182MB/s =~ 17.5 Gbps. Nice!
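That run uses fio's defaults (buffered, 4k sequential writes), so some of that number is presumably the controller/page cache. If it's useful I can redo it with direct I/O and a larger block size, something like:
fio --name=test-direct --filename=/gluster/data/brick/wow --size=4G --readwrite=write --direct=1 --bs=1M --ioengine=libaio --iodepth=16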
Gluster install:
export NODE1='10.54.95.123'
export NODE2='10.54.95.124'
export NODE3='10.54.95.125'
sudo gluster peer probe $NODE2
sudo gluster peer probe $NODE3
sudo gluster volume create data replica 3 arbiter 1 $NODE1:/gluster/data/brick $NODE2:/gluster/data/brick $NODE3:/gluster/data/brick force
sudo gluster volume set data network.ping-timeout 5
sudo gluster volume set data performance.client-io-threads on
sudo gluster volume set data group metadata-cache
sudo gluster volume start data
sudo gluster volume info all
Volume Name: data
Type: Replicate
Volume ID: b52b5212-82c8-4b1a-8db3-52468bc0226e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 10.54.95.123:/gluster/data/brick
Brick2: 10.54.95.124:/gluster/data/brick
Brick3: 10.54.95.125:/gluster/data/brick (arbiter)
Options Reconfigured:
network.inode-lru-limit: 200000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
network.ping-timeout: 5
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: on
sudo vim /etc/fstab
localhost:/data /data glusterfs defaults,_netdev 0 0
sudo systemctl daemon-reload && sudo mount -a
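The FUSE mount itself comes up fine; to confirm the mount type and that all brick processes are online, I'd use:
mount -t fuse.glusterfs
sudo gluster volume status data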
fio --name=test --filename=/data/wow --size=1G --readwrite=write
Run status group 0 (all jobs):
WRITE: bw=109MiB/s (115MB/s), 109MiB/s-109MiB/s (115MB/s-115MB/s), io=1024MiB (1074MB), run=9366-9366msec
Oh no, what's wrong? From 2182 MB/s down to only 115 MB/s? What am I missing?
I'm not expecting the ~17.5 Gbps the raw brick delivers, but I'd expect it to
at least be close(r) to the ~10 Gbps the network can do.
Any suggestions?
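If it helps, I can capture a volume profile while repeating the fio run and post the output:
sudo gluster volume profile data start
fio --name=test --filename=/data/wow --size=1G --readwrite=write
sudo gluster volume profile data info
sudo gluster volume profile data stop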