[Gluster-users] Performance tuning suggestions for nvme on aws (Strahil)
Strahil Nikolov
hunter86_bg at yahoo.com
Mon Jan 6 20:36:25 UTC 2020
Hi Michael,
For the I/O scheduler - 'none' is used with multiqueue, while 'noop' is used when multiqueue is disabled - keep it at 'none'.
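For example, a quick check and set might look like this (a sketch only - it assumes the NVMe device is nvme0n1, adjust the device name to yours):

    cat /sys/block/nvme0n1/queue/scheduler          # prints e.g. "[none] mq-deadline kyber"
    echo none > /sys/block/nvme0n1/queue/scheduler  # needs root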
It's strange that all reads go to node1:/data/nvme/testvol1/brick; they should be balanced equally across the bricks.
Can you provide your gluster volume's info? What is the cluster.choose-local value?
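For example (assuming the volume is called testvol1, as the brick paths suggest):

    gluster volume info testvol1
    gluster volume get testvol1 cluster.choose-local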
Maybe you can give the sharding option a try (once enabled, do not disable it; if you need your data back, create a regular volume and copy the data over). Maybe the file is quite large and lives on a single brick, which would explain why only that brick serves reads while all writes still have to be pushed to all nodes. A rough sketch of enabling it follows below.
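A sketch of enabling sharding (volume name assumed; the shard size shown is the default):

    gluster volume set testvol1 features.shard on
    gluster volume set testvol1 features.shard-block-size 64MB   # default; tune to your workload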
You can define jumbo frames (MTU 9000) on the gluster interfaces - this should potentially give some extra juice.
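Something like the following, assuming the gluster traffic runs over eth1 (adjust to your interface; AWS instances generally support jumbo frames within a VPC):

    ip link set dev eth1 mtu 9000
    ip link show eth1 | grep -i mtu    # verify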
If you need extra performance, I guess you should get rid of FUSE and use NFS-Ganesha, which uses libgfapi and performs better.
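A minimal sketch of the client side once NFS-Ganesha is exporting the volume (the server name, export path and NFS version here are assumptions, not something from this thread):

    mount -t nfs -o vers=4.1 node1:/testvol1 /mnt/testvol1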
Best Regards,
Strahil Nikolov
On Monday, 6 January 2020, 00:31:42 GMT+2, Michael Richardson <hello at mikerichardson.com.au> wrote:
Hi Mohit and Strahil,
Thank you for taking the time to share your advice. Strahil, unfortunately your message didn't reach my inbox, so I'm combining my reply to both you and Mohit.
Mohit,
I mentioned there was no performance difference between using SSL and not using SSL. I did try setting it to AES128 anyway, but that seemed to have no effect either.
Strahil,
I've had a go at various settings this morning, including all of those you supplied. I couldn't see any improvement in throughput when adjusting these, but I could make things worse by changing settings such as event-threads.
I've also tried varying the fio test types, from one large file to many small files and a mix in between, without any real change in peak performance.
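For reference, a sketch of the kind of fio run being described (the parameters here are assumptions; only the 16k block size is taken from the profile output below):

    fio --name=glustertest --directory=/mnt/testvol1 --ioengine=libaio --direct=1 \
        --rw=randrw --bs=16k --size=1G --numjobs=4 --group_reporting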
In answer to your other question, I'm running Gluster 7.1-1 on Debian 10. I have confirmed that blk-mq is enabled and my scheduler is 'none'. I don't seem to have the option to change it to 'noop', but I'm not sure why.
Here's some profile output as well, in case that's helpful. These values don't look so bad (to me, at least), but again this is much less than the raw NVMe throughput I was hoping to get closer to.
Brick: node1:/data/nvme/testvol1/brick
-------------------------------------------------------------
Cumulative Stats:
Block Size: 16384b+
No. of Reads: 492397
No. of Writes: 2849474
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 57 FORGET
0.00 0.00 us 0.00 us 0.00 us 4133 RELEASE
0.00 0.00 us 0.00 us 0.00 us 22 RELEASEDIR
0.00 33.67 us 30.56 us 43.31 us 5 GETXATTR
0.01 26.05 us 14.36 us 128.33 us 870 ENTRYLK
0.02 53.48 us 43.23 us 87.31 us 435 FTRUNCATE
0.02 58.54 us 46.75 us 94.94 us 435 FALLOCATE
0.04 24.21 us 9.58 us 679.17 us 2436 FLUSH
0.04 145.95 us 116.32 us 211.72 us 435 CREATE
0.28 211.24 us 32.92 us 17442.92 us 2000 OPEN
0.38 82.16 us 34.00 us 15537.54 us 6929 LOOKUP
1.92 6683.77 us 437.30 us 28616.62 us 436 FSYNC
2.03 48.87 us 7.50 us 24840.37 us 63019 FINODELK
18.97 455.94 us 43.78 us 43701.58 us 63019 FXATTROP
28.06 134.43 us 41.87 us 36537.65 us 316217 WRITE
48.22 611.65 us 100.31 us 43842.80 us 119420 READ
Duration: 3505 seconds
Data Read: 8067432448 bytes
Data Written: 46685782016 bytes
Interval 0 Stats:
Block Size: 16384b+
No. of Reads: 492397
No. of Writes: 2849474
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 57 FORGET
0.00 0.00 us 0.00 us 0.00 us 4133 RELEASE
0.00 0.00 us 0.00 us 0.00 us 22 RELEASEDIR
0.00 33.67 us 30.56 us 43.31 us 5 GETXATTR
0.01 26.05 us 14.36 us 128.33 us 870 ENTRYLK
0.02 53.48 us 43.23 us 87.31 us 435 FTRUNCATE
0.02 58.54 us 46.75 us 94.94 us 435 FALLOCATE
0.04 24.21 us 9.58 us 679.17 us 2436 FLUSH
0.04 145.95 us 116.32 us 211.72 us 435 CREATE
0.28 211.24 us 32.92 us 17442.92 us 2000 OPEN
0.38 82.16 us 34.00 us 15537.54 us 6929 LOOKUP
1.92 6683.77 us 437.30 us 28616.62 us 436 FSYNC
2.03 48.87 us 7.50 us 24840.37 us 63019 FINODELK
18.97 455.94 us 43.78 us 43701.58 us 63019 FXATTROP
28.06 134.43 us 41.87 us 36537.65 us 316217 WRITE
48.23 611.65 us 100.31 us 43842.80 us 119420 READ
Duration: 3505 seconds
Data Read: 8067432448 bytes
Data Written: 46685782016 bytes
Brick: node2:/data/nvme/testvol1/brick
-------------------------------------------------------------
Cumulative Stats:
Block Size: 16384b+
No. of Reads: 0
No. of Writes: 2849474
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 57 FORGET
0.00 0.00 us 0.00 us 0.00 us 4133 RELEASE
0.00 0.00 us 0.00 us 0.00 us 22 RELEASEDIR
0.00 27.59 us 26.48 us 28.92 us 5 GETXATTR
0.06 16.31 us 11.92 us 43.37 us 870 ENTRYLK
0.07 40.53 us 36.62 us 163.89 us 435 FTRUNCATE
0.08 43.30 us 38.41 us 70.49 us 435 FALLOCATE
0.13 12.82 us 9.48 us 79.63 us 2436 FLUSH
0.21 114.31 us 96.53 us 166.02 us 435 CREATE
1.53 184.29 us 24.29 us 19723.10 us 2000 OPEN
1.82 63.25 us 25.23 us 19525.54 us 6929 LOOKUP
4.81 18.44 us 9.06 us 182.62 us 63019 FINODELK
11.69 6473.51 us 380.98 us 17129.47 us 436 FSYNC
17.41 66.69 us 44.52 us 12198.65 us 63019 FXATTROP
62.20 47.49 us 31.49 us 16215.18 us 316195 WRITE
Duration: 3486 seconds
Data Read: 0 bytes
Data Written: 46685782016 bytes
Interval 0 Stats:
Block Size: 16384b+
No. of Reads: 0
No. of Writes: 2849474
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 57 FORGET
0.00 0.00 us 0.00 us 0.00 us 4133 RELEASE
0.00 0.00 us 0.00 us 0.00 us 22 RELEASEDIR
0.00 27.59 us 26.48 us 28.92 us 5 GETXATTR
0.06 16.31 us 11.92 us 43.37 us 870 ENTRYLK
0.07 40.53 us 36.62 us 163.89 us 435 FTRUNCATE
0.08 43.30 us 38.41 us 70.49 us 435 FALLOCATE
0.13 12.82 us 9.48 us 79.63 us 2436 FLUSH
0.21 114.31 us 96.53 us 166.02 us 435 CREATE
1.53 184.29 us 24.29 us 19723.10 us 2000 OPEN
1.82 63.25 us 25.23 us 19525.54 us 6929 LOOKUP
4.81 18.44 us 9.06 us 182.62 us 63019 FINODELK
11.69 6473.51 us 380.98 us 17129.47 us 436 FSYNC
17.41 66.69 us 44.52 us 12198.65 us 63019 FXATTROP
62.20 47.49 us 31.49 us 16215.18 us 316195 WRITE
Duration: 3486 seconds
Data Read: 0 bytes
Data Written: 46685782016 bytes
Brick: node3:/data/nvme/testvol1/brick
-------------------------------------------------------------
Cumulative Stats:
Block Size: 16384b+
No. of Reads: 0
No. of Writes: 2849474
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 57 FORGET
0.00 0.00 us 0.00 us 0.00 us 4133 RELEASE
0.00 0.00 us 0.00 us 0.00 us 19 RELEASEDIR
0.00 30.60 us 27.30 us 39.71 us 5 GETXATTR
0.06 17.42 us 12.10 us 61.92 us 870 ENTRYLK
0.07 40.77 us 36.20 us 58.29 us 435 FTRUNCATE
0.08 45.15 us 38.82 us 68.36 us 435 FALLOCATE
0.13 13.22 us 9.49 us 79.73 us 2436 FLUSH
0.21 117.47 us 98.17 us 169.38 us 435 CREATE
1.47 176.64 us 26.93 us 19510.96 us 2000 OPEN
1.78 61.78 us 25.32 us 17795.12 us 6929 LOOKUP
4.98 19.05 us 10.17 us 7635.73 us 63019 FINODELK
11.53 6369.75 us 392.09 us 16146.95 us 436 FSYNC
17.82 68.11 us 44.52 us 11326.21 us 63019 FXATTROP
61.85 47.11 us 30.86 us 3458.30 us 316194 WRITE
Duration: 3481 seconds
Data Read: 0 bytes
Data Written: 46685782016 bytes
Interval 0 Stats:
Block Size: 16384b+
No. of Reads: 0
No. of Writes: 2849474
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 57 FORGET
0.00 0.00 us 0.00 us 0.00 us 4133 RELEASE
0.00 0.00 us 0.00 us 0.00 us 19 RELEASEDIR
0.00 30.60 us 27.30 us 39.71 us 5 GETXATTR
0.06 17.42 us 12.10 us 61.92 us 870 ENTRYLK
0.07 40.77 us 36.20 us 58.29 us 435 FTRUNCATE
0.08 45.15 us 38.82 us 68.36 us 435 FALLOCATE
0.13 13.22 us 9.49 us 79.73 us 2436 FLUSH
0.21 117.47 us 98.17 us 169.38 us 435 CREATE
1.47 176.64 us 26.93 us 19510.96 us 2000 OPEN
1.78 61.78 us 25.32 us 17795.12 us 6929 LOOKUP
4.98 19.05 us 10.17 us 7635.73 us 63019 FINODELK
11.53 6369.75 us 392.09 us 16146.95 us 436 FSYNC
17.82 68.11 us 44.52 us 11326.21 us 63019 FXATTROP
61.85 47.12 us 30.86 us 3458.30 us 316194 WRITE
Duration: 3481 seconds
Data Read: 0 bytes
Data Written: 46685782016 bytes
On Mon, Jan 6, 2020 at 12:49 AM Mohit Agrawal <moagrawa at redhat.com> wrote:
Hi,
Along with the previous tuning suggested by Strahil, please configure "ssl.cipher-list" to AES128 for the specific volume to improve performance. As you mentioned, you have configured SSL on a volume and performance drops in the SSL case. To improve it, please configure the AES128 cipher list; I hope you will see a sufficient performance improvement.
Please share the performance results after configuring the AES128 cipher, if that is possible for you.
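For example (volume name assumed):

    gluster volume set testvol1 ssl.cipher-list AES128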
Thanks,
Mohit Agrawal
________
Community Meeting Calendar:
APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968
NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968
Gluster-users mailing list
Gluster-users at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users