[Gluster-users] glusterfs vs. syncthing

Thomas tpdev.tester at gmail.com
Thu May 29 12:31:28 UTC 2025


I'm running a Syncthing instance on a Raspberry Pi 5 that is part of a Gluster
cluster ((*) details see below). Without the Syncthing service running I'm
absolutely happy with the GlusterFS performance (100+ MB/s), but when I start
the Syncthing service, performance drops after some time to a near standstill
(1 MB/s and less).

# ls -alh SOME_DIR takes 10-15 sec (or even longer)
# cd INTO_SOME_DIR takes 10-15 sec
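
(For reference, this is roughly how I took those timings, directly on the
FUSE mount at /mnt/wdVolume; SOME_DIR is just a placeholder for any
directory on the volume:)

     time ls -alh /mnt/wdVolume/SOME_DIR > /dev/null
     time bash -c 'cd /mnt/wdVolume/SOME_DIR'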

I see some glusterfs daemon activity (30-40%), but almost no network or disk
activity (and 2+ GB of free RAM), which tells me that Syncthing is not
exhausting the resources (but is possibly blocking some of them). I read
https://docs.gluster.org/en/main/Administrator-Guide/Performance-Testing/
and tried:

     gluster volume profile wdVolume start
     setfattr -n trusted.io-stats-dump -v io-stats-pre.txt /mnt/wdVolume
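
(In case it helps to reproduce this: the profile counters can also be read
back and reset from the CLI; if I read the docs correctly, the "clear"
variant resets the interval counters between test runs:)

     gluster volume profile wdVolume info
     gluster volume profile wdVolume info clear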

The results are in (**), but I don't know what to make of the numbers. The
cluster is healthy, see (***), and I can't see anything serious in
glusterd.log, see (****).

A few seconds after I stop the Syncthing service, performance is back at
100+ MB/s.
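
(For completeness, stopping/starting the service looks roughly like this;
the systemd unit name is just a placeholder and depends on how Syncthing
was installed:)

     # unit name is a placeholder for my local setup
     systemctl stop syncthing@pi.service
     # ... gluster throughput recovers within a few seconds ...
     systemctl start syncthing@pi.service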

I have no idea where to start looking for the cause of this slowdown; any
ideas are welcome.


Thanks in advance

Thomas


###################################################################################
(*)
root@tpi5wb3:/mnt/wdVolume# gluster volume info wdVolume
Volume Name: wdVolume
Type: Distributed-Disperse
Volume ID: 5b47f69a-7731-4c7b-85bf-a5014e2a5209
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: tPi5Wb:/mnt/glusterBricks/extWd18a/data
Brick2: 192.168.129.9:/mnt/glusterBricks/extWd18b/data
Brick3: tPi5Wb3:/mnt/glusterBricks/extWd18c/data
Brick4: tPi5Wb:/mnt/glusterBricks/extWd5a/data
Brick5: 192.168.129.9:/mnt/glusterBricks/extWd5b/data
Brick6: tPi5Wb3:/mnt/glusterBricks/extWd5x/data
Options Reconfigured:
storage.build-pgfid: on
diagnostics.client-log-level: DEBUG
cluster.disperse-self-heal-daemon: enable
transport.address-family: inet
storage.fips-mode-rchecksum: on
features.bitrot: on
features.scrub: Active
performance.cache-size: 64MB
disperse.shd-max-threads: 16
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
network.inode-lru-limit: 200000
performance.readdir-ahead: on
performance.parallel-readdir: on
performance.nl-cache: on
performance.nl-cache-timeout: 600
performance.nl-cache-positive-entry: on
###########################################################################
(**)
=== Interval 69 stats ===
       Duration : 103 secs
      BytesRead : 117440512
   BytesWritten : 0

Block Size   :          131072B+
Read Count   :               896
Write Count  :                 0

Fop           Call Count    Avg-Latency    Min-Latency Max-Latency
---           ----------    -----------    ----------- -----------
RELEASEDIR            21           0 us           0 us           0 us
------ ----- ----- ----- ----- ----- ----- -----  ----- ----- ----- -----

Sat May  3 12:21:40 CEST 2025

=== Cumulative stats ===
       Duration : 1027061 secs
      BytesRead : 3263114620730
   BytesWritten : 4524375262

Block Size   :               1B+               2B+ 4B+
Read Count   :                 0                 0 0
Write Count  :               317                 5 12

Block Size   :               8B+              16B+ 32B+
Read Count   :                 0                 0 4
Write Count  :                25               354 384

Block Size   :              64B+             128B+ 256B+
Read Count   :                 4                12 3
Write Count  :               452              3174 177

Block Size   :             512B+            1024B+ 2048B+
Read Count   :                 5                67 20
Write Count  :               819               644 517

Block Size   :            4096B+            8192B+ 16384B+
Read Count   :                94               159 238
Write Count  :               657                15 14

Block Size   :           32768B+           65536B+ 131072B+
Read Count   :               487              1542 24894452
Write Count  :                20                62 34400

Fop           Call Count    Avg-Latency    Min-Latency Max-Latency
---           ----------    -----------    ----------- -----------
STAT                 855 1636322233.15 us     129481.00 us 7613060937.00 us
READ               22774  867794690.83 us    7150668.00 us 5464263390.00 us
FLUSH                  1 1019668889.00 us 1019668889.00 us 1019668889.00 us
SETXATTR              10 4586558316.50 us 3273936240.00 us 6897317026.00 us
OPENDIR              459 3863731511.82 us  565874532.00 us 7436044801.00 us
FSTAT                  1     130574.00 us     130574.00 us 130574.00 us
LOOKUP             20464  793541909.91 us      26574.00 us 4389861051.00 us
SETATTR              124 4928152166.21 us 1786135647.00 us 8324248584.00 us
READDIRP            1905  315443817.07 us      44445.00 us 2690928799.00 us
FORGET              1000             0 us             0 us             0 us
RELEASE             1930             0 us             0 us             0 us
RELEASEDIR        745442             0 us             0 us             0 us
------ ----- ----- ----- ----- ----- ----- -----  ----- ----- ----- -----

Current open fd's: 4 Max open fd's: 21 time 2025-04-21 13:29:31.114171 +0000
...
########################################################################################
(***)
# gluster volume heal wdVolume info
Brick tPi5Wb:/mnt/glusterBricks/extWd18a/data
Status: Connected
Number of entries: 0

Brick 192.168.129.9:/mnt/glusterBricks/extWd18b/data
Status: Connected
Number of entries: 0

Brick tPi5Wb3:/mnt/glusterBricks/extWd18c/data
Status: Connected
Number of entries: 0

Brick tPi5Wb:/mnt/glusterBricks/extWd5a/data
Status: Connected
Number of entries: 0

Brick 192.168.129.9:/mnt/glusterBricks/extWd5b/data
Status: Connected
Number of entries: 0

Brick tPi5Wb3:/mnt/glusterBricks/extWd5x/data
Status: Connected
Number of entries: 0
########################################################################################
(****)
root@tpi5wb3:/var/log/glusterfs# cat glusterd.log | grep ' E '
[2025-05-03 23:23:04.415227 +0000] E [MSGID: 106061] 
[glusterd-utils.c:9944:glusterd_volume_rebalance_use_rsp_dict] 
0-glusterd: failed to get index from rsp dict
[2025-05-04 00:23:06.167938 +0000] E [MSGID: 106061] 
[glusterd-utils.c:9944:glusterd_volume_rebalance_use_rsp_dict] 
0-glusterd: failed to get index from rsp dict
[2025-05-04 01:23:06.427137 +0000] E [MSGID: 106061] 
[glusterd-utils.c:9944:glusterd_volume_rebalance_use_rsp_dict] 
0-glusterd: failed to get index from rsp dict
[2025-05-04 02:23:07.828266 +0000] E [MSGID: 106061] 
[glusterd-utils.c:9944:glusterd_volume_rebalance_use_rsp_dict] 
0-glusterd: failed to get index from rsp dict
[2025-05-04 03:23:09.458889 +0000] E [MSGID: 106061] 
[glusterd-utils.c:9944:glusterd_volume_rebalance_use_rsp_dict] 
0-glusterd: failed to get index from rsp dict
[2025-05-04 04:23:07.200384 +0000] E [MSGID: 106061] 
[glusterd-utils.c:9944:glusterd_volume_rebalance_use_rsp_dict] 
0-glusterd: failed to get index from rsp dict
[2025-05-04 05:23:12.085744 +0000] E [MSGID: 106061] 
[glusterd-utils.c:9944:glusterd_volume_rebalance_use_rsp_dict] 
0-glusterd: failed to get index from rsp dict
[2025-05-04 06:23:08.550034 +0000] E [MSGID: 106061] 
[glusterd-utils.c:9944:glusterd_volume_rebalance_use_rsp_dict] 
0-glusterd: failed to get index from rsp dict
[2025-05-04 07:23:10.401254 +0000] E [MSGID: 106061] 
[glusterd-utils.c:9944:glusterd_volume_rebalance_use_rsp_dict] 
0-glusterd: failed to get index from rsp dict


root@tpi5wb3:/var/log/glusterfs# cat glusterd.log | grep ' W '
[2025-05-04 02:29:24.540484 +0000] W 
[glusterd-locks.c:729:glusterd_mgmt_v3_unlock] 
(-->/lib/aarch64-linux-gnu/libgfrpc.so.0(+0xd9e0) [0xffffbb65d9e0] 
-->/usr/lib/aarch64-linux-gnu/glusterfs/11.1/xlator/mgmt/glusterd.so(+0x2f6f0) 
[0xffffb653f6f0] 
-->/usr/lib/aarch64-linux-gnu/glusterfs/11.1/xlator/mgmt/glusterd.so(+0xc0dc8) 
[0xffffb65d0dc8] ) 0-management: Lock for vol wdVolume not held
[2025-05-04 02:29:24.540534 +0000] W [MSGID: 106117] 
[glusterd-handler.c:6680:__glusterd_peer_rpc_notify] 0-management: Lock 
not released for wdVolume
root@tpi5wb3:/var/log/glusterfs#

##########################################################################################

