[Gluster-users] Cannot finish heal

Innostus - Arnold Boer arnold at innostus.com
Mon May 1 12:47:13 UTC 2023


Heal never finishes on a disperse 1 x (2 + 1) volume gv0. Something seems
to be wrong with a shard? Help and an explanation would be appreciated!
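
Concretely, I am not sure whether simply re-triggering the heal should
eventually clear this on its own, i.e. whether something like the following
is expected to be enough, or whether manual intervention on the bricks is
needed:

gluster volume heal gv0                # index heal
gluster volume heal gv0 full           # or a full heal
gluster volume heal gv0 info summary   # per-brick entry counts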


*gstatus*

Cluster:
      Status: Healthy          GlusterFS: 10.1
      Nodes: 3/3              Volumes: 1/1

Volumes:

gv0
                  Disperse          Started (UP) - 3/3 Bricks Up
                                    Capacity: (74.76% used) 648.00 GiB/867.00 GiB (used/total)
                                    Self-Heal:
armc1m1.net.innostus.com:/export/nvme0n1p3/brick (1 File(s) to heal).
armc1m3.net.innostus.com:/export/nvme0n1p3/brick (1 File(s) to heal).

*gluster volume info gv0*
Volume Name: gv0
Type: Disperse
Volume ID: d511c58e-45a0-4829-b41d-fb98885e6cf5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: armc1m1.net.innostus.com:/export/nvme0n1p3/brick
Brick2: armc1m2.net.innostus.com:/export/nvme0n1p3/brick
Brick3: armc1m3.net.innostus.com:/export/nvme0n1p3/brick
Options Reconfigured:
storage.build-pgfid: on
cluster.use-anonymous-inode: yes
features.shard: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
features.scrub: Active
features.bitrot: on
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
network.inode-lru-limit: 200000
performance.cache-samba-metadata: on
performance.readdir-ahead: on
performance.parallel-readdir: on
performance.nl-cache: on
performance.nl-cache-timeout: 600
performance.nl-cache-positive-entry: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: disable
performance.strict-o-direct: on
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
user.cifs: off
cluster.choose-local: off
client.event-threads: 4
server.event-threads: 4
performance.client-io-threads: on
network.ping-timeout: 20
server.tcp-user-timeout: 20
server.keepalive-time: 10
server.keepalive-interval: 2
server.keepalive-count: 5
cluster.lookup-optimize: off

*gluster volume heal gv0 info*

Brick armc1m1.net.innostus.com:/export/nvme0n1p3/brick
/.shard/37307871-a4b9-4492-9523-4c8446d0d163.27
Status: Connected
Number of entries: 1

Brick armc1m2.net.innostus.com:/export/nvme0n1p3/brick
Status: Connected
Number of entries: 0

Brick armc1m3.net.innostus.com:/export/nvme0n1p3/brick
<gfid:c348cefd-1cfe-442a-899e-9302f907f9e2>
Status: Connected
Number of entries: 1
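
If I understand the on-disk layout correctly, the gfid entry reported by
armc1m3 can be resolved through the .glusterfs hard-link tree on a brick,
and shard names under .shard are <gfid-of-base-file>.<block-number>, so the
shard above should be block 27 of the file whose gfid is
37307871-a4b9-4492-9523-4c8446d0d163. A rough sketch of how I would
cross-check this on a brick (brick path as above; please correct me if the
approach is wrong):

# hard link for the gfid reported by armc1m3
ls -li /export/nvme0n1p3/brick/.glusterfs/c3/48/c348cefd-1cfe-442a-899e-9302f907f9e2
# hard link for the shard's base file
ls -li /export/nvme0n1p3/brick/.glusterfs/37/30/37307871-a4b9-4492-9523-4c8446d0d163
# user-visible name of the base file (regular file, so -samefile finds the other hard link)
find /export/nvme0n1p3/brick -samefile /export/nvme0n1p3/brick/.glusterfs/37/30/37307871-a4b9-4492-9523-4c8446d0d163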

*root@armc1m1:~# getfattr -d -m . -e hex /export/nvme0n1p3/brick/.shard/37307871-a4b9-4492-9523-4c8446d0d163.27*
getfattr: Removing leading '/' from absolute path names
# file: export/nvme0n1p3/brick/.shard/37307871-a4b9-4492-9523-4c8446d0d163.27
trusted.bit-rot.version=0x0400000000000000644f8e31000ec56c
trusted.ec.config=0x0000080301000200
trusted.ec.dirty=0x00000000000000510000000000000051
trusted.ec.size=0x0000000004000000
trusted.ec.version=0x00000000000001050000000000000105
trusted.gfid=0xc348cefd1cfe442a899e9302f907f9e2
trusted.gfid2path.417a2b73213425c1=0x62653331383633382d653861302d346336642d393737642d3761393337616138343830362f33373330373837312d613462392d343439322d393532332d3463383434366430643136332e3237
trusted.glusterfs.mdata=0x01000000000000000000000000644bd9f0000000001be65cb300000000644bd9f0000000001be65cb300000000644b7fcc000000001c12b5cd
trusted.pgfid.be318638-e8a0-4c6d-977d-7a937aa84806=0x00000001
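
If I read the EC xattrs right (please correct me), the non-zero
trusted.ec.dirty halves mean this fragment still has pending updates that
the self-heal daemon should process, and trusted.ec.size is the file size
as seen by the disperse translator, which here matches the default 64 MiB
shard block size. A quick hex-to-decimal check of the values above:

printf '%d\n' 0x0000000004000000   # trusted.ec.size -> 67108864 bytes (64 MiB)
printf '%d\n' 0x51                 # each half of trusted.ec.dirty -> 81
printf '%d\n' 0x105                # each half of trusted.ec.version -> 261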

*root@armc1m2:~# getfattr -d -m . -e hex /export/nvme0n1p3/brick/.shard/37307871-a4b9-4492-9523-4c8446d0d163.27*
getfattr: Removing leading '/' from absolute path names
# file: export/nvme0n1p3/brick/.shard/37307871-a4b9-4492-9523-4c8446d0d163.27
trusted.ec.config=0x0000080301000200
trusted.ec.size=0x0000000000000000
trusted.ec.version=0x00000000000000000000000000000000
trusted.gfid=0x1e63467f038d47688e76bd808ecdccd0
trusted.gfid2path.417a2b73213425c1=0x62653331383633382d653861302d346336642d393737642d3761393337616138343830362f33373330373837312d613462392d343439322d393532332d3463383434366430643136332e3237
trusted.glusterfs.mdata=0x01000000000000000000000000644b7fcc000000001c12b5cd00000000644b7fcc000000001c12b5cd00000000644b7fcc000000001c12b5cd
trusted.pgfid.be318638-e8a0-4c6d-977d-7a937aa84806=0x00000001

*root@armc1m3:~# getfattr -d -m . -e hex /export/nvme0n1p3/brick/.shard/37307871-a4b9-4492-9523-4c8446d0d163.27*
getfattr: /export/nvme0n1p3/brick/.shard/37307871-a4b9-4492-9523-4c8446d0d163.27: No such file or directory
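
So, if I am reading this correctly: armc1m1 and armc1m2 both have a shard
file for the same parent/name (their trusted.gfid2path values are
identical), but the two copies carry different trusted.gfid values
(c348cefd vs 1e63467f), the copy on armc1m2 has all-zero
trusted.ec.size/trusted.ec.version, and armc1m3 has no shard file at all.
Decoding the gfid2path value just to show the parent/name really is the
same (assuming xxd is available on the bricks):

echo 62653331383633382d653861302d346336642d393737642d3761393337616138343830362f33373330373837312d613462392d343439322d393532332d3463383434366430643136332e3237 | xxd -r -p; echo
# -> be318638-e8a0-4c6d-977d-7a937aa84806/37307871-a4b9-4492-9523-4c8446d0d163.27

Is this the kind of gfid mismatch the self-heal daemon cannot resolve on
its own, and if so, what is the safe way to get rid of it?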


Kind regards,


-- 
<https://www.innostus.com/>
Arnold Boer
tel: 06-24499722
email: arnold at innostus.com
Verlengde Vaart NZ 124
7887EK, Erica

