<div dir="ltr"><div>Hi,</div><div><br></div><div>To test corruption detection and repair, we deliberately modified a file inside the brick directory on server glusterfs1, and scheduled regular scrubs. The corruption was detected:</div><div><br></div><div>Error count: 1<br><br>Corrupted object's [GFID]:<br><br>9be5eecf-5ad8-4256-8b08-879aecf65881 ==> BRICK: /data/brick1/gv0<br> path: /prd/drupal-files-prd/inline-images/small - main building 1_0.jpg</div><div><br></div><div>We have self-healing enabled, and used these steps to correct the corrupted object:</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>user@glusterfs1:~$ sudo find /data/brick1/gv0/.glusterfs -name 9be5eecf-5ad8-4256-8b08-879aecf65881<br>/data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881<br>/data/brick1/gv0/.glusterfs/quarantine/9be5eecf-5ad8-4256-8b08-879aecf65881<br><br>user@glusterfs1:~$ sudo find /data/brick1 -samefile /data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881<br>/data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881<br>/data/brick1/gv0/prd/drupal-files-prd/inline-images/small - main building 1_0.jpg<br><br>user@glusterfs1:~$ sudo rm /data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881<br>user@glusterfs1:~$ sudo rm "/data/brick1/gv0/prd/drupal-files-prd/inline-images/small - main building 1_0.jpg"<br><br>user@glusterfs1:~$ stat /glusterfs/prd/drupal-files-prd/inline-images/"small - main building 1_0.jpg"<br>  File: /glusterfs/prd/drupal-files-prd/inline-images/small - main building 1_0.jpg<br>  Size: 6296651         Blocks: 12299      IO Block: 131072 regular file<br>Device: 2dh/45d Inode: 10018406470555949185  Links: 1<br>Access: (0664/-rw-rw-r--)  Uid: (42333178/ UNKNOWN)   Gid: (41000002/ UNKNOWN)<br>Access: 2022-11-22 11:58:10.080206283 +0100<br>Modify: 2022-05-12 17:00:41.000000000 +0200<br>Change: 2022-11-22 12:41:18.095579069 
+0100<br> Birth: -</div></blockquote><div><br></div><div>This seems to have worked, but we now want to verify the repair, so we used md5sum to compare the file across our three glusterfs servers:</div><div><br></div><div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">user@glusterfs1:~$ sudo find /data/brick1/gv0/.glusterfs -name 9be5eecf-5ad8-4256-8b08-879aecf65881<br>/data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881<br>/data/brick1/gv0/.glusterfs/quarantine/9be5eecf-5ad8-4256-8b08-879aecf65881<br><br>user@glusterfs1:~$ sudo md5sum /data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881<br>d41d8cd98f00b204e9800998ecf8427e  /data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881<br><br>user@glusterfs1:~$ sudo md5sum /data/brick1/gv0/.glusterfs/quarantine/9be5eecf-5ad8-4256-8b08-879aecf65881<br>d41d8cd98f00b204e9800998ecf8427e  /data/brick1/gv0/.glusterfs/quarantine/9be5eecf-5ad8-4256-8b08-879aecf65881<br></blockquote><br>but then on glusterfs2 and glusterfs3:<br><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">user@glusterfs2:~$ sudo md5sum /data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881<br>d4927e00e0db4498bcbbaedf3b5680ed  /data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881<br><br>user@glusterfs3:~$ sudo md5sum /data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881<br>d4927e00e0db4498bcbbaedf3b5680ed  /data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881</blockquote></div><div>The md5sum on the repaired server (glusterfs1) does NOT match the other two.</div><div><br></div><div>What is wrong in our logic, and why is this happening?</div><div><br></div><div>Some cluster info:</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>user@glusterfs2:~$ sudo gluster volume get gv0  all | grep self-heal<br>cluster.background-self-heal-count       8 
(DEFAULT)<br>cluster.metadata-self-heal               on<br>cluster.data-self-heal                   on<br>cluster.entry-self-heal                  on<br>cluster.self-heal-daemon                 on (DEFAULT)<br>cluster.self-heal-window-size            8 (DEFAULT)<br>cluster.data-self-heal-algorithm         (null) (DEFAULT)<br>cluster.self-heal-readdir-size           1KB (DEFAULT)<br>cluster.disperse-self-heal-daemon        enable (DEFAULT)<br>disperse.self-heal-window-size           32 (DEFAULT)<br></div></blockquote><div><br></div><div>GlusterFS 10.1 running on Ubuntu 22.04.1 x86_64.<br></div><div> </div><div>Any help would be appreciated!</div><div><br></div><div>MJ<br></div></div>
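<div><br></div><div>P.S. One observation that may help: the checksum we get on glusterfs1, d41d8cd98f00b204e9800998ecf8427e, is the MD5 digest of zero-byte input, so the gfid file on that brick looks like it is empty rather than fully healed. This can be confirmed locally with plain shell (no gluster involved, just a sanity check of the digest):</div>

```shell
# d41d8cd98f00b204e9800998ecf8427e is the well-known MD5 of empty input,
# i.e. what md5sum reports for a zero-byte file.
empty_md5=$(printf '' | md5sum | awk '{print $1}')
echo "$empty_md5"
# d41d8cd98f00b204e9800998ecf8427e
```

<div>So the question may come down to why the healed gfid file on glusterfs1 is still zero bytes, and why the quarantine entry is still present.</div>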