[Gluster-users] Heal pending problem

Grzegorz Lechowicz grzegorz.lechowicz at enigma.com.pl
Mon Oct 28 13:35:53 UTC 2019


Can anyone help me?

regards
grzegorz

On Mon, 28.10.2019 at 13:13 +0100, Grzegorz Lechowicz wrote:
> Hi, updating Gluster to 6.5 didn't resolve the problem :-(
> 
> regards
> grzegorz
> 
> On Mon, 28.10.2019 at 09:53 +0100, Grzegorz Lechowicz wrote:
> > Hi, I think I have a big problem after a network outage: one of my
> > volumes appears to be corrupted. Here are the details:
> > 
> > [root@hv-01 ~]# gluster --version
> > glusterfs 5.3
> > 
> > 1. volume info:
> > 
> > [root@hv-01 ~]# gluster volume info engine
> >  
> > Volume Name: engine
> > Type: Replicate
> > Volume ID: ca7c567a-1512-4c6c-964b-e9a2ca45b872
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 1 x (2 + 1) = 3
> > Transport-type: tcp
> > Bricks:
> > Brick1: hv-01.cencert.local:/gluster_bricks/engine/engine
> > Brick2: hv-02.cencert.local:/gluster_bricks/engine/engine
> > Brick3: arbiter.cencert.local:/gluster_bricks/engine/engine (arbiter)
> > Options Reconfigured:
> > cluster.eager-lock: enable
> > performance.io-cache: off
> > performance.read-ahead: off
> > performance.quick-read: off
> > user.cifs: off
> > network.ping-timeout: 30
> > network.remote-dio: off
> > performance.strict-o-direct: on
> > performance.low-prio-threads: 32
> > features.shard: on
> > storage.owner-gid: 36
> > storage.owner-uid: 36
> > transport.address-family: inet
> > nfs.disable: on
> > performance.client-io-threads: off
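> > 
> > A basic sanity check after a network outage (a minimal sketch, using the
> > volume name and hostnames above) is to confirm that all peers are connected
> > and that every brick and self-heal daemon process is online:
> > 
> > gluster peer status
> > gluster volume status engine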
> > 
> > 
> > 2. Volume heal info and summary
> > 
> > [root@hv-01 ~]# gluster volume heal engine info
> > Brick hv-01.cencert.local:/gluster_bricks/engine/engine
> > /7396183b-825b-4670-8646-5eb8d69f3480/images/02d6cf5e-179c-4730-8191-a1ee7b5a7737/30b0a24e-01cb-4d53-9cf1-422f8f217e33
> > Status: Connected
> > Number of entries: 1
> > 
> > Brick hv-02.cencert.local:/gluster_bricks/engine/engine
> > /7396183b-825b-4670-8646-5eb8d69f3480/images/02d6cf5e-179c-4730-8191-a1ee7b5a7737/30b0a24e-01cb-4d53-9cf1-422f8f217e33
> > Status: Connected
> > Number of entries: 1
> > 
> > Brick arbiter.cencert.local:/gluster_bricks/engine/engine
> > /7396183b-825b-4670-8646-5eb8d69f3480/images/02d6cf5e-179c-4730-8191-a1ee7b5a7737/30b0a24e-01cb-4d53-9cf1-422f8f217e33
> > Status: Connected
> > Number of entries: 1
> > 
> > 
> > [root@hv-01 ~]# gluster volume heal engine info summary
> > Brick hv-01.cencert.local:/gluster_bricks/engine/engine
> > Status: Connected
> > Total Number of entries: 1
> > Number of entries in heal pending: 1
> > Number of entries in split-brain: 0
> > Number of entries possibly healing: 0
> > 
> > Brick hv-02.cencert.local:/gluster_bricks/engine/engine
> > Status: Connected
> > Total Number of entries: 2
> > Number of entries in heal pending: 2
> > Number of entries in split-brain: 0
> > Number of entries possibly healing: 0
> > 
> > Brick arbiter.cencert.local:/gluster_bricks/engine/engine
> > Status: Connected
> > Total Number of entries: 2
> > Number of entries in heal pending: 2
> > Number of entries in split-brain: 0
> > Number of entries possibly healing: 0
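> > 
> > If the entries above stay in heal pending, the usual way to re-trigger
> > healing is (a minimal sketch, volume name as above):
> > 
> > gluster volume heal engine        # index heal of the entries listed above
> > gluster volume heal engine full   # full crawl, in case the index heal skips the file
> > gluster volume status engine      # confirm the Self-heal Daemon is online on all three nodes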
> > 
> > 
> > 3. Details about corrupted file
> > 
> > [root@hv-01 ~]# stat /gluster_bricks/engine/engine/7396183b-825b-4670-8646-5eb8d69f3480/images/02d6cf5e-179c-4730-8191-a1ee7b5a7737/30b0a24e-01cb-4d53-9cf1-422f8f217e33
> >   File: '/gluster_bricks/engine/engine/7396183b-825b-4670-8646-5eb8d69f3480/images/02d6cf5e-179c-4730-8191-a1ee7b5a7737/30b0a24e-01cb-4d53-9cf1-422f8f217e33'
> >   Size: 67108864     Blocks: 122968     IO Block: 4096   regular file
> > Device: fd0fh/64783d        Inode: 134349053   Links: 2
> > Access: (0660/-rw-rw----)  Uid: (   36/    vdsm)   Gid: (   36/     kvm)
> > Context: system_u:object_r:unlabeled_t:s0
> > Access: 2019-02-17 15:00:33.130284553 +0100
> > Modify: 2019-02-20 13:15:08.090856642 +0100
> > Change: 2019-10-25 11:13:15.174957824 +0200
> >  Birth: -
> > 
> > [root@hv-02 ~]# stat /gluster_bricks/engine/engine/7396183b-825b-4670-8646-5eb8d69f3480/images/02d6cf5e-179c-4730-8191-a1ee7b5a7737/30b0a24e-01cb-4d53-9cf1-422f8f217e33
> >   File: '/gluster_bricks/engine/engine/7396183b-825b-4670-8646-5eb8d69f3480/images/02d6cf5e-179c-4730-8191-a1ee7b5a7737/30b0a24e-01cb-4d53-9cf1-422f8f217e33'
> >   Size: 67108864     Blocks: 109320     IO Block: 4096   regular file
> > Device: fd14h/64788d        Inode: 201326687   Links: 2
> > Access: (0660/-rw-rw----)  Uid: (   36/    vdsm)   Gid: (   36/     kvm)
> > Context: system_u:object_r:unlabeled_t:s0
> > Access: 2019-02-17 15:00:33.130284553 +0100
> > Modify: 2019-02-20 13:15:08.090856642 +0100
> > Change: 2019-10-25 11:13:06.424507311 +0200
> >  Birth: -
> > 
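> > Note that the two data bricks report the same size and mtime but a
> > different number of allocated blocks (122968 vs 109320). A read-only way to
> > check whether the on-disk contents actually differ (a sketch, run on each
> > data node against the brick path above) would be:
> > 
> > md5sum /gluster_bricks/engine/engine/7396183b-825b-4670-8646-5eb8d69f3480/images/02d6cf5e-179c-4730-8191-a1ee7b5a7737/30b0a24e-01cb-4d53-9cf1-422f8f217e33
> > 
> > Identical checksums would mean only the allocation differs; differing
> > checksums are what a pending data heal would look like (and may also just
> > reflect ongoing writes to the image).
> > 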
> > [root@hv-01 ~]# getfattr -d -e hex -m . /gluster_bricks/engine/engine/7396183b-825b-4670-8646-5eb8d69f3480/images/02d6cf5e-179c-4730-8191-a1ee7b5a7737/30b0a24e-01cb-4d53-9cf1-422f8f217e33
> > getfattr: Removing leading '/' from absolute path names
> > # file: gluster_bricks/engine/engine/7396183b-825b-4670-8646-5eb8d69f3480/images/02d6cf5e-179c-4730-8191-a1ee7b5a7737/30b0a24e-01cb-4d53-9cf1-422f8f217e33
> > security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
> > trusted.afr.dirty=0x000000000000000000000000
> > trusted.afr.engine-client-1=0x000000010000000000000000
> > trusted.afr.engine-client-2=0x000000000000000000000000
> > trusted.gfid=0xf5af72751c1d498a97b7cee251b7f5a7
> > trusted.gfid2path.448e0d5596fdc4d9=0x38623630363364642d306562322d346362392d383038342d3435653132656362353336652f33306230613234652d303163622d346435332d396366312d343232663866323137653333
> > trusted.glusterfs.shard.block-size=0x0000000004000000
> > trusted.glusterfs.shard.file-size=0x000000200000000000000000000000000000000002bf8cdb0000000000000000
> > 
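> > For reference, each trusted.afr.engine-client-N value is three 32-bit
> > counters (pending data / metadata / entry operations), and client-0/1/2
> > correspond to Brick1/2/3 in the volume info above. So, if I read this
> > correctly, hv-01 records one pending data operation against the hv-02
> > brick. It would also be worth dumping the arbiter's view of the same file
> > with the same command (a sketch, brick path as above):
> > 
> > getfattr -d -e hex -m . /gluster_bricks/engine/engine/7396183b-825b-4670-8646-5eb8d69f3480/images/02d6cf5e-179c-4730-8191-a1ee7b5a7737/30b0a24e-01cb-4d53-9cf1-422f8f217e33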
> > 
> > [root@hv-02 ~]# getfattr -d -e hex -m . /gluster_bricks/engine/engine/7396183b-825b-4670-8646-5eb8d69f3480/images/02d6cf5e-179c-4730-8191-a1ee7b5a7737/30b0a24e-01cb-4d53-9cf1-422f8f217e33
> > getfattr: Removing leading '/' from absolute path names
> > # file: gluster_bricks/engine/engine/7396183b-825b-4670-8646-5eb8d69f3480/images/02d6cf5e-179c-4730-8191-a1ee7b5a7737/30b0a24e-01cb-4d53-9cf1-422f8f217e33
> > security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
> > trusted.afr.dirty=0x000000010000000000000000
> > trusted.afr.engine-client-0=0x000000010000000000000000
> > trusted.afr.engine-client-1=0x000000000000000000000000
> > trusted.gfid=0xf5af72751c1d498a97b7cee251b7f5a7
> > trusted.gfid2path.448e0d5596fdc4d9=0x38623630363364642d306562322d346362392d383038342d3435653132656362353336652f33306230613234652d303163622d346435332d396366312d343232663866323137653333
> > trusted.glusterfs.shard.block-size=0x0000000004000000
> > trusted.glusterfs.shard.file-size=0x000000200000000000000000000000000000000002bf8cdb0000000000000000
> > 
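> > Read together, the two data bricks blame each other for pending data
> > (hv-01 marks engine-client-1, while hv-02 marks engine-client-0 and also
> > has trusted.afr.dirty set), which, unless the arbiter's xattrs break the
> > tie, would explain why self-heal is not picking a source. If heal info ever
> > reports the file as split-brain, the CLI resolution commands would apply;
> > a sketch only, and the brick to keep has to be chosen deliberately since
> > the mtimes are identical here:
> > 
> > gluster volume heal engine split-brain source-brick hv-01.cencert.local:/gluster_bricks/engine/engine /7396183b-825b-4670-8646-5eb8d69f3480/images/02d6cf5e-179c-4730-8191-a1ee7b5a7737/30b0a24e-01cb-4d53-9cf1-422f8f217e33
> > 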
> > I'm also attaching the shd log output.
> > Can this be fixed? I really, really need to repair it.
> > 
> > regards
> > grzegorz
> 


