[Gluster-users] problems with heal.

Yaniv Kaul ykaul at redhat.com
Thu Oct 15 13:20:18 UTC 2020


On Thu, Oct 15, 2020 at 4:04 PM Alvin Starr <alvin at netvel.net> wrote:

> We are running glusterfs-server-3.8.9-1.el7.x86_64
>

This was released >3.5 years ago. Any plans to upgrade?
Y.

>
> If there is any more info you need, I am happy to provide it.
>
>
> gluster v info SYCLE-PROD-EDOCS:
>
> Volume Name: SYCLE-PROD-EDOCS
> Type: Replicate
> Volume ID: ada836a4-1456-4d7a-a00f-934038669127
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: edocs3:/bricks/sycle-prod/data
> Brick2: edocs4:/bricks/sycle-prod/data
> Options Reconfigured:
> transport.address-family: inet
> performance.readdir-ahead: off
> nfs.disable: on
> client.event-threads: 8
> features.bitrot: on
> features.scrub: Active
> features.scrub-freq: weekly
> features.scrub-throttle: normal
>
>
> gluster v status SYCLE-PROD-EDOCS:
>
> Status of volume: SYCLE-PROD-EDOCS
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick edocs3:/bricks/sycle-prod/data        49160     0          Y       51434
> Brick edocs4:/bricks/sycle-prod/data        49178     0          Y       25053
> Self-heal Daemon on localhost               N/A       N/A        Y       25019
> Bitrot Daemon on localhost                  N/A       N/A        Y       25024
> Scrubber Daemon on localhost                N/A       N/A        Y       25039
> Self-heal Daemon on edocs3                  N/A       N/A        Y       40404
> Bitrot Daemon on edocs3                     N/A       N/A        Y       40415
> Scrubber Daemon on edocs3                   N/A       N/A        Y       40426
>
> Task Status of Volume SYCLE-PROD-EDOCS
>
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> gluster v heal SYCLE-PROD-EDOCS info:
>
> Brick edocs3:/bricks/sycle-prod/data
> <gfid:145923e6-afcb-4262-997a-193d046b54ba>
> <gfid:af44f71c-19da-46ea-ac83-2a24efa76012>
> <gfid:4efd5b61-e084-490b-864a-60fea9b2f6b4>
> <gfid:9cb4d6f7-a9b6-4b74-a4c1-595dba07d14a>
> <gfid:cb77eca6-58b6-4375-9c41-228017b11e41>
> [snipped for brevity]
> <gfid:5531dd22-0026-4cfe-8f7e-82fc1e921d97>
> <gfid:2980fb47-8b9c-4d66-8463-8b4465fa733c>
> <gfid:99193e8d-1072-479b-a0c7-27e85dd3711f>
> <gfid:1ffbd140-8e6e-46a6-a23d-eb8badefed72>
> <gfid:0018c3ae-0195-44be-95f1-ffe3de82c1d9>
> Status: Connected
> Number of entries: 589
>
> Brick edocs4:/bricks/sycle-prod/data
> Status: Connected
> Number of entries: 0
>
>
>
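Since heal info lists only gfids, it can help to map each gfid back to a real
path on the brick before deciding how to repair it. A minimal sketch, run on
one of the bricks, assuming the standard .glusterfs layout under the brick
root (the brick path and the example gfid are taken from the output above):

    BRICK=/bricks/sycle-prod/data
    GFID=145923e6-afcb-4262-997a-193d046b54ba
    # The backing object lives at .glusterfs/<first 2 hex chars>/<next 2>/<gfid>
    OBJ=$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID
    ls -l "$OBJ"
    # For a regular file the object is a hard link, so the real path is:
    find "$BRICK" -samefile "$OBJ" -not -path '*/.glusterfs/*'
    # For a directory the object is a symlink, which resolves to the real path:
    readlink -f "$OBJ"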
> On 10/15/20 2:29 AM, Ashish Pandey wrote:
>
> We will need more information than you have provided so far to fix
> this issue.
>
> gluster v <volname> info
> gluster v <volname> status
> gluster v <volname> heal info
>
> This is mainly to understand the volume type and the current status of
> the bricks.
> Knowing that, we can come up with the next set of steps to debug and fix
> the issue.
>
> Note: Please hide/mask hostnames, IPs, or any other confidential
> information in the above output.
>
> ---
> Ashish
>
> ------------------------------
> *From: *"Alvin Starr" <alvin at netvel.net> <alvin at netvel.net>
> *To: *"gluster-user" <gluster-users at gluster.org>
> <gluster-users at gluster.org>
> *Sent: *Wednesday, October 14, 2020 10:45:10 PM
> *Subject: *[Gluster-users] problems with heal.
>
> We are running a simple 2-server gluster cluster with a large number of
> small files.
>
> We had a problem where the clients lost connection to one of the servers,
> which forced the system into constant self-healing.
> We have since fixed the problem, but now I have about 600 files that will
> not self-heal.
>
> Is there any way to manually correct the problem?
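A sketch of the CLI options that exist for this, with placeholders for the
file names (whether all of them are available depends on the exact 3.8.x
build, so treat this as a starting point rather than a definitive fix):

    # Re-trigger healing of the pending entries, or force a full crawl:
    gluster volume heal SYCLE-PROD-EDOCS
    gluster volume heal SYCLE-PROD-EDOCS full
    # Check whether the remaining entries are actually in split-brain:
    gluster volume heal SYCLE-PROD-EDOCS info split-brain
    # Split-brain files can be resolved per file from the CLI, either by
    # picking the copy with the newest mtime or by naming a source brick:
    gluster volume heal SYCLE-PROD-EDOCS split-brain latest-mtime <FILE>
    gluster volume heal SYCLE-PROD-EDOCS split-brain source-brick \
        edocs3:/bricks/sycle-prod/data <FILE>

Here <FILE> is a placeholder for either a path relative to the volume root or
the gfid string shown in heal info output.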
>
> --
> Alvin Starr                   ||   land:  (647)478-6285
> Netvel Inc.                   ||   Cell:  (416)806-0133
> alvin at netvel.net              ||
>
> ________
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> --
> Alvin Starr                   ||   land:  (647)478-6285
> Netvel Inc.                   ||   Cell:  (416)806-0133
> alvin at netvel.net              ||
>
>


More information about the Gluster-users mailing list