<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>I've been trying to find the file name from the GFID using references
such as
<a class="moz-txt-link-freetext" href="https://docs.gluster.org/en/latest/Troubleshooting/gfid-to-path/">https://docs.gluster.org/en/latest/Troubleshooting/gfid-to-path/</a>,
the script I referenced, and other methods, but no luck. The GFID in
the command below does not exist in the directory
/srv/gfs01/Projects/.glusterfs/63/5a, although other files with a GFID
for a name do exist there.</p>
<p>It appears the files do not exist. In addition, the file that is
in the 63/5a directory points to a file that does not
exist.<br>
</p>
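<p>For reference, this is roughly what I have been trying directly on
the brick, adapted from the gfid-to-path document (the GFID is one of
the entries from heal info; the find is only meaningful if the
.glusterfs entry actually exists):</p>
<p>GFID=6e5ab8ae-65f4-4594-9313-3483bf031adc<br>
# check whether the .glusterfs hard link for this GFID is present on the brick<br>
stat /srv/gfs01/Projects/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID<br>
# if it existed, a hard-link search should reveal the real file name<br>
find /srv/gfs01/Projects -samefile /srv/gfs01/Projects/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID<br>
</p>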
<div class="moz-cite-prefix">On 12/28/18 6:23 PM, Brett Holcomb
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:9d548f7b-1859-f438-2cb9-9ca1cb3baa86@l1049h.com">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<p>I've done step 1 with no results yet, so I'm trying step 2 but
can't find the file via the GFID name. The "gluster volume heal
projects info" output is in a text file, so I grabbed the first
entry from the file for Brick gfssrv1:/srv/gfs01/Projects, which
is listed as</p>
<p>&lt;gfid:the long gfid&gt;</p>
<p>I then tried to use this method here, <a
class="moz-txt-link-freetext"
href="https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/administration_guide/ch"
moz-do-not-send="true">https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/administration_guide/ch</a>,
to find the file. However, when I do the mount, there is no
.gfid directory anywhere.</p>
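<p>If I am reading that document correctly, the aux-gfid-mount approach
should look something like this (the mount point is just an example;
as I understand it the .gfid directory is virtual, never shows up in a
plain ls, and can only be accessed by path):</p>
<p>mount -t glusterfs -o aux-gfid-mount gfssrv1:/projects /mnt/projects<br>
getfattr -n glusterfs.ancestry.path -e text /mnt/projects/.gfid/6e5ab8ae-65f4-4594-9313-3483bf031adc<br>
</p>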
<p>I then used the Gluster GFID resolver from here, <a
class="moz-txt-link-freetext"
href="https://gist.github.com/semiosis/4392640"
moz-do-not-send="true">https://gist.github.com/semiosis/4392640</a>,
and that gives me this output, which shows no file linked to it.</p>
<p>[root@srv-1-gfs1 ~]# ./gfid-resolver.sh /srv/gfs01/Projects
6e5ab8ae-65f4-4594-9313-3483bf031adc<br>
6e5ab8ae-65f4-4594-9313-3483bf031adc == File:<br>
Done.<br>
</p>
<p>So at this point either I'm doing something wrong (most likely)
or the files do not exist. I've tried this on several files.</p>
<p><br>
</p>
<div class="moz-cite-prefix">On 12/28/18 1:00 AM, Ashish Pandey
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:988970243.54246776.1545976827971.JavaMail.zimbra@redhat.com">
<meta http-equiv="content-type" content="text/html;
charset=UTF-8">
<div style="font-family: times new roman, new york, times,

serif;
 font-size: 12pt; color: #000000">
<div><br>
</div>
<div>Hi Brett,<br>
</div>
<div><br>
</div>
<div>First, the answers to all your questions - <br>
</div>
<div><br>
</div>
<div>1. If a self-heal daemon is listed on a host (all of
mine show one with <br>
a volume status command) can I assume it's enabled and
running?<br>
</div>
<div><br>
</div>
<div>For your volume "projects", the self-heal daemon is UP and
running.<br>
</div>
<div><br>
2. I assume the volume that has all the self-heals pending
has some <br>
serious issues even though I can access the files and
directories on <br>
it. If self-heal is running shouldn't the numbers be
decreasing?</div>
<div>
<div><br>
</div>
<div>It should heal the entries, and the number of entries
reported by the "gluster v heal &lt;volname&gt; info" command should be
decreasing.<br>
</div>
</div>
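<div>For example, you can keep an eye on the count with something like
this (volume name taken from your output) - <br>
</div>
<div>watch -n 60 "gluster volume heal projects info summary"<br>
</div>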
<div><br>
It appears to me self-heal is not working properly so how do
I get it to <br>
start working or should I delete the volume and start over?</div>
<div><br>
</div>
<div>As you can access all the files from the mount point, I think
the volume and the files are in a good state as of now.<br>
</div>
<div>I don't think you should delete your volume
before trying to fix it.<br>
</div>
<div>If there is no fix, or the fix is taking too long, you can go
ahead with that option.<br>
</div>
<div><br>
</div>
<div>-----------------------<br>
</div>
<div>Why are all these options off? <br>
</div>
<div><br>
</div>
<div>performance.quick-read: off<br>
performance.parallel-readdir: off<br>
performance.readdir-ahead: off<br>
performance.write-behind: off<br>
performance.read-ahead: off</div>
<div><br>
</div>
<div>Although this should not matter for your issue, I think
you should enable all of the above unless you have a reason
not to do so.<br>
</div>
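<div><br>
</div>
<div>For example (one "gluster volume set" per option) - <br>
</div>
<div>gluster volume set projects performance.quick-read on<br>
gluster volume set projects performance.readdir-ahead on<br>
gluster volume set projects performance.parallel-readdir on<br>
gluster volume set projects performance.write-behind on<br>
gluster volume set projects performance.read-ahead on<br>
</div>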
<div>--------------------<br>
</div>
<div><br>
</div>
<div>I would like you to perform the following steps and provide
some more information - <br>
</div>
<div><br>
</div>
<div>1 - Try to restart self heal and see if that works. <br>
</div>
<div>"gluster v start volume force" will kill and restart the
self heal processes.<br>
</div>
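<div>For your volume that would be, for example - <br>
</div>
<div>gluster volume start projects force<br>
</div>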
<div><br>
</div>
<div>2 - If step 1 is not fruitful, get the list of entries that
need to be healed and pick one entry to heal. I mean
we should focus on one entry to find out why it is <br>
</div>
<div>not getting healed, instead of on all of the 5900 entries. Let's
call it entry1.<br>
</div>
<div><br>
</div>
<div>3 - Now access entry1 from the mount point, read and write
to it, and see if this entry has been healed. Check the heal
info. Accessing a file from the mount point triggers a client-side
heal<br>
</div>
<div>which could also heal the file.<br>
</div>
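<div>For example, assuming the volume is mounted at /mnt/projects
(adjust the path to your mount point) - <br>
</div>
<div>stat /mnt/projects/a/b/c/entry1<br>
md5sum /mnt/projects/a/b/c/entry1<br>
</div>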
<div><br>
</div>
<div>4 - Check the logs in /var/log/glusterfs; the mount logs
and glustershd logs should be checked and provided.<br>
</div>
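<div>On a default install these are usually - <br>
</div>
<div>/var/log/glusterfs/glustershd.log   (self-heal daemon log, on each node)<br>
/var/log/glusterfs/&lt;mount-point&gt;.log   (client/mount log on the client, named
after the mount point, e.g. mnt-projects.log)<br>
</div>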
<div><br>
</div>
<div>5 - Get the extended attributes of entry1 from all the
bricks.<br>
</div>
<div><br>
</div>
<div>If the path of entry1 on the mount point is /a/b/c/entry1,
then you have to run the following command on all the nodes - <br>
</div>
<div><br>
</div>
<div>getfattr -m. -d -e hex <path of the brick on the
node>/a/b/c/entry1<br>
</div>
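<div>For example, on the first node it would look something like this
(with the real path of entry1 under the brick substituted) - <br>
</div>
<div>getfattr -m . -d -e hex /srv/gfs01/Projects/a/b/c/entry1<br>
</div>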
<div><br>
</div>
<div>Please provide the output of the above command too.<br>
</div>
<div><br>
</div>
<div>---<br>
</div>
<div>Ashish<br>
</div>
<div><br>
</div>
<hr id="zwchr">
<div
style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>"Brett Holcomb" <a
class="moz-txt-link-rfc2396E"
href="mailto:biholcomb@l1049h.com" moz-do-not-send="true"><biholcomb@l1049h.com></a><br>
<b>To: </b><a class="moz-txt-link-abbreviated"
href="mailto:gluster-users@gluster.org"
moz-do-not-send="true">gluster-users@gluster.org</a><br>
<b>Sent: </b>Friday, December 28, 2018 3:49:50 AM<br>
<b>Subject: </b>Re: [Gluster-users] Self Heal Confusion<br>
<div><br>
</div>
<p>Resending as I did not reply to the list earlier; TBird
replied to the poster and not to the list.<br>
</p>
<div class="moz-cite-prefix">On 12/27/18 11:46 AM, Brett
Holcomb wrote:<br>
</div>
<blockquote
cite="mid:a09be52c-2001-a599-fbdc-64f8131d40ba@l1049h.com">
<p>Thank you, I appreciate the help. Here is the
information. Let me know if you need anything else.
I'm fairly new to Gluster.<br>
</p>
<p>Gluster version is 5.2</p>
<p>1. gluster v info<br>
</p>
<p>Volume Name: projects<br>
Type: Distributed-Replicate<br>
Volume ID: 5aac71aa-feaa-44e9-a4f9-cb4dd6e0fdc3<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 2 x 3 = 6<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: gfssrv1:/srv/gfs01/Projects<br>
Brick2: gfssrv2:/srv/gfs01/Projects<br>
Brick3: gfssrv3:/srv/gfs01/Projects<br>
Brick4: gfssrv4:/srv/gfs01/Projects<br>
Brick5: gfssrv5:/srv/gfs01/Projects<br>
Brick6: gfssrv6:/srv/gfs01/Projects<br>
Options Reconfigured:<br>
cluster.self-heal-daemon: enable<br>
performance.quick-read: off<br>
performance.parallel-readdir: off<br>
performance.readdir-ahead: off<br>
performance.write-behind: off<br>
performance.read-ahead: off<br>
performance.client-io-threads: off<br>
nfs.disable: on<br>
transport.address-family: inet<br>
server.allow-insecure: on<br>
storage.build-pgfid: on<br>
changelog.changelog: on<br>
changelog.capture-del-path: on<br>
<br>
</p>
<p>2. gluster v status<br>
</p>
<p>Status of volume: projects<br>
Gluster process                              TCP Port  RDMA Port  Online  Pid<br>
------------------------------------------------------------------------------<br>
Brick gfssrv1:/srv/gfs01/Projects            49154     0          Y       7213<br>
Brick gfssrv2:/srv/gfs01/Projects            49154     0          Y       6932<br>
Brick gfssrv3:/srv/gfs01/Projects            49154     0          Y       6920<br>
Brick gfssrv4:/srv/gfs01/Projects            49154     0          Y       6732<br>
Brick gfssrv5:/srv/gfs01/Projects            49154     0          Y       6950<br>
Brick gfssrv6:/srv/gfs01/Projects            49154     0          Y       6879<br>
Self-heal Daemon on localhost                N/A       N/A        Y       11484<br>
Self-heal Daemon on gfssrv2                  N/A       N/A        Y       10366<br>
Self-heal Daemon on gfssrv4                  N/A       N/A        Y       9872<br>
Self-heal Daemon on srv-1-gfs3.corp.l1049h.net  N/A    N/A        Y       9892<br>
Self-heal Daemon on gfssrv6                  N/A       N/A        Y       10372<br>
Self-heal Daemon on gfssrv5                  N/A       N/A        Y       10761<br>
<br>
Task Status of Volume projects<br>
------------------------------------------------------------------------------<br>
There are no active volume tasks</p>
<p>3. I've given the summary since the actual list for two
volumes is around 5900 entries.</p>
<p>Brick gfssrv1:/srv/gfs01/Projects<br>
Status: Connected<br>
Total Number of entries: 85<br>
Number of entries in heal pending: 85<br>
Number of entries in split-brain: 0<br>
Number of entries possibly healing: 0<br>
<br>
Brick gfssrv2:/srv/gfs01/Projects<br>
Status: Connected<br>
Total Number of entries: 0<br>
Number of entries in heal pending: 0<br>
Number of entries in split-brain: 0<br>
Number of entries possibly healing: 0<br>
<br>
Brick gfssrv3:/srv/gfs01/Projects<br>
Status: Connected<br>
Total Number of entries: 0<br>
Number of entries in heal pending: 0<br>
Number of entries in split-brain: 0<br>
Number of entries possibly healing: 0<br>
<br>
Brick gfssrv4:/srv/gfs01/Projects<br>
Status: Connected<br>
Total Number of entries: 0<br>
Number of entries in heal pending: 0<br>
Number of entries in split-brain: 0<br>
Number of entries possibly healing: 0<br>
</p>
<p>Brick gfssrv5:/srv/gfs01/Projects<br>
Status: Connected<br>
Total Number of entries: 58854<br>
Number of entries in heal pending: 58854<br>
Number of entries in split-brain: 0<br>
Number of entries possibly healing: 0<br>
<br>
Brick gfssrv6:/srv/gfs01/Projects<br>
Status: Connected<br>
Total Number of entries: 58854<br>
Number of entries in heal pending: 58854<br>
Number of entries in split-brain: 0<br>
Number of entries possibly healing: 0<br>
</p>
<div class="moz-cite-prefix">On 12/27/18 3:09 AM, Ashish
Pandey wrote:<br>
</div>
<blockquote
cite="mid:1851464190.54195617.1545898152060.JavaMail.zimbra@redhat.com">
<div style="font-family: times new roman, new
york,

 times,
 serif;
 font-size:
12pt; color: #000000" data-mce-style="font-family:
times new roman, new
 york,
 times,

serif; font-size: 12pt; color:
 #000000;">
<div>Hi Brett,<br>
</div>
<div><br>
</div>
<div>Could you please tell us more about the setup?<br>
</div>
<div><br>
</div>
<div>1 - Gluster v info<br>
</div>
<div>2 - gluster v status<br>
</div>
<div>3 - gluster v heal &lt;volname&gt; info<br>
</div>
<div><br>
</div>
<div>This is the very basic information needed to start
debugging or to suggest any workaround.</div>
<div>It should always be included when asking such
questions on the mailing list so that people can reply
sooner. <br>
</div>
<div><br>
</div>
<div><br>
</div>
<div>Note: Please hide IP addresses/hostnames or any
other information you don't want the world to see.<br>
</div>
<div><br>
</div>
<div>---<br>
</div>
<div>Ashish<br>
</div>
<div><br>
</div>
<hr id="zwchr">
<div
style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>"Brett Holcomb" <a
class="moz-txt-link-rfc2396E"
href="mailto:biholcomb@l1049h.com" target="_blank"
data-mce-href="mailto:biholcomb@l1049h.com"
moz-do-not-send="true"><biholcomb@l1049h.com></a><br>
<b>To: </b><a class="moz-txt-link-abbreviated"
href="mailto:gluster-users@gluster.org"
target="_blank"
data-mce-href="mailto:gluster-users@gluster.org"
moz-do-not-send="true">gluster-users@gluster.org</a><br>
<b>Sent: </b>Thursday, December 27, 2018 12:19:15
AM<br>
<b>Subject: </b>Re: [Gluster-users] Self Heal
Confusion<br>
<div><br>
</div>
<p>Still no change in the heals pending. I found
this reference, <a class="moz-txt-link-freetext"
href="https://archive.fosdem.org/2017/schedule/event/glusterselinux/attachments/slides/1876/export/events/attachments/glusterselinux/slides/1876/fosdem.pdf"
target="_blank"
data-mce-href="https://archive.fosdem.org/2017/schedule/event/glusterselinux/attachments/slides/1876/export/events/attachments/glusterselinux/slides/1876/fosdem.pdf"
moz-do-not-send="true">https://archive.fosdem.org/2017/schedule/event/glusterselinux/attachments/slides/1876/export/events/attachments/glusterselinux/slides/1876/fosdem.pdf</a>,
which mentions the default SELinux context for a
brick and says that internal operations such as
self-heal and rebalance should be ignored, but it
does not elaborate on what "ignored" means - is it just
not doing self-heal, or something else?</p>
<p>I did set SELinux to permissive and nothing
changed. I'll try setting the bricks to the
context mentioned in this PDF and see what
happens.</p>
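<p>If I do try that, I'm assuming the commands would be along these
lines (the glusterd_brick_t type name is taken from those slides, so
treat it as my guess):</p>
<p>semanage fcontext -a -t glusterd_brick_t "/srv/gfs01/Projects(/.*)?"<br>
restorecon -Rv /srv/gfs01/Projects<br>
</p>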
<p><br>
</p>
<div class="moz-cite-prefix">On 12/20/18 8:26 PM,
John Strunk wrote:<br>
</div>
<blockquote
cite="mid:CAMLs-gRPSPvjEAvZNZLyNYvruEGvs-v2WL6m=un5v00pXXcMnw@mail.gmail.com">
<div dir="ltr">Assuming your bricks are up... yes,
the heal count should be decreasing.
<div><br>
</div>
<div>There is/was a bug wherein self-heal would
stop healing but would still be running. I
don't know whether your version is affected,
but the remedy is to just restart the
self-heal daemon.</div>
<div>Force start one of the volumes that has
heals pending. The bricks are already running,
but it will cause shd to restart and, assuming
this is the problem, healing should begin...</div>
<div><br>
</div>
<div>$ gluster vol start my-pending-heal-vol
force</div>
<div><br>
</div>
<div>Others could better comment on the status
of the bug.</div>
<div><br>
</div>
<div>-John</div>
<div><br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr">On Thu, Dec 20, 2018 at 5:45 PM
Brett Holcomb &lt;<a
href="mailto:biholcomb@l1049h.com"
target="_blank"
data-mce-href="mailto:biholcomb@l1049h.com"
moz-do-not-send="true">biholcomb@l1049h.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote"
style="margin:0px
 0px

0px


 0.8ex;border-left:1px
solid




rgb(204,204,204);padding-left:1ex"
data-mce-style="margin: 0px

0px


 0px
 0.8ex;
border-left: 1px
 solid



#cccccc; padding-left:
 1ex;">I have one
volume that has 85 pending entries in healing
and two more <br>
volumes with 58,854 entries in healing
pending. These numbers are from <br>
the volume heal info summary command. They
have stayed constant for two <br>
days now. I've read the gluster docs and many
more. The Gluster docs <br>
just give some commands and non-Gluster docs
basically repeat that. <br>
Given that it appears no self-healing is going
on for my volume I am <br>
confused as to why.<br>
<br>
1. If a self-heal daemon is listed on a host
(all of mine show one with <br>
a volume status command) can I assume it's
enabled and running?<br>
<br>
2. I assume the volume that has all the
self-heals pending has some <br>
serious issues even though I can access the
files and directories on <br>
it. If self-heal is running shouldn't the
numbers be decreasing?<br>
<br>
It appears to me self-heal is not working
properly so how do I get it to <br>
start working or should I delete the volume
and start over?<br>
<br>
I'm running Gluster 5.2 on CentOS 7, latest and
updated.<br>
<br>
Thank you.<br>
<br>
<br>
</blockquote>
</div>
</blockquote>
</div>
<div><br>
</div>
</div>
</blockquote>
</blockquote>
</div>
<div><br>
</div>
</div>
</blockquote>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<pre class="moz-quote-pre" wrap="">_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a></pre>
</blockquote>
</body>
</html>