<div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Oct 15, 2020 at 4:04 PM Alvin Starr <<a href="mailto:alvin@netvel.net">alvin@netvel.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
We are running glusterfs-server-3.8.9-1.el7.x86_64<br></div></blockquote><div><br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">This was released >3.5 years ago. Any plans to upgrade?</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">Y.</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>
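P.S. 3.8 to a current release is a big jump, so treat the following as a
rough sketch only. It assumes CentOS 7 with the Storage SIG repo
(centos-release-gluster7 here is just one possible target) and a rolling,
one-node-at-a-time upgrade; please check the upgrade guide for whatever
version you actually target:

    # on each node in turn, after pending heals have settled
    systemctl stop glusterd
    yum install centos-release-gluster7   # Storage SIG repo (assumed target)
    yum update glusterfs-server
    systemctl start glusterd

    # once every node is upgraded, bump the cluster op-version
    # (the exact number depends on the version you land on)
    gluster volume set all cluster.op-version 70200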
>
> If there is any more info you need I am happy to provide it.
>
> gluster v info SYCLE-PROD-EDOCS:
>
> Volume Name: SYCLE-PROD-EDOCS
> Type: Replicate
> Volume ID: ada836a4-1456-4d7a-a00f-934038669127
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: edocs3:/bricks/sycle-prod/data
> Brick2: edocs4:/bricks/sycle-prod/data
> Options Reconfigured:
> transport.address-family: inet
> performance.readdir-ahead: off
> nfs.disable: on
> client.event-threads: 8
> features.bitrot: on
> features.scrub: Active
> features.scrub-freq: weekly
> features.scrub-throttle: normal
>
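Side note: since bitrot and scrubbing are enabled on this volume, it may be
worth checking whether the scrubber has flagged any of the stuck files as
corrupted, because files marked bad by bitrot are treated differently from
ordinary pending heals. A quick check (standard bitrot CLI; the output
format varies by version):

    gluster volume bitrot SYCLE-PROD-EDOCS scrub status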
> gluster v status SYCLE-PROD-EDOCS:
>
> Status of volume: SYCLE-PROD-EDOCS
> Gluster process                            TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick edocs3:/bricks/sycle-prod/data       49160     0          Y       51434
> Brick edocs4:/bricks/sycle-prod/data       49178     0          Y       25053
> Self-heal Daemon on localhost              N/A       N/A        Y       25019
> Bitrot Daemon on localhost                 N/A       N/A        Y       25024
> Scrubber Daemon on localhost               N/A       N/A        Y       25039
> Self-heal Daemon on edocs3                 N/A       N/A        Y       40404
> Bitrot Daemon on edocs3                    N/A       N/A        Y       40415
> Scrubber Daemon on edocs3                  N/A       N/A        Y       40426
>
> Task Status of Volume SYCLE-PROD-EDOCS
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> gluster v heal SYCLE-PROD-EDOCS info:
>
> Brick edocs3:/bricks/sycle-prod/data
> <gfid:145923e6-afcb-4262-997a-193d046b54ba>
> <gfid:af44f71c-19da-46ea-ac83-2a24efa76012>
> <gfid:4efd5b61-e084-490b-864a-60fea9b2f6b4>
> <gfid:9cb4d6f7-a9b6-4b74-a4c1-595dba07d14a>
> <gfid:cb77eca6-58b6-4375-9c41-228017b11e41>
> [sniped for brevity]
> <gfid:5531dd22-0026-4cfe-8f7e-82fc1e921d97>
> <gfid:2980fb47-8b9c-4d66-8463-8b4465fa733c>
> <gfid:99193e8d-1072-479b-a0c7-27e85dd3711f>
> <gfid:1ffbd140-8e6e-46a6-a23d-eb8badefed72>
> <gfid:0018c3ae-0195-44be-95f1-ffe3de82c1d9>
> Status: Connected
> Number of entries: 589
>
> Brick edocs4:/bricks/sycle-prod/data
> Status: Connected
> Number of entries: 0
>
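All 589 pending entries are on edocs3 and show up only as gfids, so the
first thing I would try is kicking off a full heal and, for whatever is
left, mapping each gfid back to its real path on the brick. A minimal
sketch (the gfid below is just the first one from your output; adjust the
brick path if your layout differs):

    # retry healing everything on the volume
    gluster volume heal SYCLE-PROD-EDOCS full

    # on edocs3: resolve a pending gfid to the named file on the brick.
    # For regular files the .glusterfs entry is a hard link, so
    # find -samefile locates the real path.
    GFID=145923e6-afcb-4262-997a-193d046b54ba
    find /bricks/sycle-prod/data -samefile \
        "/bricks/sycle-prod/data/.glusterfs/${GFID:0:2}/${GFID:2:2}/${GFID}"

Once you have the path, stat'ing the file through a client (FUSE) mount
usually triggers a heal on it; if an entry still refuses to heal, the
trusted.afr.* xattrs on both bricks (getfattr -d -m . -e hex <file>) will
show which copy the replicas consider bad.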
> On 10/15/20 2:29 AM, Ashish Pandey wrote:
<blockquote type="cite">
<div style="font-family:"times new roman","new york",times,serif;font-size:12pt;color:rgb(0,0,0)">
<div>It will require much more information than what you have
provided to fix this issue.<br>
</div>
<div><br>
</div>
<div>gluster v <volname> info<br>
</div>
<div>gluster v <volname> status<br>
</div>
<div>gluster v <volname> heal info</div>
<div><br>
</div>
<div>This is mainly to understand what is the volume type and
what is current status of bricks.<br>
</div>
<div>Knowing that, we can come up with next set of steps ti
debug and fix the issue.<br>
</div>
<div><br>
</div>
<div>Note: Please hide/mask hostname/Ip or any other
confidential information in above output.<br>
</div>
<div><br>
</div>
<div>---<br>
</div>
<div>Ashish<br>
</div>
<div><br>
</div>
<hr id="gmail-m_-7260168170160366002zwchr">
<div style="color:rgb(0,0,0);font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt"><b>From: </b>"Alvin
Starr" <a href="mailto:alvin@netvel.net" target="_blank"><alvin@netvel.net></a><br>
<b>To: </b>"gluster-user" <a href="mailto:gluster-users@gluster.org" target="_blank"><gluster-users@gluster.org></a><br>
<b>Sent: </b>Wednesday, October 14, 2020 10:45:10 PM<br>
<b>Subject: </b>[Gluster-users] problems with heal.<br>
<div><br>
</div>
>>> We are running a simple 2-server gluster cluster with a large number
>>> of small files.
>>>
>>> We had a problem where the clients lost connection to one of the
>>> servers, which forced the system into constant self-healing.
>>> We have since fixed the problem, but now I have about 600 files that
>>> will not self-heal.
>>>
>>> Is there any way to manually correct the problem?
<pre cols="72">--
Alvin Starr || land: (647)478-6285
Netvel Inc. || Cell: (416)806-0133
<a href="mailto:alvin@netvel.net" target="_blank">alvin@netvel.net</a> ||
</pre>
</div>
________


Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users