<div dir="ltr">Mohit,<div><br></div><div>Have we came across this kind of issue? This user using gluster 4.1 version. Did we fix any related bug afterwards?</div><div><br></div><div>Looks like setup has some issues but I'm not sure.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Jan 24, 2019 at 4:01 PM Shaik Salam <<a href="mailto:shaik.salam@tcs.com">shaik.salam@tcs.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><font size="2" face="sans-serif"> </font>
Hi Sanju,

Please find the requested information (these are the latest logs :) ).

I can see only the following error messages related to brick "brick_e15c12cceae12c8ab7782dd57cf5b6c1" (in the second node's log):

<br><font size="2" face="sans-serif">[2019-01-23 11:50:20.322902] I [glusterd-utils.c:5994:glusterd_brick_start]
0-management: discovered already-running brick /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick</font>
<br><font size="2" face="sans-serif">[2019-01-23 11:50:20.322925] I [MSGID:
106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/</font><font size="2" color="red" face="sans-serif">brick
on port 49165 >> showing running on port but not</font>
<br><font size="2" face="sans-serif">[2019-01-23 11:50:20.327557] I [MSGID:
106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs
already stopped</font>
<br><font size="2" face="sans-serif">[2019-01-23 11:50:20.327586] I [MSGID:
106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: nfs service
is stopped</font>
<br><font size="2" face="sans-serif">[2019-01-23 11:50:20.327604] I [MSGID:
106599] [glusterd-nfs-svc.c:82:glusterd_nfssvc_manager] 0-management: nfs/server.so
xlator is not installed</font>
<br><font size="2" face="sans-serif">[2019-01-23 11:50:20.337735] I [MSGID:
106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping
glustershd daemon running in pid: 69525</font>
<br><font size="2" face="sans-serif">[2019-01-23 11:50:21.338058] I [MSGID:
106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: glustershd
service is stopped</font>
<br><font size="2" face="sans-serif">[2019-01-23 11:50:21.338180] I [MSGID:
106567] [glusterd-svc-mgmt.c:203:glusterd_svc_start] 0-management: Starting
glustershd service</font>
<br><font size="2" face="sans-serif">[2019-01-23 11:50:21.348234] I [MSGID:
106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd
already stopped</font>
<br><font size="2" face="sans-serif">[2019-01-23 11:50:21.348285] I [MSGID:
106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: bitd
service is stopped</font>
<br><font size="2" face="sans-serif">[2019-01-23 11:50:21.348866] I [MSGID:
106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub
already stopped</font>
<br><font size="2" face="sans-serif">[2019-01-23 11:50:21.348883] I [MSGID:
106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: scrub
service is stopped</font>
<br><font size="2" face="sans-serif">[2019-01-23 11:50:22.356502] I [run.c:241:runner_log]
(-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a]
-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2605) [0x7fca9e139605]
-->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management:
Ran script: /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8
--first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd</font>
<br><font size="2" face="sans-serif">[2019-01-23 11:50:22.368845] E [run.c:241:runner_log]
(-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a]
-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2563) [0x7fca9e139563]
-->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management:
Failed to execute script: /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
--volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 --first=no --version=1 --volume-op=start
--gd-workdir=/var/lib/glusterd</font>
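
To cross-check that pmap message, something like the following could be run on the second node (a sketch; it assumes ss and ps are available inside the gluster pod):

sh-4.2# ss -tlnp | grep 49165          # is anything actually listening on the port pmap registered?
sh-4.2# ps -ef | grep brick_e15c12cceae12c8ab7782dd57cf5b6c1 | grep -v grep    # is a glusterfsd running for this brick?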
<br><font size="2" face="sans-serif"> </font>
<br>
<br><font size="2" face="Courier New">sh-4.2# gluster volume status vol_3442e86b6d994a14de73f1b8c82cf0b8</font>
<br><font size="2" face="Courier New">Status of volume: vol_3442e86b6d994a14de73f1b8c82cf0b8</font>
<br><font size="2" face="Courier New">Gluster process
TCP Port RDMA Port Online Pid</font>
<br><font size="2" face="Courier New">------------------------------------------------------------------------------</font>
<br><font size="2" face="Courier New">Brick 192.168.3.6:/var/lib/heketi/mounts/vg</font>
<br><font size="2" face="Courier New">_ca57f326195c243be2380ce4e42a4191/brick_952</font>
<br><font size="2" face="Courier New">d75fd193c7209c9a81acbc23a3747/brick
49157 0
Y 250</font>
<br><font size="2" face="Courier New">Brick 192.168.3.5:/var/lib/heketi/mounts/vg</font>
<br><font size="2" face="Courier New">_d5f17487744584e3652d3ca943b0b91b/brick_e15</font>
<br><font size="2" color="red" face="Courier New">c12cceae12c8ab7782dd57cf5b6c1/brick
N/A N/A
N N/A</font>
<br><font size="2" face="Courier New">Brick 192.168.3.15:/var/lib/heketi/mounts/v</font>
<br><font size="2" face="Courier New">g_462ea199185376b03e4b0317363bb88c/brick_17</font>
<br><font size="2" face="Courier New">36459d19e8aaa1dcb5a87f48747d04/brick
49173 0
Y 225</font>
<br><font size="2" face="Courier New">Self-heal Daemon on localhost
N/A N/A
Y 109550</font>
<br><font size="2" face="Courier New">Self-heal Daemon on 192.168.3.6
N/A N/A
Y 52557</font>
<br><font size="2" face="Courier New">Self-heal Daemon on 192.168.3.15
N/A N/A
Y 16946</font>
<br>
<br><font size="2" face="Courier New">Task Status of Volume vol_3442e86b6d994a14de73f1b8c82cf0b8</font>
<br><font size="2" face="Courier New">------------------------------------------------------------------------------</font>
<br><font size="2" face="Courier New">There are no active volume tasks</font>
<br>
<br>
<br><font size="2" face="sans-serif">BR</font>
<br><font size="2" face="sans-serif">Salam</font>
<br>
<br>
<br>
<br><font size="1" color="#5f5f5f" face="sans-serif">From:
</font><font size="1" face="sans-serif">"Sanju Rakonde"
<<a href="mailto:srakonde@redhat.com" target="_blank">srakonde@redhat.com</a>></font>
<br><font size="1" color="#5f5f5f" face="sans-serif">To:
</font><font size="1" face="sans-serif">"Shaik Salam"
<<a href="mailto:shaik.salam@tcs.com" target="_blank">shaik.salam@tcs.com</a>></font>
<br><font size="1" color="#5f5f5f" face="sans-serif">Cc:
</font><font size="1" face="sans-serif">"Amar Tumballi
Suryanarayan" <<a href="mailto:atumball@redhat.com" target="_blank">atumball@redhat.com</a>>, "<a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a>
List" <<a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a>>, "Murali Kottakota"
<<a href="mailto:murali.kottakota@tcs.com" target="_blank">murali.kottakota@tcs.com</a>></font>
<br><font size="1" color="#5f5f5f" face="sans-serif">Date:
</font><font size="1" face="sans-serif">01/24/2019 02:32 PM</font>
<br><font size="1" color="#5f5f5f" face="sans-serif">Subject:
</font><font size="1" face="sans-serif">Re: [Gluster-users]
[Bugs] Bricks are going offline unable to recover with heal/start force
commands</font>
<br>
<hr noshade>
<br>
<br>
<br><font size="2" color="#ff8141"><b>"External email. Open with Caution"</b></font>
<br><font size="3">Shaik,</font>
<br>
<br><font size="3">Sorry to ask this again. What errors are you seeing in
glusterd logs? Can you share the latest logs?</font>
<br>
<br><font size="3">On Thu, Jan 24, 2019 at 2:05 PM Shaik Salam <</font><a href="mailto:shaik.salam@tcs.com" target="_blank"><font size="3" color="blue"><u>shaik.salam@tcs.com</u></font></a><font size="3">>
wrote:</font>
<br><font size="2" face="sans-serif">Hi Sanju,</font><font size="3"> <br>
</font><font size="2" face="sans-serif"><br>
Please find requsted information.</font><font size="3"> <br>
<br>
Are you still seeing the error "Unable to read pidfile:" in glusterd
log?</font><font size="2" face="sans-serif"> >>>> No</font><font size="3">
<br>
Are you seeing "brick is deemed not to be a part of the volume"
error in glusterd log?</font><font size="2" face="sans-serif">>>>>
No</font><font size="3"> <br>
</font><font size="2" face="sans-serif"><br>
sh-4.2# getfattr -m -d -e hex /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick</font><font size="3">
</font><font size="2" face="sans-serif"><br>
sh-4.2# getfattr -m -d -e hex /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae1^C8ab7782dd57cf5b6c1/brick</font><font size="3">
</font><font size="2" face="sans-serif"><br>
sh-4.2# pwd</font><font size="3"> </font><font size="2" face="sans-serif"><br>
/var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick</font><font size="3">
</font><font size="2" face="sans-serif"><br>
sh-4.2# getfattr -m -d -e hex /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick</font><font size="3">
</font><font size="2" face="sans-serif"><br>
sh-4.2# getfattr -m -d -e hex /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick/</font><font size="3">
</font><font size="2" face="sans-serif"><br>
sh-4.2# getfattr -m -d -e hex /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick/</font><font size="3">
</font><font size="2" face="sans-serif"><br>
sh-4.2# getfattr -m -d -e hex /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick/</font><font size="3">
</font><font size="2" face="sans-serif"><br>
sh-4.2# getfattr -d -m . -e hex /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick/</font><font size="3">
</font><font size="2" face="sans-serif"><br>
getfattr: Removing leading '/' from absolute path names</font><font size="3">
</font><font size="2" face="sans-serif"><br>
# file: var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick/</font><font size="3">
</font><font size="2" face="sans-serif"><br>
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000</font><font size="3">
</font><font size="2" face="sans-serif"><br>
trusted.afr.dirty=0x000000000000000000000000</font><font size="3"> </font><font size="2" face="sans-serif"><br>
trusted.afr.vol_3442e86b6d994a14de73f1b8c82cf0b8-client-0=0x000000000000000000000000</font><font size="3">
</font><font size="2" face="sans-serif"><br>
trusted.gfid=0x00000000000000000000000000000001</font><font size="3"> </font><font size="2" face="sans-serif"><br>
trusted.glusterfs.dht=0x000000010000000000000000ffffffff</font><font size="3">
</font><font size="2" face="sans-serif"><br>
trusted.glusterfs.volume-id=0x15477f3622e84757a0ce9000b63fa849</font><font size="3">
<br>
</font><font size="2" face="sans-serif"><br>
sh-4.2# ls -la |wc -l</font><font size="3"> </font><font size="2" face="sans-serif"><br>
86</font><font size="3"> </font><font size="2" face="sans-serif"><br>
sh-4.2# pwd</font><font size="3"> </font><font size="2" face="sans-serif"><br>
/var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick</font><font size="3">
</font><font size="2" face="sans-serif"><br>
sh-4.2#</font><font size="3"> <br>
<br>
<br>
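
A note on the empty getfattr runs above: in "getfattr -m -d -e hex <path>", the -m option consumes "-d" as its match pattern, so no attribute name matches and nothing is printed. The invocation that finally produced output is the usual dump-everything form:

sh-4.2# getfattr -d -m . -e hex /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick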
</font><font size="1" color="#5f5f5f" face="sans-serif"><br>
From: </font><font size="1" face="sans-serif">"Sanju
Rakonde" <</font><a href="mailto:srakonde@redhat.com" target="_blank"><font size="1" color="blue" face="sans-serif"><u>srakonde@redhat.com</u></font></a><font size="1" face="sans-serif">></font><font size="3">
</font><font size="1" color="#5f5f5f" face="sans-serif"><br>
To: </font><font size="1" face="sans-serif">"Shaik
Salam" <</font><a href="mailto:shaik.salam@tcs.com" target="_blank"><font size="1" color="blue" face="sans-serif"><u>shaik.salam@tcs.com</u></font></a><font size="1" face="sans-serif">></font><font size="3">
</font><font size="1" color="#5f5f5f" face="sans-serif"><br>
Cc: </font><font size="1" face="sans-serif">"Amar
Tumballi Suryanarayan" <</font><a href="mailto:atumball@redhat.com" target="_blank"><font size="1" color="blue" face="sans-serif"><u>atumball@redhat.com</u></font></a><font size="1" face="sans-serif">>,
"</font><a href="mailto:gluster-users@gluster.org" target="_blank"><font size="1" color="blue" face="sans-serif"><u>gluster-users@gluster.org</u></font></a><font size="1" face="sans-serif">
List" <</font><a href="mailto:gluster-users@gluster.org" target="_blank"><font size="1" color="blue" face="sans-serif"><u>gluster-users@gluster.org</u></font></a><font size="1" face="sans-serif">>,
"Murali Kottakota" <</font><a href="mailto:murali.kottakota@tcs.com" target="_blank"><font size="1" color="blue" face="sans-serif"><u>murali.kottakota@tcs.com</u></font></a><font size="1" face="sans-serif">></font><font size="3">
</font><font size="1" color="#5f5f5f" face="sans-serif"><br>
Date: </font><font size="1" face="sans-serif">01/24/2019
01:38 PM</font><font size="3"> </font><font size="1" color="#5f5f5f" face="sans-serif"><br>
Subject: </font><font size="1" face="sans-serif">Re:
[Gluster-users] [Bugs] Bricks are going offline unable to recover with
heal/start force commands</font><font size="3"> <br>
</font>
<hr noshade><font size="3"><br>
<br>
</font><font size="2" color="#ff8141"><b><br>
"External email. Open with Caution"</b></font><font size="3"> <br>
Shaik,

Previously I suspected that the brick pid file was missing, but I see it is present.

From the second node (where this brick is in the offline state):
/var/run/gluster/vols/vol_3442e86b6d994a14de73f1b8c82cf0b8/192.168.3.5-var-lib-heketi-mounts-vg_d5f17487744584e3652d3ca943b0b91b-brick_e15c12cceae12c8ab7782dd57cf5b6c1-brick.pid
271

Are you still seeing the error "Unable to read pidfile:" in glusterd log?
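
It would also help to confirm that pid 271 is alive and really is the brick process, e.g. (a sketch, assuming ps and /proc are available in the pod):

sh-4.2# ps -p 271 -o pid,etime,cmd    # should show a glusterfsd with this brick path in its arguments
sh-4.2# ls -l /proc/271/exe           # should point at the glusterfsd binary if the pidfile is current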

I also suspect the brick may be missing its extended attributes. Are you seeing the "brick is deemed not to be a part of the volume" error in glusterd log? If not, can you please provide us the output of "getfattr -d -m . -e hex <brickpath>"?

On Thu, Jan 24, 2019 at 12:18 PM Shaik Salam <shaik.salam@tcs.com> wrote:

Hi Sanju,

Could you please have a look at my issue if you have time (or at least provide a workaround)?

BR
Salam

From:    Shaik Salam/HYD/TCS
To:      "Sanju Rakonde" <srakonde@redhat.com>
Cc:      "Amar Tumballi Suryanarayan" <atumball@redhat.com>, "gluster-users@gluster.org List" <gluster-users@gluster.org>, "Murali Kottakota" <murali.kottakota@tcs.com>
Date:    01/23/2019 05:50 PM
Subject: Re: [Gluster-users] [Bugs] Bricks are going offline unable to recover with heal/start force commands

----------------------------------------------------------------------

Hi Sanju,

Please find the requested information.

Sorry to repeat this again: I am running the start force command with the brick log level set to DEBUG, taking one volume as an example.
Please correct me if I am doing anything wrong.

</font><font size="2" face="Courier New"><br>
<br>
[root@master ~]# oc rsh glusterfs-storage-vll7x</font><font size="3"> </font><font size="2" face="Courier New"><br>
sh-4.2# gluster volume info </font><font size="2" color="blue" face="Courier New">vol_3442e86b6d994a14de73f1b8c82cf0b8</font><font size="3">
</font><font size="2" face="Courier New"><br>
<br>
Volume Name: </font><font size="2" color="blue" face="Courier New">vol_3442e86b6d994a14de73f1b8c82cf0b8</font><font size="3">
</font><font size="2" face="Courier New"><br>
Type: Replicate</font><font size="3"> </font><font size="2" face="Courier New"><br>
Volume ID: 15477f36-22e8-4757-a0ce-9000b63fa849</font><font size="3"> </font><font size="2" face="Courier New"><br>
Status: Started</font><font size="3"> </font><font size="2" face="Courier New"><br>
Snapshot Count: 0</font><font size="3"> </font><font size="2" face="Courier New"><br>
Number of Bricks: 1 x 3 = 3</font><font size="3"> </font><font size="2" face="Courier New"><br>
Transport-type: tcp</font><font size="3"> </font><font size="2" face="Courier New"><br>
Bricks:</font><font size="3"> </font><font size="2" face="Courier New"><br>
Brick1: 192.168.3.6:/var/lib/heketi/mounts/vg_ca57f326195c243be2380ce4e42a4191/brick_952d75fd193c7209c9a81acbc23a3747/brick</font><font size="3">
</font><font size="2" face="Courier New"><br>
Brick2: 192.168.3.5:/var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/</font><font size="2" color="blue" face="Courier New">brick_e15c12cceae12c8ab7782dd57cf5b6c1</font><font size="2" face="Courier New">/brick</font><font size="3">
</font><font size="2" face="Courier New"><br>
Brick3: 192.168.3.15:/var/lib/heketi/mounts/vg_462ea199185376b03e4b0317363bb88c/brick_1736459d19e8aaa1dcb5a87f48747d04/brick</font><font size="3">
</font><font size="2" face="Courier New"><br>
Options Reconfigured:</font><font size="3"> </font><font size="2" face="Courier New"><br>
diagnostics.brick-log-level: INFO</font><font size="3"> </font><font size="2" face="Courier New"><br>
performance.client-io-threads: off</font><font size="3"> </font><font size="2" face="Courier New"><br>
nfs.disable: on</font><font size="3"> </font><font size="2" face="Courier New"><br>
transport.address-family: inet</font><font size="3"> </font><font size="2" face="Courier New"><br>
sh-4.2# gluster volume status vol_3442e86b6d994a14de73f1b8c82cf0b8
Status of volume: vol_3442e86b6d994a14de73f1b8c82cf0b8
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.3.6:/var/lib/heketi/mounts/vg
_ca57f326195c243be2380ce4e42a4191/brick_952
d75fd193c7209c9a81acbc23a3747/brick         49157     0          Y       250
Brick 192.168.3.5:/var/lib/heketi/mounts/vg
_d5f17487744584e3652d3ca943b0b91b/brick_e15
c12cceae12c8ab7782dd57cf5b6c1/brick         N/A       N/A        N       N/A   <-- offline
Brick 192.168.3.15:/var/lib/heketi/mounts/v
g_462ea199185376b03e4b0317363bb88c/brick_17
36459d19e8aaa1dcb5a87f48747d04/brick        49173     0          Y       225
Self-heal Daemon on localhost               N/A       N/A        Y       108434
Self-heal Daemon on matrix1.matrix.orange.l
ab                                          N/A       N/A        Y       69525
Self-heal Daemon on matrix2.matrix.orange.l
ab                                          N/A       N/A        Y       18569

gluster volume set vol_3442e86b6d994a14de73f1b8c82cf0b8 diagnostics.brick-log-level DEBUG
volume set: success
sh-4.2# gluster volume get vol_3442e86b6d994a14de73f1b8c82cf0b8 all |grep log
cluster.entry-change-log                on
cluster.data-change-log                 on
cluster.metadata-change-log             on
diagnostics.brick-log-level             DEBUG

sh-4.2# cd /var/log/glusterfs/bricks/
sh-4.2# ls -la |grep brick_e15c12cceae12c8ab7782dd57cf5b6c1
-rw-------. 1 root root      0 Jan 20 02:46 var-lib-heketi-mounts-vg_d5f17487744584e3652d3ca943b0b91b-brick_e15c12cceae12c8ab7782dd57cf5b6c1-brick.log   >>> nothing in the current log (0 bytes)
-rw-------. 1 root root 189057 Jan 18 09:20 var-lib-heketi-mounts-vg_d5f17487744584e3652d3ca943b0b91b-brick_e15c12cceae12c8ab7782dd57cf5b6c1-brick.log-20190120
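
Since the brick log stays empty even at DEBUG, the brick process itself presumably never comes up, so only glusterd's own log records the start attempt. One way to capture that around a retry (a sketch; the path is the default glusterd log location and may differ on other installs):

sh-4.2# tail -f /var/log/glusterfs/glusterd.log &     # watch glusterd while retrying the start
sh-4.2# gluster volume start vol_3442e86b6d994a14de73f1b8c82cf0b8 force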

[2019-01-23 11:49:32.475956] I [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2605) [0x7fca9e139605] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 -o diagnostics.brick-log-level=DEBUG --gd-workdir=/var/lib/glusterd
[2019-01-23 11:49:32.483191] I [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2605) [0x7fca9e139605] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 -o diagnostics.brick-log-level=DEBUG --gd-workdir=/var/lib/glusterd
[2019-01-23 11:48:59.111292] W [MSGID: 106036] [glusterd-snapshot.c:9514:glusterd_handle_snapshot_fn] 0-management: Snapshot list failed
[2019-01-23 11:50:14.112271] E [MSGID: 106026] [glusterd-snapshot.c:3962:glusterd_handle_snapshot_list] 0-management: Volume (vol_63854b105c40802bdec77290e91858ea) does not exist [Invalid argument]
[2019-01-23 11:50:14.112305] W [MSGID: 106036] [glusterd-snapshot.c:9514:glusterd_handle_snapshot_fn] 0-management: Snapshot list failed
[2019-01-23 11:50:20.322902] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick
[2019-01-23 11:50:20.322925] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick on port 49165
[2019-01-23 11:50:20.327557] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped
[2019-01-23 11:50:20.327586] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: nfs service is stopped
[2019-01-23 11:50:20.327604] I [MSGID: 106599] [glusterd-nfs-svc.c:82:glusterd_nfssvc_manager] 0-management: nfs/server.so xlator is not installed
[2019-01-23 11:50:20.337735] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 69525
[2019-01-23 11:50:21.338058] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: glustershd service is stopped
[2019-01-23 11:50:21.338180] I [MSGID: 106567] [glusterd-svc-mgmt.c:203:glusterd_svc_start] 0-management: Starting glustershd service
[2019-01-23 11:50:21.348234] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2019-01-23 11:50:21.348285] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: bitd service is stopped
[2019-01-23 11:50:21.348866] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
[2019-01-23 11:50:21.348883] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: scrub service is stopped
[2019-01-23 11:50:22.356502] I [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2605) [0x7fca9e139605] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd
[2019-01-23 11:50:22.368845] E [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2563) [0x7fca9e139563] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management: Failed to execute script: /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd

sh-4.2# gluster volume status vol_3442e86b6d994a14de73f1b8c82cf0b8
Status of volume: vol_3442e86b6d994a14de73f1b8c82cf0b8
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.3.6:/var/lib/heketi/mounts/vg
_ca57f326195c243be2380ce4e42a4191/brick_952
d75fd193c7209c9a81acbc23a3747/brick         49157     0          Y       250
Brick 192.168.3.5:/var/lib/heketi/mounts/vg
_d5f17487744584e3652d3ca943b0b91b/brick_e15
c12cceae12c8ab7782dd57cf5b6c1/brick         N/A       N/A        N       N/A
Brick 192.168.3.15:/var/lib/heketi/mounts/v
g_462ea199185376b03e4b0317363bb88c/brick_17
36459d19e8aaa1dcb5a87f48747d04/brick        49173     0          Y       225
Self-heal Daemon on localhost               N/A       N/A        Y       109550
Self-heal Daemon on 192.168.3.6             N/A       N/A        Y       52557
Self-heal Daemon on 192.168.3.15            N/A       N/A        Y       16946

Task Status of Volume vol_3442e86b6d994a14de73f1b8c82cf0b8
------------------------------------------------------------------------------
There are no active volume tasks
</font><font size="1" color="#5f5f5f" face="sans-serif"><br>
<br>
From: </font><font size="1" face="sans-serif">"Sanju
Rakonde" <</font><a href="mailto:srakonde@redhat.com" target="_blank"><font size="1" color="blue" face="sans-serif"><u>srakonde@redhat.com</u></font></a><font size="1" face="sans-serif">></font><font size="3">
</font><font size="1" color="#5f5f5f" face="sans-serif"><br>
To: </font><font size="1" face="sans-serif">"Shaik
Salam" <</font><a href="mailto:shaik.salam@tcs.com" target="_blank"><font size="1" color="blue" face="sans-serif"><u>shaik.salam@tcs.com</u></font></a><font size="1" face="sans-serif">></font><font size="3">
</font><font size="1" color="#5f5f5f" face="sans-serif"><br>
Cc: </font><font size="1" face="sans-serif">"Amar
Tumballi Suryanarayan" <</font><a href="mailto:atumball@redhat.com" target="_blank"><font size="1" color="blue" face="sans-serif"><u>atumball@redhat.com</u></font></a><font size="1" face="sans-serif">>,
"</font><a href="mailto:gluster-users@gluster.org" target="_blank"><font size="1" color="blue" face="sans-serif"><u>gluster-users@gluster.org</u></font></a><font size="1" face="sans-serif">
List" <</font><a href="mailto:gluster-users@gluster.org" target="_blank"><font size="1" color="blue" face="sans-serif"><u>gluster-users@gluster.org</u></font></a><font size="1" face="sans-serif">>,
"Murali Kottakota" <</font><a href="mailto:murali.kottakota@tcs.com" target="_blank"><font size="1" color="blue" face="sans-serif"><u>murali.kottakota@tcs.com</u></font></a><font size="1" face="sans-serif">></font><font size="3">
</font><font size="1" color="#5f5f5f" face="sans-serif"><br>
Date: </font><font size="1" face="sans-serif">01/23/2019
02:15 PM</font><font size="3"> </font><font size="1" color="#5f5f5f" face="sans-serif"><br>
Subject: </font><font size="1" face="sans-serif">Re:
[Gluster-users] [Bugs] Bricks are going offline unable to recover with
heal/start force commands</font><font size="3"> <br>
</font>
<hr noshade><font size="3"><br>
</font><font size="2" color="#ff8141"><b><br>
<br>
"External email. Open with Caution"</b></font><font size="3"> <br>
Hi Shaik,

I can see the below errors in glusterd logs:

[2019-01-22 09:20:17.540196] E [MSGID: 101012] [common-utils.c:4010:gf_is_service_running] 0-: Unable to read pidfile: /var/run/gluster/vols/vol_e1aa1283d5917485d88c4a742eeff422/192.168.3.6-var-lib-heketi-mounts-vg_526f35058433c6b03130bba4e0a7dd87-brick_9e7c382e5f853d471c347bc5590359af-brick.pid
[2019-01-22 09:20:17.546408] E [MSGID: 101012] [common-utils.c:4010:gf_is_service_running] 0-: Unable to read pidfile: /var/run/gluster/vols/vol_f0ed498d7e781d7bb896244175b31f9e/192.168.3.6-var-lib-heketi-mounts-vg_56391bec3c8bfe4fc116de7bddfc2af4-brick_47ed9e0663ad0f6f676ddd6ad7e3dcde-brick.pid
[2019-01-22 09:20:17.552575] E [MSGID: 101012] [common-utils.c:4010:gf_is_service_running] 0-: Unable to read pidfile: /var/run/gluster/vols/vol_f387519c9b004ec14e80696db88ef0f8/192.168.3.6-var-lib-heketi-mounts-vg_56391bec3c8bfe4fc116de7bddfc2af4-brick_06ad6c73dfbf6a5fc21334f98c9973c2-brick.pid
[2019-01-22 09:20:17.558888] E [MSGID: 101012] [common-utils.c:4010:gf_is_service_running] 0-: Unable to read pidfile: /var/run/gluster/vols/vol_f8ca343c60e6efe541fe02d16ca02a7d/192.168.3.6-var-lib-heketi-mounts-vg_526f35058433c6b03130bba4e0a7dd87-brick_525225f65753b05dfe33aeaeb9c5de39-brick.pid
[2019-01-22 09:20:17.565266] E [MSGID: 101012] [common-utils.c:4010:gf_is_service_running] 0-: Unable to read pidfile: /var/run/gluster/vols/vol_fe882e074c0512fd9271fc2ff5a0bfe1/192.168.3.6-var-lib-heketi-mounts-vg_28708570b029e5eff0a996c453a11691-brick_d4f30d6e465a8544b759a7016fb5aab5-brick.pid
[2019-01-22 09:20:17.585926] E [MSGID: 106028] [glusterd-utils.c:8222:glusterd_brick_signal] 0-glusterd: Unable to get pid of brick process
[2019-01-22 09:20:17.617806] E [MSGID: 106028] [glusterd-utils.c:8222:glusterd_brick_signal] 0-glusterd: Unable to get pid of brick process
[2019-01-22 09:20:17.649628] E [MSGID: 101012] [common-utils.c:4010:gf_is_service_running] 0-: Unable to read pidfile: /var/run/gluster/glustershd/glustershd.pid
[2019-01-22 09:20:17.649700] E [MSGID: 101012] [common-utils.c:4010:gf_is_service_running] 0-: Unable to read pidfile: /var/run/gluster/glustershd/glustershd.pid

So it looks like neither gf_is_service_running() nor glusterd_brick_signal() is able to read the pid file. That suggests the pidfiles may have nothing in them to read.

Can you please paste the contents of the brick pidfiles? You can find brick pidfiles in /var/run/gluster/vols/<volname>/, or you can just run this command: "for i in `ls /var/run/gluster/vols/*/*.pid`;do echo $i;cat $i;done"

On Wed, Jan 23, 2019 at 12:49 PM Shaik Salam <shaik.salam@tcs.com> wrote:

Hi Sanju,

Please find the requested information in the attached logs.

The below brick is offline; we try the start force and heal commands, but it doesn't come up.

sh-4.2#
sh-4.2# gluster --version
glusterfs 4.1.5

sh-4.2# gluster volume status vol_3442e86b6d994a14de73f1b8c82cf0b8
Status of volume: vol_3442e86b6d994a14de73f1b8c82cf0b8
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.3.6:/var/lib/heketi/mounts/vg
_ca57f326195c243be2380ce4e42a4191/brick_952
d75fd193c7209c9a81acbc23a3747/brick         49166     0          Y       269
Brick 192.168.3.5:/var/lib/heketi/mounts/vg
_d5f17487744584e3652d3ca943b0b91b/brick_e15
c12cceae12c8ab7782dd57cf5b6c1/brick         N/A       N/A        N       N/A
Brick 192.168.3.15:/var/lib/heketi/mounts/v
g_462ea199185376b03e4b0317363bb88c/brick_17
36459d19e8aaa1dcb5a87f48747d04/brick        49173     0          Y       225
Self-heal Daemon on localhost               N/A       N/A        Y       45826
Self-heal Daemon on 192.168.3.6             N/A       N/A        Y       65196
Self-heal Daemon on 192.168.3.15            N/A       N/A        Y       52915

Task Status of Volume vol_3442e86b6d994a14de73f1b8c82cf0b8
------------------------------------------------------------------------------

We can see the following events when we run start force on the volume:

/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2605) [0x7fca9e139605] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd
[2019-01-21 08:22:34.555068] E [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2563) [0x7fca9e139563] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management: Failed to execute script: /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd
[2019-01-21 08:22:53.389049] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_3442e86b6d994a14de73f1b8c82cf0b8
[2019-01-21 08:23:25.346839] I [MSGID: 106487] [glusterd-handler.c:1486:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req

We can see the following events when we heal the volume:

[2019-01-21 08:20:07.576070] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0
[2019-01-21 08:20:07.580225] I [cli-rpc-ops.c:9182:gf_cli_heal_volume_cbk] 0-cli: Received resp to heal volume
[2019-01-21 08:20:07.580326] I [input.c:31:cli_batch] 0-: Exiting with: -1
[2019-01-21 08:22:30.423311] I [cli.c:768:main] 0-cli: Started running gluster with version 4.1.5
[2019-01-21 08:22:30.463648] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2019-01-21 08:22:30.463718] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2019-01-21 08:22:30.463859] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0
[2019-01-21 08:22:33.427710] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2019-01-21 08:22:34.581555] I [cli-rpc-ops.c:1472:gf_cli_start_volume_cbk] 0-cli: Received resp to start volume
[2019-01-21 08:22:34.581678] I [input.c:31:cli_batch] 0-: Exiting with: 0
[2019-01-21 08:22:53.345351] I [cli.c:768:main] 0-cli: Started running gluster with version 4.1.5
[2019-01-21 08:22:53.387992] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2019-01-21 08:22:53.388059] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2019-01-21 08:22:53.388138] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0
[2019-01-21 08:22:53.394737] I [input.c:31:cli_batch] 0-: Exiting with: 0
[2019-01-21 08:23:25.304688] I [cli.c:768:main] 0-cli: Started running gluster with version 4.1.5
[2019-01-21 08:23:25.346319] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2019-01-21 08:23:25.346389] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2019-01-21 08:23:25.346500] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0

We enabled DEBUG mode at the brick level, but nothing is being written to the brick log:

gluster volume set vol_3442e86b6d994a14de73f1b8c82cf0b8 diagnostics.brick-log-level DEBUG

sh-4.2# pwd
/var/log/glusterfs/bricks

sh-4.2# ls -la |grep brick_e15c12cceae12c8ab7782dd57cf5b6c1
-rw-------. 1 root root 0 Jan 20 02:46 var-lib-heketi-mounts-vg_d5f17487744584e3652d3ca943b0b91b-brick_e15c12cceae12c8ab7782dd57cf5b6c1-brick.log
</font><font size="1" color="#5f5f5f" face="sans-serif"><br>
<br>
<br>
From: </font><font size="1" face="sans-serif">Sanju
Rakonde <</font><a href="mailto:srakonde@redhat.com" target="_blank"><font size="1" color="blue" face="sans-serif"><u>srakonde@redhat.com</u></font></a><font size="1" face="sans-serif">></font><font size="3">
</font><font size="1" color="#5f5f5f" face="sans-serif"><br>
To: </font><font size="1" face="sans-serif">Shaik
Salam <</font><a href="mailto:shaik.salam@tcs.com" target="_blank"><font size="1" color="blue" face="sans-serif"><u>shaik.salam@tcs.com</u></font></a><font size="1" face="sans-serif">></font><font size="3">
</font><font size="1" color="#5f5f5f" face="sans-serif"><br>
Cc: </font><font size="1" face="sans-serif">Amar
Tumballi Suryanarayan <</font><a href="mailto:atumball@redhat.com" target="_blank"><font size="1" color="blue" face="sans-serif"><u>atumball@redhat.com</u></font></a><font size="1" face="sans-serif">>,
"</font><a href="mailto:gluster-users@gluster.org" target="_blank"><font size="1" color="blue" face="sans-serif"><u>gluster-users@gluster.org</u></font></a><font size="1" face="sans-serif">
List" <</font><a href="mailto:gluster-users@gluster.org" target="_blank"><font size="1" color="blue" face="sans-serif"><u>gluster-users@gluster.org</u></font></a><font size="1" face="sans-serif">></font><font size="3">
</font><font size="1" color="#5f5f5f" face="sans-serif"><br>
Date: </font><font size="1" face="sans-serif">01/22/2019
02:21 PM</font><font size="3"> </font><font size="1" color="#5f5f5f" face="sans-serif"><br>
Subject: </font><font size="1" face="sans-serif">Re:
[Gluster-users] [Bugs] Bricks are going offline unable to recover with
heal/start force commands</font><font size="3"> <br>
</font>
<hr noshade><font size="2" color="#ff8141"><b><br>
<br>
<br>
"External email. Open with Caution"</b></font><font size="3"> <br>
Hi Shaik,

Can you please provide us the complete glusterd and cmd_history logs from all the nodes in the cluster? Also please paste the output of the following commands (from all nodes):
1. gluster --version
2. gluster volume info
3. gluster volume status
4. gluster peer status
5. ps -ax | grep glusterfsd

On Tue, Jan 22, 2019 at 12:47 PM Shaik Salam <shaik.salam@tcs.com> wrote:

Hi Surya,

It is already a customer setup and we can't redeploy it.
Debug was enabled for the brick-level log, but nothing is being written to it.
Can you tell me if there are any other ways to troubleshoot, or other logs to look at?
From: </font><font size="1" face="sans-serif">Shaik
Salam/HYD/TCS</font><font size="3"> </font><font size="1" color="#5f5f5f" face="sans-serif"><br>
To: </font><font size="1" face="sans-serif">"Amar
Tumballi Suryanarayan" <</font><a href="mailto:atumball@redhat.com" target="_blank"><font size="1" color="blue" face="sans-serif"><u>atumball@redhat.com</u></font></a><font size="1" face="sans-serif">></font><font size="3">
</font><font size="1" color="#5f5f5f" face="sans-serif"><br>
Cc: </font><font size="1" face="sans-serif">"</font><a href="mailto:gluster-users@gluster.org" target="_blank"><font size="1" color="blue" face="sans-serif"><u>gluster-users@gluster.org</u></font></a><font size="1" face="sans-serif">
List" <</font><a href="mailto:gluster-users@gluster.org" target="_blank"><font size="1" color="blue" face="sans-serif"><u>gluster-users@gluster.org</u></font></a><font size="1" face="sans-serif">></font><font size="3">
</font><font size="1" color="#5f5f5f" face="sans-serif"><br>
Date: </font><font size="1" face="sans-serif">01/22/2019
12:06 PM</font><font size="3"> </font><font size="1" color="#5f5f5f" face="sans-serif"><br>
Subject: </font><font size="1" face="sans-serif">Re:
[Bugs] Bricks are going offline unable to recover with heal/start force
commands</font><font size="3"> <br>
</font>
<hr noshade><font size="2" face="sans-serif"><br>
<br>
Hi Surya,

I have enabled DEBUG mode at the brick level, but nothing is being written to the brick log:

gluster volume set vol_3442e86b6d994a14de73f1b8c82cf0b8 diagnostics.brick-log-level DEBUG

sh-4.2# pwd
/var/log/glusterfs/bricks

sh-4.2# ls -la |grep brick_e15c12cceae12c8ab7782dd57cf5b6c1
-rw-------. 1 root root 0 Jan 20 02:46 var-lib-heketi-mounts-vg_d5f17487744584e3652d3ca943b0b91b-brick_e15c12cceae12c8ab7782dd57cf5b6c1-brick.log

BR
Salam
</font><font size="1" color="#5f5f5f" face="sans-serif"><br>
<br>
<br>
<br>
From: </font><font size="1" face="sans-serif">"Amar
Tumballi Suryanarayan" <</font><a href="mailto:atumball@redhat.com" target="_blank"><font size="1" color="blue" face="sans-serif"><u>atumball@redhat.com</u></font></a><font size="1" face="sans-serif">></font><font size="3">
</font><font size="1" color="#5f5f5f" face="sans-serif"><br>
To: </font><font size="1" face="sans-serif">"Shaik
Salam" <</font><a href="mailto:shaik.salam@tcs.com" target="_blank"><font size="1" color="blue" face="sans-serif"><u>shaik.salam@tcs.com</u></font></a><font size="1" face="sans-serif">></font><font size="3">
</font><font size="1" color="#5f5f5f" face="sans-serif"><br>
Cc: </font><font size="1" face="sans-serif">"</font><a href="mailto:gluster-users@gluster.org" target="_blank"><font size="1" color="blue" face="sans-serif"><u>gluster-users@gluster.org</u></font></a><font size="1" face="sans-serif">
List" <</font><a href="mailto:gluster-users@gluster.org" target="_blank"><font size="1" color="blue" face="sans-serif"><u>gluster-users@gluster.org</u></font></a><font size="1" face="sans-serif">></font><font size="3">
</font><font size="1" color="#5f5f5f" face="sans-serif"><br>
Date: </font><font size="1" face="sans-serif">01/22/2019
11:38 AM</font><font size="3"> </font><font size="1" color="#5f5f5f" face="sans-serif"><br>
Subject: </font><font size="1" face="sans-serif">Re:
[Bugs] Bricks are going offline unable to recover with heal/start force
commands</font><font size="3"> <br>
</font>
<hr noshade><font size="2" color="#ff8141"><b><br>
<br>
<br>
"External email. Open with Caution"</b></font><font size="3"> <br>
Hi Shaik,

Can you check what is in the brick logs? They are located in /var/log/glusterfs/bricks/*.

It looks like the samba hook script failed, but that shouldn't matter in this use case.

Also, I see that you are trying to set up heketi to provision volumes, which means you may be using gluster in container use cases. If you are still in the 'PoC' phase, can you give https://github.com/gluster/gcs a try? That makes the deployment and the stack a little simpler.

-Amar

On Tue, Jan 22, 2019 at 11:29 AM Shaik Salam <shaik.salam@tcs.com> wrote:

Can anyone suggest how to recover the bricks, other than the heal/start force commands, given the events in the logs below?
Please let me know if any other logs are required.
Thanks in advance.

BR
Salam
From: </font><font size="1" face="sans-serif">Shaik
Salam/HYD/TCS</font><font size="3"> </font><font size="1" color="#5f5f5f" face="sans-serif"><br>
To: </font><a href="mailto:bugs@gluster.org" target="_blank"><font size="1" color="blue" face="sans-serif"><u>bugs@gluster.org</u></font></a><font size="1" face="sans-serif">,
</font><a href="mailto:gluster-users@gluster.org" target="_blank"><font size="1" color="blue" face="sans-serif"><u>gluster-users@gluster.org</u></font></a><font size="3">
</font><font size="1" color="#5f5f5f" face="sans-serif"><br>
Date: </font><font size="1" face="sans-serif">01/21/2019
10:03 PM</font><font size="3"> </font><font size="1" color="#5f5f5f" face="sans-serif"><br>
Subject: </font><font size="1" face="sans-serif">Bricks
are going offline unable to recover with heal/start force commands</font><font size="3">
<br>
</font>
<hr noshade><font size="2" face="sans-serif"><br>
<br>
Hi,

Bricks are offline and we are unable to recover them with the following commands:

gluster volume heal <vol-name>

gluster volume start <vol-name> force

But the bricks are still offline.

sh-4.2# gluster volume status vol_3442e86b6d994a14de73f1b8c82cf0b8
Status of volume: vol_3442e86b6d994a14de73f1b8c82cf0b8
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.3.6:/var/lib/heketi/mounts/vg
_ca57f326195c243be2380ce4e42a4191/brick_952
d75fd193c7209c9a81acbc23a3747/brick         49166     0          Y       269
Brick 192.168.3.5:/var/lib/heketi/mounts/vg
_d5f17487744584e3652d3ca943b0b91b/brick_e15
c12cceae12c8ab7782dd57cf5b6c1/brick         N/A       N/A        N       N/A
Brick 192.168.3.15:/var/lib/heketi/mounts/v
g_462ea199185376b03e4b0317363bb88c/brick_17
36459d19e8aaa1dcb5a87f48747d04/brick        49173     0          Y       225
Self-heal Daemon on localhost               N/A       N/A        Y       45826
Self-heal Daemon on 192.168.3.6             N/A       N/A        Y       65196
Self-heal Daemon on 192.168.3.15            N/A       N/A        Y       52915

Task Status of Volume vol_3442e86b6d994a14de73f1b8c82cf0b8
------------------------------------------------------------------------------

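Only the brick on 192.168.3.5 is offline (Online = N, no port, no PID). As a minimal first check on that node, assuming the default GlusterFS log layout (the brick log filename is derived from the brick path, so verify the exact name under /var/log/glusterfs/bricks/):

# On 192.168.3.5: is any glusterfsd process serving this brick at all?
ps aux | grep brick_e15c12cceae12c8ab7782dd57cf5b6c1 | grep -v grep

# Is the heketi/LVM mount behind the brick still present? A missing or
# read-only mount is a common reason "start force" cannot revive a brick.
df -h /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1

# The brick's own log records why the brick process exited; glusterd only
# logs that a start was attempted.
tail -n 100 /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d5f17487744584e3652d3ca943b0b91b-brick_e15c12cceae12c8ab7782dd57cf5b6c1-brick.log
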
We see the following events when we start the volume with force (the first entry is truncated as captured):

/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2605) [0x7fca9e139605] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd
[2019-01-21 08:22:34.555068] E [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2563) [0x7fca9e139563] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management: Failed to execute script: /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd
[2019-01-21 08:22:53.389049] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_3442e86b6d994a14de73f1b8c82cf0b8
[2019-01-21 08:23:25.346839] I [MSGID: 106487] [glusterd-handler.c:1486:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req

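The E-level entry above is a post-start hook script (S30samba-start.sh) failing. Hook scripts run after the volume start itself, so this failure is logged as an error but should not by itself keep the brick process down. As a sketch (the arguments below are copied verbatim from the log entry), the failing hook can be re-run by hand to surface its actual error:

# Re-run the hook with the same arguments glusterd passed to it:
bash -x /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh \
    --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 --first=no \
    --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd
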
We see the following events when we heal the volume:

[2019-01-21 08:20:07.576070] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0
[2019-01-21 08:20:07.580225] I [cli-rpc-ops.c:9182:gf_cli_heal_volume_cbk] 0-cli: Received resp to heal volume
[2019-01-21 08:20:07.580326] I [input.c:31:cli_batch] 0-: Exiting with: -1
[2019-01-21 08:22:30.423311] I [cli.c:768:main] 0-cli: Started running gluster with version 4.1.5
[2019-01-21 08:22:30.463648] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2019-01-21 08:22:30.463718] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2019-01-21 08:22:30.463859] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0
[2019-01-21 08:22:33.427710] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2019-01-21 08:22:34.581555] I [cli-rpc-ops.c:1472:gf_cli_start_volume_cbk] 0-cli: Received resp to start volume
[2019-01-21 08:22:34.581678] I [input.c:31:cli_batch] 0-: Exiting with: 0
[2019-01-21 08:22:53.345351] I [cli.c:768:main] 0-cli: Started running gluster with version 4.1.5
[2019-01-21 08:22:53.387992] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2019-01-21 08:22:53.388059] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2019-01-21 08:22:53.388138] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0
[2019-01-21 08:22:53.394737] I [input.c:31:cli_batch] 0-: Exiting with: 0
[2019-01-21 08:23:25.304688] I [cli.c:768:main] 0-cli: Started running gluster with version 4.1.5
[2019-01-21 08:23:25.346319] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2019-01-21 08:23:25.346389] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2019-01-21 08:23:25.346500] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0

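The heal attempt at 08:20:07 exits with -1 (failure) while the start at 08:22:34 exits with 0; heal cannot make progress while a brick of the replica is down, so the heal failure is most likely a symptom of the offline brick rather than its cause. Two checks worth running, assuming the default log locations:

# Per-brick heal view; a down brick typically shows
# "Transport endpoint is not connected".
gluster volume heal vol_3442e86b6d994a14de73f1b8c82cf0b8 info

# The self-heal daemon's own log on each node:
tail -n 100 /var/log/glusterfs/glustershd.log
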
Please let us know the steps to recover the bricks.
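
One further avenue, sketched here as a suggestion rather than a confirmed fix for this setup: restart glusterd on the affected node. On startup, glusterd attempts to start every brick it hosts, and /var/log/glusterfs/glusterd.log together with the brick log will record why the brick fails if it still does.

# On 192.168.3.5 (assuming a systemd-managed node; in a containerized
# heketi deployment, restart the gluster pod/container instead):
systemctl restart glusterd
gluster volume status vol_3442e86b6d994a14de73f1b8c82cf0b8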

BR
Salam
_______________________________________________
Bugs mailing list
Bugs@gluster.org
https://lists.gluster.org/mailman/listinfo/bugs


--
Amar Tumballi (amarts)

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


--
Thanks,
Sanju