<div dir="ltr"><div dir="ltr">Hi Shaik,<div><br></div><div>Can you check what is there in brick logs? They are located in /var/log/glusterfs/bricks/*? </div><div><br></div><div>Looks like the samba hooks script failed, but that shouldn't matter in this use case.</div><div><br></div><div>Also, I see that you are trying to setup heketi to provision volumes, which means you may be using gluster in container usecases. If you are still in 'PoC' phase, can you give <a href="https://github.com/gluster/gcs">https://github.com/gluster/gcs</a> a try? That makes the deployment and the stack little simpler.</div><div><br></div><div>-Amar<br></div><div><br></div><div><br></div><div><br></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Jan 22, 2019 at 11:29 AM Shaik Salam <<a href="mailto:shaik.salam@tcs.com">shaik.salam@tcs.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><font size="2" face="sans-serif">Can anyone respond how to recover bricks
apart from heal/start force according to below events from logs.</font>
<br><font size="2" face="sans-serif">Please let me know any other logs required.</font>
<br><font size="2" face="sans-serif">Thanks in advance.</font>
<br>
<br><font size="2" face="sans-serif">BR</font>
<br><font size="2" face="sans-serif">Salam</font>
<br>
<br>
<br>
<br><font size="1" color="#5f5f5f" face="sans-serif">From:
</font><font size="1" face="sans-serif">Shaik Salam/HYD/TCS</font>
<br><font size="1" color="#5f5f5f" face="sans-serif">To:
</font><font size="1" face="sans-serif"><a href="mailto:bugs@gluster.org" target="_blank">bugs@gluster.org</a>, <a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a></font>
<br><font size="1" color="#5f5f5f" face="sans-serif">Date:
</font><font size="1" face="sans-serif">01/21/2019 10:03 PM</font>
<br><font size="1" color="#5f5f5f" face="sans-serif">Subject:
</font><font size="1" face="sans-serif">Bricks are going
offline unable to recover with heal/start force commands</font>
<br>
<hr noshade>
<br>
<br><font size="2" face="sans-serif">Hi,</font>
<br>
<br><font size="2" face="sans-serif">Bricks are in offline and unable
to recover with following commands</font>
<br>
<br><font size="2" face="sans-serif">gluster volume heal <vol-name></font>
<br>
<br><font size="2" face="sans-serif">gluster volume start <vol-name>
force</font>
<br>
<br><font size="2" face="sans-serif">But still bricks are offline.</font>
<br>
<br>
<br><font size="2" face="sans-serif">sh-4.2# gluster volume status vol_3442e86b6d994a14de73f1b8c82cf0b8</font>
<br><font size="2" face="sans-serif">Status of volume: vol_3442e86b6d994a14de73f1b8c82cf0b8</font>
<br><font size="2" face="sans-serif">Gluster process
TCP Port RDMA Port Online Pid</font>
<br><font size="2" face="sans-serif">------------------------------------------------------------------------------</font>
<br><font size="2" face="sans-serif">Brick 192.168.3.6:/var/lib/heketi/mounts/vg</font>
<br><font size="2" face="sans-serif">_ca57f326195c243be2380ce4e42a4191/brick_952</font>
<br><font size="2" face="sans-serif">d75fd193c7209c9a81acbc23a3747/brick
49166 0
Y 269</font>
<br><font size="2" face="sans-serif">Brick 192.168.3.5:/var/lib/heketi/mounts/vg</font>
<br><font size="2" face="sans-serif">_d5f17487744584e3652d3ca943b0b91b/brick_e15</font>
<br><font size="2" face="sans-serif">c12cceae12c8ab7782dd57cf5b6c1/brick
N/A N/A
N N/A</font>
<br><font size="2" face="sans-serif">Brick 192.168.3.15:/var/lib/heketi/mounts/v</font>
<br><font size="2" face="sans-serif">g_462ea199185376b03e4b0317363bb88c/brick_17</font>
<br><font size="2" face="sans-serif">36459d19e8aaa1dcb5a87f48747d04/brick
49173 0
Y 225</font>
<br><font size="2" face="sans-serif">Self-heal Daemon on localhost
N/A N/A
Y 45826</font>
<br><font size="2" face="sans-serif">Self-heal Daemon on 192.168.3.6
N/A N/A
Y 65196</font>
<br><font size="2" face="sans-serif">Self-heal Daemon on 192.168.3.15
N/A N/A
Y 52915</font>
<br>
<br><font size="2" face="sans-serif">Task Status of Volume vol_3442e86b6d994a14de73f1b8c82cf0b8</font>
<br><font size="2" face="sans-serif">------------------------------------------------------------------------------</font>
<br>
<br>
<br><font size="2" face="sans-serif">We can see following events from when
we start forcing volumes</font>
<br>
<br><font size="2" face="sans-serif">/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a]
-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2605) [0x7fca9e139605]
-->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management:
Ran script: /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8
--first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:22:34.555068] E [run.c:241:runner_log]
(-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a]
-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2563) [0x7fca9e139563]
-->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management:
Failed to execute script: /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
--volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 --first=no --version=1 --volume-op=start
--gd-workdir=/var/lib/glusterd</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:22:53.389049] I [MSGID:
106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management:
Received status volume req for volume vol_3442e86b6d994a14de73f1b8c82cf0b8</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:23:25.346839] I [MSGID:
106487] [glusterd-handler.c:1486:__glusterd_handle_cli_list_friends] 0-glusterd:
Received cli list req</font>
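<br>
<br><font size="2" face="sans-serif">The S30samba-start.sh failure above is only a post-start hook script and, as noted at the top of this thread, should not by itself keep a brick offline. The actual reason the brick fails to come up is more likely recorded in glusterd's own log and in the brick log. A minimal sketch of where to look, assuming default log paths:</font>
<pre>
# On the node with the offline brick: recent brick/start related messages from glusterd
grep -iE "brick|failed|error" /var/log/glusterfs/glusterd.log | tail -n 50
</pre>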
<br>
<br>
<br><font size="2" face="sans-serif">We can see following events from when
we heal volumes.</font>
<br>
<br><font size="2" face="sans-serif">[2019-01-21 08:20:07.576070] W [rpc-clnt.c:1753:rpc_clnt_submit]
0-glusterfs: error returned while attempting to connect to host:(null),
port:0</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:20:07.580225] I [cli-rpc-ops.c:9182:gf_cli_heal_volume_cbk]
0-cli: Received resp to heal volume</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:20:07.580326] I [input.c:31:cli_batch]
0-: Exiting with: -1</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:22:30.423311] I [cli.c:768:main]
0-cli: Started running gluster with version 4.1.5</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:22:30.463648] I [MSGID:
101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 1</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:22:30.463718] I [socket.c:2632:socket_event_handler]
0-transport: EPOLLERR - disconnecting now</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:22:30.463859] W [rpc-clnt.c:1753:rpc_clnt_submit]
0-glusterfs: error returned while attempting to connect to host:(null),
port:0</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:22:33.427710] I [socket.c:2632:socket_event_handler]
0-transport: EPOLLERR - disconnecting now</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:22:34.581555] I [cli-rpc-ops.c:1472:gf_cli_start_volume_cbk]
0-cli: Received resp to start volume</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:22:34.581678] I [input.c:31:cli_batch]
0-: Exiting with: 0</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:22:53.345351] I [cli.c:768:main]
0-cli: Started running gluster with version 4.1.5</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:22:53.387992] I [MSGID:
101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 1</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:22:53.388059] I [socket.c:2632:socket_event_handler]
0-transport: EPOLLERR - disconnecting now</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:22:53.388138] W [rpc-clnt.c:1753:rpc_clnt_submit]
0-glusterfs: error returned while attempting to connect to host:(null),
port:0</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:22:53.394737] I [input.c:31:cli_batch]
0-: Exiting with: 0</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:23:25.304688] I [cli.c:768:main]
0-cli: Started running gluster with version 4.1.5</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:23:25.346319] I [MSGID:
101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 1</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:23:25.346389] I [socket.c:2632:socket_event_handler]
0-transport: EPOLLERR - disconnecting now</font>
<br><font size="2" face="sans-serif">[2019-01-21 08:23:25.346500] W [rpc-clnt.c:1753:rpc_clnt_submit]
0-glusterfs: error returned while attempting to connect to host:(null),
port:0</font>
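<br>
<br><font size="2" face="sans-serif">The warnings above ("connect to host:(null), port:0" and the EPOLLERR messages) are emitted by the gluster CLI process itself and are commonly seen even on otherwise healthy setups; note, though, that the first heal attempt exited with -1, so it likely did not take effect. Once the brick is back online, the heal state can be checked roughly like this (a minimal sketch, assuming default log paths):</font>
<pre>
# Pending heal entries per brick
gluster volume heal vol_3442e86b6d994a14de73f1b8c82cf0b8 info

# What the self-heal daemon has been doing
tail -n 50 /var/log/glusterfs/glustershd.log
</pre>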
<br>
<br>
<br>
<br><font size="2" face="sans-serif">Please let us know steps to recover
bricks.</font>
<br>
<br>
<br><font size="2" face="sans-serif">BR</font>
<br><font size="2" face="sans-serif">Salam</font>
<p></p>_______________________________________________<br>
Bugs mailing list<br>
<a href="mailto:Bugs@gluster.org" target="_blank">Bugs@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/bugs" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/bugs</a><br>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>Amar Tumballi (amarts)<br></div></div></div></div></div>