Atin,

I have copied the contents of the 'gfs-tst' directory from the vols folder on another node, but starting the glusterd service still fails, with the following error messages in glusterd.log:

[2019-01-15 20:16:59.513023] I [MSGID: 100030] [glusterfsd.c:2741:main] 0-/usr/local/sbin/glusterd: Started running /usr/local/sbin/glusterd version 4.1.6 (args: /usr/local/sbin/glusterd -p /var/run/glusterd.pid)
[2019-01-15 20:16:59.517164] I [MSGID: 106478] [glusterd.c:1423:init] 0-management: Maximum allowed open file descriptors set to 65536
[2019-01-15 20:16:59.517264] I [MSGID: 106479] [glusterd.c:1481:init] 0-management: Using /var/lib/glusterd as working directory
[2019-01-15 20:16:59.517283] I [MSGID: 106479] [glusterd.c:1486:init] 0-management: Using /var/run/gluster as pid file working directory
[2019-01-15 20:16:59.521508] W [MSGID: 103071] [rdma.c:4629:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
[2019-01-15 20:16:59.521544] W [MSGID: 103055] [rdma.c:4938:init] 0-rdma.management: Failed to initialize IB Device
[2019-01-15 20:16:59.521562] W [rpc-transport.c:351:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2019-01-15 20:16:59.521629] W [rpcsvc.c:1781:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2019-01-15 20:16:59.521648] E [MSGID: 106244] [glusterd.c:1764:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2019-01-15 20:17:00.529390] I [MSGID: 106513] [glusterd-store.c:2240:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 40100
[2019-01-15 20:17:00.608354] I [MSGID: 106544] [glusterd.c:158:glusterd_uuid_init] 0-management: retrieved UUID: d6bf51a7-c296-492f-8dac-e81efa9dd22d
[2019-01-15 20:17:00.650911] W [MSGID: 106425] [glusterd-store.c:2643:glusterd_store_retrieve_bricks] 0-management: failed to get statfs() call on brick /media/disk4/brick4 [No such file or directory]
[2019-01-15 20:17:00.691240] I [MSGID: 106498] [glusterd-handler.c:3614:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2019-01-15 20:17:00.691307] W [MSGID: 106061] [glusterd-handler.c:3408:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2019-01-15 20:17:00.691331] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2019-01-15 20:17:00.692547] E [MSGID: 106187] [glusterd-store.c:4662:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore
[2019-01-15 20:17:00.692582] E [MSGID: 101019] [xlator.c:720:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
[2019-01-15 20:17:00.692597] E [MSGID: 101066] [graph.c:367:glusterfs_graph_init] 0-management: initializing translator failed
[2019-01-15 20:17:00.692607] E [MSGID: 101176] [graph.c:738:glusterfs_graph_activate] 0-graph: init failed
[2019-01-15 20:17:00.693004] W [glusterfsd.c:1514:cleanup_and_exit] (-->/usr/local/sbin/glusterd(glusterfs_volumes_init+0xc2) [0x409f52] -->/usr/local/sbin/glusterd(glusterfs_process_volfp+0x151) [0x409e41] -->/usr/local/sbin/glusterd(cleanup_and_exit+0x5f) [0x40942f] ) 0-: received signum (-1), shutting down
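The statfs() warning on /media/disk4/brick4 followed by "resolve brick failed in restore" suggests glusterd now reads the restored volume definition but cannot find one of the brick paths on disk. Before restarting the service it may be worth confirming that every brick mount point referenced by the configuration actually exists and is mounted on node-3. A minimal sketch, using the mount points mentioned in this thread (adjust to the real layout):

    # on node-3: check that each brick filesystem is mounted and the brick directory exists
    for d in /media/disk1 /media/disk2 /media/disk3 /media/disk4; do
        mountpoint -q "$d" && echo "$d: mounted" || echo "$d: NOT mounted"
        ls -ld "$d"/brick* 2>/dev/null || echo "$d: no brick directory"
    done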
On Wed, Jan 16, 2019 at 4:34 PM Atin Mukherjee <amukherj@redhat.com> wrote:

This is a case of a partial write of a transaction: the host ran out of space on the root partition, where all of the glusterd configuration is persisted, so the transaction could not be written and the new (replaced) brick's information was never persisted in the configuration. The workaround is to copy the contents of /var/lib/glusterd/vols/gfs-tst/ from one of the other nodes in the trusted storage pool to the node where the glusterd service fails to come up, and then restart the glusterd service; after that, peer status should report all nodes as healthy and connected.
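A minimal sketch of that workaround, run on the node where glusterd will not start. It assumes ssh access and rsync between the nodes; IP.2 is only an example of a healthy peer, and moving the old directory aside is just a precaution:

    # on the failing node (node-3)
    sudo service glusterd stop
    sudo mv /var/lib/glusterd/vols/gfs-tst /var/lib/glusterd/vols/gfs-tst.bad   # keep the damaged copy aside
    sudo rsync -a IP.2:/var/lib/glusterd/vols/gfs-tst/ /var/lib/glusterd/vols/gfs-tst/   # copy from a healthy peer (example host)
    sudo service glusterd start
    sudo gluster peer status   # all peers should now report Connected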
On Wed, Jan 16, 2019 at 3:49 PM Amudhan P <amudhan83@gmail.com> wrote:

Hi,

In short: when I start the glusterd service, I get the following error messages in glusterd.log on one server. What needs to be done?

Error logged in glusterd.log:

[2019-01-15 17:50:13.956053] I [MSGID: 100030] [glusterfsd.c:2741:main] 0-/usr/local/sbin/glusterd: Started running /usr/local/sbin/glusterd version 4.1.6 (args: /usr/local/sbin/glusterd -p /var/run/glusterd.pid)
[2019-01-15 17:50:13.960131] I [MSGID: 106478] [glusterd.c:1423:init] 0-management: Maximum allowed open file descriptors set to 65536
[2019-01-15 17:50:13.960193] I [MSGID: 106479] [glusterd.c:1481:init] 0-management: Using /var/lib/glusterd as working directory
[2019-01-15 17:50:13.960212] I [MSGID: 106479] [glusterd.c:1486:init] 0-management: Using /var/run/gluster as pid file working directory
[2019-01-15 17:50:13.964437] W [MSGID: 103071] [rdma.c:4629:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
[2019-01-15 17:50:13.964474] W [MSGID: 103055] [rdma.c:4938:init] 0-rdma.management: Failed to initialize IB Device
[2019-01-15 17:50:13.964491] W [rpc-transport.c:351:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2019-01-15 17:50:13.964560] W [rpcsvc.c:1781:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2019-01-15 17:50:13.964579] E [MSGID: 106244] [glusterd.c:1764:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2019-01-15 17:50:14.967681] I [MSGID: 106513] [glusterd-store.c:2240:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 40100
[2019-01-15 17:50:14.973931] I [MSGID: 106544] [glusterd.c:158:glusterd_uuid_init] 0-management: retrieved UUID: d6bf51a7-c296-492f-8dac-e81efa9dd22d
[2019-01-15 17:50:15.046620] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/vols/gfs-tst/bricks/IP.3:-media-disk3-brick3. [No such file or directory]
[2019-01-15 17:50:15.046685] E [MSGID: 106201] [glusterd-store.c:3384:glusterd_store_retrieve_volumes] 0-management: Unable to restore volume: gfs-tst
[2019-01-15 17:50:15.046718] E [MSGID: 101019] [xlator.c:720:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
[2019-01-15 17:50:15.046732] E [MSGID: 101066] [graph.c:367:glusterfs_graph_init] 0-management: initializing translator failed
[2019-01-15 17:50:15.046741] E [MSGID: 101176] [graph.c:738:glusterfs_graph_activate] 0-graph: init failed
[2019-01-15 17:50:15.047171] W [glusterfsd.c:1514:cleanup_and_exit] (-->/usr/local/sbin/glusterd(glusterfs_volumes
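The "Path corresponding to /var/lib/glusterd/vols/gfs-tst/bricks/IP.3:-media-disk3-brick3" error points at a per-brick file under the volume's bricks directory that glusterd can no longer read. One way to see what is missing or empty might be to compare that directory with the same directory on a node where glusterd starts cleanly (IP.2 below is only an example):

    ls -l /var/lib/glusterd/vols/gfs-tst/bricks/            # on the failing node
    ssh IP.2 ls -l /var/lib/glusterd/vols/gfs-tst/bricks/   # on a healthy peer, for comparison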
In detail: I am trying to simulate a situation where the volume stopped abnormally and the entire cluster was restarted with some missing disks.

My test cluster is set up with 3 nodes, each with four disks, and I have created a volume with disperse 4+2. On node-3, 2 disks failed; to replace them I shut down all of the systems.

Below are the steps I followed:

1. Unmounted the volume from the client machine.
2. Shut down all systems with `shutdown -h now` (without stopping the volume or the glusterd service).
3. Replaced the faulty disks in node-3.
4. Powered on all systems.
5. Formatted the replaced drives and mounted all drives.
6. Started the glusterd service on all nodes (success).
7. Ran `volume status` from node-3:
    output: [2019-01-15 16:52:17.718422] : v status : FAILED : Staging failed on 0083ec0c-40bf-472a-a128-458924e56c96. Please check log file for details.
8. Ran `volume start gfs-tst` from node-3:
    output: [2019-01-15 16:53:19.410252] : v start gfs-tst : FAILED : Volume gfs-tst already started

9. Ran `gluster v status` on another node; it shows all bricks as available, but the 'self-heal daemon' is not running:

    @gfstst-node2:~$ sudo gluster v status
    Status of volume: gfs-tst
    Gluster process                          TCP Port  RDMA Port  Online  Pid
    ------------------------------------------------------------------------------
    Brick IP.2:/media/disk1/brick1           49152     0          Y       1517
    Brick IP.4:/media/disk1/brick1           49152     0          Y       1668
    Brick IP.2:/media/disk2/brick2           49153     0          Y       1522
    Brick IP.4:/media/disk2/brick2           49153     0          Y       1678
    Brick IP.2:/media/disk3/brick3           49154     0          Y       1527
    Brick IP.4:/media/disk3/brick3           49154     0          Y       1677
    Brick IP.2:/media/disk4/brick4           49155     0          Y       1541
    Brick IP.4:/media/disk4/brick4           49155     0          Y       1683
    Self-heal Daemon on localhost            N/A       N/A        Y       2662
    Self-heal Daemon on IP.4                 N/A       N/A        Y       2786

10. Since the output above says the volume is already started, I ran the `reset-brick` command (the documented reset-brick sequence is sketched after step 15 below):
    v reset-brick gfs-tst IP.3:/media/disk3/brick3 IP.3:/media/disk3/brick3 commit force
    output: [2019-01-15 16:57:37.916942] : v reset-brick gfs-tst IP.3:/media/disk3/brick3 IP.3:/media/disk3/brick3 commit force : FAILED : /media/disk3/brick3 is already part of a volume
11. Since reset-brick was not working, I tried stopping the volume and starting it with force:
    output: [2019-01-15 17:01:04.570794] : v start gfs-tst force : FAILED : Pre-validation failed on localhost. Please check log file for details

12. I then stopped the glusterd service on all nodes and started it again. On every node except node-3 the service started without any issue. On node-3 I get the following:

    sudo service glusterd start
     * Starting glusterd service glusterd           [fail]
    /usr/local/sbin/glusterd: option requires an argument -- 'f'
    Try `glusterd --help' or `glusterd --usage' for more information.

13. Checking the glusterd log file, I found that the OS drive had run out of space:
    [2019-01-15 16:51:37.210792] W [MSGID: 101012] [store.c:372:gf_store_save_value] 0-management: fflush failed. [No space left on device]
    [2019-01-15 16:51:37.210874] E [MSGID: 106190] [glusterd-store.c:1058:glusterd_volume_exclude_options_write] 0-management: Unable to write volume values for gfs-tst

14. I cleared some space on the OS drive, but the service still does not start. Below is the error logged in glusterd.log:

    [2019-01-15 17:50:13.956053] I [MSGID: 100030] [glusterfsd.c:2741:main] 0-/usr/local/sbin/glusterd: Started running /usr/local/sbin/glusterd version 4.1.6 (args: /usr/local/sbin/glusterd -p /var/run/glusterd.pid)
    [2019-01-15 17:50:13.960131] I [MSGID: 106478] [glusterd.c:1423:init] 0-management: Maximum allowed open file descriptors set to 65536
    [2019-01-15 17:50:13.960193] I [MSGID: 106479] [glusterd.c:1481:init] 0-management: Using /var/lib/glusterd as working directory
    [2019-01-15 17:50:13.960212] I [MSGID: 106479] [glusterd.c:1486:init] 0-management: Using /var/run/gluster as pid file working directory
    [2019-01-15 17:50:13.964437] W [MSGID: 103071] [rdma.c:4629:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
    [2019-01-15 17:50:13.964474] W [MSGID: 103055] [rdma.c:4938:init] 0-rdma.management: Failed to initialize IB Device
    [2019-01-15 17:50:13.964491] W [rpc-transport.c:351:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
    [2019-01-15 17:50:13.964560] W [rpcsvc.c:1781:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
    [2019-01-15 17:50:13.964579] E [MSGID: 106244] [glusterd.c:1764:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
    [2019-01-15 17:50:14.967681] I [MSGID: 106513] [glusterd-store.c:2240:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 40100
    [2019-01-15 17:50:14.973931] I [MSGID: 106544] [glusterd.c:158:glusterd_uuid_init] 0-management: retrieved UUID: d6bf51a7-c296-492f-8dac-e81efa9dd22d
    [2019-01-15 17:50:15.046620] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/vols/gfs-tst/bricks/IP.3:-media-disk3-brick3. [No such file or directory]
    [2019-01-15 17:50:15.046685] E [MSGID: 106201] [glusterd-store.c:3384:glusterd_store_retrieve_volumes] 0-management: Unable to restore volume: gfs-tst
    [2019-01-15 17:50:15.046718] E [MSGID: 101019] [xlator.c:720:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
    [2019-01-15 17:50:15.046732] E [MSGID: 101066] [graph.c:367:glusterfs_graph_init] 0-management: initializing translator failed
    [2019-01-15 17:50:15.046741] E [MSGID: 101176] [graph.c:738:glusterfs_graph_activate] 0-graph: init failed
    [2019-01-15 17:50:15.047171] W [glusterfsd.c:1514:cleanup_and_exit] (-->/usr/local/sbin/glusterd(glusterfs_volumes_init+0xc2) [0x409f52] -->/usr/local/sbin/glusterd(glusterfs_process_volfp+0x151) [0x409e41] -->/usr/local/sbin/glusterd(cleanup_and_exit+0x5f) [0x40942f] ) 0-: received signum (-1), shutting down

15. On the other nodes, running `volume status` still shows the bricks on node-3 as live, but `peer status` shows node-3 as disconnected:

    @gfstst-node2:~$ sudo gluster v status
    Status of volume: gfs-tst
    Gluster process                          TCP Port  RDMA Port  Online  Pid
    ------------------------------------------------------------------------------
    Brick IP.2:/media/disk1/brick1           49152     0          Y       1517
    Brick IP.4:/media/disk1/brick1           49152     0          Y       1668
    Brick IP.2:/media/disk2/brick2           49153     0          Y       1522
    Brick IP.4:/media/disk2/brick2           49153     0          Y       1678
    Brick IP.2:/media/disk3/brick3           49154     0          Y       1527
    Brick IP.4:/media/disk3/brick3           49154     0          Y       1677
    Brick IP.2:/media/disk4/brick4           49155     0          Y       1541
    Brick IP.4:/media/disk4/brick4           49155     0          Y       1683
    Self-heal Daemon on localhost            N/A       N/A        Y       2662
    Self-heal Daemon on IP.4                 N/A       N/A        Y       2786

    Task Status of Volume gfs-tst
    ------------------------------------------------------------------------------
    There are no active volume tasks

    root@gfstst-node2:~$ sudo gluster pool list
    UUID                                  Hostname   State
    d6bf51a7-c296-492f-8dac-e81efa9dd22d  IP.3       Disconnected
    c1cbb58e-3ceb-4637-9ba3-3d28ef20b143  IP.4       Connected
    0083ec0c-40bf-472a-a128-458924e56c96  localhost  Connected

    root@gfstst-node2:~$ sudo gluster peer status
    Number of Peers: 2

    Hostname: IP.3
    Uuid: d6bf51a7-c296-492f-8dac-e81efa9dd22d
    State: Peer in Cluster (Disconnected)

    Hostname: IP.4
    Uuid: c1cbb58e-3ceb-4637-9ba3-3d28ef20b143
    State: Peer in Cluster (Connected)
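For reference, regarding the failed reset-brick in step 10: as I understand the documented workflow, the brick is first taken offline with a 'start' phase, the disk is then replaced, formatted and remounted at the same path, and only afterwards is the brick committed back. Roughly, with the volume and brick names from this thread:

    # sketch of the two-phase reset-brick sequence (run while glusterd is healthy on all nodes)
    gluster volume reset-brick gfs-tst IP.3:/media/disk3/brick3 start
    # replace the failed disk, format it, and remount it at /media/disk3, then:
    gluster volume reset-brick gfs-tst IP.3:/media/disk3/brick3 IP.3:/media/disk3/brick3 commit force
    gluster volume heal gfs-tst info   # check that self-heal picks the brick back up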
regards
Amudhan

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users