<div dir="ltr"><div><div>You're hitting a race here: when glusterd tries to resolve the address of one of the remote bricks of a volume, the network interface is not yet up. We have fixed this issue on mainline and the 3.12 branch through the following commit:<br><br>commit 1477fa442a733d7b1a5ea74884cac8f29fbe7e6a<br>Author: Gaurav Yadav <<a href="mailto:gyadav@redhat.com">gyadav@redhat.com</a>><br>Date: Tue Jul 18 16:23:18 2017 +0530<br><br> glusterd : glusterd fails to start when peer's network interface is down<br> <br> Problem:<br> glusterd fails to start on nodes where glusterd tries to come up even<br> before network is up.<br> <br> Fix:<br> On startup glusterd tries to resolve brick path which is based on<br> hostname/ip, but in the above scenario when network interface is not<br> up, glusterd is not able to resolve the brick path using ip_address or<br> hostname With this fix glusterd will use UUID to resolve brick path.<br> <br> Change-Id: Icfa7b2652417135530479d0aa4e2a82b0476f710<br> BUG: 1472267<br> Signed-off-by: Gaurav Yadav <<a href="mailto:gyadav@redhat.com">gyadav@redhat.com</a>><br> Reviewed-on: <a href="https://review.gluster.org/17813">https://review.gluster.org/17813</a><br> Smoke: Gluster Build System <<a href="mailto:jenkins@build.gluster.org">jenkins@build.gluster.org</a>><br> Reviewed-by: Prashanth Pai <<a href="mailto:ppai@redhat.com">ppai@redhat.com</a>><br> CentOS-regression: Gluster Build System <<a href="mailto:jenkins@build.gluster.org">jenkins@build.gluster.org</a>><br> Reviewed-by: Atin Mukherjee <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>><br><br><br><br></div>Note: the 3.12 release is planned for the end of this month.<br><br></div>~Atin<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Aug 17, 2017 at 2:45 PM, ismael mondiu <span dir="ltr"><<a href="mailto:mondiu@hotmail.com" target="_blank">mondiu@hotmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 
0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">
<div id="m_3032336403500242004divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Helvetica,sans-serif" dir="ltr">
<p>Hi Team,</p>
<p>I noticed that glusterd never starts when I reboot my Red Hat 7.1 server. </p>
<p>The service is enabled but doesn't work.</p>
<p>I tested with gluster 3.10.4 and gluster 3.10.5, and the problem exists in both.</p>
<p><br>
</p>
<p>When I start the service manually, it works.</p>
<p>I've also tested on a Red Hat 6.6 server with gluster 3.10.4, and it works fine.</p>
<p>The problem seems to be related to Red Hat 7.1. </p>
<p><br>
</p>
<p>Is this a known issue? If yes, can you tell me what the workaround is?</p>
<p><br>
</p>
<p>Thanks</p>
<p><br>
</p>
<p>Some logs here</p>
<p><br>
</p>
<p></p>
<p>[root@~]# systemctl status glusterd<br>
● glusterd.service - GlusterFS, a clustered file-system server<br>
Loaded: loaded (/usr/lib/systemd/system/<wbr>glusterd.service; enabled; vendor preset: disabled)<br>
Active: failed (Result: exit-code) since Thu 2017-08-17 11:04:00 CEST; 2min 9s ago<br>
Process: 851 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=1/FAILURE)</p>
<p>Aug 17 11:03:59 dvihcasc0r systemd[1]: Starting GlusterFS, a clustered file-system server...<br>
Aug 17 11:04:00 dvihcasc0r systemd[1]: glusterd.service: control process exited, code=exited status=1<br>
Aug 17 11:04:00 dvihcasc0r systemd[1]: Failed to start GlusterFS, a clustered file-system server.<br>
Aug 17 11:04:00 dvihcasc0r systemd[1]: Unit glusterd.service entered failed state.<br>
Aug 17 11:04:00 dvihcasc0r systemd[1]: glusterd.service failed.<br>
</p>
<p></p>
<p><br>
</p>
<p>******************************<wbr>******************************<wbr>****************************</p>
<p><span> /var/log/glusterfs/glusterd.<wbr>log</span></p>
<p>******************************<wbr>******************************<wbr>******************************<wbr>**</p>
<p><br>
</p>
<p></p>
<div>[2017-08-17 09:04:00.202529] I [MSGID: 106478] [glusterd.c:1449:init] 0-management: Maximum allowed open file descriptors set to 65536<br>
[2017-08-17 09:04:00.202573] I [MSGID: 106479] [glusterd.c:1496:init] 0-management: Using /var/lib/glusterd as working directory<br>
[2017-08-17 09:04:00.365134] E [rpc-transport.c:283:rpc_<wbr>transport_load] 0-rpc-transport: /usr/lib64/glusterfs/3.10.5/<wbr>rpc-transport/rdma.so: cannot open shared object file: No such file or directory<br>
[2017-08-17 09:04:00.365161] W [rpc-transport.c:287:rpc_<wbr>transport_load] 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine<br>
[2017-08-17 09:04:00.365195] W [rpcsvc.c:1661:rpcsvc_create_<wbr>listener] 0-rpc-service: cannot create listener, initing the transport failed<br>
[2017-08-17 09:04:00.365206] E [MSGID: 106243] [glusterd.c:1720:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport<br>
[2017-08-17 09:04:00.464314] I [MSGID: 106228] [glusterd.c:500:glusterd_<wbr>check_gsync_present] 0-glusterd: geo-replication module not installed in the system [No such file or directory]<br>
[2017-08-17 09:04:00.510412] I [MSGID: 106513] [glusterd-store.c:2197:<wbr>glusterd_restore_op_version] 0-glusterd: retrieved op-version: 31004<br>
[2017-08-17 09:04:00.711413] I [MSGID: 106194] [glusterd-store.c:3776:<wbr>glusterd_store_retrieve_<wbr>missed_snaps_list] 0-management: No missed snaps list.<br>
[2017-08-17 09:04:00.756731] E [MSGID: 106187] [glusterd-store.c:4559:<wbr>glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore<br>
[2017-08-17 09:04:00.756787] E [MSGID: 101019] [xlator.c:503:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again<br>
[2017-08-17 09:04:00.756802] E [MSGID: 101066] [graph.c:325:glusterfs_graph_<wbr>init] 0-management: initializing translator failed<br>
[2017-08-17 09:04:00.756816] E [MSGID: 101176] [graph.c:681:glusterfs_graph_<wbr>activate] 0-graph: init failed<br>
[2017-08-17 09:04:00.766584] W [glusterfsd.c:1332:cleanup_<wbr>and_exit] (-->/usr/sbin/glusterd(<wbr>glusterfs_volumes_init+0xfd) [0x7f9bdef4cabd] -->/usr/sbin/glusterd(<wbr>glusterfs_process_volfp+0x1b1) [0x7f9bdef4c961] -->/usr/sbin/glusterd(cleanup_<wbr>and_exit+0x6b) [0x7f9bdef4be4b]
) 0-: received signum (1), shutting down<br>
</div>
<p></p>
<p>******************************<wbr>******************************<wbr>******************************</p>
<p></p>
<div>[root@~]# uptime<br>
11:13:55 up 10 min, 1 user, load average: 0.00, 0.02, 0.04</div>
<p></p>
<p><br>
</p>
<p>******************************<wbr>******************************<wbr>******************************</p>
<p><br>
</p>
</div>
</div>
<br>______________________________<wbr>_________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br></blockquote></div><br></div>
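Until the fixed release is available, a common interim mitigation on systemd-based distributions is to delay glusterd until the network is actually online, so brick-path resolution does not race the interface coming up. This is a sketch, not a workaround confirmed in this thread; the drop-in path and target names below are standard systemd, but verify them on your distribution:

```ini
# /etc/systemd/system/glusterd.service.d/wait-online.conf
# Hedged workaround sketch (assumption, not from this thread): start glusterd
# only once the network is fully configured, so brick-path resolution does
# not race the interface coming online.
[Unit]
Wants=network-online.target
After=network-online.target
```

After creating the drop-in, run `systemctl daemon-reload`, and make sure a wait-online service (e.g. `NetworkManager-wait-online.service` on RHEL 7) is enabled, since `network-online.target` only waits for connectivity when such a service is active.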