<div dir="ltr"><div>Marcus,<br></div>Could you share the server-side <span style="font-family:monospace,monospace">gluster peer probe</span> and client-side <span style="font-family:monospace,monospace">mount</span> command lines?<br><br><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Apr 10, 2018 at 12:36 AM, Marcus Pedersén <span dir="ltr">&lt;<a href="mailto:marcus.pedersen@slu.se" target="_blank">marcus.pedersen@slu.se</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">




<div dir="ltr" style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">
<p>Hi all!</p>
<p>I have set up a replicated/distributed Gluster cluster, 2 x (2 + 1).</p>
<p>CentOS 7 with Gluster version 3.12.6 on the servers.<br>
</p>
<p>All machines have two network interfaces and are connected to two different networks:</p>
<p><a href="http://10.10.0.0/16" target="_blank">10.10.0.0/16</a> (with hostnames in /etc/hosts, Gluster version 3.12.6)</p>
<p><a href="http://192.168.67.0/24" target="_blank">192.168.67.0/24</a> (with LDAP, Gluster version 3.13.1)</p>
<p>The Gluster cluster was created on the <a href="http://10.10.0.0/16" target="_blank">10.10.0.0/16</a> net (gluster peer probe, and so on).</p>
<p>All nodes are available on both networks and have the same names on both networks.</p>
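<p>To make that topology concrete: the "same names on both networks" arrangement is typically done with per-network name resolution, and the peer names stored at probe time are what later appear in the volfiles; clients resolve those names with their own resolver, which is why identical names on both networks can work at all. A minimal sketch, with hypothetical hostnames and addresses (not the actual cluster's):</p>

```shell
# /etc/hosts on a node reachable over the 10-net (hypothetical entries)
10.10.0.11   urd-gds-001
10.10.0.12   urd-gds-002

# The cluster was then formed on the 10-net, e.g.:
gluster peer probe urd-gds-002
```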
<p><br>
</p>
<p>Now to my problem: the gluster cluster is mounted on multiple clients on the <a href="http://192.168.67.0/24" target="_blank">192.168.67.0/24</a> net, and a process was running on one of the clients, reading and writing files.</p>
<p>At the same time, I mounted the cluster on a client on the <a href="http://10.10.0.0/16" target="_blank">10.10.0.0/16</a> net and started to create and edit files on the cluster. Around the same time, the process on the 192-net stopped without any specific errors. I started other processes on the 192-net and continued to make changes on the 10-net, and got the same behavior: the processes on the 192-net stopped.</p>
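<p>For reference, a FUSE mount of the volume from either network would look roughly like this; the server hostname below is a placeholder and the volume name is inferred from the logs, not taken from the actual command lines:</p>

```shell
# On a 192-net client (name resolved via LDAP on that network):
mount -t glusterfs urd-gds-001:/urd-gds-volume /mnt/urd-gds

# On a 10-net client (same name, resolved via /etc/hosts to a 10-net address):
mount -t glusterfs urd-gds-001:/urd-gds-volume /mnt/urd-gds
```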
<p><br>
</p>
<p>Are there any known problems with this type of setup?</p>
<p>How do I proceed to figure out a solution, as I need access from both networks?<br>
</p>
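<p>A reasonable first step when narrowing this down is to compare what each side sees, using the standard gluster CLI; these commands are generic, not specific to this setup:</p>

```shell
# On a server node: peer and brick state as seen from the cluster
gluster peer status
gluster volume status urd-gds-volume
gluster volume info urd-gds-volume

# On each client: the FUSE client log (under /var/log/glusterfs/)
# shows which brick endpoints the mount resolved and connected to.
```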
<p><br>
</p>
<p>The following error shows up a couple of times on the server (systemd -&gt; glusterd):</p>
<p>[2018-04-09 11:46:46.254071] C [mem-pool.c:613:mem_pools_init_early] 0-mem-pool: incorrect order of mem-pool initialization (init_done=3)<br>
</p>
<p><br>
</p>
<p>Client logs:</p>
<p>Client on 192-net: </p>
<p>[2018-04-09 11:35:31.402979] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 5-urd-gds-volume-client-1: Connected to urd-gds-volume-client-1, attached to remote volume &#39;/urd-gds/gluster&#39;.<br>
[2018-04-09 11:35:31.403019] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 5-urd-gds-volume-client-1: Server and Client lk-version numbers are not same, reopening the fds<br>
[2018-04-09 11:35:31.403051] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 5-urd-gds-volume-snapd-client: Connected to urd-gds-volume-snapd-client, attached to remote volume &#39;snapd-urd-gds-volume&#39;.<br>
[2018-04-09 11:35:31.403091] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 5-urd-gds-volume-snapd-client: Server and Client lk-version numbers are not same, reopening the fds<br>
[2018-04-09 11:35:31.403271] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-client-3: Server lk version = 1<br>
[2018-04-09 11:35:31.403325] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-client-4: Server lk version = 1<br>
[2018-04-09 11:35:31.403349] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-client-0: Server lk version = 1<br>
[2018-04-09 11:35:31.403367] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-client-2: Server lk version = 1<br>
[2018-04-09 11:35:31.403616] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-client-1: Server lk version = 1<br>
[2018-04-09 11:35:31.403751] I [MSGID: 114057] [client-handshake.c:1484:select_server_supported_programs] 5-urd-gds-volume-client-5: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
[2018-04-09 11:35:31.404174] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-snapd-client: Server lk version = 1<br>
[2018-04-09 11:35:31.405030] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 5-urd-gds-volume-client-5: Connected to urd-gds-volume-client-5, attached to remote volume &#39;/urd-gds/gluster2&#39;.<br>
[2018-04-09 11:35:31.405069] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 5-urd-gds-volume-client-5: Server and Client lk-version numbers are not same, reopening the fds<br>
[2018-04-09 11:35:31.405585] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-client-5: Server lk version = 1<br>
[2018-04-09 11:42:29.622006] I [fuse-bridge.c:4835:fuse_graph_sync] 0-fuse: switched to graph 5<br>
[2018-04-09 11:42:29.627533] I [MSGID: 109005] [dht-selfheal.c:2458:dht_selfheal_directory] 5-urd-gds-volume-dht: Directory selfheal failed: Unable to form layout for directory /<br>
[2018-04-09 11:42:29.627935] I [MSGID: 114021] [client.c:2369:notify] 2-urd-gds-volume-client-0: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 11:42:29.628013] I [MSGID: 114021] [client.c:2369:notify] 2-urd-gds-volume-client-1: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 11:42:29.628047] I [MSGID: 114021] [client.c:2369:notify] 2-urd-gds-volume-client-2: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 11:42:29.628069] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-0: disconnected from urd-gds-volume-client-0. Client process will keep trying to connect to glusterd until brick&#39;s port is available<br>
[2018-04-09 11:42:29.628077] I [MSGID: 114021] [client.c:2369:notify] 2-urd-gds-volume-client-3: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 11:42:29.628184] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-1: disconnected from urd-gds-volume-client-1. Client process will keep trying to connect to glusterd until brick&#39;s port is available<br>
[2018-04-09 11:42:29.628191] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-2: disconnected from urd-gds-volume-client-2. Client process will keep trying to connect to glusterd until brick&#39;s port is available<br>
[2018-04-09 11:42:29.628272] W [MSGID: 108001] [afr-common.c:5370:afr_notify] 2-urd-gds-volume-replicate-0: Client-quorum is not met<br>
[2018-04-09 11:42:29.628299] I [MSGID: 114021] [client.c:2369:notify] 2-urd-gds-volume-client-4: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 11:42:29.628349] I [MSGID: 114021] [client.c:2369:notify] 2-urd-gds-volume-client-5: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 11:42:29.628382] I [MSGID: 114021] [client.c:2369:notify] 2-urd-gds-volume-snapd-client: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 11:42:29.632749] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-3: disconnected from urd-gds-volume-client-3. Client process will keep trying to connect to glusterd until brick&#39;s port is available<br>
[2018-04-09 11:42:29.632804] E [MSGID: 108006] [afr-common.c:5143:__afr_handle_child_down_event] 2-urd-gds-volume-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.<br>
[2018-04-09 11:42:29.637247] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-4: disconnected from urd-gds-volume-client-4. Client process will keep trying to connect to glusterd until brick&#39;s port is available<br>
[2018-04-09 11:42:29.637294] W [MSGID: 108001] [afr-common.c:5370:afr_notify] 2-urd-gds-volume-replicate-1: Client-quorum is not met<br>
[2018-04-09 11:42:29.637330] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-5: disconnected from urd-gds-volume-client-5. Client process will keep trying to connect to glusterd until brick&#39;s port is available<br>
[2018-04-09 11:42:29.641674] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-snapd-client: disconnected from urd-gds-volume-snapd-client. Client process will keep trying to connect to glusterd until brick&#39;s port is available<br>
[2018-04-09 11:42:29.641701] E [MSGID: 108006] [afr-common.c:5143:__afr_handle_child_down_event] 2-urd-gds-volume-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up.</p>
<p><br>
</p>
<p>Other client on 192-net:</p>
<p>[2018-04-09 14:13:57.816783] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-urd-gds-volume-client-1: Server lk version = 1<br>
[2018-04-09 14:13:57.817092] I [MSGID: 114057] [client-handshake.c:1484:select_server_supported_programs] 0-urd-gds-volume-client-3: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
[2018-04-09 14:13:57.817208] I [rpc-clnt.c:1994:rpc_clnt_reconfig] 0-urd-gds-volume-client-4: changing port to 49152 (from 0)<br>
[2018-04-09 14:13:57.817388] W [socket.c:3216:socket_connect] 0-urd-gds-volume-client-2: Error disabling sockopt IPV6_V6ONLY: &quot;Protocol not available&quot;<br>
[2018-04-09 14:13:57.817623] I [rpc-clnt.c:1994:rpc_clnt_reconfig] 0-urd-gds-volume-client-5: changing port to 49153 (from 0)<br>
[2018-04-09 14:13:57.817658] I [rpc-clnt.c:1994:rpc_clnt_reconfig] 0-urd-gds-volume-snapd-client: changing port to 49153 (from 0)<br>
[2018-04-09 14:13:57.822047] W [socket.c:3216:socket_connect] 0-urd-gds-volume-client-4: Error disabling sockopt IPV6_V6ONLY: &quot;Protocol not available&quot;<br>
[2018-04-09 14:13:57.823419] W [socket.c:3216:socket_connect] 0-urd-gds-volume-client-5: Error disabling sockopt IPV6_V6ONLY: &quot;Protocol not available&quot;<br>
[2018-04-09 14:13:57.823613] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 0-urd-gds-volume-client-3: Connected to urd-gds-volume-client-3, attached to remote volume &#39;/urd-gds/gluster&#39;.<br>
[2018-04-09 14:13:57.823634] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 0-urd-gds-volume-client-3: Server and Client lk-version numbers are not same, reopening the fds<br>
[2018-04-09 14:13:57.823684] I [MSGID: 108005] [afr-common.c:5066:__afr_handle_child_up_event] 0-urd-gds-volume-replicate-1: Subvolume &#39;urd-gds-volume-client-3&#39; came back up; going online.<br>
[2018-04-09 14:13:57.825689] W [socket.c:3216:socket_connect] 0-urd-gds-volume-snapd-client: Error disabling sockopt IPV6_V6ONLY: &quot;Protocol not available&quot;<br>
[2018-04-09 14:13:57.825845] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-urd-gds-volume-client-3: Server lk version = 1<br>
[2018-04-09 14:13:57.825873] I [MSGID: 114057] [client-handshake.c:1484:select_server_supported_programs] 0-urd-gds-volume-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
[2018-04-09 14:13:57.826270] I [MSGID: 114057] [client-handshake.c:1484:select_server_supported_programs] 0-urd-gds-volume-client-4: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
[2018-04-09 14:13:57.826414] I [MSGID: 114057] [client-handshake.c:1484:select_server_supported_programs] 0-urd-gds-volume-client-5: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
[2018-04-09 14:13:57.826562] I [MSGID: 114057] [client-handshake.c:1484:select_server_supported_programs] 0-urd-gds-volume-snapd-client: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
[2018-04-09 14:13:57.827226] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 0-urd-gds-volume-client-2: Connected to urd-gds-volume-client-2, attached to remote volume &#39;/urd-gds/gluster1&#39;.<br>
[2018-04-09 14:13:57.827245] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 0-urd-gds-volume-client-2: Server and Client lk-version numbers are not same, reopening the fds<br>
[2018-04-09 14:13:57.827594] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 0-urd-gds-volume-client-4: Connected to urd-gds-volume-client-4, attached to remote volume &#39;/urd-gds/gluster&#39;.<br>
[2018-04-09 14:13:57.827630] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 0-urd-gds-volume-client-4: Server and Client lk-version numbers are not same, reopening the fds<br>
[2018-04-09 14:13:57.827750] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 0-urd-gds-volume-client-5: Connected to urd-gds-volume-client-5, attached to remote volume &#39;/urd-gds/gluster2&#39;.<br>
[2018-04-09 14:13:57.827775] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 0-urd-gds-volume-client-5: Server and Client lk-version numbers are not same, reopening the fds<br>
[2018-04-09 14:13:57.827782] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 0-urd-gds-volume-snapd-client: Connected to urd-gds-volume-snapd-client, attached to remote volume &#39;snapd-urd-gds-volume&#39;.<br>
[2018-04-09 14:13:57.827802] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 0-urd-gds-volume-snapd-client: Server and Client lk-version numbers are not same, reopening the fds<br>
[2018-04-09 14:13:57.829136] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-urd-gds-volume-client-2: Server lk version = 1<br>
[2018-04-09 14:13:57.829173] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-urd-gds-volume-client-5: Server lk version = 1<br>
[2018-04-09 14:13:57.829180] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-urd-gds-volume-client-4: Server lk version = 1<br>
[2018-04-09 14:13:57.829210] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-urd-gds-volume-snapd-client: Server lk version = 1<br>
[2018-04-09 14:13:57.829295] I [fuse-bridge.c:4205:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.26<br>
[2018-04-09 14:13:57.829320] I [fuse-bridge.c:4835:fuse_graph_sync] 0-fuse: switched to graph 0<br>
[2018-04-09 14:13:57.833539] I [MSGID: 109005] [dht-selfheal.c:2458:dht_selfheal_directory] 0-urd-gds-volume-dht: Directory selfheal failed: Unable to form layout for directory /<br>
</p>
<p><br>
</p>
<p>Client on 10-net:</p>
<p>[2018-04-09 11:35:31.113283] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-1: disconnected from urd-gds-volume-client-1. Client process will keep trying to connect to glusterd until brick&#39;s port is available<br>
[2018-04-09 11:35:31.113289] W [MSGID: 108001] [afr-common.c:5233:afr_notify] 2-urd-gds-volume-replicate-0: Client-quorum is not met<br>
[2018-04-09 11:35:31.113289] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-2: disconnected from urd-gds-volume-client-2. Client process will keep trying to connect to glusterd until brick&#39;s port is available<br>
[2018-04-09 11:35:31.113351] E [MSGID: 108006] [afr-common.c:5006:__afr_handle_child_down_event] 2-urd-gds-volume-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.<br>
[2018-04-09 11:35:31.113367] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-3: disconnected from urd-gds-volume-client-3. Client process will keep trying to connect to glusterd until brick&#39;s port is available<br>
[2018-04-09 11:35:31.113492] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-4: disconnected from urd-gds-volume-client-4. Client process will keep trying to connect to glusterd until brick&#39;s port is available<br>
[2018-04-09 11:35:31.113500] W [MSGID: 108001] [afr-common.c:5233:afr_notify] 2-urd-gds-volume-replicate-1: Client-quorum is not met<br>
[2018-04-09 11:35:31.113511] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-5: disconnected from urd-gds-volume-client-5. Client process will keep trying to connect to glusterd until brick&#39;s port is available<br>
[2018-04-09 11:35:31.113554] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-snapd-client: disconnected from urd-gds-volume-snapd-client. Client process will keep trying to connect to glusterd until brick&#39;s port is available<br>
[2018-04-09 11:35:31.113567] E [MSGID: 108006] [afr-common.c:5006:__afr_handle_child_down_event] 2-urd-gds-volume-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up.<br>
[2018-04-09 12:05:35.111892] I [fuse-bridge.c:4835:fuse_graph_sync] 0-fuse: switched to graph 5<br>
[2018-04-09 12:05:35.116187] I [MSGID: 114021] [client.c:2369:notify] 0-urd-gds-volume-client-0: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 12:05:35.116214] I [MSGID: 114021] [client.c:2369:notify] 0-urd-gds-volume-client-1: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 12:05:35.116223] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 0-urd-gds-volume-client-0: disconnected from urd-gds-volume-client-0. Client process will keep trying to connect to glusterd until brick&#39;s port is available<br>
[2018-04-09 12:05:35.116227] I [MSGID: 114021] [client.c:2369:notify] 0-urd-gds-volume-client-2: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 12:05:35.116252] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 0-urd-gds-volume-client-1: disconnected from urd-gds-volume-client-1. Client process will keep trying to connect to glusterd until brick&#39;s port is available<br>
[2018-04-09 12:05:35.116257] I [MSGID: 114021] [client.c:2369:notify] 0-urd-gds-volume-client-3: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 12:05:35.116258] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 0-urd-gds-volume-client-2: disconnected from urd-gds-volume-client-2. Client process will keep trying to connect to glusterd until brick&#39;s port is available<br>
[2018-04-09 12:05:35.116273] I [MSGID: 114021] [client.c:2369:notify] 0-urd-gds-volume-client-4: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 12:05:35.116273] W [MSGID: 108001] [afr-common.c:5233:afr_notify] 0-urd-gds-volume-replicate-0: Client-quorum is not met<br>
[2018-04-09 12:05:35.116288] I [MSGID: 114021] [client.c:2369:notify] 0-urd-gds-volume-client-5: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 12:05:35.116393] E [MSGID: 108006] [afr-common.c:5006:__afr_handle_child_down_event] 0-urd-gds-volume-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.<br>
[2018-04-09 12:05:35.116397] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 0-urd-gds-volume-client-3: disconnected from urd-gds-volume-client-3. Client process will keep trying to connect to glusterd until brick&#39;s port is available<br>
[2018-04-09 12:05:35.116574] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 0-urd-gds-volume-client-4: disconnected from urd-gds-volume-client-4. Client process will keep trying to connect to glusterd until brick&#39;s port is available<br>
[2018-04-09 12:05:35.116575] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 0-urd-gds-volume-client-5: disconnected from urd-gds-volume-client-5. Client process will keep trying to connect to glusterd until brick&#39;s port is available<br>
[2018-04-09 12:05:35.116592] W [MSGID: 108001] [afr-common.c:5233:afr_notify] 0-urd-gds-volume-replicate-1: Client-quorum is not met<br>
[2018-04-09 12:05:35.116646] E [MSGID: 108006] [afr-common.c:5006:__afr_handle_child_down_event] 0-urd-gds-volume-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up.<br>
[2018-04-09 12:13:18.767382] I [MSGID: 109066] [dht-rename.c:1741:dht_rename] 5-urd-gds-volume-dht: renaming /interbull/backup/scripts/backup/gsnapshotctl.sh (hash=urd-gds-volume-replicate-0/cache=urd-gds-volume-replicate-0) =&gt; /interbull/backup/scripts/backup/gsnapshotctl.sh~ (hash=urd-gds-volume-replicate-1/cache=&lt;nul&gt;)<br>
[2018-04-09 13:34:54.031860] I [MSGID: 109066] [dht-rename.c:1741:dht_rename] 5-urd-gds-volume-dht: renaming /interbull/backup/scripts/backup/bkp_gluster_to_ribston.sh (hash=urd-gds-volume-replicate-0/cache=urd-gds-volume-replicate-0) =&gt; /interbull/backup/scripts/backup/bkp_gluster_to_ribston.sh~ (hash=urd-gds-volume-replicate-1/cache=urd-gds-volume-replicate-0)</p>
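<p>When comparing the three client logs above, it helps to pull the timestamp, severity, and message ID out of each line so events can be sorted and correlated across machines. A small sketch in Python, matching the standard gluster log line format shown above (lines without a MSGID, such as the socket warnings, are skipped):</p>

```python
import re
from datetime import datetime

# Matches gluster log lines like:
# [2018-04-09 11:42:29.628272] W [MSGID: 108001] [...] 2-...: Client-quorum is not met
LOG_RE = re.compile(
    r"\[(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\] "
    r"(?P<level>[A-Z]) \[MSGID: (?P<msgid>\d+)\]"
)

def parse_line(line):
    """Return (timestamp, level, msgid) for a gluster log line, or None."""
    m = LOG_RE.search(line)
    if not m:
        return None
    ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S.%f")
    return ts, m.group("level"), int(m.group("msgid"))

line = ("[2018-04-09 11:42:29.628272] W [MSGID: 108001] "
        "[afr-common.c:5370:afr_notify] 2-urd-gds-volume-replicate-0: "
        "Client-quorum is not met")
ts, level, msgid = parse_line(line)
print(ts.isoformat(), level, msgid)  # 2018-04-09T11:42:29.628272 W 108001
```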
<p><br>
</p>
<p> <br>
</p>
<p>Many thanks in advance!!</p>
<p><br>
</p>
<p>Best regards</p>
<p>Marcus</p>
<p><br>
</p>
<p>--<br>
**************************************************<br>
* Marcus Pedersén                                *<br>
* System administrator                           *<br>
**************************************************<br>
* Interbull Centre                               *<br>
* ================                               *<br>
* Department of Animal Breeding &amp; Genetics — SLU *<br>
* Box 7023, SE-750 07                            *<br>
* Uppsala, Sweden                                *<br>
**************************************************<br>
* Visiting address:                              *<br>
* Room 55614, Ulls väg 26, Ultuna                *<br>
* Uppsala                                        *<br>
* Sweden                                         *<br>
*                                                *<br>
* Tel: +46-(0)18-67 1962                         *<br>
*                                                *<br>
**************************************************<br>
*     ISO 9001 Bureau Veritas No SE004561-1      *<br>
**************************************************<br>
<br>
</p>
</div>

<br>______________________________<wbr>_________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr">Milind<br><br></div></div></div></div>
</div>