<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<style type="text/css" style="display:none"><!--P{margin-top:0;margin-bottom:0;} --></style>
</head>
<body dir="ltr" style="font-size:12pt;color:#000000;background-color:#FFFFFF;font-family:Calibri,Arial,Helvetica,sans-serif;">
<p>Hi all!</p>
<p>I have set up a distributed-replicated Gluster cluster, 2 x (2 + 1).</p>
<p>The servers run CentOS 7 with Gluster version 3.12.6.<br>
</p>
<p>All machines have two network interfaces and are connected to two different networks:</p>
<p>10.10.0.0/16 (hostnames in /etc/hosts, Gluster version 3.12.6)</p>
<p>192.168.67.0/24 (hostnames via LDAP, Gluster version 3.13.1)</p>
<p>The Gluster cluster was created on the 10.10.0.0/16 net: gluster peer probe ...and so on.</p>
<p>All nodes are reachable on both networks and have the same hostnames on both.</p>
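<p>Since the nodes answer to the same hostnames on both networks, it may be worth checking which subnet a given client actually resolves a node to. A minimal sketch of that check (the addresses below are hypothetical):</p>

```python
import ipaddress

# The two subnets described above.
NETS = {
    "10-net": ipaddress.ip_network("10.10.0.0/16"),
    "192-net": ipaddress.ip_network("192.168.67.0/24"),
}

def classify(addr):
    """Return the name of the subnet an address belongs to, or None."""
    ip = ipaddress.ip_address(addr)
    for name, net in NETS.items():
        if ip in net:
            return name
    return None

# Hypothetical addresses for one node's two interfaces:
print(classify("10.10.3.7"))      # -> 10-net
print(classify("192.168.67.42"))  # -> 192-net
```

<p>In practice one would feed the addresses returned by socket.getaddrinfo(hostname, None) on each client into classify() to see which interface a mount is actually using.</p>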
<p><br>
</p>
<p>Now to my problem: the Gluster volume is mounted on multiple clients on the 192.168.67.0/24 net,</p>
<p>and a process was running on one of those clients, reading and writing files.
</p>
<p>At the same time I mounted the volume on a client on the 10.10.0.0/16 net and started to create</p>
<p>and edit files on it. Around the same time, the process on the 192-net stopped without any</p>
<p>specific errors. I started other processes on the 192-net and continued to make changes on the 10-net,
</p>
<p>and got the same behavior: the processes on the 192-net stopped.</p>
<p><br>
</p>
<p>Are there any known problems with this type of setup?</p>
<p>How should I proceed to find a solution, since I need access from both networks?<br>
</p>
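<p>For context on the logs below: with a 2 + 1 (arbiter) replica set, the client goes read-only or offline once it can no longer reach a majority of the three bricks in a set, which matches the "Client-quorum is not met" warnings. A simplified sketch of that majority rule (not Gluster's actual implementation):</p>

```python
def client_quorum_met(bricks_up, replica_count=3):
    """Simplified 'auto' client-quorum rule: met while a strict
    majority of the bricks in the replica set is reachable."""
    return bricks_up > replica_count // 2

# For a 2 + 1 arbiter set (three bricks per replica group):
print(client_quorum_met(3))  # True  - all bricks up
print(client_quorum_met(2))  # True  - one brick down, majority remains
print(client_quorum_met(1))  # False - quorum lost, client goes offline
```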
<p><br>
</p>
<p>The following error shows up a couple of times on the servers (systemd -&gt; glusterd):</p>
<p>[2018-04-09 11:46:46.254071] C [mem-pool.c:613:mem_pools_init_early] 0-mem-pool: incorrect order of mem-pool initialization (init_done=3)<br>
</p>
<p><br>
</p>
<p>Client logs:</p>
<p>Client on 192-net:</p>
<p>[2018-04-09 11:35:31.402979] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 5-urd-gds-volume-client-1: Connected to urd-gds-volume-client-1, attached to remote volume '/urd-gds/gluster'.<br>
[2018-04-09 11:35:31.403019] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 5-urd-gds-volume-client-1: Server and Client lk-version numbers are not same, reopening the fds<br>
[2018-04-09 11:35:31.403051] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 5-urd-gds-volume-snapd-client: Connected to urd-gds-volume-snapd-client, attached to remote volume 'snapd-urd-gds-volume'.<br>
[2018-04-09 11:35:31.403091] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 5-urd-gds-volume-snapd-client: Server and Client lk-version numbers are not same, reopening the fds<br>
[2018-04-09 11:35:31.403271] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-client-3: Server lk version = 1<br>
[2018-04-09 11:35:31.403325] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-client-4: Server lk version = 1<br>
[2018-04-09 11:35:31.403349] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-client-0: Server lk version = 1<br>
[2018-04-09 11:35:31.403367] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-client-2: Server lk version = 1<br>
[2018-04-09 11:35:31.403616] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-client-1: Server lk version = 1<br>
[2018-04-09 11:35:31.403751] I [MSGID: 114057] [client-handshake.c:1484:select_server_supported_programs] 5-urd-gds-volume-client-5: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
[2018-04-09 11:35:31.404174] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-snapd-client: Server lk version = 1<br>
[2018-04-09 11:35:31.405030] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 5-urd-gds-volume-client-5: Connected to urd-gds-volume-client-5, attached to remote volume '/urd-gds/gluster2'.<br>
[2018-04-09 11:35:31.405069] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 5-urd-gds-volume-client-5: Server and Client lk-version numbers are not same, reopening the fds<br>
[2018-04-09 11:35:31.405585] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-client-5: Server lk version = 1<br>
[2018-04-09 11:42:29.622006] I [fuse-bridge.c:4835:fuse_graph_sync] 0-fuse: switched to graph 5<br>
[2018-04-09 11:42:29.627533] I [MSGID: 109005] [dht-selfheal.c:2458:dht_selfheal_directory] 5-urd-gds-volume-dht: Directory selfheal failed: Unable to form layout for directory /<br>
[2018-04-09 11:42:29.627935] I [MSGID: 114021] [client.c:2369:notify] 2-urd-gds-volume-client-0: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 11:42:29.628013] I [MSGID: 114021] [client.c:2369:notify] 2-urd-gds-volume-client-1: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 11:42:29.628047] I [MSGID: 114021] [client.c:2369:notify] 2-urd-gds-volume-client-2: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 11:42:29.628069] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-0: disconnected from urd-gds-volume-client-0. Client process will keep trying to connect to glusterd until brick's port is available<br>
[2018-04-09 11:42:29.628077] I [MSGID: 114021] [client.c:2369:notify] 2-urd-gds-volume-client-3: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 11:42:29.628184] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-1: disconnected from urd-gds-volume-client-1. Client process will keep trying to connect to glusterd until brick's port is available<br>
[2018-04-09 11:42:29.628191] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-2: disconnected from urd-gds-volume-client-2. Client process will keep trying to connect to glusterd until brick's port is available<br>
[2018-04-09 11:42:29.628272] W [MSGID: 108001] [afr-common.c:5370:afr_notify] 2-urd-gds-volume-replicate-0: Client-quorum is not met<br>
[2018-04-09 11:42:29.628299] I [MSGID: 114021] [client.c:2369:notify] 2-urd-gds-volume-client-4: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 11:42:29.628349] I [MSGID: 114021] [client.c:2369:notify] 2-urd-gds-volume-client-5: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 11:42:29.628382] I [MSGID: 114021] [client.c:2369:notify] 2-urd-gds-volume-snapd-client: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 11:42:29.632749] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-3: disconnected from urd-gds-volume-client-3. Client process will keep trying to connect to glusterd until brick's port is available<br>
[2018-04-09 11:42:29.632804] E [MSGID: 108006] [afr-common.c:5143:__afr_handle_child_down_event] 2-urd-gds-volume-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.<br>
[2018-04-09 11:42:29.637247] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-4: disconnected from urd-gds-volume-client-4. Client process will keep trying to connect to glusterd until brick's port is available<br>
[2018-04-09 11:42:29.637294] W [MSGID: 108001] [afr-common.c:5370:afr_notify] 2-urd-gds-volume-replicate-1: Client-quorum is not met<br>
[2018-04-09 11:42:29.637330] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-5: disconnected from urd-gds-volume-client-5. Client process will keep trying to connect to glusterd until brick's port is available<br>
[2018-04-09 11:42:29.641674] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-snapd-client: disconnected from urd-gds-volume-snapd-client. Client process will keep trying to connect to glusterd until brick's port is available<br>
[2018-04-09 11:42:29.641701] E [MSGID: 108006] [afr-common.c:5143:__afr_handle_child_down_event] 2-urd-gds-volume-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up.</p>
<p><br>
</p>
<p>Other client on 192-net:</p>
<p>[2018-04-09 14:13:57.816783] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-urd-gds-volume-client-1: Server lk version = 1<br>
[2018-04-09 14:13:57.817092] I [MSGID: 114057] [client-handshake.c:1484:select_server_supported_programs] 0-urd-gds-volume-client-3: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
[2018-04-09 14:13:57.817208] I [rpc-clnt.c:1994:rpc_clnt_reconfig] 0-urd-gds-volume-client-4: changing port to 49152 (from 0)<br>
[2018-04-09 14:13:57.817388] W [socket.c:3216:socket_connect] 0-urd-gds-volume-client-2: Error disabling sockopt IPV6_V6ONLY: &quot;Protocol not available&quot;<br>
[2018-04-09 14:13:57.817623] I [rpc-clnt.c:1994:rpc_clnt_reconfig] 0-urd-gds-volume-client-5: changing port to 49153 (from 0)<br>
[2018-04-09 14:13:57.817658] I [rpc-clnt.c:1994:rpc_clnt_reconfig] 0-urd-gds-volume-snapd-client: changing port to 49153 (from 0)<br>
[2018-04-09 14:13:57.822047] W [socket.c:3216:socket_connect] 0-urd-gds-volume-client-4: Error disabling sockopt IPV6_V6ONLY: &quot;Protocol not available&quot;<br>
[2018-04-09 14:13:57.823419] W [socket.c:3216:socket_connect] 0-urd-gds-volume-client-5: Error disabling sockopt IPV6_V6ONLY: &quot;Protocol not available&quot;<br>
[2018-04-09 14:13:57.823613] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 0-urd-gds-volume-client-3: Connected to urd-gds-volume-client-3, attached to remote volume '/urd-gds/gluster'.<br>
[2018-04-09 14:13:57.823634] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 0-urd-gds-volume-client-3: Server and Client lk-version numbers are not same, reopening the fds<br>
[2018-04-09 14:13:57.823684] I [MSGID: 108005] [afr-common.c:5066:__afr_handle_child_up_event] 0-urd-gds-volume-replicate-1: Subvolume 'urd-gds-volume-client-3' came back up; going online.<br>
[2018-04-09 14:13:57.825689] W [socket.c:3216:socket_connect] 0-urd-gds-volume-snapd-client: Error disabling sockopt IPV6_V6ONLY: &quot;Protocol not available&quot;<br>
[2018-04-09 14:13:57.825845] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-urd-gds-volume-client-3: Server lk version = 1<br>
[2018-04-09 14:13:57.825873] I [MSGID: 114057] [client-handshake.c:1484:select_server_supported_programs] 0-urd-gds-volume-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
[2018-04-09 14:13:57.826270] I [MSGID: 114057] [client-handshake.c:1484:select_server_supported_programs] 0-urd-gds-volume-client-4: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
[2018-04-09 14:13:57.826414] I [MSGID: 114057] [client-handshake.c:1484:select_server_supported_programs] 0-urd-gds-volume-client-5: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
[2018-04-09 14:13:57.826562] I [MSGID: 114057] [client-handshake.c:1484:select_server_supported_programs] 0-urd-gds-volume-snapd-client: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
[2018-04-09 14:13:57.827226] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 0-urd-gds-volume-client-2: Connected to urd-gds-volume-client-2, attached to remote volume '/urd-gds/gluster1'.<br>
[2018-04-09 14:13:57.827245] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 0-urd-gds-volume-client-2: Server and Client lk-version numbers are not same, reopening the fds<br>
[2018-04-09 14:13:57.827594] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 0-urd-gds-volume-client-4: Connected to urd-gds-volume-client-4, attached to remote volume '/urd-gds/gluster'.<br>
[2018-04-09 14:13:57.827630] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 0-urd-gds-volume-client-4: Server and Client lk-version numbers are not same, reopening the fds<br>
[2018-04-09 14:13:57.827750] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 0-urd-gds-volume-client-5: Connected to urd-gds-volume-client-5, attached to remote volume '/urd-gds/gluster2'.<br>
[2018-04-09 14:13:57.827775] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 0-urd-gds-volume-client-5: Server and Client lk-version numbers are not same, reopening the fds<br>
[2018-04-09 14:13:57.827782] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 0-urd-gds-volume-snapd-client: Connected to urd-gds-volume-snapd-client, attached to remote volume 'snapd-urd-gds-volume'.<br>
[2018-04-09 14:13:57.827802] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 0-urd-gds-volume-snapd-client: Server and Client lk-version numbers are not same, reopening the fds<br>
[2018-04-09 14:13:57.829136] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-urd-gds-volume-client-2: Server lk version = 1<br>
[2018-04-09 14:13:57.829173] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-urd-gds-volume-client-5: Server lk version = 1<br>
[2018-04-09 14:13:57.829180] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-urd-gds-volume-client-4: Server lk version = 1<br>
[2018-04-09 14:13:57.829210] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-urd-gds-volume-snapd-client: Server lk version = 1<br>
[2018-04-09 14:13:57.829295] I [fuse-bridge.c:4205:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.26<br>
[2018-04-09 14:13:57.829320] I [fuse-bridge.c:4835:fuse_graph_sync] 0-fuse: switched to graph 0<br>
[2018-04-09 14:13:57.833539] I [MSGID: 109005] [dht-selfheal.c:2458:dht_selfheal_directory] 0-urd-gds-volume-dht: Directory selfheal failed: Unable to form layout for directory /<br>
</p>
<p><br>
</p>
<p>Client on 10-net:</p>
<p>[2018-04-09 11:35:31.113283] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-1: disconnected from urd-gds-volume-client-1. Client process will keep trying to connect to glusterd until brick's port is available<br>
[2018-04-09 11:35:31.113289] W [MSGID: 108001] [afr-common.c:5233:afr_notify] 2-urd-gds-volume-replicate-0: Client-quorum is not met<br>
[2018-04-09 11:35:31.113289] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-2: disconnected from urd-gds-volume-client-2. Client process will keep trying to connect to glusterd until brick's port is available<br>
[2018-04-09 11:35:31.113351] E [MSGID: 108006] [afr-common.c:5006:__afr_handle_child_down_event] 2-urd-gds-volume-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.<br>
[2018-04-09 11:35:31.113367] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-3: disconnected from urd-gds-volume-client-3. Client process will keep trying to connect to glusterd until brick's port is available<br>
[2018-04-09 11:35:31.113492] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-4: disconnected from urd-gds-volume-client-4. Client process will keep trying to connect to glusterd until brick's port is available<br>
[2018-04-09 11:35:31.113500] W [MSGID: 108001] [afr-common.c:5233:afr_notify] 2-urd-gds-volume-replicate-1: Client-quorum is not met<br>
[2018-04-09 11:35:31.113511] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-5: disconnected from urd-gds-volume-client-5. Client process will keep trying to connect to glusterd until brick's port is available<br>
[2018-04-09 11:35:31.113554] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-snapd-client: disconnected from urd-gds-volume-snapd-client. Client process will keep trying to connect to glusterd until brick's port is available<br>
[2018-04-09 11:35:31.113567] E [MSGID: 108006] [afr-common.c:5006:__afr_handle_child_down_event] 2-urd-gds-volume-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up.<br>
[2018-04-09 12:05:35.111892] I [fuse-bridge.c:4835:fuse_graph_sync] 0-fuse: switched to graph 5<br>
[2018-04-09 12:05:35.116187] I [MSGID: 114021] [client.c:2369:notify] 0-urd-gds-volume-client-0: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 12:05:35.116214] I [MSGID: 114021] [client.c:2369:notify] 0-urd-gds-volume-client-1: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 12:05:35.116223] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 0-urd-gds-volume-client-0: disconnected from urd-gds-volume-client-0. Client process will keep trying to connect to glusterd until brick's port is available<br>
[2018-04-09 12:05:35.116227] I [MSGID: 114021] [client.c:2369:notify] 0-urd-gds-volume-client-2: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 12:05:35.116252] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 0-urd-gds-volume-client-1: disconnected from urd-gds-volume-client-1. Client process will keep trying to connect to glusterd until brick's port is available<br>
[2018-04-09 12:05:35.116257] I [MSGID: 114021] [client.c:2369:notify] 0-urd-gds-volume-client-3: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 12:05:35.116258] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 0-urd-gds-volume-client-2: disconnected from urd-gds-volume-client-2. Client process will keep trying to connect to glusterd until brick's port is available<br>
[2018-04-09 12:05:35.116273] I [MSGID: 114021] [client.c:2369:notify] 0-urd-gds-volume-client-4: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 12:05:35.116273] W [MSGID: 108001] [afr-common.c:5233:afr_notify] 0-urd-gds-volume-replicate-0: Client-quorum is not met<br>
[2018-04-09 12:05:35.116288] I [MSGID: 114021] [client.c:2369:notify] 0-urd-gds-volume-client-5: current graph is no longer active, destroying rpc_client<br>
[2018-04-09 12:05:35.116393] E [MSGID: 108006] [afr-common.c:5006:__afr_handle_child_down_event] 0-urd-gds-volume-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.<br>
[2018-04-09 12:05:35.116397] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 0-urd-gds-volume-client-3: disconnected from urd-gds-volume-client-3. Client process will keep trying to connect to glusterd until brick's port is available<br>
[2018-04-09 12:05:35.116574] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 0-urd-gds-volume-client-4: disconnected from urd-gds-volume-client-4. Client process will keep trying to connect to glusterd until brick's port is available<br>
[2018-04-09 12:05:35.116575] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 0-urd-gds-volume-client-5: disconnected from urd-gds-volume-client-5. Client process will keep trying to connect to glusterd until brick's port is available<br>
[2018-04-09 12:05:35.116592] W [MSGID: 108001] [afr-common.c:5233:afr_notify] 0-urd-gds-volume-replicate-1: Client-quorum is not met<br>
[2018-04-09 12:05:35.116646] E [MSGID: 108006] [afr-common.c:5006:__afr_handle_child_down_event] 0-urd-gds-volume-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up.<br>
[2018-04-09 12:13:18.767382] I [MSGID: 109066] [dht-rename.c:1741:dht_rename] 5-urd-gds-volume-dht: renaming /interbull/backup/scripts/backup/gsnapshotctl.sh (hash=urd-gds-volume-replicate-0/cache=urd-gds-volume-replicate-0) =&gt; /interbull/backup/scripts/backup/gsnapshotctl.sh~ (hash=urd-gds-volume-replicate-1/cache=&lt;nul&gt;)<br>
[2018-04-09 13:34:54.031860] I [MSGID: 109066] [dht-rename.c:1741:dht_rename] 5-urd-gds-volume-dht: renaming /interbull/backup/scripts/backup/bkp_gluster_to_ribston.sh (hash=urd-gds-volume-replicate-0/cache=urd-gds-volume-replicate-0) =&gt; /interbull/backup/scripts/backup/bkp_gluster_to_ribston.sh~ (hash=urd-gds-volume-replicate-1/cache=urd-gds-volume-replicate-0)</p>
<p><br>
</p>
<p>Many thanks in advance!!</p>
<p><br>
</p>
<p>Best regards</p>
<p>Marcus</p>
<p><br>
</p>
<p>--<br>
**************************************************<br>
* Marcus Pedersén&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *<br>
* System administrator&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *<br>
**************************************************<br>
* Interbull Centre&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *<br>
* ================&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *<br>
* Department of Animal Breeding &amp; Genetics &#8212; SLU *<br>
* Box 7023, SE-750 07&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *<br>
* Uppsala, Sweden&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *<br>
**************************************************<br>
* Visiting address:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *<br>
* Room 55614, Ulls väg 26, Ultuna&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *<br>
* Uppsala&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *<br>
* Sweden&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *<br>
*&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *<br>
* Tel: &#43;46-(0)18-67 1962&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *<br>
*&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *<br>
**************************************************<br>
*&nbsp;&nbsp;&nbsp;&nbsp; ISO 9001 Bureau Veritas No SE004561-1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *<br>
**************************************************<br>
<br>
</p>
</body>
</html>