<div dir="ltr"><div><div><div>Could you share the following information:<br><br></div>1. gluster --version<br></div>2. output of gluster volume status<br></div>3. glusterd log and all brick log files from the node where bricks didn&#39;t come up.<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Mar 21, 2018 at 12:35 PM, Richard Neuboeck <span dir="ltr">&lt;<a href="mailto:hawk@tbi.univie.ac.at" target="_blank">hawk@tbi.univie.ac.at</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi all,<br>
<br>
our systems have suffered a host failure in a replica three setup.<br>
The host needed a complete reinstall. I followed the RH guide to<br>
&#39;replace a host with the same hostname&#39;<br>
(<a href="https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/sect-replacing_hosts" rel="noreferrer" target="_blank">https://access.redhat.com/<wbr>documentation/en-us/red_hat_<wbr>gluster_storage/3/html/<wbr>administration_guide/sect-<wbr>replacing_hosts</a>).<br>
<br>
The machine has the same OS (CentOS 7). The new machine got a minor<br>
version number newer gluster packages<br>
(glusterfs-3.12.6-1.el7.x86_<wbr>64) than the others<br>
(glusterfs-3.12.5-2.el7.x86_<wbr>64).<br>
<br>
The guide told me to create /var/lib/glusterd/<a href="http://glusterd.info" rel="noreferrer" target="_blank">glusterd.<wbr>info</a> with the<br>
UUID from the old host.<br>
Then I copied /var/lib/glusterd/peers/&lt;uuid&gt; files from the two<br>
other hosts to the new (except the uuid file from the old host).<br>
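For concreteness, those two steps amount to roughly the following
(a minimal sketch, not verified here; the UUID is a placeholder and the
op-version is taken from the log below):

# on the replacement host: restore the old identity
cat > /var/lib/glusterd/glusterd.info <<EOF
UUID=<uuid-of-the-old-host>
operating-version=31202
EOF

# copy the peer definitions over from the surviving hosts, skipping
# the file that describes the replacement host itself
scp root@borg-sphere-one:/var/lib/glusterd/peers/* /var/lib/glusterd/peers/
rm -f /var/lib/glusterd/peers/<uuid-of-the-old-host>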
I created all the brick directories as present on the other machines
(empty, of course) and set the volume-id extended attribute to the
value retrieved from the running hosts.
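That step looks roughly like this for one brick (a sketch; the brick
path is taken from the log below, and the hex value is whatever
getfattr reports on a healthy node):

# on a healthy node: read the volume-id of the brick
getfattr -n trusted.glusterfs.volume-id -e hex /srv/gluster_engine/brick

# on the replacement host: apply the same value to the empty brick
setfattr -n trusted.glusterfs.volume-id -v 0x<hex-value-from-above> \
    /srv/gluster_engine/brick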

On one of the old hosts I mounted each export, created and removed a
directory, and set and removed an extended attribute, as the guide
suggested, to trigger self-healing.
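In concrete terms, per volume, that trigger looks like this (volume
name and mount point are illustrative):

mount -t glusterfs borg-sphere-one:/engine /mnt/engine
mkdir /mnt/engine/test && rmdir /mnt/engine/test
setfattr -n trusted.non-existent-key -v abc /mnt/engine
setfattr -x trusted.non-existent-key /mnt/engine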

After that I started the gluster daemons (systemctl start glusterd
glusterfsd).

The new host lists the other peers as connected (and vice versa), but
no brick processes are started. So the replacement bricks are not in
use and no healing is done.
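For reference, that state can be checked with the following (volume
name taken from the log; in gluster volume status the Online column
should show Y for every brick):

gluster peer status
gluster volume status engine
pgrep -af glusterfsd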

I checked the logs and searched online but couldn't find a reason why
the brick processes are not running or how to get them running.

Is there a way to get the brick processes started (preferably without
shutting down the other hosts, since they are in use)?
Does anyone have a different approach to replacing a faulty host?
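One commonly suggested, non-disruptive thing to try (it only spawns
bricks that are down and leaves running bricks on the other hosts
alone; whether it helps in this particular case is an open question):

gluster volume start engine force   # re-spawn only the missing bricks
gluster volume heal engine full     # then trigger a full heal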

Thanks in advance!
Cheers
Richard


Here is the glusterd.log. I've seen the disconnect messages but no
indication of why they happen.
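(To pull only the warning/error entries out of the logs, something
like the following works; the brick log file name is an assumption,
derived from the brick path per the usual /var/log/glusterfs/bricks
naming scheme:)

grep -E '\] (E|C) \[' /var/log/glusterfs/glusterd.log
grep -E '\] (E|C) \[' /var/log/glusterfs/bricks/srv-gluster_engine-brick.log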
/var/log/glusterfs/glusterd.log
[2018-03-20 13:34:01.333423] I [MSGID: 100030]
[glusterfsd.c:2524:main] 0-/usr/sbin/glusterd: Started running
/usr/sbin/glusterd version 3.12.6 (args: /usr/sbin/glusterd -p
/var/run/glusterd.pid --log-level INFO)
[2018-03-20 13:34:01.339203] I [MSGID: 106478]
[glusterd.c:1423:init] 0-management: Maximum allowed open file
descriptors set to 65536
[2018-03-20 13:34:01.339243] I [MSGID: 106479]
[glusterd.c:1481:init] 0-management: Using /var/lib/glusterd as
working directory
[2018-03-20 13:34:01.339256] I [MSGID: 106479]
[glusterd.c:1486:init] 0-management: Using /var/run/gluster as pid
file working directory
[2018-03-20 13:34:01.343809] E
[rpc-transport.c:283:rpc_transport_load] 0-rpc-transport:
/usr/lib64/glusterfs/3.12.6/rpc-transport/rdma.so: cannot open
shared object file: No such file or directory
[2018-03-20 13:34:01.343836] W
[rpc-transport.c:287:rpc_transport_load] 0-rpc-transport: volume
'rdma.management': transport-type 'rdma' is not valid or not found
on this machine
[2018-03-20 13:34:01.343847] W
[rpcsvc.c:1682:rpcsvc_create_listener] 0-rpc-service: cannot create
listener, initing the transport failed
[2018-03-20 13:34:01.343855] E [MSGID: 106243]
[glusterd.c:1769:init] 0-management: creation of 1 listeners failed,
continuing with succeeded transport
[2018-03-20 13:34:01.344594] I [MSGID: 106228]
[glusterd.c:499:glusterd_check_gsync_present] 0-glusterd:
geo-replication module not installed in the system [No such file or
directory]
[2018-03-20 13:34:01.344936] I [MSGID: 106513]
[glusterd-store.c:2241:glusterd_restore_op_version] 0-glusterd:
retrieved op-version: 31202
[2018-03-20 13:34:01.471227] I [MSGID: 106498]
[glusterd-handler.c:3603:glusterd_friend_add_from_peerinfo]
0-management: connect returned 0
[2018-03-20 13:34:01.471297] I [MSGID: 106498]
[glusterd-handler.c:3603:glusterd_friend_add_from_peerinfo]
0-management: connect returned 0
[2018-03-20 13:34:01.471325] W [MSGID: 106062]
[glusterd-handler.c:3400:glusterd_transport_inet_options_build]
0-glusterd: Failed to get tcp-user-timeout
[2018-03-20 13:34:01.471351] I
[rpc-clnt.c:1044:rpc_clnt_connection_init] 0-management: setting
frame-timeout to 600
[2018-03-20 13:34:01.471412] W [MSGID: 101002]
[options.c:995:xl_opt_validate] 0-management: option
'address-family' is deprecated, preferred is
'transport.address-family', continuing with correction
[2018-03-20 13:34:01.474137] W [MSGID: 106062]
[glusterd-handler.c:3400:glusterd_transport_inet_options_build]
0-glusterd: Failed to get tcp-user-timeout
[2018-03-20 13:34:01.474161] I
[rpc-clnt.c:1044:rpc_clnt_connection_init] 0-management: setting
frame-timeout to 600
[2018-03-20 13:34:01.474238] W [MSGID: 101002]
[options.c:995:xl_opt_validate] 0-management: option
'address-family' is deprecated, preferred is
'transport.address-family', continuing with correction
[2018-03-20 13:34:01.476646] I [MSGID: 106544]
[glusterd.c:158:glusterd_uuid_init] 0-management: retrieved UUID:
e4ed3102-9794-494b-af36-d767d8a72678
Final graph:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option rpc-auth.auth-glusterfs on
  4:     option rpc-auth.auth-unix on
  5:     option rpc-auth.auth-null on
  6:     option transport.listen-backlog 10
  7:     option rpc-auth-allow-insecure on
  8:     option event-threads 1
  9:     option ping-timeout 0
 10:     option transport.socket.read-fail-log off
 11:     option transport.socket.keepalive-interval 2
 12:     option transport.socket.keepalive-time 10
 13:     option transport-type rdma
 14:     option working-directory /var/lib/glusterd
 15: end-volume
 16:
+------------------------------------------------------------------------------+
[2018-03-20 13:34:01.476895] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 1
[2018-03-20 13:34:12.197917] I [MSGID: 106493]
[glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd:
Received ACC from uuid: 0acd0bff-c38f-4c49-82da-4112d22dfd2c, host:
borg-sphere-three, port: 0
[2018-03-20 13:34:12.198929] C [MSGID: 106003]
[glusterd-server-quorum.c:354:glusterd_do_volume_quorum_action]
0-management: Server quorum regained for volume engine. Starting
local bricks.
[2018-03-20 13:34:12.199166] I
[glusterd-utils.c:5941:glusterd_brick_start] 0-management: starting
a fresh brick process for brick /srv/gluster_engine/brick
[2018-03-20 13:34:12.202498] I
[rpc-clnt.c:1044:rpc_clnt_connection_init] 0-management: setting
frame-timeout to 600
[2018-03-20 13:34:12.208389] C [MSGID: 106003]
[glusterd-server-quorum.c:354:glusterd_do_volume_quorum_action]
0-management: Server quorum regained for volume export. Starting
local bricks.
[2018-03-20 13:34:12.208622] I
[glusterd-utils.c:5941:glusterd_brick_start] 0-management: starting
a fresh brick process for brick /srv/gluster_export/brick
[2018-03-20 13:34:12.211426] I
[rpc-clnt.c:1044:rpc_clnt_connection_init] 0-management: setting
frame-timeout to 600
[2018-03-20 13:34:12.216722] C [MSGID: 106003]
[glusterd-server-quorum.c:354:glusterd_do_volume_quorum_action]
0-management: Server quorum regained for volume iso. Starting local
bricks.
[2018-03-20 13:34:12.216906] I
[glusterd-utils.c:5941:glusterd_brick_start] 0-management: starting
a fresh brick process for brick /srv/gluster_iso/brick
[2018-03-20 13:34:12.219439] I
[rpc-clnt.c:1044:rpc_clnt_connection_init] 0-management: setting
frame-timeout to 600
[2018-03-20 13:34:12.224400] C [MSGID: 106003]
[glusterd-server-quorum.c:354:glusterd_do_volume_quorum_action]
0-management: Server quorum regained for volume plexus. Starting
local bricks.
[2018-03-20 13:34:12.224555] I
[glusterd-utils.c:5941:glusterd_brick_start] 0-management: starting
a fresh brick process for brick /srv/gluster_plexus/brick
[2018-03-20 13:34:12.226902] I
[rpc-clnt.c:1044:rpc_clnt_connection_init] 0-management: setting
frame-timeout to 600
[2018-03-20 13:34:12.231689] I
[rpc-clnt.c:1044:rpc_clnt_connection_init] 0-nfs: setting
frame-timeout to 600
[2018-03-20 13:34:12.231986] I [MSGID: 106132]
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs
already stopped
[2018-03-20 13:34:12.232047] I [MSGID: 106568]
[glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: nfs
service is stopped
[2018-03-20 13:34:12.232082] I [MSGID: 106600]
[glusterd-nfs-svc.c:82:glusterd_nfssvc_manager] 0-management:
nfs/server.so xlator is not installed
[2018-03-20 13:34:12.232165] I
[rpc-clnt.c:1044:rpc_clnt_connection_init] 0-glustershd: setting
frame-timeout to 600
[2018-03-20 13:34:12.238970] I [MSGID: 106568]
[glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping
glustershd daemon running in pid: 3554
[2018-03-20 13:34:13.239224] I [MSGID: 106568]
[glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: glustershd
service is stopped
[2018-03-20 13:34:13.239365] I [MSGID: 106567]
[glusterd-svc-mgmt.c:197:glusterd_svc_start] 0-management: Starting
glustershd service
[2018-03-20 13:34:14.243040] I
[rpc-clnt.c:1044:rpc_clnt_connection_init] 0-quotad: setting
frame-timeout to 600
[2018-03-20 13:34:14.243817] I [MSGID: 106132]
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad
already stopped
[2018-03-20 13:34:14.243866] I [MSGID: 106568]
[glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad
service is stopped
[2018-03-20 13:34:14.243928] I
[rpc-clnt.c:1044:rpc_clnt_connection_init] 0-bitd: setting
frame-timeout to 600
[2018-03-20 13:34:14.244474] I [MSGID: 106132]
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd
already stopped
[2018-03-20 13:34:14.244514] I [MSGID: 106568]
[glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd
service is stopped
[2018-03-20 13:34:14.244589] I
[rpc-clnt.c:1044:rpc_clnt_connection_init] 0-scrub: setting
frame-timeout to 600
[2018-03-20 13:34:14.245123] I [MSGID: 106132]
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub
already stopped
[2018-03-20 13:34:14.245169] I [MSGID: 106568]
[glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub
service is stopped
[2018-03-20 13:34:14.260266] I
[glusterd-utils.c:5941:glusterd_brick_start] 0-management: starting
a fresh brick process for brick /srv/gluster_navaar/brick
[2018-03-20 13:34:14.263172] I
[rpc-clnt.c:1044:rpc_clnt_connection_init] 0-management: setting
frame-timeout to 600
[2018-03-20 13:34:14.271938] I
[rpc-clnt.c:1044:rpc_clnt_connection_init] 0-snapd: setting
frame-timeout to 600
[2018-03-20 13:34:14.272146] I
[rpc-clnt.c:1044:rpc_clnt_connection_init] 0-snapd: setting
frame-timeout to 600
[2018-03-20 13:34:14.272366] I
[rpc-clnt.c:1044:rpc_clnt_connection_init] 0-snapd: setting
frame-timeout to 600
[2018-03-20 13:34:14.272562] I
[rpc-clnt.c:1044:rpc_clnt_connection_init] 0-snapd: setting
frame-timeout to 600
[2018-03-20 13:34:14.273000] I [MSGID: 106492]
[glusterd-handler.c:2718:__glusterd_handle_friend_update]
0-glusterd: Received friend update from uuid:
0acd0bff-c38f-4c49-82da-4112d22dfd2c
[2018-03-20 13:34:14.273770] I [MSGID: 106502]
[glusterd-handler.c:2763:__glusterd_handle_friend_update]
0-management: Received my uuid as Friend
[2018-03-20 13:34:14.273907] I [MSGID: 106493]
[glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management:
Received ACC from uuid: 0acd0bff-c38f-4c49-82da-4112d22dfd2c
[2018-03-20 13:34:14.277313] I [socket.c:2474:socket_event_handler]
0-transport: EPOLLERR - disconnecting now
[2018-03-20 13:34:14.280409] I [MSGID: 106005]
[glusterd-handler.c:6071:__glusterd_brick_rpc_notify] 0-management:
Brick borg-sphere-two:/srv/gluster_engine/brick has disconnected
from glusterd.
[2018-03-20 13:34:14.283608] I [socket.c:2474:socket_event_handler]
0-transport: EPOLLERR - disconnecting now
[2018-03-20 13:34:14.286608] I [MSGID: 106005]
[glusterd-handler.c:6071:__glusterd_brick_rpc_notify] 0-management:
Brick borg-sphere-two:/srv/gluster_export/brick has disconnected
from glusterd.
[2018-03-20 13:34:14.289765] I [socket.c:2474:socket_event_handler]
0-transport: EPOLLERR - disconnecting now
[2018-03-20 13:34:14.292523] I [MSGID: 106005]
[glusterd-handler.c:6071:__glusterd_brick_rpc_notify] 0-management:
Brick borg-sphere-two:/srv/gluster_iso/brick has disconnected from
glusterd.
[2018-03-20 13:34:14.295494] I [socket.c:2474:socket_event_handler]
0-transport: EPOLLERR - disconnecting now
[2018-03-20 13:34:14.298261] I [MSGID: 106005]
[glusterd-handler.c:6071:__glusterd_brick_rpc_notify] 0-management:
Brick borg-sphere-two:/srv/gluster_plexus/brick has disconnected
from glusterd.
[2018-03-20 13:34:14.298421] I [MSGID: 106493]
[glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd:
Received ACC from uuid: 0e8b912a-bcff-4b33-88c6-428b3e658440, host:
borg-sphere-one, port: 0
[2018-03-20 13:34:14.298935] I
[glusterd-utils.c:5847:glusterd_brick_start] 0-management:
discovered already-running brick /srv/gluster_engine/brick
[2018-03-20 13:34:14.298958] I [MSGID: 106143]
[glusterd-pmap.c:295:pmap_registry_bind] 0-pmap: adding brick
/srv/gluster_engine/brick on port 49152
[2018-03-20 13:34:14.299037] I
[glusterd-utils.c:5847:glusterd_brick_start] 0-management:
discovered already-running brick /srv/gluster_export/brick
[2018-03-20 13:34:14.299051] I [MSGID: 106143]
[glusterd-pmap.c:295:pmap_registry_bind] 0-pmap: adding brick
/srv/gluster_export/brick on port 49153
[2018-03-20 13:34:14.299117] I
[glusterd-utils.c:5847:glusterd_brick_start] 0-management:
discovered already-running brick /srv/gluster_iso/brick
[2018-03-20 13:34:14.299130] I [MSGID: 106143]
[glusterd-pmap.c:295:pmap_registry_bind] 0-pmap: adding brick
/srv/gluster_iso/brick on port 49154
[2018-03-20 13:34:14.299208] I
[glusterd-utils.c:5847:glusterd_brick_start] 0-management:
discovered already-running brick /srv/gluster_plexus/brick
[2018-03-20 13:34:14.299223] I [MSGID: 106143]
[glusterd-pmap.c:295:pmap_registry_bind] 0-pmap: adding brick
/srv/gluster_plexus/brick on port 49155
[2018-03-20 13:34:14.299292] I [MSGID: 106132]
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs
already stopped
[2018-03-20 13:34:14.299344] I [MSGID: 106568]
[glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: nfs
service is stopped
[2018-03-20 13:34:14.299365] I [MSGID: 106600]
[glusterd-nfs-svc.c:82:glusterd_nfssvc_manager] 0-management:
nfs/server.so xlator is not installed
[2018-03-20 13:34:14.302501] I [MSGID: 106568]
[glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping
glustershd daemon running in pid: 3896
[2018-03-20 13:34:15.302703] I [MSGID: 106568]
[glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: glustershd
service is stopped
[2018-03-20 13:34:15.302798] I [MSGID: 106567]
[glusterd-svc-mgmt.c:197:glusterd_svc_start] 0-management: Starting
glustershd service
[2018-03-20 13:34:15.305136] I [MSGID: 106132]
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad
already stopped
[2018-03-20 13:34:15.305172] I [MSGID: 106568]
[glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad
service is stopped
[2018-03-20 13:34:15.305384] I [MSGID: 106132]
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd
already stopped
[2018-03-20 13:34:15.305406] I [MSGID: 106568]
[glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd
service is stopped
[2018-03-20 13:34:15.305599] I [MSGID: 106132]
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub
already stopped
[2018-03-20 13:34:15.305618] I [MSGID: 106568]
[glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub
service is stopped
[2018-03-20 13:34:15.323512] I [socket.c:2474:socket_event_handler]
0-transport: EPOLLERR - disconnecting now
[2018-03-20 13:34:15.326856] I [MSGID: 106005]
[glusterd-handler.c:6071:__glusterd_brick_rpc_notify] 0-management:
Brick borg-sphere-two:/srv/gluster_navaar/brick has disconnected
from glusterd.
[2018-03-20 13:34:15.329968] I [MSGID: 106493]
[glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management:
Received ACC from uuid: 0e8b912a-bcff-4b33-88c6-428b3e658440
[2018-03-20 13:34:15.330024] I [MSGID: 106163]
[glusterd-handshake.c:1316:__glusterd_mgmt_hndsk_versions_ack]
0-management: using the op-version 31202
[2018-03-20 13:34:15.335968] I [MSGID: 106490]
[glusterd-handler.c:2540:__glusterd_handle_incoming_friend_req]
0-glusterd: Received probe from uuid:
0acd0bff-c38f-4c49-82da-4112d22dfd2c
[2018-03-20 13:34:15.336908] I [MSGID: 106493]
[glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] 0-glusterd:
Responded to borg-sphere-three (0), ret: 0, op_ret: 0
[2018-03-20 13:34:15.340577] I [MSGID: 106144]
[glusterd-pmap.c:396:pmap_registry_remove] 0-pmap: removing brick
/srv/gluster_engine/brick on port 49152
[2018-03-20 13:34:15.340669] E [socket.c:2369:socket_connect_finish]
0-management: connection to
/var/run/gluster/7c88a1ced3d7819183c1b75562132753.socket failed
(Connection reset by peer); disconnecting socket
[2018-03-20 13:34:15.343472] E [socket.c:2369:socket_connect_finish]
0-management: connection to
/var/run/gluster/92f05640572fdb863e0d3655821a9221.socket failed
(Connection reset by peer); disconnecting socket
[2018-03-20 13:34:15.346173] E [socket.c:2369:socket_connect_finish]
0-management: connection to
/var/run/gluster/855c85c59ce6144e0cdaadc081dab574.socket failed
(Connection reset by peer); disconnecting socket
[2018-03-20 13:34:15.351476] W [socket.c:593:__socket_rwv]
0-management: readv on
/var/run/gluster/2ac0088f40227ca69fb39d3c98e51d2d.socket failed (No
data available)
[2018-03-20 13:34:15.354084] I [MSGID: 106005]
[glusterd-handler.c:6071:__glusterd_brick_rpc_notify] 0-management:
Brick borg-sphere-two:/srv/gluster_plexus/brick has disconnected
from glusterd.
[2018-03-20 13:34:15.354184] I [MSGID: 106144]
[glusterd-pmap.c:396:pmap_registry_remove] 0-pmap: removing brick
/srv/gluster_plexus/brick on port 49155
[2018-03-20 13:34:15.354222] I [MSGID: 106492]
[glusterd-handler.c:2718:__glusterd_handle_friend_update]
0-glusterd: Received friend update from uuid:
0acd0bff-c38f-4c49-82da-4112d22dfd2c
[2018-03-20 13:34:15.354597] I [MSGID: 106502]
[glusterd-handler.c:2763:__glusterd_handle_friend_update]
0-management: Received my uuid as Friend
[2018-03-20 13:34:15.354645] I [MSGID: 106493]
[glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management:
Received ACC from uuid: 0acd0bff-c38f-4c49-82da-4112d22dfd2c
[2018-03-20 13:34:15.354670] I [MSGID: 106144]
[glusterd-pmap.c:396:pmap_registry_remove] 0-pmap: removing brick
/srv/gluster_export/brick on port 49153
[2018-03-20 13:34:15.354789] I [MSGID: 106144]
[glusterd-pmap.c:396:pmap_registry_remove] 0-pmap: removing brick
/srv/gluster_iso/brick on port 49154
[2018-03-20 13:34:15.354905] I [MSGID: 106492]
[glusterd-handler.c:2718:__glusterd_handle_friend_update]
0-glusterd: Received friend update from uuid:
0e8b912a-bcff-4b33-88c6-428b3e658440
[2018-03-20 13:34:15.354927] I [MSGID: 106502]
[glusterd-handler.c:2763:__glusterd_handle_friend_update]
0-management: Received my uuid as Friend
[2018-03-20 13:34:15.355536] I [MSGID: 106163]
[glusterd-handshake.c:1316:__glusterd_mgmt_hndsk_versions_ack]
0-management: using the op-version 31202
[2018-03-20 13:34:15.359667] I [MSGID: 106490]
[glusterd-handler.c:2540:__glusterd_handle_incoming_friend_req]
0-glusterd: Received probe from uuid:
0e8b912a-bcff-4b33-88c6-428b3e658440
[2018-03-20 13:34:15.360277] I [MSGID: 106493]
[glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] 0-glusterd:
Responded to borg-sphere-one (0), ret: 0, op_ret: 0
[2018-03-20 13:34:15.361113] I [MSGID: 106492]
[glusterd-handler.c:2718:__glusterd_handle_friend_update]
0-glusterd: Received friend update from uuid:
0e8b912a-bcff-4b33-88c6-428b3e658440
[2018-03-20 13:34:15.361151] I [MSGID: 106502]
[glusterd-handler.c:2763:__glusterd_handle_friend_update]
0-management: Received my uuid as Friend