[Gluster-users] Gluster 3.7.6 add new node state Peer Rejected (Connected)
Steve Dainard
sdainard at spd1.com
Mon Feb 29 21:31:06 UTC 2016
Regarding the bug: I have a feeling some of these issues may have been caused
by the gluster network adapters on the nodes having different MTU values during
the initial peering, and also when attempting a rebalance/fix-layout.
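For what it's worth, a quick way to confirm the MTU matches on the gluster-facing
interface of each node (the interface name em1 below is just a placeholder) is:

# ip -o link show em1 | grep -o 'mtu [0-9]*'
# ping -M do -s 8972 -c 3 10.0.231.51

The ping size of 8972 assumes jumbo frames (MTU 9000); with don't-fragment set
it fails immediately if any hop along the path uses a smaller MTU.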
Thanks, after some stupidity on my part, gluster03 is up and running again.
I have several replica 3 volumes on gluster01/02/03 for oVirt.
Unfortunately I took gluster01 and gluster03 offline while trying to fix
this issue, but somehow not all of my VMs in oVirt crashed. The output of
'gluster volume heal vm-storage info' was:
# gluster volume heal vm-storage info
Brick 10.0.231.50:/mnt/lv-vm-storage/vm-storage
/a5a83df1-47e2-4927-9add-079199ca7ef8/images/573c218f-1cc9-450d-a141-23d1d968d32c/033bd358-3d24-4bd3-963a-f789e54c131b
- Possibly undergoing heal
/a5a83df1-47e2-4927-9add-079199ca7ef8/images/55bd9b2c-83c5-4d0c-9b2a-f9d49badc9cb/94e9cfcb-0e98-4b7f-8d99-181e635c2d12
/a5a83df1-47e2-4927-9add-079199ca7ef8/images/aa98e418-dcea-431f-b94f-d323fbdf732f/c5c3417c-ca0b-426b-8589-39145ed4a866
- Possibly undergoing heal
/__DIRECT_IO_TEST__
Number of entries: 4
Brick 10.0.231.51:/mnt/lv-vm-storage/vm-storage
/a5a83df1-47e2-4927-9add-079199ca7ef8/images/aa98e418-dcea-431f-b94f-d323fbdf732f/c5c3417c-ca0b-426b-8589-39145ed4a866
- Possibly undergoing heal
/a5a83df1-47e2-4927-9add-079199ca7ef8/images/573c218f-1cc9-450d-a141-23d1d968d32c/033bd358-3d24-4bd3-963a-f789e54c131b
- Possibly undergoing heal
/a5a83df1-47e2-4927-9add-079199ca7ef8/images/55bd9b2c-83c5-4d0c-9b2a-f9d49badc9cb/94e9cfcb-0e98-4b7f-8d99-181e635c2d12
/__DIRECT_IO_TEST__
Number of entries: 4
Brick 10.0.231.52:/mnt/lv-vm-storage/vm-storage
<gfid:41d37ab1-eda6-4e72-9edc-359bc5823431> - Possibly undergoing heal
/a5a83df1-47e2-4927-9add-079199ca7ef8/images/aa98e418-dcea-431f-b94f-d323fbdf732f/c5c3417c-ca0b-426b-8589-39145ed4a866
- Possibly undergoing heal
<gfid:56909e28-97b6-487a-a6c3-eabf09d19a1e>
<gfid:70b2c75a-561d-4aa0-b5f7-23b90e684caf>
Number of entries: 4
I then tried to run a heal, which reports that it was unsuccessful:
# gluster volume heal vm-storage full
Launching heal operation to perform full self heal on volume vm-storage has
been unsuccessful
# gluster volume heal vm-storage
Commit failed on 10.0.231.54. Please check log file for details.
Commit failed on 10.0.231.55. Please check log file for details.
Commit failed on 10.0.231.53. Please check log file for details.
glfsheal-vm-storage.log:
[2016-02-29 21:03:47.754355] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2016-02-29 21:13:32.583649] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
Now if I run heal info again, I see no split-brains:
# gluster volume heal vm-storage info
Brick 10.0.231.50:/mnt/lv-vm-storage/vm-storage
Number of entries: 0
Brick 10.0.231.51:/mnt/lv-vm-storage/vm-storage
Number of entries: 0
Brick 10.0.231.52:/mnt/lv-vm-storage/vm-storage
Number of entries: 0
# gluster volume heal vm-storage info split-brain
Brick 10.0.231.50:/mnt/lv-vm-storage/vm-storage
Number of entries in split-brain: 0
Brick 10.0.231.51:/mnt/lv-vm-storage/vm-storage
Number of entries in split-brain: 0
Brick 10.0.231.52:/mnt/lv-vm-storage/vm-storage
Number of entries in split-brain: 0
But I really don't trust this, as some of the VMs continued to run even when
only 1 of 3 vm-storage bricks was available, although there may not have been
any IO.
So how do I fix:
# gluster volume heal vm-storage full
Launching heal operation to perform full self heal on volume vm-storage has
been unsuccessful
Is this the normal output if the nodes below don't have a brick for this
volume?
# gluster volume heal vm-storage
Commit failed on 10.0.231.54. Please check log file for details.
Commit failed on 10.0.231.55. Please check log file for details.
Commit failed on 10.0.231.53. Please check log file for details.
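One way to confirm which peers actually hold bricks for vm-storage (it should
only be gluster01/02/03) is the brick list in the volume info:

# gluster volume info vm-storage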
And how else can I verify the integrity of the replica 3 volumes? Stop all the
VMs and run md5sum on the brick files?
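For example, something along these lines with the VM stopped (the image path is
a placeholder, substitute a real one):

# for h in 10.0.231.50 10.0.231.51 10.0.231.52; do ssh $h md5sum /mnt/lv-vm-storage/vm-storage/<path-to-image>; done

and checking the AFR xattrs on each brick copy, where non-zero trusted.afr.*
values would indicate pending heals:

# getfattr -d -m . -e hex /mnt/lv-vm-storage/vm-storage/<path-to-image>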
On Mon, Feb 29, 2016 at 12:12 PM, Rafi Kavungal Chundattu Parambil <
rkavunga at redhat.com> wrote:
> You have to figure out the difference in volinfo between the peers and
> rectify it. Or, more simply, you can reduce the version in the vol info by one
> on node3; restarting glusterd will then solve the problem.
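> A minimal sketch of that on node3 (systemd assumed, volume name 'storage'
> assumed, and the version numbers below are made-up examples; use one less than
> whatever node3 currently shows):
>
> # systemctl stop glusterd
> # grep ^version= /var/lib/glusterd/vols/storage/info
> version=26
> # sed -i 's/^version=26/version=25/' /var/lib/glusterd/vols/storage/info
> # systemctl start glusterd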
>
> But I would be more interested in figuring out why glusterd crashed.
>
> 1) Can you paste the backtrace of the core that was generated?
> 2) Can you paste the op-version of all the nodes?
> 3) Can you list the steps that led to the crash? It seems like you added
> a brick.
> 4) If possible, can you recall the order in which you added the peers
> and their versions, and also the upgrade sequence?
>
> Maybe you can raise a bug in Bugzilla with this information.
>
> Regards
> Rafi KC
> On 1 Mar 2016 12:58 am, Steve Dainard <sdainard at spd1.com> wrote:
>
> I changed quota-version to 1 on the two new nodes and was able to join them to
> the cluster. I also rebooted the two new nodes and everything came up correctly.
>
> Then I triggered a rebalance fix-layout, and glusterd crashed on one of the
> original cluster members (gluster03). I restarted glusterd and it reconnected,
> but after a few minutes I'm left with:
>
> # gluster peer status
> Number of Peers: 5
>
> Hostname: 10.0.231.51
> Uuid: b01de59a-4428-486b-af49-cb486ab44a07
> State: Peer in Cluster (Connected)
>
> Hostname: 10.0.231.52
> Uuid: 75143760-52a3-4583-82bb-a9920b283dac
> *State: Peer Rejected (Connected)*
>
> Hostname: 10.0.231.53
> Uuid: 2c0b8bb6-825a-4ddd-9958-d8b46e9a2411
> State: Peer in Cluster (Connected)
>
> Hostname: 10.0.231.54
> Uuid: 408d88d6-0448-41e8-94a3-bf9f98255d9c
> State: Peer in Cluster (Connected)
>
> Hostname: 10.0.231.55
> Uuid: 9c155c8e-2cd1-4cfc-83af-47129b582fd3
> State: Peer in Cluster (Connected)
>
> I see in the logs (attached) there is now a cksum error:
>
> [2016-02-29 19:16:42.082256] E [MSGID: 106010]
> [glusterd-utils.c:2717:glusterd_compare_friend_volume] 0-management:
> Version of Cksums storage differ. local cksum = 50348222, remote cksum =
> 50348735 on peer 10.0.231.55
> [2016-02-29 19:16:42.082298] I [MSGID: 106493]
> [glusterd-handler.c:3780:glusterd_xfer_friend_add_resp] 0-glusterd:
> Responded to 10.0.231.55 (0), ret: 0
> [2016-02-29 19:16:42.092535] I [MSGID: 106493]
> [glusterd-rpc-ops.c:480:__glusterd_friend_add_cbk] 0-glusterd: Received RJT
> from uuid: 2c0b8bb6-825a-4ddd-9958-d8b46e9a2411, host: 10.0.231.53, port: 0
> [2016-02-29 19:16:42.096036] I [MSGID: 106143]
> [glusterd-pmap.c:229:pmap_registry_bind] 0-pmap: adding brick
> /mnt/lv-export-domain-storage/export-domain-storage on port 49153
> [2016-02-29 19:16:42.097296] I [MSGID: 106143]
> [glusterd-pmap.c:229:pmap_registry_bind] 0-pmap: adding brick
> /mnt/lv-vm-storage/vm-storage on port 49155
> [2016-02-29 19:16:42.100727] I [MSGID: 106163]
> [glusterd-handshake.c:1193:__glusterd_mgmt_hndsk_versions_ack]
> 0-management: using the op-version 30700
> [2016-02-29 19:16:42.108495] I [MSGID: 106490]
> [glusterd-handler.c:2539:__glusterd_handle_incoming_friend_req] 0-glusterd:
> Received probe from uuid: 2c0b8bb6-825a-4ddd-9958-d8b46e9a2411
> [2016-02-29 19:16:42.109295] E [MSGID: 106010]
> [glusterd-utils.c:2717:glusterd_compare_friend_volume] 0-management:
> Version of Cksums storage differ. local cksum = 50348222, remote cksum =
> 50348735 on peer 10.0.231.53
> [2016-02-29 19:16:42.109338] I [MSGID: 106493]
> [glusterd-handler.c:3780:glusterd_xfer_friend_add_resp] 0-glusterd:
> Responded to 10.0.231.53 (0), ret: 0
> [2016-02-29 19:16:42.119521] I [MSGID: 106143]
> [glusterd-pmap.c:229:pmap_registry_bind] 0-pmap: adding brick
> /mnt/lv-env-modules/env-modules on port 49157
> [2016-02-29 19:16:42.122856] I [MSGID: 106143]
> [glusterd-pmap.c:229:pmap_registry_bind] 0-pmap: adding brick
> /mnt/raid6-storage/storage on port 49156
> [2016-02-29 19:16:42.508104] I [MSGID: 106493]
> [glusterd-rpc-ops.c:480:__glusterd_friend_add_cbk] 0-glusterd: Received RJT
> from uuid: b01de59a-4428-486b-af49-cb486ab44a07, host: 10.0.231.51, port: 0
> [2016-02-29 19:16:42.519403] I [MSGID: 106163]
> [glusterd-handshake.c:1193:__glusterd_mgmt_hndsk_versions_ack]
> 0-management: using the op-version 30700
> [2016-02-29 19:16:42.524353] I [MSGID: 106490]
> [glusterd-handler.c:2539:__glusterd_handle_incoming_friend_req] 0-glusterd:
> Received probe from uuid: b01de59a-4428-486b-af49-cb486ab44a07
> [2016-02-29 19:16:42.524999] E [MSGID: 106010]
> [glusterd-utils.c:2717:glusterd_compare_friend_volume] 0-management:
> Version of Cksums storage differ. local cksum = 50348222, remote cksum =
> 50348735 on peer 10.0.231.51
> [2016-02-29 19:16:42.525038] I [MSGID: 106493]
> [glusterd-handler.c:3780:glusterd_xfer_friend_add_resp] 0-glusterd:
> Responded to 10.0.231.51 (0), ret: 0
> [2016-02-29 19:16:42.592523] I [MSGID: 106493]
> [glusterd-rpc-ops.c:480:__glusterd_friend_add_cbk] 0-glusterd: Received RJT
> from uuid: 408d88d6-0448-41e8-94a3-bf9f98255d9c, host: 10.0.231.54, port: 0
> [2016-02-29 19:16:42.599518] I [MSGID: 106163]
> [glusterd-handshake.c:1193:__glusterd_mgmt_hndsk_versions_ack]
> 0-management: using the op-version 30700
> [2016-02-29 19:16:42.604821] I [MSGID: 106490]
> [glusterd-handler.c:2539:__glusterd_handle_incoming_friend_req] 0-glusterd:
> Received probe from uuid: 408d88d6-0448-41e8-94a3-bf9f98255d9c
> [2016-02-29 19:16:42.605458] E [MSGID: 106010]
> [glusterd-utils.c:2717:glusterd_compare_friend_volume] 0-management:
> Version of Cksums storage differ. local cksum = 50348222, remote cksum =
> 50348735 on peer 10.0.231.54
> [2016-02-29 19:16:42.605492] I [MSGID: 106493]
> [glusterd-handler.c:3780:glusterd_xfer_friend_add_resp] 0-glusterd:
> Responded to 10.0.231.54 (0), ret: 0
> [2016-02-29 19:16:42.621943] I [MSGID: 106163]
> [glusterd-handshake.c:1193:__glusterd_mgmt_hndsk_versions_ack]
> 0-management: using the op-version 30700
> [2016-02-29 19:16:42.628443] I [MSGID: 106490]
> [glusterd-handler.c:2539:__glusterd_handle_incoming_friend_req] 0-glusterd:
> Received probe from uuid: a965e782-39e2-41cc-a0d1-b32ecccdcd2f
> [2016-02-29 19:16:42.629079] E [MSGID: 106010]
> [glusterd-utils.c:2717:glusterd_compare_friend_volume] 0-management:
> Version of Cksums storage differ. local cksum = 50348222, remote cksum =
> 50348735 on peer 10.0.231.50
>
> On gluster01/02/04/05
> /var/lib/glusterd/vols/storage/cksum info=998305000
>
> On gluster03
> /var/lib/glusterd/vols/storage/cksum info=998305001
>
> How do I recover from this? Can I just stop glusterd on gluster03 and
> change the cksum value?
>
>
> On Thu, Feb 25, 2016 at 12:49 PM, Mohammed Rafi K C <rkavunga at redhat.com>
> wrote:
>
>>
>>
>> On 02/26/2016 01:53 AM, Mohammed Rafi K C wrote:
>>
>>
>>
>> On 02/26/2016 01:32 AM, Steve Dainard wrote:
>>
>> I haven't done anything more than peer the new nodes thus far, so I'm a bit
>> confused as to how the volume info fits in; can you expand on this a bit?
>>
>> Failed commits? Is this split-brain on the replica volumes? I don't get any
>> output from 'gluster volume heal <volname> info' on any of the replica
>> volumes, but if I try 'gluster volume heal <volname> full' I get:
>> 'Launching heal operation to perform full self heal on volume <volname> has
>> been unsuccessful'.
>>
>>
>> Forget about this; it is not for metadata self-heal.
>>
>>
>> I have 5 volumes total.
>>
>> 'Replica 3' volumes running on gluster01/02/03:
>> vm-storage
>> iso-storage
>> export-domain-storage
>> env-modules
>>
>> And one distributed-only volume, 'storage', whose info is shown below:
>>
>> *From existing host gluster01/02:*
>> type=0
>> count=4
>> status=1
>> sub_count=0
>> stripe_count=1
>> replica_count=1
>> disperse_count=0
>> redundancy_count=0
>> version=25
>> transport-type=0
>> volume-id=26d355cb-c486-481f-ac16-e25390e73775
>> username=eb9e2063-6ba8-4d16-a54f-2c7cf7740c4c
>> password=
>> op-version=3
>> client-op-version=3
>> quota-version=1
>> parent_volname=N/A
>> restored_from_snap=00000000-0000-0000-0000-000000000000
>> snap-max-hard-limit=256
>> features.quota-deem-statfs=on
>> features.inode-quota=on
>> diagnostics.brick-log-level=WARNING
>> features.quota=on
>> performance.readdir-ahead=on
>> performance.cache-size=1GB
>> performance.stat-prefetch=on
>> brick-0=10.0.231.50:-mnt-raid6-storage-storage
>> brick-1=10.0.231.51:-mnt-raid6-storage-storage
>> brick-2=10.0.231.52:-mnt-raid6-storage-storage
>> brick-3=10.0.231.53:-mnt-raid6-storage-storage
>>
>> *From existing host gluster03/04:*
>> type=0
>> count=4
>> status=1
>> sub_count=0
>> stripe_count=1
>> replica_count=1
>> disperse_count=0
>> redundancy_count=0
>> version=25
>> transport-type=0
>> volume-id=26d355cb-c486-481f-ac16-e25390e73775
>> username=eb9e2063-6ba8-4d16-a54f-2c7cf7740c4c
>> password=
>> op-version=3
>> client-op-version=3
>> quota-version=1
>> parent_volname=N/A
>> restored_from_snap=00000000-0000-0000-0000-000000000000
>> snap-max-hard-limit=256
>> features.quota-deem-statfs=on
>> features.inode-quota=on
>> performance.stat-prefetch=on
>> performance.cache-size=1GB
>> performance.readdir-ahead=on
>> features.quota=on
>> diagnostics.brick-log-level=WARNING
>> brick-0=10.0.231.50:-mnt-raid6-storage-storage
>> brick-1=10.0.231.51:-mnt-raid6-storage-storage
>> brick-2=10.0.231.52:-mnt-raid6-storage-storage
>> brick-3=10.0.231.53:-mnt-raid6-storage-storage
>>
>> So far between gluster01/02 and gluster03/04 the configs are the same,
>> although the ordering is different for some of the features.
>>
>> On gluster05/06 the ordering is different again, and quota-version is 0
>> instead of 1.
>>
>>
>> This is why the peer shows as rejected. Can you check the op-version of all
>> the glusterd instances, including the one that is in the rejected state? You
>> can find the op-version in /var/lib/glusterd/glusterd.info.
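>> For example, on each node (30700 would correspond to 3.7.x):
>>
>> # grep operating-version /var/lib/glusterd/glusterd.info
>> operating-version=30700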
>>
>>
>> If all the op-versions are the same (3.7.6), then to work around the issue
>> you can manually set quota-version=1, and restarting glusterd will solve the
>> problem. But I would strongly recommend that you figure out the RCA. Maybe
>> you can file a bug for this.
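>> A minimal sketch of that workaround on each of the two new nodes (volume
>> name 'storage' assumed):
>>
>> # systemctl stop glusterd
>> # sed -i 's/^quota-version=0/quota-version=1/' /var/lib/glusterd/vols/storage/info
>> # systemctl start glusterd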
>>
>> Rafi
>>
>>
>>
>> Rafi KC
>>
>>
>> *From new hosts gluster05/gluster06:*
>> type=0
>> count=4
>> status=1
>> sub_count=0
>> stripe_count=1
>> replica_count=1
>> disperse_count=0
>> redundancy_count=0
>> version=25
>> transport-type=0
>> volume-id=26d355cb-c486-481f-ac16-e25390e73775
>> username=eb9e2063-6ba8-4d16-a54f-2c7cf7740c4c
>> password=
>> op-version=3
>> client-op-version=3
>> quota-version=0
>> parent_volname=N/A
>> restored_from_snap=00000000-0000-0000-0000-000000000000
>> snap-max-hard-limit=256
>> performance.stat-prefetch=on
>> performance.cache-size=1GB
>> performance.readdir-ahead=on
>> features.quota=on
>> diagnostics.brick-log-level=WARNING
>> features.inode-quota=on
>> features.quota-deem-statfs=on
>> brick-0=10.0.231.50:-mnt-raid6-storage-storage
>> brick-1=10.0.231.51:-mnt-raid6-storage-storage
>> brick-2=10.0.231.52:-mnt-raid6-storage-storage
>> brick-3=10.0.231.53:-mnt-raid6-storage-storage
>>
>> Also, I forgot to mention that when I initially peered the two new hosts,
>> glusterd crashed on gluster03 and had to be restarted (log attached), but it
>> has been fine since.
>>
>> Thanks,
>> Steve
>>
>> On Thu, Feb 25, 2016 at 11:27 AM, Mohammed Rafi K C <rkavunga at redhat.com>
>> wrote:
>>
>>>
>>>
>>> On 02/25/2016 11:45 PM, Steve Dainard wrote:
>>>
>>> Hello,
>>>
>>> I upgraded from 3.6.6 to 3.7.6 a couple of weeks ago. I just peered 2 new
>>> nodes to a 4-node cluster, and gluster peer status shows:
>>>
>>> # gluster peer status *<-- from node gluster01*
>>> Number of Peers: 5
>>>
>>> Hostname: 10.0.231.51
>>> Uuid: b01de59a-4428-486b-af49-cb486ab44a07
>>> State: Peer in Cluster (Connected)
>>>
>>> Hostname: 10.0.231.52
>>> Uuid: 75143760-52a3-4583-82bb-a9920b283dac
>>> State: Peer in Cluster (Connected)
>>>
>>> Hostname: 10.0.231.53
>>> Uuid: 2c0b8bb6-825a-4ddd-9958-d8b46e9a2411
>>> State: Peer in Cluster (Connected)
>>>
>>> Hostname: 10.0.231.54 *<-- new node gluster05*
>>> Uuid: 408d88d6-0448-41e8-94a3-bf9f98255d9c
>>> *State: Peer Rejected (Connected)*
>>>
>>> Hostname: 10.0.231.55 *<-- new node gluster06*
>>> Uuid: 9c155c8e-2cd1-4cfc-83af-47129b582fd3
>>> *State: Peer Rejected (Connected)*
>>>
>>>
>>> It looks like your configuration files are mismatched, i.e. the checksum
>>> calculation on these two nodes differs from the others.
>>>
>>> Did you have any failed commits?
>>>
>>> Compare the /var/lib/glusterd/<volname>/info of the failed node against a
>>> good one; most likely you will see some difference.
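>>> For example, from one of the rejected nodes (volume name 'storage' assumed,
>>> with gluster01 used as the known-good peer):
>>>
>>> # ssh 10.0.231.50 cat /var/lib/glusterd/vols/storage/info > /tmp/info.good
>>> # diff /tmp/info.good /var/lib/glusterd/vols/storage/info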
>>>
>>> Can you paste the /var/lib/glusterd/<volname>/info?
>>>
>>> Regards
>>> Rafi KC
>>>
>>>
>>>
>>> I followed the write-up here:
>>> http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected
>>> and the two new nodes peered properly, but after a reboot of the two new
>>> nodes I'm seeing the same Peer Rejected (Connected) state.
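>>> The steps I followed from that write-up were roughly the following on each
>>> rejected node (keeping only glusterd.info and letting the volume config
>>> resync from a good peer):
>>>
>>> # systemctl stop glusterd
>>> # cp -a /var/lib/glusterd /root/glusterd.backup
>>> # find /var/lib/glusterd -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
>>> # systemctl start glusterd
>>> # gluster peer probe 10.0.231.50
>>> # systemctl restart glusterd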
>>>
>>> I've attached logs from an existing node, and the two new nodes.
>>>
>>> Thanks for any suggestions,
>>> Steve
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>
>>>
>>>
>>
>>
>>
>