[Gluster-users] multiple peer entries in pool list
Atin Mukherjee
atin.mukherjee83 at gmail.com
Sat Sep 19 05:53:27 UTC 2015
Ideally, a single node shouldn't have multiple UUIDs, but that is
unfortunately the case in your setup. I don't see any straightforward
scenario that could lead to this issue. What Gluster version are you using?
-Atin
Sent from one plus one
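[For context, an assumption on my part rather than something the thread confirms: each node's persistent identity lives in /var/lib/glusterd/glusterd.info, a small key-value fragment along these lines, shown here with one of the UUIDs from the listing below:]

```
UUID=efcc8de0-4e5a-4cbf-946b-9aa5e44b84b2
operating-version=...
```

If that file is lost and glusterd restarts, it generates a fresh UUID, which is one known way a node can reappear in the pool under a second identity. Comparing each node's glusterd.info against the per-UUID files under /var/lib/glusterd/peers/ on the other nodes should show which of the duplicate UUIDs is the live one.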
On Sep 19, 2015 4:40 AM, "John Casu" <jcasu at penguincomputing.com> wrote:
> Hi,
>
> we're seeing multiple entries for some nodes in pool list & peer status,
> each associated with a unique UUID.
> The filesystem seems to be working, but we want to clean this up before it
> causes any problems down the road.
>
> The nodes with multiple UUIDs were previously disconnected, but they're
> good now.
>
> Does anyone know what's going on & how we can resolve this issue?
>
> thanks,
> -john c.
>
>
> [root@scyld glusterd]# gluster pool list
> UUID                                  Hostname   State
> d63dea41-10c4-4dfa-8e70-2f4a6fb217f6  n13-ib     Connected
> bb1dd3d4-778f-41e4-b692-f2705eceed8b  n7-ib      Connected
> 244f21a1-6065-4512-afc5-3ce87d6924f3  n9-ib      Connected
> 3a52b562-2653-4b09-b559-f340cda83991  n3-ib      Connected
> d8701a8c-8435-4496-ba22-2e8da5408a86  n8-ib      Connected
> 7a279b8f-5c71-427d-9696-b8ea7a629899  n4-ib      Connected
> 7b4018d4-94b1-4c59-88bc-685ff7c24423  n2-ib      Connected
> 8a5fe60a-3536-4eb2-80fd-cf23393fabb7  n11-ib     Connected
> 9ae10c27-38e3-40ed-a0cd-d5911928263b  n5-ib      Connected
> 39ed8724-e18b-4ba1-b892-d4514f7cf444  n12-ib     Connected
> 451629d9-611f-4625-b16d-8dccdee8f2e5  n1-ib      Connected
> d18e204b-1769-426b-8178-463292b5cd85  n6-ib      Connected
> 75ef8005-0e3d-4245-af3f-bd080521a512  n14-ib     Connected
> 967b89b9-ddc4-4fd2-860f-e72d27e604c4  n15-ib     Connected
> a2640233-fa3f-48ed-9c9a-327473a0444e  n10-ib     Connected
> efcc8de0-4e5a-4cbf-946b-9aa5e44b84b2  n0-ib      Connected
> e6ec2f29-f915-4c4b-82d6-75bbfe208344  n0-ib      Connected
> 84db5aa6-118a-44ae-af8a-ff25d1a96df1  n1-ib      Connected
> b8f6143d-416a-495d-860c-67a57cf2aad6  n15-ib     Connected
> f96c87d8-0c44-4b91-b610-10f737957755  n15-ib     Connected
> e83a8114-a5fb-41ad-aec1-e826c4dc12b6  localhost  Connected
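[The duplicates in a listing like this are easy to pull out mechanically. A small sketch, not from the thread: feed a saved copy of the `gluster pool list` output through awk and flag any hostname that appears under more than one UUID. The three sample rows below stand in for the full listing.]

```shell
# Sample stand-in for saved `gluster pool list` output.
pool_list='UUID Hostname State
efcc8de0-4e5a-4cbf-946b-9aa5e44b84b2 n0-ib Connected
e6ec2f29-f915-4c4b-82d6-75bbfe208344 n0-ib Connected
451629d9-611f-4625-b16d-8dccdee8f2e5 n1-ib Connected'

# Skip the header row, count how often each hostname (field 2) appears,
# and print any hostname seen under more than one UUID.
dupes=$(printf '%s\n' "$pool_list" |
        awk 'NR > 1 { n[$2]++ } END { for (h in n) if (n[h] > 1) print h }')
echo "$dupes"    # n0-ib
```

Run against the full listing above, this would report n0-ib, n1-ib, and n15-ib.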
>
> [root@scyld glusterd]# gluster volume info
>
> Volume Name: gv0
> Type: Distributed-Replicate
> Volume ID: 31fc64ec-7ee0-46c0-a8a9-560dcecf08e9
> Status: Started
> Number of Bricks: 8 x 2 = 16
> Transport-type: tcp
> Bricks:
> Brick1: n0-ib:/export/store0/brick
> Brick2: n1-ib:/export/store0/brick
> Brick3: n2-ib:/export/store0/brick
> Brick4: n3-ib:/export/store0/brick
> Brick5: n4-ib:/export/store0/brick
> Brick6: n5-ib:/export/store0/brick
> Brick7: n6-ib:/export/store0/brick
> Brick8: n7-ib:/export/store0/brick
> Brick9: n8-ib:/export/store0/brick
> Brick10: n9-ib:/export/store0/brick
> Brick11: n10-ib:/export/store0/brick
> Brick12: n11-ib:/export/store0/brick
> Brick13: n12-ib:/export/store0/brick
> Brick14: n13-ib:/export/store0/brick
> Brick15: n14-ib:/export/store0/brick
> Brick16: n15-ib:/export/store0/brick
> Options Reconfigured:
> nfs.disable: on
>
>
>
> [root@scyld glusterd]# gluster peer status
> Number of Peers: 20
>
> Hostname: n13-ib
> Uuid: d63dea41-10c4-4dfa-8e70-2f4a6fb217f6
> State: Peer in Cluster (Connected)
>
> Hostname: n7-ib
> Uuid: bb1dd3d4-778f-41e4-b692-f2705eceed8b
> State: Peer in Cluster (Connected)
>
> Hostname: n9-ib
> Uuid: 244f21a1-6065-4512-afc5-3ce87d6924f3
> State: Peer in Cluster (Connected)
>
> Hostname: n3-ib
> Uuid: 3a52b562-2653-4b09-b559-f340cda83991
> State: Peer in Cluster (Connected)
>
> Hostname: n8-ib
> Uuid: d8701a8c-8435-4496-ba22-2e8da5408a86
> State: Peer in Cluster (Connected)
>
> Hostname: n4-ib
> Uuid: 7a279b8f-5c71-427d-9696-b8ea7a629899
> State: Peer in Cluster (Connected)
>
> Hostname: n2-ib
> Uuid: 7b4018d4-94b1-4c59-88bc-685ff7c24423
> State: Peer in Cluster (Connected)
>
> Hostname: n11-ib
> Uuid: 8a5fe60a-3536-4eb2-80fd-cf23393fabb7
> State: Peer in Cluster (Connected)
>
> Hostname: n5-ib
> Uuid: 9ae10c27-38e3-40ed-a0cd-d5911928263b
> State: Peer in Cluster (Connected)
>
> Hostname: n12-ib
> Uuid: 39ed8724-e18b-4ba1-b892-d4514f7cf444
> State: Peer in Cluster (Connected)
>
> Hostname: n1-ib
> Uuid: 451629d9-611f-4625-b16d-8dccdee8f2e5
> State: Peer in Cluster (Connected)
>
> Hostname: n6-ib
> Uuid: d18e204b-1769-426b-8178-463292b5cd85
> State: Peer in Cluster (Connected)
>
> Hostname: n14-ib
> Uuid: 75ef8005-0e3d-4245-af3f-bd080521a512
> State: Peer in Cluster (Connected)
>
> Hostname: n15-ib
> Uuid: 967b89b9-ddc4-4fd2-860f-e72d27e604c4
> State: Peer in Cluster (Connected)
>
> Hostname: n10-ib
> Uuid: a2640233-fa3f-48ed-9c9a-327473a0444e
> State: Peer in Cluster (Connected)
>
> Hostname: n0-ib
> Uuid: efcc8de0-4e5a-4cbf-946b-9aa5e44b84b2
> State: Peer in Cluster (Connected)
> Other names:
> n3-ib
>
> Hostname: n0-ib
> Uuid: e6ec2f29-f915-4c4b-82d6-75bbfe208344
> State: Peer in Cluster (Connected)
>
> Hostname: n1-ib
> Uuid: 84db5aa6-118a-44ae-af8a-ff25d1a96df1
> State: Peer in Cluster (Connected)
>
> Hostname: n15-ib
> Uuid: b8f6143d-416a-495d-860c-67a57cf2aad6
> State: Peer in Cluster (Connected)
>
> Hostname: n15-ib
> Uuid: f96c87d8-0c44-4b91-b610-10f737957755
> State: Peer in Cluster (Connected)
>
>
>
>
>
> On 9/18/15 12:59 PM, Bidwell, Christopher wrote:
>
>> I can do a general Gluster setup, but I'm not sure how to interpret this
>> error or how to fix it. Can anyone provide some
>> assistance?
>>
>> [2015-09-18 19:56:27.169245] E [MSGID: 108008]
>> [afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch]
>> 0-MAGWEB-replicate-0: Gfid mismatch detected for
>> <70a8289c-4d94-4c4c-be17-a4a60c1907b8/frn20150906sq.min>,
>> 411f330d-a7bf-48fd-a4fb-63e90730585e on MAGWEB-client-1 and
>> ed14b3ca-1dea-4126-bf4a-a26283b03190 on MAGWEB-client-0. Skipping
>> conservative merge on the file.
>>
>> Here is my volume info:
>> Volume Name: MAGWEB
>> Type: Replicate
>> Volume ID: a2f0fbb9-fea0-498d-9c58-1589a2c364e7
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: 172.23.10.1:/home/www
>> Brick2: 172.23.10.2:/home/www
>> Options Reconfigured:
>> features.inode-quota: off
>> features.quota: off
>> diagnostics.client-log-level: ERROR
>> performance.readdir-ahead: on
>> nfs.disable: true
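[To locate the two mismatched copies on the bricks: GlusterFS keeps a hard link to every file under each brick's .glusterfs directory, addressed by the first two byte-pairs of the GFID. A hedged sketch, using one GFID from the log above and the Brick2 path from the volume info (substitute the values from your own bricks):]

```shell
# Derive the .glusterfs hard-link path for a GFID on a given brick.
# GFID taken from the gfid-mismatch log line; brick path from Brick2 above.
gfid=411f330d-a7bf-48fd-a4fb-63e90730585e
brick=/home/www

# The first two hex-digit pairs of the GFID become two directory levels.
d1=$(printf '%s' "$gfid" | cut -c1-2)
d2=$(printf '%s' "$gfid" | cut -c3-4)
link="$brick/.glusterfs/$d1/$d2/$gfid"

echo "$link"   # /home/www/.glusterfs/41/1f/411f330d-a7bf-48fd-a4fb-63e90730585e
```

The usual remedy for a GFID mismatch (again, stated as general guidance, not something this thread confirms for your data) is to decide which brick holds the good copy, remove the bad file and its .glusterfs hard link from the other brick, and then trigger a heal with `gluster volume heal MAGWEB`.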
>> --
>>
>> Thanks!
>>
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
> --
>
> John Casu | Principal Solutions Architect
> ---------------------------------------
> Penguin Computing
> 45800 Northport Loop West
> Fremont, CA 94538
> t. 831-840-0142
> e. jcasu at penguincomputing.com
>