[Gluster-users] (3.1.6-1) How should I add extra peers to existing file systems ?
Mohit Anchlia
mohitanchlia at gmail.com
Mon Aug 15 23:52:40 UTC 2011
Logs are generally in /var/log/glusterfs.
Since you are playing with it, I would suggest this:
1) run peer detach for all the servers
2) from server 1 -> 3 and 3 -> 1, make sure the ports are open and
iptables is turned off
3) remove the config files under /etc/glusterd
4) run your tests again.
On Mon, Aug 15, 2011 at 4:28 PM, Tomoaki Sato <tsato at valinux.co.jp> wrote:
> Thanks, Mohit
>
> (2011/08/16 8:05), Mohit Anchlia wrote:
>>
>> What's in your logs?
>
> I can obtain any logs needed. Could you tell me how to collect the
> logs?
>
>>
>> Did you have foo-3-private before in your gluster cluster ever or
>> adding this host for the first time?
>
> It was first time.
> None of the foo-X-private hosts have entries in /etc/glusterd/peers/ or
> /etc/glusterd/vols/.
>
>>
>> Try gluster peer detach and then remove any left over configuration in
>> /etc/glusterd config directory. After that try again and see if that
>> works.
>
> [root at vhead-010 ~]# date
> Tue Aug 16 08:17:49 JST 2011
> [root at vhead-010 ~]# cat a.sh
> #!/bin/bash
> for i in foo-{1..3}-private
> do
> ssh ${i} service glusterd stop
> ssh ${i} rm -rf /etc/glusterd/peers/*
> ssh ${i} rm -rf /etc/glusterd/vols/*
> ssh ${i} service glusterd start
> ssh ${i} find /etc/glusterd
> done
> [root at vhead-010 ~]# bash a.sh
> Stopping glusterd:[ OK ]
> Starting glusterd:[ OK ]
> /etc/glusterd
> /etc/glusterd/glusterd.info
> /etc/glusterd/nfs
> /etc/glusterd/nfs/nfs-server.vol
> /etc/glusterd/nfs/run
> /etc/glusterd/peers
> /etc/glusterd/vols
> Stopping glusterd:[ OK ]
> Starting glusterd:[ OK ]
> /etc/glusterd
> /etc/glusterd/glusterd.info
> /etc/glusterd/nfs
> /etc/glusterd/nfs/nfs-server.vol
> /etc/glusterd/nfs/run
> /etc/glusterd/peers
> /etc/glusterd/vols
> Stopping glusterd:[ OK ]
> Starting glusterd:[ OK ]
> /etc/glusterd
> /etc/glusterd/glusterd.info
> /etc/glusterd/nfs
> /etc/glusterd/nfs/nfs-server.vol
> /etc/glusterd/nfs/run
> /etc/glusterd/peers
> /etc/glusterd/vols
> [root at vhead-010 ~]# ssh foo-1-private
> [root at localhost ~]# gluster peer probe foo-2-private
> Probe successful
> [root at localhost ~]# gluster peer status
> Number of Peers: 1
>
> Hostname: foo-2-private
> Uuid: c2b314ac-6ed1-455a-84d4-ec22041ee2b2
> State: Peer in Cluster (Connected)
> [root at localhost ~]# gluster volume create foo foo-1-private:/mnt/brick
> Creation of volume foo has been successful. Please start the volume to
> access data.
> [root at localhost ~]# gluster volume start foo
> Starting volume foo has been successful
> [root at localhost ~]# gluster volume add-brick foo foo-2-private:/mnt/brick
> Add Brick successful
> [root at localhost ~]# gluster peer probe foo-3-private
> Probe successful
> [root at localhost ~]# gluster peer status
> Number of Peers: 2
>
> Hostname: foo-2-private
> Uuid: c2b314ac-6ed1-455a-84d4-ec22041ee2b2
> State: Peer in Cluster (Connected)
>
> Hostname: foo-3-private
> Uuid: 7fb98dac-fef7-4b33-837c-6483a767ec3e
> State: Peer Rejected (Connected)
> [root at localhost ~]# cat /var/log/glusterfs/.cmd_log_history
> ...
> [2011-08-16 08:20:28.862619] peer probe : on host foo-2-private:24007
> [2011-08-16 08:20:28.912419] peer probe : on host foo-2-private:24007 FAILED
> [2011-08-16 08:20:58.382350] Volume create : on volname: foo attempted
> [2011-08-16 08:20:58.382461] Volume create : on volname: foo type:DEFAULT count: 1 bricks: foo-1-private:/mnt/brick
> [2011-08-16 08:20:58.384674] Volume create : on volname: foo SUCCESS
> [2011-08-16 08:21:04.831772] volume start : on volname: foo SUCCESS
> [2011-08-16 08:21:22.682292] Volume add-brick : on volname: foo attempted
> [2011-08-16 08:21:22.682385] Volume add-brick : volname: foo type DEFAULT count: 1 bricks: foo-2-private:/mnt/brick
> [2011-08-16 08:21:22.682499] Volume add-brick : on volname: foo SUCCESS
> [2011-08-16 08:21:39.124574] peer probe : on host foo-3-private:24007
> [2011-08-16 08:21:39.135609] peer probe : on host foo-3-private:24007 FAILED
>
> Tomo
>
>>
>>
>>
>> On Mon, Aug 15, 2011 at 3:37 PM, Tomoaki Sato<tsato at valinux.co.jp> wrote:
>>>
>>> Hi,
>>>
>>> The following instructions work fine with 3.1.5-1 but not with 3.1.6-1.
>>>
>>> 1. make a new file system without peers. [OK]
>>>
>>> foo-1-private# gluster volume create foo foo-1-private:/mnt/brick
>>> foo-1-private# gluster volume start foo
>>> foo-1-private# gluster peer status
>>> No peers present
>>> foo-1-private#
>>>
>>> 2. add a peer to the file system. [NG]
>>>
>>> foo-1-private# gluster peer probe foo-2-private
>>> Probe successful
>>> foo-1-private# gluster peer status
>>> Number of Peers: 1
>>>
>>> Hostname: foo-2-private
>>> Uuid: c2b314ac-6ed1-455a-84d4-ec22041ee2b2
>>> State: Peer Rejected (Connected)
>>> foo-1-private# gluster volume add-brick foo foo-2-private:/mnt/brick
>>> Host foo-2-private not connected
>>> foo-1-private#
>>>
>>>
>>> The following instructions work fine even with 3.1.6-1.
>>>
>>> 1. make a new file system with single peer. [OK]
>>>
>>> foo-1-private# gluster peer status
>>> No peers present
>>> foo-1-private# gluster peer probe foo-2-private
>>> Probe successful
>>> foo-1-private# gluster peer status
>>> Number of Peers: 1
>>>
>>> Hostname: foo-2-private
>>> Uuid: c2b314ac-6ed1-455a-84d4-ec22041ee2b2
>>> State: Peer in Cluster (Connected)
>>> foo-1-private# gluster volume create foo foo-1-private:/mnt/brick
>>> Creation of volume foo has been successful. Please start the volume to
>>> access data.
>>> foo-1-private# gluster volume start foo
>>> Starting volume foo has been successful
>>> foo-1-private# gluster volume add-brick foo foo-2-private:/mnt/brick
>>> Add Brick successful
>>> foo-1-private#
>>>
>>> But ...
>>>
>>> 2. add a peer to the file system. [NG]
>>>
>>> foo-1-private# gluster peer probe foo-3-private
>>> Probe successful
>>> foo-1-private# gluster peer status
>>> Number of Peers: 2
>>>
>>> Hostname: foo-2-private
>>> Uuid: c2b314ac-6ed1-455a-84d4-ec22041ee2b2
>>> State: Peer in Cluster (Connected)
>>>
>>> Hostname: foo-3-private
>>> Uuid: 7fb98dac-fef7-4b33-837c-6483a767ec3e
>>> State: Peer Rejected (Connected)
>>> foo-1-private# gluster volume add-brick foo foo-3-private:/mnt/brick
>>> Host foo-3-private not connected
>>> foo-1-private#
>>>
>>> How should I add extra peers to existing file systems ?
>>>
>>> Best,
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>>
>
>