[Gluster-users] (3.1.6-1) How should I add extra peers to existing file systems?

Tomoaki Sato <tsato@valinux.co.jp>
Tue Aug 16 03:18:09 UTC 2011


Mohit

I've tried the same test and reproduced the 'Peer Rejected' status.
Please find the config files and log files in the attached taz.


[root@vhead-010 ~]# date
Tue Aug 16 11:55:15 JST 2011
[root@vhead-010 ~]# cat a.sh
#!/bin/bash
# on each node: stop glusterd, wipe its config state, stop the firewall,
# truncate the logs, then restart glusterd and report its state
for i in foo-{1..3}-private
do
         ssh ${i} service glusterd stop
         ssh ${i} 'find /etc/glusterd -type f|xargs rm -f'
         ssh ${i} rm -rf /etc/glusterd/vols/*
         ssh ${i} service iptables stop
         ssh ${i} cp /dev/null /var/log/glusterfs/nfs.log
         ssh ${i} cp /dev/null /var/log/glusterfs/bricks/mnt-brick.log
         ssh ${i} cp /dev/null /var/log/glusterfs/.cmd_log_history
         ssh ${i} cp /dev/null /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
         ssh ${i} service glusterd start
         ssh ${i} find /etc/glusterd
         ssh ${i} service glusterd status
done
[root@vhead-010 ~]# bash a.sh
Stopping glusterd:[  OK  ]
Flushing firewall rules: [  OK  ]
Setting chains to policy ACCEPT: filter [  OK  ]
Unloading iptables modules: [  OK  ]
Starting glusterd:[  OK  ]
/etc/glusterd
/etc/glusterd/glusterd.info
/etc/glusterd/nfs
/etc/glusterd/nfs/run
/etc/glusterd/peers
/etc/glusterd/vols
glusterd (pid 15617) is running...
Stopping glusterd:[  OK  ]
Flushing firewall rules: [  OK  ]
Setting chains to policy ACCEPT: filter [  OK  ]
Unloading iptables modules: [  OK  ]
Starting glusterd:[  OK  ]
/etc/glusterd
/etc/glusterd/glusterd.info
/etc/glusterd/nfs
/etc/glusterd/nfs/run
/etc/glusterd/peers
/etc/glusterd/vols
glusterd (pid 15147) is running...
Stopping glusterd:[  OK  ]
Flushing firewall rules: [  OK  ]
Setting chains to policy ACCEPT: filter [  OK  ]
Unloading iptables modules: [  OK  ]
Starting glusterd:[  OK  ]
/etc/glusterd
/etc/glusterd/glusterd.info
/etc/glusterd/nfs
/etc/glusterd/nfs/run
/etc/glusterd/peers
/etc/glusterd/vols
glusterd (pid 15177) is running...
[root@vhead-010 ~]# ssh foo-1-private
Last login: Tue Aug 16 09:51:49 2011 from dlp.local.valinux.co.jp
[root@localhost ~]# gluster peer probe foo-2-private
Probe successful
[root@localhost ~]# gluster peer status
Number of Peers: 1

Hostname: foo-2-private
Uuid: 20b73d9a-ede0-454f-9fbb-b0eee9ce26a3
State: Peer in Cluster (Connected)
[root@localhost ~]# gluster volume create foo foo-1-private:/mnt/brick
Creation of volume foo has been successful. Please start the volume to access data.
[root@localhost ~]# gluster volume start foo
Starting volume foo has been successful
[root@localhost ~]# gluster volume add-brick foo foo-2-private:/mnt/brick
Add Brick successful
[root@localhost ~]# gluster peer probe foo-3-private
Probe successful
[root@localhost ~]# gluster peer status
Number of Peers: 2

Hostname: foo-2-private
Uuid: 20b73d9a-ede0-454f-9fbb-b0eee9ce26a3
State: Peer in Cluster (Connected)

Hostname: foo-3-private
Uuid: 7587ae34-9209-484a-9576-3939e061720c
State: Peer Rejected (Connected)
[root@localhost ~]# exit
logout
Connection to foo-1-private closed.
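
The probe itself succeeds, so I suspect foo-3-private receives the foo
volume configuration but ends up with a cksum that disagrees with the rest
of the cluster; as far as I understand, such a configuration mismatch is
what puts a peer into the Rejected state. A quick check I intend to run
(a sketch only, using the same host naming as above):

for i in foo-{1..3}-private
do
         # print each node's view of the foo volume checksum
         echo -n "${i}: "
         ssh ${i} cat /etc/glusterd/vols/foo/cksum
done
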
[root@vhead-010 ~]# find foo_log_and_conf
foo_log_and_conf
foo_log_and_conf/foo-2-private
foo_log_and_conf/foo-2-private/glusterd
foo_log_and_conf/foo-2-private/glusterd/glusterd.info
foo_log_and_conf/foo-2-private/glusterd/nfs
foo_log_and_conf/foo-2-private/glusterd/nfs/nfs-server.vol
foo_log_and_conf/foo-2-private/glusterd/nfs/run
foo_log_and_conf/foo-2-private/glusterd/nfs/run/nfs.pid
foo_log_and_conf/foo-2-private/glusterd/peers
foo_log_and_conf/foo-2-private/glusterd/peers/461f6e21-90c4-4b6c-bda8-7b99bacb2722
foo_log_and_conf/foo-2-private/glusterd/vols
foo_log_and_conf/foo-2-private/glusterd/vols/foo
foo_log_and_conf/foo-2-private/glusterd/vols/foo/info
foo_log_and_conf/foo-2-private/glusterd/vols/foo/bricks
foo_log_and_conf/foo-2-private/glusterd/vols/foo/bricks/foo-2-private:-mnt-brick
foo_log_and_conf/foo-2-private/glusterd/vols/foo/bricks/foo-1-private:-mnt-brick
foo_log_and_conf/foo-2-private/glusterd/vols/foo/foo.foo-2-private.mnt-brick.vol
foo_log_and_conf/foo-2-private/glusterd/vols/foo/cksum
foo_log_and_conf/foo-2-private/glusterd/vols/foo/run
foo_log_and_conf/foo-2-private/glusterd/vols/foo/run/foo-2-private-mnt-brick.pid
foo_log_and_conf/foo-2-private/glusterd/vols/foo/foo-fuse.vol
foo_log_and_conf/foo-2-private/glusterd/vols/foo/foo.foo-1-private.mnt-brick.vol
foo_log_and_conf/foo-2-private/glusterfs
foo_log_and_conf/foo-2-private/glusterfs/nfs.log
foo_log_and_conf/foo-2-private/glusterfs/bricks
foo_log_and_conf/foo-2-private/glusterfs/bricks/mnt-brick.log
foo_log_and_conf/foo-2-private/glusterfs/.cmd_log_history
foo_log_and_conf/foo-2-private/glusterfs/etc-glusterfs-glusterd.vol.log
foo_log_and_conf/foo-1-private
foo_log_and_conf/foo-1-private/glusterd
foo_log_and_conf/foo-1-private/glusterd/glusterd.info
foo_log_and_conf/foo-1-private/glusterd/nfs
foo_log_and_conf/foo-1-private/glusterd/nfs/nfs-server.vol
foo_log_and_conf/foo-1-private/glusterd/nfs/run
foo_log_and_conf/foo-1-private/glusterd/nfs/run/nfs.pid
foo_log_and_conf/foo-1-private/glusterd/peers
foo_log_and_conf/foo-1-private/glusterd/peers/20b73d9a-ede0-454f-9fbb-b0eee9ce26a3
foo_log_and_conf/foo-1-private/glusterd/peers/7587ae34-9209-484a-9576-3939e061720c
foo_log_and_conf/foo-1-private/glusterd/vols
foo_log_and_conf/foo-1-private/glusterd/vols/foo
foo_log_and_conf/foo-1-private/glusterd/vols/foo/info
foo_log_and_conf/foo-1-private/glusterd/vols/foo/bricks
foo_log_and_conf/foo-1-private/glusterd/vols/foo/bricks/foo-2-private:-mnt-brick
foo_log_and_conf/foo-1-private/glusterd/vols/foo/bricks/foo-1-private:-mnt-brick
foo_log_and_conf/foo-1-private/glusterd/vols/foo/foo.foo-2-private.mnt-brick.vol
foo_log_and_conf/foo-1-private/glusterd/vols/foo/cksum
foo_log_and_conf/foo-1-private/glusterd/vols/foo/run
foo_log_and_conf/foo-1-private/glusterd/vols/foo/run/foo-1-private-mnt-brick.pid
foo_log_and_conf/foo-1-private/glusterd/vols/foo/foo-fuse.vol
foo_log_and_conf/foo-1-private/glusterd/vols/foo/foo.foo-1-private.mnt-brick.vol
foo_log_and_conf/foo-1-private/glusterfs
foo_log_and_conf/foo-1-private/glusterfs/nfs.log
foo_log_and_conf/foo-1-private/glusterfs/bricks
foo_log_and_conf/foo-1-private/glusterfs/bricks/mnt-brick.log
foo_log_and_conf/foo-1-private/glusterfs/.cmd_log_history
foo_log_and_conf/foo-1-private/glusterfs/etc-glusterfs-glusterd.vol.log
foo_log_and_conf/foo-3-private
foo_log_and_conf/foo-3-private/glusterd
foo_log_and_conf/foo-3-private/glusterd/glusterd.info
foo_log_and_conf/foo-3-private/glusterd/nfs
foo_log_and_conf/foo-3-private/glusterd/nfs/run
foo_log_and_conf/foo-3-private/glusterd/peers
foo_log_and_conf/foo-3-private/glusterd/peers/461f6e21-90c4-4b6c-bda8-7b99bacb2722
foo_log_and_conf/foo-3-private/glusterd/vols
foo_log_and_conf/foo-3-private/glusterd/vols/foo
foo_log_and_conf/foo-3-private/glusterd/vols/foo/info
foo_log_and_conf/foo-3-private/glusterd/vols/foo/bricks
foo_log_and_conf/foo-3-private/glusterd/vols/foo/bricks/foo-2-private:-mnt-brick
foo_log_and_conf/foo-3-private/glusterd/vols/foo/bricks/foo-1-private:-mnt-brick
foo_log_and_conf/foo-3-private/glusterd/vols/foo/foo.foo-2-private.mnt-brick.vol
foo_log_and_conf/foo-3-private/glusterd/vols/foo/cksum
foo_log_and_conf/foo-3-private/glusterd/vols/foo/foo-fuse.vol
foo_log_and_conf/foo-3-private/glusterd/vols/foo/foo.foo-1-private.mnt-brick.vol
foo_log_and_conf/foo-3-private/glusterfs
foo_log_and_conf/foo-3-private/glusterfs/nfs.log
foo_log_and_conf/foo-3-private/glusterfs/bricks
foo_log_and_conf/foo-3-private/glusterfs/bricks/mnt-brick.log
foo_log_and_conf/foo-3-private/glusterfs/.cmd_log_history
foo_log_and_conf/foo-3-private/glusterfs/etc-glusterfs-glusterd.vol.log
[root@vhead-010 ~]# exit
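
For reference, the recovery I plan to try next follows the commonly
suggested 'Peer Rejected' cleanup: wipe the rejected peer's state except
its UUID file, restart glusterd, then probe again from a good peer. This
is only a sketch and I have not verified it on 3.1.6-1:

# on foo-3-private, the rejected peer: keep glusterd.info, remove the rest
service glusterd stop
find /etc/glusterd -mindepth 1 ! -name glusterd.info | xargs rm -rf
service glusterd start

# then, back on foo-1-private, probe the cleaned peer again
gluster peer probe foo-3-private
gluster peer status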

Best,

(2011/08/16 9:35), Mohit Anchlia wrote:
> I should have also asked you to stop and delete the volume before getting
> rid of the gluster config files. Can you also get rid of the directories
> inside vols and try to restart? glusterd is trying to look for volume
> files that we just removed.
>
> Also, just disable iptables for now explicitly.
>
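
Understood. For the record, here is the fuller reset I will try next, per
your advice: stop and delete the volume while glusterd still knows about
it, then clear the per-node state with iptables left off. A sketch only,
not yet run; volume and host names match the tests above:

#!/bin/bash
# run the volume commands once, from a node still in the cluster;
# 'volume stop' may ask for confirmation (answer y)
gluster volume stop foo
gluster volume delete foo
# then reset every node
for i in foo-{1..3}-private
do
         ssh ${i} service glusterd stop
         ssh ${i} rm -rf /etc/glusterd/vols/* /etc/glusterd/peers/*
         ssh ${i} 'find /etc/glusterd -type f | xargs rm -f'
         ssh ${i} service iptables stop
         ssh ${i} service glusterd start
done
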
> On Mon, Aug 15, 2011 at 5:22 PM, Tomoaki Sato <tsato@valinux.co.jp> wrote:
>>
>>> 1) run peer detach for all the servers
>>
>> done.
>>
>>> 2) from server 1 ->3 and 3->1 make sure ports are open and iptables
>>> are turned off.
>>
>> done.
>> By the way, the same test on 3.1.5-1 works fine in the same environment.
>>
>>> 3) remove config files under /etc/glusterd
>>
>> Please review the following logs.
>>
>>> 4) run your tests again.
>>
>> I don't know why, but the glusterd service failed to start on all 3 hosts.
>>
>> [root@vhead-010 ~]# date
>> Tue Aug 16 09:12:53 JST 2011
>> [root@vhead-010 ~]# cat a.sh
>> #!/bin/bash
>> for i in foo-{1..3}-private
>> do
>>         ssh ${i} service glusterd stop
>>         ssh ${i} 'find /etc/glusterd -type f|xargs rm -f'
>>         ssh ${i} service iptables restart
>>         ssh ${i} iptables -vL
>>         ssh ${i} service glusterd start
>>         ssh ${i} find /etc/glusterd
>>         ssh ${i} service glusterd status
>> done
>> [root@vhead-010 ~]# bash a.sh
>> Stopping glusterd:[  OK  ]
>> Flushing firewall rules: [  OK  ]
>> Setting chains to policy ACCEPT: filter [  OK  ]
>> Unloading iptables modules: [  OK  ]
>> Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
>>   pkts bytes target     prot opt in     out     source               destination
>>
>> Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
>>   pkts bytes target     prot opt in     out     source               destination
>>
>> Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
>>   pkts bytes target     prot opt in     out     source               destination
>> Starting glusterd:[  OK  ]
>> /etc/glusterd
>> /etc/glusterd/glusterd.info
>> /etc/glusterd/nfs
>> /etc/glusterd/nfs/run
>> /etc/glusterd/peers
>> /etc/glusterd/vols
>> /etc/glusterd/vols/foo
>> /etc/glusterd/vols/foo/bricks
>> /etc/glusterd/vols/foo/run
>> glusterd is stopped
>> Stopping glusterd:[  OK  ]
>> Flushing firewall rules: [  OK  ]
>> Setting chains to policy ACCEPT: filter [  OK  ]
>> Unloading iptables modules: [  OK  ]
>> Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
>>   pkts bytes target     prot opt in     out     source               destination
>>
>> Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
>>   pkts bytes target     prot opt in     out     source               destination
>>
>> Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
>>   pkts bytes target     prot opt in     out     source               destination
>> Starting glusterd:[  OK  ]
>> /etc/glusterd
>> /etc/glusterd/glusterd.info
>> /etc/glusterd/nfs
>> /etc/glusterd/nfs/run
>> /etc/glusterd/peers
>> /etc/glusterd/vols
>> /etc/glusterd/vols/foo
>> /etc/glusterd/vols/foo/bricks
>> /etc/glusterd/vols/foo/run
>> glusterd is stopped
>> Stopping glusterd:[  OK  ]
>> Flushing firewall rules: [  OK  ]
>> Setting chains to policy ACCEPT: filter [  OK  ]
>> Unloading iptables modules: [  OK  ]
>> Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
>>   pkts bytes target     prot opt in     out     source               destination
>>
>> Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
>>   pkts bytes target     prot opt in     out     source               destination
>>
>> Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
>>   pkts bytes target     prot opt in     out     source               destination
>> Starting glusterd:[  OK  ]
>> /etc/glusterd
>> /etc/glusterd/glusterd.info
>> /etc/glusterd/nfs
>> /etc/glusterd/nfs/run
>> /etc/glusterd/peers
>> /etc/glusterd/vols
>> /etc/glusterd/vols/foo
>> /etc/glusterd/vols/foo/bricks
>> /etc/glusterd/vols/foo/run
>> glusterd is stopped
>> [root@vhead-010 ~]# date
>> Tue Aug 16 09:13:20 JST 2011
>> [root@vhead-010 ~]# ssh foo-1-private
>> Last login: Tue Aug 16 09:06:57 2011 from dlp.local.valinux.co.jp
>> [root@localhost ~]# tail -20 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
>> ...
>> [2011-08-16 09:13:01.85858] I [glusterd.c:304:init] 0-management: Using /etc/glusterd as working directory
>> [2011-08-16 09:13:01.87294] E [rpc-transport.c:799:rpc_transport_load] 0-rpc-transport: /opt/glusterfs/3.1.6/lib64/glusterfs/3.1.6/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
>> [2011-08-16 09:13:01.87340] E [rpc-transport.c:803:rpc_transport_load] 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine
>> [2011-08-16 09:13:01.87402] E [glusterd-store.c:654:glusterd_store_handle_retrieve] 0-glusterd: Unable to retrieve store handle for /etc/glusterd/glusterd.info, error: No such file or directory
>> [2011-08-16 09:13:01.87422] E [glusterd-store.c:761:glusterd_retrieve_uuid] 0-: Unable to get store handle!
>> [2011-08-16 09:13:01.87514] I [glusterd.c:95:glusterd_uuid_init] 0-glusterd: generated UUID: c0cef9f9-a79e-4189-8955-d83927db9cee
>> [2011-08-16 09:13:01.87681] E [glusterd-store.c:654:glusterd_store_handle_retrieve] 0-glusterd: Unable to retrieve store handle for /etc/glusterd/vols/foo/info, error: No such file or directory
>> [2011-08-16 09:13:01.87704] E [glusterd-store.c:1328:glusterd_store_retrieve_volumes] 0-: Unable to restore volume: foo
>> [2011-08-16 09:13:01.87732] E [xlator.c:843:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
>> [2011-08-16 09:13:01.87751] E [graph.c:331:glusterfs_graph_init] 0-management: initializing translator failed
>> [2011-08-16 09:13:01.87818] I [glusterfsd.c:712:cleanup_and_exit] 0-glusterfsd: shutting down
>> [root@localhost ~]# exit
>>
>> Best,
>>
>> (2011/08/16 8:52), Mohit Anchlia wrote:
>>>
>>> Logs are generally in /var/log/glusterfs
>>>
>>> Since you are playing with it, I would suggest this:
>>>
>>> 1) run peer detach for all the servers
>>> 2) from server 1 ->3 and 3->1 make sure ports are open and iptables
>>> are turned off.
>>> 3) remove config files under /etc/glusterd
>>> 4) run your tests again.
>>>
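
For step 1, a loop along these lines detaches every reported peer (a
sketch only; the awk pattern assumes the "Hostname:" lines shown in the
peer status output above):

for p in $(gluster peer status | awk '/^Hostname:/ {print $2}')
do
         # drop each reported peer from the cluster
         gluster peer detach ${p}
done
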
>>> On Mon, Aug 15, 2011 at 4:28 PM, Tomoaki Sato <tsato@valinux.co.jp> wrote:
>>>>
>>>> Thanks, Mohit
>>>>
>>>> (2011/08/16 8:05), Mohit Anchlia wrote:
>>>>>
>>>>> What's in your logs?
>>>>
>>>> I can obtain the logs needed. Could you tell me the instructions for
>>>> collecting them?
>>>>
>>>>>
>>>>> Did you ever have foo-3-private in your gluster cluster before, or are
>>>>> you adding this host for the first time?
>>>>
>>>> It was the first time.
>>>> None of the foo-X-private hosts has entries in /etc/glusterd/peers/ or
>>>> /etc/glusterd/vols/.
>>>>
>>>>>
>>>>> Try gluster peer detach and then remove any left over configuration in
>>>>> /etc/glusterd config directory. After that try again and see if that
>>>>> works.
>>>>
>>>> [root@vhead-010 ~]# date
>>>> Tue Aug 16 08:17:49 JST 2011
>>>> [root@vhead-010 ~]# cat a.sh
>>>> #!/bin/bash
>>>> for i in foo-{1..3}-private
>>>> do
>>>>         ssh ${i} service glusterd stop
>>>>         ssh ${i} rm -rf /etc/glusterd/peers/*
>>>>         ssh ${i} rm -rf /etc/glusterd/vols/*
>>>>         ssh ${i} service glusterd start
>>>>         ssh ${i} find /etc/glusterd
>>>> done
>>>> [root@vhead-010 ~]# bash a.sh
>>>> Stopping glusterd:[  OK  ]
>>>> Starting glusterd:[  OK  ]
>>>> /etc/glusterd
>>>> /etc/glusterd/glusterd.info
>>>> /etc/glusterd/nfs
>>>> /etc/glusterd/nfs/nfs-server.vol
>>>> /etc/glusterd/nfs/run
>>>> /etc/glusterd/peers
>>>> /etc/glusterd/vols
>>>> Stopping glusterd:[  OK  ]
>>>> Starting glusterd:[  OK  ]
>>>> /etc/glusterd
>>>> /etc/glusterd/glusterd.info
>>>> /etc/glusterd/nfs
>>>> /etc/glusterd/nfs/nfs-server.vol
>>>> /etc/glusterd/nfs/run
>>>> /etc/glusterd/peers
>>>> /etc/glusterd/vols
>>>> Stopping glusterd:[  OK  ]
>>>> Starting glusterd:[  OK  ]
>>>> /etc/glusterd
>>>> /etc/glusterd/glusterd.info
>>>> /etc/glusterd/nfs
>>>> /etc/glusterd/nfs/nfs-server.vol
>>>> /etc/glusterd/nfs/run
>>>> /etc/glusterd/peers
>>>> /etc/glusterd/vols
>>>> [root@vhead-010 ~]# ssh foo-1-private
>>>> [root@localhost ~]# gluster peer probe foo-2-private
>>>> Probe successful
>>>> [root@localhost ~]# gluster peer status
>>>> Number of Peers: 1
>>>>
>>>> Hostname: foo-2-private
>>>> Uuid: c2b314ac-6ed1-455a-84d4-ec22041ee2b2
>>>> State: Peer in Cluster (Connected)
>>>> [root@localhost ~]# gluster volume create foo foo-1-private:/mnt/brick
>>>> Creation of volume foo has been successful. Please start the volume to access data.
>>>> [root@localhost ~]# gluster volume start foo
>>>> Starting volume foo has been successful
>>>> [root@localhost ~]# gluster volume add-brick foo foo-2-private:/mnt/brick
>>>> Add Brick successful
>>>> [root@localhost ~]# gluster peer probe foo-3-private
>>>> Probe successful
>>>> [root@localhost ~]# gluster peer status
>>>> Number of Peers: 2
>>>>
>>>> Hostname: foo-2-private
>>>> Uuid: c2b314ac-6ed1-455a-84d4-ec22041ee2b2
>>>> State: Peer in Cluster (Connected)
>>>>
>>>> Hostname: foo-3-private
>>>> Uuid: 7fb98dac-fef7-4b33-837c-6483a767ec3e
>>>> State: Peer Rejected (Connected)
>>>> [root@localhost ~]# cat /var/log/glusterfs/.cmd_log_history
>>>> ...
>>>> [2011-08-16 08:20:28.862619] peer probe :  on host foo-2-private:24007
>>>> [2011-08-16 08:20:28.912419] peer probe : on host foo-2-private:24007 FAILED
>>>> [2011-08-16 08:20:58.382350] Volume create : on volname: foo attempted
>>>> [2011-08-16 08:20:58.382461] Volume create : on volname: foo type:DEFAULT count: 1 bricks: foo-1-private:/mnt/brick
>>>> [2011-08-16 08:20:58.384674] Volume create : on volname: foo SUCCESS
>>>> [2011-08-16 08:21:04.831772] volume start : on volname: foo SUCCESS
>>>> [2011-08-16 08:21:22.682292] Volume add-brick : on volname: foo attempted
>>>> [2011-08-16 08:21:22.682385] Volume add-brick : volname: foo type DEFAULT count: 1 bricks: foo-2-private:/mnt/brick
>>>> [2011-08-16 08:21:22.682499] Volume add-brick : on volname: foo SUCCESS
>>>> [2011-08-16 08:21:39.124574] peer probe :  on host foo-3-private:24007
>>>> [2011-08-16 08:21:39.135609] peer probe : on host foo-3-private:24007 FAILED
>>>>
>>>> Tomo
>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Aug 15, 2011 at 3:37 PM, Tomoaki Sato <tsato@valinux.co.jp> wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> The following instructions work fine with 3.1.5-1 but not with 3.1.6-1.
>>>>>>
>>>>>> 1. make a new file system without peers. [OK]
>>>>>>
>>>>>> foo-1-private# gluster volume create foo foo-1-private:/mnt/brick
>>>>>> foo-1-private# gluster volume start foo
>>>>>> foo-1-private# gluster peer status
>>>>>> No peers present
>>>>>> foo-1-private#
>>>>>>
>>>>>> 2. add a peer to the file system. [NG]
>>>>>>
>>>>>> foo-1-private# gluster peer probe foo-2-private
>>>>>> Probe successful
>>>>>> foo-1-private# gluster peer status
>>>>>> Number of Peers: 1
>>>>>>
>>>>>> Hostname: foo-2-private
>>>>>> Uuid: c2b314ac-6ed1-455a-84d4-ec22041ee2b2
>>>>>> State: Peer Rejected (Connected)
>>>>>> foo-1-private# gluster volume add-brick foo foo-2-private:/mnt/brick
>>>>>> Host foo-2-private not connected
>>>>>> foo-1-private#
>>>>>>
>>>>>>
>>>>>> The following instructions work fine even with 3.1.6-1.
>>>>>>
>>>>>> 1. make a new file system with single peer. [OK]
>>>>>>
>>>>>> foo-1-private# gluster peer status
>>>>>> No peers present
>>>>>> foo-1-private# gluster peer probe foo-2-private
>>>>>> Probe successful
>>>>>> foo-1-private# gluster peer status
>>>>>> Number of Peers: 1
>>>>>>
>>>>>> Hostname: foo-2-private
>>>>>> Uuid: c2b314ac-6ed1-455a-84d4-ec22041ee2b2
>>>>>> State: Peer in Cluster (Connected)
>>>>>> foo-1-private# gluster volume create foo foo-1-private:/mnt/brick
>>>>>> Creation of volume foo has been successful. Please start the volume to
>>>>>> access data.
>>>>>> foo-1-private# gluster volume start foo
>>>>>> Starting volume foo has been successful
>>>>>> foo-1-private# gluster volume add-brick foo foo-2-private:/mnt/brick
>>>>>> Add Brick successful
>>>>>> foo-1-private#
>>>>>>
>>>>>> But ...
>>>>>>
>>>>>> 2. add a peer to the file system. [NG]
>>>>>>
>>>>>> foo-1-private# gluster peer probe foo-3-private
>>>>>> Probe successful
>>>>>> foo-1-private# gluster peer status
>>>>>> Number of Peers: 2
>>>>>>
>>>>>> Hostname: foo-2-private
>>>>>> Uuid: c2b314ac-6ed1-455a-84d4-ec22041ee2b2
>>>>>> State: Peer in Cluster (Connected)
>>>>>>
>>>>>> Hostname: foo-3-private
>>>>>> Uuid: 7fb98dac-fef7-4b33-837c-6483a767ec3e
>>>>>> State: Peer Rejected (Connected)
>>>>>> foo-1-private# gluster volume add-brick foo foo-3-private:/mnt/brick
>>>>>> Host foo-3-private not connected
>>>>>> foo-1-private#
>>>>>>
>>>>>> How should I add extra peers to existing file systems?
>>>>>>
>>>>>> Best,
>>>>>> _______________________________________________
>>>>>> Gluster-users mailing list
>>>>>> Gluster-users@gluster.org
>>>>>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>>>>>
>>>>
>>>>
>>
>>

-------------- next part --------------
A non-text attachment was scrubbed...
Name: foo_log_and_conf.taz
Type: application/octet-stream
Size: 10936 bytes
Desc: not available
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20110816/9b948d60/attachment.obj>

