[Gluster-users] Can't peer probe new host

Joel Young jdy at cryregarder.com
Mon Nov 18 23:48:28 UTC 2013


Vijay,

On Fri, Nov 15, 2013 at 5:39 AM, Vijay Bellur <vbellur at redhat.com> wrote:

> On 11/15/2013 07:08 PM, Vijay Bellur wrote:
>
>> Can you please provide the output of gluster volume info as well? It
>> does look like peer probe failed due to differences in this volume.
>>
>>  s/this\ volume/volume\ home/  (i.e., the suspected mismatch is in volume home)
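
If it helps narrow things down, one way to check for a definition mismatch on
volume home is to compare the configuration checksum glusterd keeps for it on
each existing peer. A minimal sketch, assuming the default /var/lib/glusterd
layout and the peer names from the volume info below:

  # Compare the stored config checksum of volume "home" across the existing peers.
  # All peers should report the same value; a mismatch would explain the probe failure.
  for h in ir0 ir1 ir2 ir3; do
      echo -n "$h: "
      ssh "$h" cat /var/lib/glusterd/vols/home/cksum
  done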


Attached please find the gluster volume info from both before and after the
peer probe.  I'm not seeing any differences on that front.
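
For reference, a minimal way to capture and compare the two snapshots (the new
host name ir4 below is only a placeholder for the host being probed):

  # Snapshot the volume definitions, probe the new host, and diff the results.
  gluster volume info > /tmp/volinfo-before.txt
  gluster peer probe ir4
  gluster peer status
  gluster volume info > /tmp/volinfo-after.txt
  diff -u /tmp/volinfo-before.txt /tmp/volinfo-after.txt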

Joel
-------------- next part --------------
Volume info before peer probe:

Volume Name: home
Type: Distributed-Replicate
Volume ID: 83fa39a6-6e68-4e1c-8fae-3c3e30b1bd66
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: ir0:/lhome/gluster_home
Brick2: ir1:/lhome/gluster_home
Brick3: ir2:/lhome/gluster_home
Brick4: ir3:/raid/gluster_home
Options Reconfigured:
server.statedump-path: /tmp
performance.cache-size: 512MB
performance.client-io-threads: on
auth.allow: 38.68.239.*,10.10.1.*
 
Volume Name: corp
Type: Distribute
Volume ID: dc6829eb-5e97-4e31-bc1c-8526e0e5258c
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: ir0:/raid/gluster_corp
Brick2: ir1:/raid/gluster_corp
Brick3: ir2:/raid/gluster_corp
Options Reconfigured:
server.statedump-path: /tmp
performance.flush-behind: on
performance.write-behind-window-size: 8MB
performance.cache-size: 1GB
performance.client-io-threads: on
auth.allow: 38.68.239.*,10.10.1.*
 
Volume Name: work
Type: Distribute
Volume ID: 823816bb-2e60-4b37-a142-ba464a77bfdc
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: ir0:/raid/gluster_work
Brick2: ir1:/raid/gluster_work
Brick3: ir2:/raid/gluster_work
Options Reconfigured:
server.statedump-path: /tmp
performance.flush-behind: on
performance.write-behind-window-size: 3MB
performance.cache-size: 1GB
performance.client-io-threads: on
auth.allow: 10.10.1.*
-------------- next part --------------
Volume info after peer probe:

Volume Name: home
Type: Distributed-Replicate
Volume ID: 83fa39a6-6e68-4e1c-8fae-3c3e30b1bd66
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: ir0:/lhome/gluster_home
Brick2: ir1:/lhome/gluster_home
Brick3: ir2:/lhome/gluster_home
Brick4: ir3:/raid/gluster_home
Options Reconfigured:
server.statedump-path: /tmp
performance.cache-size: 512MB
performance.client-io-threads: on
auth.allow: 38.68.239.*,10.10.1.*
 
Volume Name: corp
Type: Distribute
Volume ID: dc6829eb-5e97-4e31-bc1c-8526e0e5258c
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: ir0:/raid/gluster_corp
Brick2: ir1:/raid/gluster_corp
Brick3: ir2:/raid/gluster_corp
Options Reconfigured:
server.statedump-path: /tmp
performance.flush-behind: on
performance.write-behind-window-size: 8MB
performance.cache-size: 1GB
performance.client-io-threads: on
auth.allow: 38.68.239.*,10.10.1.*
 
Volume Name: work
Type: Distribute
Volume ID: 823816bb-2e60-4b37-a142-ba464a77bfdc
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: ir0:/raid/gluster_work
Brick2: ir1:/raid/gluster_work
Brick3: ir2:/raid/gluster_work
Options Reconfigured:
server.statedump-path: /tmp
performance.flush-behind: on
performance.write-behind-window-size: 3MB
performance.cache-size: 1GB
performance.client-io-threads: on
auth.allow: 10.10.1.*

