[Gluster-users] Fwd: Unexpected behaviour during replication heal
Darren Austin
darren-lists at widgit.com
Wed Jun 29 10:14:00 UTC 2011
----- Forwarded Message -----
From: "Darren Austin" <darren-lists at widgit.com>
To: "Mohit Anchlia" <mohitanchlia at gmail.com>
Sent: Wednesday, 29 June, 2011 11:13:30 AM
Subject: Re: [Gluster-users] Unexpected behaviour during replication heal
----- Original Message -----
> Did you recently upgrade?
I was able to reproduce this problem on both GlusterFS 3.2.0 and 3.2.1.
It wasn't an upgrade situation - I deleted the volumes and re-created
them from scratch for each test.
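
The reset between tests was roughly this sequence (a sketch - the volume
name and brick paths are as per the info below, and the flags are just
the standard 3.2 CLI):

  gluster volume stop data-volume
  gluster volume delete data-volume
  # clear out the old brick contents on both servers
  rm -rf /data/*
  gluster volume create data-volume replica 2 transport tcp \
      10.234.158.226:/data 10.49.14.115:/data
  gluster volume start data-volume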
> Can you also post gluster volume info and your gluster vol files?
'gluster volume info':
Volume Name: data-volume
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.234.158.226:/data
Brick2: 10.49.14.115:/data
glusterd.vol (same on both servers):
volume management
    type mgmt/glusterd
    option working-directory /etc/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
end-volume
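
(A quick way to confirm they really are identical - checksum the file on
each server, e.g. from the first server using the second's IP from above:

  md5sum /etc/glusterd/glusterd.vol
  ssh 10.49.14.115 md5sum /etc/glusterd/glusterd.vol

- the sums should match.)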
Other vol files and brick info were posted with my first description of
the issue :)
HTH,
Darren.
--
Darren Austin - Systems Administrator, Widgit Software.
Tel: +44 (0)1926 333680. Web: http://www.widgit.com/
26 Queen Street, Cubbington, Warwickshire, CV32 7NA.