[Bugs] [Bug 1207712] New: Input/Output error with disperse volume when geo-replication is started
bugzilla at redhat.com
Tue Mar 31 14:20:00 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1207712
Bug ID: 1207712
Summary: Input/Output error with disperse volume when
geo-replication is started
Product: GlusterFS
Version: mainline
Component: geo-replication
Assignee: bugs at gluster.org
Reporter: byarlaga at redhat.com
CC: bugs at gluster.org, gluster-bugs at redhat.com
Created attachment 1009088
--> https://bugzilla.redhat.com/attachment.cgi?id=1009088&action=edit
log file of the master
Description of problem:
======================
Starting geo-replication on a disperse volume results in Input/Output errors in the master log.
Version-Release number of selected component (if applicable):
=============================================================
[root@vertigo ~]# gluster --version
glusterfs 3.7dev built on Mar 31 2015 01:05:54
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General
Public License.
[root@vertigo ~]#
How reproducible:
=================
100%
Steps to Reproduce:
1. Create a 1x(4+2) disperse volume on both the master and the slave.
2. Establish geo-replication between the two volumes.
3. Once the session is started, it throws Input/Output errors in the master log file.
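The steps above can be sketched as a sequence of gluster CLI commands. Hostnames and brick paths are taken from the volume info below; the exact slave URL form and push-pem setup are assumptions about how the session was created:

```shell
# Sketch of the reproduction steps, assuming passwordless SSH to the
# slave host is already configured. Hostnames and brick paths mirror
# the configuration reported below; adjust for your environment.

# 1. Create a 1x(4+2) disperse volume on the master cluster
gluster volume create geo-master disperse 6 redundancy 2 \
    ninja:/rhs/brick1/geo-1  vertigo:/rhs/brick1/geo-2 \
    ninja:/rhs/brick2/geo-3  vertigo:/rhs/brick2/geo-4 \
    ninja:/rhs/brick3/geo-5  vertigo:/rhs/brick3/geo-6
gluster volume start geo-master
# (Create a matching 1x(4+2) disperse volume on the slave cluster.)

# 2. Establish the geo-replication session between the volumes
gluster volume geo-replication geo-master dhcp37-164::disperse-slave \
    create push-pem
gluster volume geo-replication geo-master dhcp37-164::disperse-slave start

# 3. Check session state; the I/O errors appear in the master log
gluster volume geo-replication geo-master dhcp37-164::disperse-slave status
```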
Actual results:
===============
Input/Output errors in the geo-replication master log.

Expected results:
=================
Geo-replication starts and syncs data to the slave without errors.
Additional info:
================
[root@vertigo ~]# gluster v info geo-master
Volume Name: geo-master
Type: Disperse
Volume ID: fdb55cd4-34e7-4c15-a407-d9a831a09737
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: ninja:/rhs/brick1/geo-1
Brick2: vertigo:/rhs/brick1/geo-2
Brick3: ninja:/rhs/brick2/geo-3
Brick4: vertigo:/rhs/brick2/geo-4
Brick5: ninja:/rhs/brick3/geo-5
Brick6: vertigo:/rhs/brick3/geo-6
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
[root@vertigo ~]# gluster v status geo-master
Status of volume: geo-master
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick ninja:/rhs/brick1/geo-1 49202 0 Y 4714
Brick vertigo:/rhs/brick1/geo-2 49203 0 Y 4643
Brick ninja:/rhs/brick2/geo-3 49203 0 Y 4731
Brick vertigo:/rhs/brick2/geo-4 49204 0 Y 4660
Brick ninja:/rhs/brick3/geo-5 49204 0 Y 4748
Brick vertigo:/rhs/brick3/geo-6 49205 0 Y 4677
NFS Server on localhost 2049 0 Y 5224
NFS Server on ninja 2049 0 Y 5090
Task Status of Volume geo-master
------------------------------------------------------------------------------
There are no active volume tasks
[root@vertigo ~]#
Slave configuration:
====================
[root@dhcp37-164 ~]# gluster v info
Volume Name: disperse-slave
Type: Disperse
Volume ID: 1cbbe781-ee69-4295-bd17-a1dff37637ab
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: dhcp37-164:/rhs/brick1/b1
Brick2: dhcp37-95:/rhs/brick1/b1
Brick3: dhcp37-164:/rhs/brick2/b2
Brick4: dhcp37-95:/rhs/brick2/b2
Brick5: dhcp37-164:/rhs/brick3/b3
Brick6: dhcp37-95:/rhs/brick3/b3
[root@dhcp37-164 ~]# gluster v status
Status of volume: disperse-slave
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick dhcp37-164:/rhs/brick1/b1 49152 0 Y 4066
Brick dhcp37-95:/rhs/brick1/b1 49152 0 Y 6988
Brick dhcp37-164:/rhs/brick2/b2 49153 0 Y 4083
Brick dhcp37-95:/rhs/brick2/b2 49153 0 Y 7005
Brick dhcp37-164:/rhs/brick3/b3 49154 0 Y 4100
Brick dhcp37-95:/rhs/brick3/b3 49154 0 Y 7022
NFS Server on localhost 2049 0 Y 4120
NFS Server on 10.70.37.95 2049 0 Y 7044
Task Status of Volume disperse-slave
------------------------------------------------------------------------------
There are no active volume tasks
[root@dhcp37-164 ~]#
Log file of the master will be attached.
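For a quick check of the failure, the errors can be located in the master-side geo-replication log. The log path is an assumption based on the default GlusterFS log layout and may differ per installation:

```shell
# Hypothetical quick check: grep the geo-replication master logs for
# the reported errors. The default log directory is assumed.
grep -n "Input/output error" \
    /var/log/glusterfs/geo-replication/geo-master/*.log
```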