[Gluster-users] volume replace-brick start is not working

Alessandro Ipe Alessandro.Ipe at meteo.be
Tue Jan 6 12:35:39 UTC 2015


Hi,


We set up an "md1" volume using gluster 3.4.2 over 4 servers configured as distributed and replicated. We then upgraded smoothly to 3.5.3, since it was mentioned that the "volume replace-brick" command is broken on 3.4.x. We also added two more peers (after having read that the quota feature needed to be turned off for this command to succeed...).
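For context, the setup was roughly along these lines (a sketch reconstructed from the volume info at the end of this mail; the address of the second new peer does not appear here, so it is omitted):

gluster volume create md1 replica 2 \
    tsunami1:/data/glusterfs/md1/brick1 tsunami2:/data/glusterfs/md1/brick1 \
    tsunami3:/data/glusterfs/md1/brick1 tsunami4:/data/glusterfs/md1/brick1
gluster volume start md1
# new peers added after the upgrade to 3.5.3 (second peer's address omitted)
gluster peer probe 193.190.249.122
# quota turned off before attempting replace-brick
gluster volume quota md1 disable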

We then issued
gluster volume replace-brick md1 193.190.249.113:/data/glusterfs/md1/brick1 193.190.249.122:/data/glusterfs/md1/brick1 start force
and, because nothing was happening, followed it with
gluster volume replace-brick md1 193.190.249.113:/data/glusterfs/md1/brick1 193.190.249.122:/data/glusterfs/md1/brick1 abort
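(For reference, the replace-brick CLI in this release accepts start, status, commit, commit force and abort; as far as I understand, the usual sequence with data migration is start, then status to watch progress, then commit once migration finishes, while "commit force" swaps the brick immediately and relies on self-heal to repopulate it. A sketch, assuming the same source and destination bricks as above:)

gluster volume replace-brick md1 193.190.249.113:/data/glusterfs/md1/brick1 193.190.249.122:/data/glusterfs/md1/brick1 start
gluster volume replace-brick md1 193.190.249.113:/data/glusterfs/md1/brick1 193.190.249.122:/data/glusterfs/md1/brick1 status
gluster volume replace-brick md1 193.190.249.113:/data/glusterfs/md1/brick1 193.190.249.122:/data/glusterfs/md1/brick1 commit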

However, when trying to monitor the previous command with
gluster volume replace-brick md1 193.190.249.113:/data/glusterfs/md1/brick1 193.190.249.122:/data/glusterfs/md1/brick1 status
it outputs
volume replace-brick: failed: Another transaction could be in progress. Please try again after sometime.
and the following lines are written to cli.log
[2015-01-06 12:32:14.595387] I [socket.c:3645:socket_init] 0-glusterfs: SSL support is NOT enabled
[2015-01-06 12:32:14.595434] I [socket.c:3660:socket_init] 0-glusterfs: using system polling thread
[2015-01-06 12:32:14.595590] I [socket.c:3645:socket_init] 0-glusterfs: SSL support is NOT enabled
[2015-01-06 12:32:14.595606] I [socket.c:3660:socket_init] 0-glusterfs: using system polling thread
[2015-01-06 12:32:14.596013] I [cli-cmd-volume.c:1706:cli_check_gsync_present] 0-: geo-replication not installed
[2015-01-06 12:32:14.602165] I [cli-rpc-ops.c:2162:gf_cli_replace_brick_cbk] 0-cli: Received resp to replace brick
[2015-01-06 12:32:14.602248] I [input.c:36:cli_batch] 0-: Exiting with: -1
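(Possibly relevant: if the failure comes from a stale cluster lock left behind by the aborted replace-brick, the workaround usually suggested is to restart glusterd on each peer, which should release the lock without stopping the running brick processes. A sketch, not verified here:)

# check glusterd's log on the node where the command was issued
tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
# if a stale lock is the cause, restart glusterd on each peer, one at a time
service glusterd restart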

What am I doing wrong?


Many thanks,


Alessandro.


gluster volume info md1 outputs:
Volume Name: md1
Type: Distributed-Replicate
Volume ID: 6da4b915-1def-4df4-a41c-2f3300ebf16b
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: tsunami1:/data/glusterfs/md1/brick1
Brick2: tsunami2:/data/glusterfs/md1/brick1
Brick3: tsunami3:/data/glusterfs/md1/brick1
Brick4: tsunami4:/data/glusterfs/md1/brick1
Options Reconfigured:
server.allow-insecure: on
cluster.read-hash-mode: 2
features.quota: off
nfs.disable: on
performance.cache-size: 512MB
performance.io-thread-count: 64
performance.flush-behind: off
performance.write-behind-window-size: 4MB
performance.write-behind: on
