[Gluster-users] problems after gluster volume remove-brick
Olav Peeters
opeeters at gmail.com
Wed Jan 21 12:18:48 UTC 2015
Hi,
Two days ago I started a gluster volume remove-brick on a
Distributed-Replicate volume of 21 x 2 bricks, spread over 3 nodes in total.
I wanted to remove 4 bricks per node which are smaller than the others
(on each node I have 7 x 2TB disks and 4 x 500GB disks).
I am still on gluster 3.5.2, and I was not aware that using disks of
different sizes is only supported as of 3.6.x (am I correct?).
I started with 2 paired disks like so:
    gluster volume remove-brick VOLNAME node03:/export/brick8node03 \
        node02:/export/brick10node02 start
I followed the progress (which was very slow):
    gluster volume remove-brick VOLNAME node03:/export/brick8node03 \
        node02:/export/brick10node02 status
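(Something like

    watch -n 300 'gluster volume remove-brick VOLNAME node03:/export/brick8node03 node02:/export/brick10node02 status'

makes the polling easier; watch just reruns the status call every 300
seconds.)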
After a day the progress of node03:/export/brick8node03 showed
"completed"; the other brick remained "in progress".
This morning several VMs with VDIs on the volume started showing disk
errors, and a couple of glusterfs mounts returned a "disk is full" type
of error on the volume, which is currently only ca. 41% filled with
data. Via df -h I saw that most of the 500GB disks were indeed 100%
full, while others were nearly empty.
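(For completeness: gluster can report the same per-brick usage itself,
which saves running df on every node:

    gluster volume status VOLNAME detail

This lists "Total Disk Space" and "Disk Space Free" for each brick.)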
Gluster seems to have gone a bit nuts while rebalancing the data.
I did a:

    gluster volume remove-brick VOLNAME node03:/export/brick8node03 \
        node02:/export/brick10node02 stop
and a:

    gluster volume rebalance VOLNAME start
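(whose progress can be followed with:

    gluster volume rebalance VOLNAME status

which shows, per node, the number of files rebalanced, scanned and
failed).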
Progress is again very slow, and some of the disks/bricks that were at
ca. 98% are now 100% full. The situation seems to be getting worse in
some cases and slowly improving in others, e.g. one pair of bricks went
from 100% to 97%.
There has clearly been some data corruption: some VMs no longer want to
boot, throwing disk errors.
How do I proceed?
Wait a very long time for the rebalance to complete and hope that the
data corruption is automatically mended?
Upgrade to 3.6.x, hope that the issues (which might be related to my
using bricks of different sizes) are resolved there, and risk another
remove-brick operation?
Should I rather do a:

    gluster volume rebalance VOLNAME migrate-data start
Should I have done a replace-brick instead of a remove-brick operation
originally? I thought that replace-brick was becoming obsolete.
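(If so, I assume it would have been something along these lines:

    gluster volume replace-brick VOLNAME node02:/export/brick10node02 \
        node02:/export/newbrick2TB commit force

with node02:/export/newbrick2TB being a hypothetical new 2TB brick; I
am not sure this is still the recommended syntax on 3.5.2.)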
Thanks,
Olav