[Gluster-users] Volume rebalance issue
Kevin Lemonnier
lemonnierk at ulrar.net
Sun Feb 26 01:13:58 UTC 2017
> We have a volume of 4 servers, 8x2 bricks (Distributed-Replicate), hosting VMs for ESXi. I tried expanding the volume with 8 more bricks, and after rebalancing the volume the VMs got corrupted.
> [...]
> Does it affect all Gluster versions? Is there any workaround, or a volume setup that is not affected by this issue?
Sure sounds like what corrupted everything for me a few months ago :). Had to spend the whole night
re-creating the VMs from backups, and explaining the data loss and downtime to the clients wasn't easy.
Unfortunately I believe they never managed to reproduce the issue, so I don't think it was ever fixed,
no. We are using 3.7.13, so downgrading won't help you, and I don't know of any workaround.
We decided to just not expand volumes: when one is full we create a new one instead of
adding bricks to the existing one. Not ideal, but not a big deal, at least yet. Since VMs are
easy enough to live migrate from one volume to another, it seemed like the easiest solution.
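In case it helps, the approach above could be sketched roughly like this. All server, brick, and volume names here are made up for illustration; adjust the replica count and brick layout to match your setup:

```shell
# Instead of expanding the existing volume and rebalancing, e.g.:
#   gluster volume add-brick vmstore1 replica 2 srv5:/bricks/b1 srv6:/bricks/b1
#   gluster volume rebalance vmstore1 start   # <- the step that corrupted running VMs
#
# ...create a fresh distributed-replicate volume on the new servers
# (names here are hypothetical, not from this thread):
gluster volume create vmstore2 replica 2 \
    srv5:/bricks/b1 srv6:/bricks/b1 \
    srv7:/bricks/b2 srv8:/bricks/b2
gluster volume start vmstore2

# Then mount vmstore2 as a new datastore and live-migrate the VMs
# (storage vMotion, done from the ESXi side) off the full volume.
```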
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111