[Gluster-users] unbalance

M S Vishwanath Bhat msvbhat at gmail.com
Sat Feb 28 10:29:23 UTC 2015


MS
On 28 Feb 2015 13:27, "Jorick Astrego" <j.astrego at netbulae.eu> wrote:
>
>
> On 02/28/2015 08:36 AM, M S Vishwanath Bhat wrote:
>>
>>
>>
>> On 28 February 2015 at 02:17, Jorick Astrego <j.astrego at netbulae.eu> wrote:
>>>
>>> Hi,
>>>
>>> When I have a 4 or 6 node distributed replicated volume, is there an
>>> easy way to unbalance the data? The rebalancing is very nice, but I was
>>> thinking of a scenario where I would have to take half the nodes
>>> offline. I could shift part or all of the data on one replicated volume
>>> and stuff it all on the other replicated volume.
>>
>>
>> I'm not sure what you meant by "unbalance the data"?
>>
>> Do you want to replace the bricks (backend disks)? Or do you want to
>> move the data from one gluster volume to another gluster volume?
>>
>> * If you want to replace the bricks, there is "gluster replace-brick".
>> But there are some known issues with it. I would suggest "remove-brick"
>> and then "add-brick" and rebalance instead.
>
> Say I have two racks with a distributed replicated setup
>
> [inline diagram of the two-rack distributed replicated setup was included
> here but is not preserved in the text archive]
>
> And I need to take server3 and server4 offline and I don't have
> replacement bricks. How do I move File 2 to Replicated Volume 0 while the
> file is in use (VM storage or a big MP4)?

If you do a remove-brick of replicate volume 1, it will move (rebalance) all
the data to replicate volume 0.
Note that it turns the distributed replicate volume into a pure replicate
volume.
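
Roughly, assuming a hypothetical volume name "testvol" with bricks at
/bricks/b1 on each server (adjust to your real names), the drain would look
like:

    # start migrating the data off the server3/server4 replica pair
    gluster volume remove-brick testvol server3:/bricks/b1 server4:/bricks/b1 start

    # wait until the status shows the migration as completed
    gluster volume remove-brick testvol server3:/bricks/b1 server4:/bricks/b1 status

    # then detach the bricks from the volume for good
    gluster volume remove-brick testvol server3:/bricks/b1 server4:/bricks/b1 commit

The migration uses the normal rebalance mechanism, so clients keep reading
and writing the files at the same paths while it runs.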

>
>
>> * If you want to move the data from one gluster volume to another, you
>> can just rsync the data from the mountpoint of volume1 to the mountpoint
>> of volume2.
>
> If I rsync to the other volume, will gluster dynamically remap the file
> and keep serving it? What about the old file left over on Server3 and
> Server4?

Okay, when I said volume, I actually meant rsyncing from the mountpoint of
one Gluster volume to the mountpoint of another. But it looks like that's
not what you want. Sorry for the confusion.
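
Just for completeness, this is the kind of thing I had in mind, assuming
both volumes are mounted with the native FUSE client at hypothetical paths
/mnt/vol0 and /mnt/vol1:

    # mount both volumes through the native client
    mount -t glusterfs server1:/vol0 /mnt/vol0
    mount -t glusterfs server1:/vol1 /mnt/vol1

    # copy everything across; -a keeps ownership/permissions/times,
    # -P shows progress and keeps partial transfers
    rsync -aP /mnt/vol1/ /mnt/vol0/

Because the copy goes through the mountpoints, the target volume lays the
files out on its own bricks as usual; the originals are not remapped, so you
would delete them from the source volume once the copy is verified.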

Best Regards,
MS
>
>
>> HTH
>>
>> //MS
>>
>>
>>
>>>
>>> There would have to be enough space and performance would suffer a lot,
>>> but at least the data would stay available.
>>>
>>>
>
>
>
>
> Met vriendelijke groet, With kind regards,
>
> Jorick Astrego
>
> Netbulae Virtualization Experts
> ________________________________
> Tel: 053 20 30 270 / Fax: 053 20 30 271
> info at netbulae.eu / www.netbulae.eu
> Staalsteden 4-3A, 7547 TA Enschede
> KvK 08198180 / BTW NL821234584B01
>
> ________________________________
>

