[Gluster-users] Convert to Shard - Setting Guidance
gustave at dahlfamily.net
Fri Jan 20 20:43:49 UTC 2017
I had a few different data points on the 512MB size as well as setting
the heal algorithm to full. Some of this information is old though so I
appreciate the feedback that you have given on what you are using.
I see Lindsay confirmed what I have witnessed while testing these
settings locally: even with the heal algorithm set to full, only the
shards that have changed get healed.
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Configuring_Red_Hat_Enterprise_Virtualization_with_Red_Hat_Gluster_Storage/chap-Hosting_Virtual_Machine_Images_on_Red_Hat_Storage_volumes.html
http://blogs-ramesh.blogspot.com/2016/01/ovirt-and-gluster-hyperconvergence.html
http://lists.gluster.org/pipermail/gluster-users/2016-January/024945.html
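For reference, a sketch of how the options discussed above are typically set from the gluster CLI. The volume name `myvol` is a placeholder; verify the option names and defaults against your Gluster version before applying them:

```shell
# Enable sharding and use the 512MB shard size discussed above
# ("myvol" is a placeholder volume name).
gluster volume set myvol features.shard on
gluster volume set myvol features.shard-block-size 512MB

# Set the self-heal algorithm to full.
gluster volume set myvol cluster.data-self-heal-algorithm full
```

Note that enabling sharding only affects files created after the option is set, which is why the existing images need to be copied or recreated.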
> One question - how do you plan to convert the VM's?
>
> - setup a new volume and copy the VM images to that?
>
> - or change the shard setting in place? (I don't think that would work)
Not a perfect plan but ...
I have /home and the OS split onto separate images on these larger VMs
(shared hosting/cPanel). My plan is to do this piece by piece, as follows:
1. Create new images for /home. rsync in place and replace the drives
with a reboot of the VM.
2. The OS images. My intent would be to create a new image and then do a
transfer through the cPanel interface (skipping /home). I am still
deciding whether to segment MySQL onto yet another image. I need to
test that further.
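A minimal sketch of the rsync step in item 1, assuming the old and new /home images are mounted at illustrative paths. A second pass after quiescing the VM catches anything that changed during the first copy:

```shell
# First pass while the VM is still running (mount points are
# placeholders for wherever the old and new images are attached).
rsync -aHAX --numeric-ids /mnt/home-old/ /mnt/home-new/

# After shutting down (or pausing) the VM, a short second pass picks
# up files changed during the first copy; --delete keeps the target
# an exact mirror.
rsync -aHAX --numeric-ids --delete /mnt/home-old/ /mnt/home-new/
```

The `-H`, `-A`, and `-X` flags preserve hard links, ACLs, and extended attributes, which matters for cPanel home directories.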
Smaller VMs I may just shut down for a few hours, then rename and copy.
I am open to suggestions.
> You should be able to do that while your VMs are running. I guess it
> depends on your hypervisor, but with KVM just moving the disk to a new
> filename while the VM is running should be enough, as it'll create a
> new file and copy the data, thus creating the shards.
> But it'll take a while for sure.
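For KVM managed through libvirt, the live disk move described above can be done with blockcopy. A sketch, assuming a domain named `vm1` with disk target `vda` (both placeholders):

```shell
# Mirror the running VM's disk to a new file on the sharded volume,
# then pivot the domain to the copy once the mirror is in sync.
virsh blockcopy vm1 vda /gluster/vmstore/vm1-new.img --wait --verbose --pivot
```

Depending on the libvirt version, the domain may need to be transient for blockcopy to be permitted, and the old image file is left behind to be removed manually after the pivot.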
I would be interested to hear how you did this while running. On my
test setup, I have gone through the copy (rename) and it does work, but
like you said it took quite a while.