[Gluster-users] anyone using glusterfs as openstack cinder/nova storage?
Vijay Bellur
vbellur at redhat.com
Wed Dec 31 16:17:31 UTC 2014
On 12/30/2014 08:35 PM, Zhu,Chao wrote:
> hi, all,
>     We are using glusterfs 3.5 on top of CentOS 6.5 with OpenStack
> version I;
>     We have about 50 nodes as storage servers and 300 nova compute
> nodes;
What version of 3.5.x is in use here? Are you using the fuse client or
libgfapi to access the gluster servers?
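If you are not sure, the following should help pin it down (the nova.conf
option name below is from memory of an Icehouse-era libgfapi setup, so
treat it as an assumption and check your own configuration):

  # exact client version on a compute node
  glusterfs --version

  # fuse mounts show up as type fuse.glusterfs
  mount | grep glusterfs

  # libgfapi access from nova is typically gated by an option like this
  grep qemu_allowed_storage_drivers /etc/nova/nova.conf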
>     One question we have is:
>     When I add new servers into the storage pool, I will have to
> rebalance the glusterfs cluster (first fix-layout, and then
> migrate data);
>     As the VMs are always running and the files on glusterfs are
> always open, does glusterfs support online file migration (i.e. can
> it migrate open files while the VMs keep running)?
>     From the documentation it looks like it is OK, but in our
> environment, when we do the rebalance, lots of VMs get read-only
> disks, which is very bad;
Do you observe any errors in the gluster client log files when VM disks
go read-only? It would also be useful if you could file a bug report
with as many details as possible if you notice the problem consistently.
Some guidelines for reporting a bug can be found here [1].
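As a starting point (exact paths depend on your deployment), the fuse
client logs live under /var/log/glusterfs/ on the compute nodes, and the
rebalance progress can be watched from any storage server; <VOLNAME> is
a placeholder for your volume name:

  # look for error-level messages around the time the disks went read-only
  grep ' E ' /var/log/glusterfs/*.log

  # check how far the rebalance has progressed and whether it reported failures
  gluster volume rebalance <VOLNAME> status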
>
>     A similar question: when I have, say, 3 replicas and one node
> goes down for HW maintenance, how does it re-sync when it comes back
> online? All the VM disks are continuously being modified, and each
> VM disk is pretty big (usually 5 GB-40 GB); it would be a nightmare
> to re-sync the whole brick replica.
>
Self-heal uses an rsync-like algorithm to synchronize files. Only the
regions of a file that have changed are copied over the network to
bring the replicas back in sync.
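After the node comes back you can watch the pending heals with the heal
commands, for example (<VOLNAME> again being a placeholder):

  # list the files that still need to be healed
  gluster volume heal <VOLNAME> info

  # trigger healing of only the files that need it, rather than a full crawl
  gluster volume heal <VOLNAME>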
Regards,
Vijay
[1]
http://www.gluster.org/community/documentation/index.php/Bug_reporting_guidelines