[Gluster-users] Block replication with glusterfs for NFS failover

Runar Ingebrigtsen runar at rin.no
Thu Oct 25 08:23:33 UTC 2012


Thank you for taking the time to explain this so well.

On 24 Oct 2012 11:56, Brian Candler wrote:
> On Wed, Oct 24, 2012 at 11:19:13AM +0200, Runar Ingebrigtsen wrote:
>> Hm, does this mean the whole file will be replicated each time it
>> changes?
> Nope. Gluster works at the POSIX filesystem layer, so commands like
> "seek(x); write(data)" would replicate as the same commands to both bricks.

Sweet. I love getting insight like this.
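
To make that concrete for myself, here is a minimal sketch of the kind
of POSIX operations involved (the path and offset are made up for
illustration). If I follow your explanation, Gluster forwards the
equivalent of these calls to both bricks, rather than re-copying the
whole file:

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Open a file on the glusterfs mount (path is hypothetical). */
        int fd = open("/mnt/gluster/vm-image.img", O_WRONLY);
        if (fd < 0)
            return 1;

        /* Seek to an offset and write a small range. As I understand
         * it, only this write is replicated to both bricks -- not the
         * whole file. */
        char data[] = "updated block";
        if (lseek(fd, 4096, SEEK_SET) < 0 ||
            write(fd, data, sizeof(data)) < 0) {
            close(fd);
            return 1;
        }

        close(fd);
        return 0;
    }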

> There used to be an issue with healing, i.e. fixing up replicas after they
> have been offline for a while.  Prior to gluster 3.3 this involved locking
> the whole file, which if it was a VM image would make it unavailable until
> healing was complete.  Gluster 3.3.x does healing across ranges instead.
Does that mean the bauer-power article [1] about how healing fails is 
outdated?

My emergency plan, in case of longer downtime for one peer, was to 
remove the peer, format it and add it anew. That plan came from the 
bauer-power article, but if I understand this correctly, I don't need 
to worry? I do take backups too, of course.
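
If so, instead of reformatting, something like the 3.3 heal commands 
might be enough once the peer is back (the volume name "myvol" is just 
a placeholder, if I read the 3.3 docs right):

    # Show files that still need healing
    gluster volume heal myvol info

    # Trigger a heal of files known to need it
    gluster volume heal myvol

    # Or force a full crawl and heal of the volume
    gluster volume heal myvol full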

> However gluster 3.3.x is still not ideal as a VM backing store, because of
> the performance issues of going via the kernel and back out through the FUSE
> layer.  There are bleeding-edge patches to KVM which allow it to use
> libglusterfs to talk directly to the storage bricks, staying in userland:
> http://lists.gnu.org/archive/html/qemu-devel/2012-06/msg01745.html
I don't think VMware ESX is capable of using the glusterfs-client anyway.
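
For anyone on KVM rather than ESX, though, that patch series appears to 
let QEMU address the image with a gluster:// URI directly, something 
like the following (server, port, volume and image names are 
placeholders, and the exact syntax may change before it is merged):

    qemu-system-x86_64 \
        -drive file=gluster://storage1:24007/myvol/vm-image.img,if=virtio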

> Or you could try using NFS to gluster's NFS server. Or you can boot from a
> gluster image, but mount a gluster volume within the VM for application
> data.

My plan is to try running VMs in VMware from NFS.
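
For reference, Gluster's built-in NFS server speaks NFSv3 over TCP 
only, so from a Linux client the mount would look something like this 
(host and volume names are placeholders):

    mount -t nfs -o vers=3,tcp,nolock storage1:/myvol /mnt/vms

ESX also talks NFSv3 over TCP, which is presumably why that combination 
should work for the datastore as well.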


Best Regards
Runar Ingebrigtsen