[Gluster-users] Block replication with glusterfs for NFS failover

Brian Candler B.Candler at pobox.com
Wed Oct 24 06:42:21 UTC 2012


On Wed, Oct 24, 2012 at 12:47:36AM +0200, Runar Ingebrigtsen wrote:
> I'm sorry - I am aware of that. The part of the document I was
> meaning to reference was the block-by-block replication that was
> pointed out as a requirement for NFS connection handover. I should
> have been clearer about what I meant.

I don't think it was even accurate. What it probably meant was that NFS
failover requires inode numbers to be consistent between the two
filesystems, because inode numbers are used as part of the NFS file handle.
Block-by-block replication is one way to achieve that.
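As an illustration (this is not GlusterFS code, just a sketch of the constraint), inode consistency between two candidate failover filesystems can be checked with something like:

```python
import os

def inodes_match(path_a, path_b):
    """Return True if the two paths report the same inode number.

    NFS file handles embed the inode number, so after failover the
    secondary server must present the same inode for the same file,
    or clients will get stale file handle errors.
    """
    return os.stat(path_a).st_ino == os.stat(path_b).st_ino
```

With block-by-block replication (e.g. DRBD) this property holds automatically, because the two sides are bit-identical copies of one filesystem.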

> >>    Can I somehow enable block-for-block replication with GlusterFS?
> >
> >No. You are reading documentation for something completely different: a pair
> >of machines synchronised at the block level using DRBD, in a master/slave
> >configuration (that is: all writes must be made on the master side, and the
> >block changes are replicated a la RAID1 but over a network).
> 
> Hm. I don't see how your reply indicates the lack of block-by-block
> replication in GlusterFS.

GlusterFS replication works at a different layer: each glusterfs brick sits
on top of a local filesystem, and the operations are at the level of files
(roughly "open file named X", "seek", "read file", "write file") rather than
block-level operations.

If you write to a replicated volume, this dispatches separate "write file"
operations to the bricks which comprise that volume.

The bricks are still separate filesystems; they could even be filesystems of
different types (xfs and ext4, say).  GlusterFS is unaware of the filesystem
type.
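To illustrate the idea (a toy sketch, not the actual AFR translator), file-level replication amounts to repeating the same file operation against each brick's local filesystem:

```python
import os

def write_replicated(relpath, data, bricks):
    """Toy file-level replication: apply the same "write file"
    operation to every brick directory in turn.

    Each brick is an ordinary local filesystem; its type
    (xfs, ext4, ...) is irrelevant to this layer.
    """
    for brick in bricks:
        target = os.path.join(brick, relpath)
        # Create any missing parent directories on this brick.
        os.makedirs(os.path.dirname(target), exist_ok=True)
        with open(target, "wb") as f:
            f.write(data)
```

Because each brick performs its own writes against its own filesystem, the resulting on-disk block layouts (and inode numbers) on the two bricks are in general different, which is why this is not "block-by-block" replication.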

> I'm happy to announce that it did turn out that, indeed,
> you can.
> The reason it didn't work was a user error.
> 
> When you use UCarp for failover between two GlusterFS servers, the
> Virtual IP address stops responding for about 5 seconds when you
> unplug the UCarp master node. It then takes the NFS client about 45
> seconds more before it is able to use the GlusterFS/NFS mount on the
> UCarp secondary node.

Excellent news - glad you got it working.
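For anyone else setting this up, a minimal UCarp invocation looks something like the following (interface, addresses, VHID and password are placeholders; run the same command on both GlusterFS servers, each with its own --srcip):

```
# --addr is the shared virtual IP that NFS clients mount.
# vip-up.sh / vip-down.sh add or remove that address on the
# interface (e.g. with "ip addr add" / "ip addr del").
ucarp --interface=eth0 --srcip=192.0.2.11 --vhid=1 \
      --pass=changeme --addr=192.0.2.100 \
      --upscript=/etc/ucarp/vip-up.sh \
      --downscript=/etc/ucarp/vip-down.sh
```

The few seconds of unresponsiveness you saw is the VRRP-style election UCarp runs before the backup promotes itself; the extra delay after that is the NFS client's own retransmission backoff before it retries against the new master.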
