[Gluster-users] Replication volume "stop" on one server causes volume to become inaccessible on peer over the boot cycle.

Maik Kulbe info at linux-web-development.de
Tue Sep 3 11:43:17 UTC 2013


> Hi Gluster Team,
>
> I am looking for solution of replication volume setup across two servers.
>
> Please refer below for the issue scenario.
>
> 1. Gluster replication volumes are created across two servers.
> 2. When one server goes down, it tends to un-mount all the Gluster volumes
> and bricks.
> 3. To un-mount all volumes and bricks, it needs to stop the volume in
> order to stop all Gluster processes running for each mount point.
> 4. If we stop the volume on one server (the one which is rebooting), it stops
> the volume on the other peer too, making the volume inaccessible.

That is right: stopping a volume stops it for the whole cluster, not just on one server. If you are using the FUSE client and need HA, install the Gluster server packages on the client and add it to the cluster as a peer. Not a brick, just a peer. Then, instead of mounting with "mount -t glusterfs gluster-server-1.example.com:/your-volume /mnt/point", replace "gluster-server-1.example.com" with "localhost". In the case of a failure of that server node, the volume will continue to run and the mount point should keep working - provided your Gluster volume is set up correctly.
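A minimal sketch of that setup, assuming hypothetical names (client.example.com for the FUSE client, your-volume for the volume, /mnt/point for the mount point):

    # On an existing cluster node: add the client to the trusted storage
    # pool as a peer only; no bricks from it are used by the volume.
    gluster peer probe client.example.com

    # On the client: confirm it sees the pool, then mount the volume
    # through its own local glusterd rather than a remote server.
    gluster peer status
    mount -t glusterfs localhost:/your-volume /mnt/point

This way the mount does not depend on one particular remote server being reachable at mount time.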


>
> Here, we expect the other server to handle the I/O until the peer server
> comes back up.
>
> Hoping for a suitable solution.
> Thanking you in anticipation.
>
> Regards
> Sejal
>


