[Gluster-users] New to GlusterFS
Joe Julian
joe at julianfamily.org
Wed Oct 23 05:32:13 UTC 2013
Yes.
Bobby Jacob <bobby.jacob at alshaya.com> wrote:
>Joe,
>
>You mentioned "If you shut down a server through the normal kill
>process":
>
>What do you mean by the normal kill process? Is it just shutting down
>the server for maintenance after doing a "service glusterd stop" and a
>"service glusterfsd stop"?
>
>Thanks & Regards,
>Bobby Jacob
>
>From: gluster-users-bounces at gluster.org
>[mailto:gluster-users-bounces at gluster.org] On Behalf Of Joe Julian
>Sent: Tuesday, October 22, 2013 4:12 PM
>To: gluster-users at gluster.org
>Subject: Re: [Gluster-users] New to GlusterFS
>
>The reason for the long (42 second) ping-timeout is that
>re-establishing fd's and locks can be a very expensive operation.
>Allowing a longer time to re-establish connections is logical, unless
>you have servers that frequently die.
>
>If you shut down a server through the normal kill process, the TCP
>connections will be closed properly. The client will be aware that the
>server is going away and there will be no timeout. This allows server
>maintenance without encountering that issue.
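>
>For example, a clean shutdown of one server might look like this (a
>sketch using the glusterd/glusterfsd init scripts; exact service names
>vary by distribution):
>
>    # stop the management daemon, then the brick processes
>    service glusterd stop
>    service glusterfsd stop
>    # clients see the TCP connections close and fail over immediately
>    shutdown -h now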
>
>One issue with a 42-second timeout is that if it expires while a VM is
>running, ext4 inside the guest may detect the I/O error and remount
>itself read-only. You can override this behavior by specifying the
>mount option "errors=continue" in fstab ("errors=remount-ro" is the
>default). The default can also be changed in the superblock with
>tune2fs.
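>
>As a sketch, inside the guest that would be an fstab entry like the
>following (the device name is illustrative):
>
>    /dev/vda1  /  ext4  defaults,errors=continue  0  1
>
>or, to change the superblock default instead:
>
>    tune2fs -e continue /dev/vda1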
>On 10/22/2013 03:12 AM, John Mark Walker wrote:
>
>Hi JC,
>
>Yes, the default is a 42-second timeout for failover. You can configure
>that to be a smaller window.
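>
>For example, to drop it to 10 seconds on a volume named DATA (weigh
>this against the reconnection cost Joe describes above):
>
>    gluster volume set DATA network.ping-timeout 10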
>
>-JM
>On Oct 22, 2013 10:57 AM, "JC Putter" <jcputter at gmail.com> wrote:
>Hi,
>
>I am new to GlusterFS. I am trying to accomplish something which I am
>not 100% sure is the correct use case, but hear me out.
>
>I want to use GlusterFS to host KVM VMs. From what I've read this was
>not recommended due to poor write performance; however, is this now
>viable since libgfapi/qemu 1.3?
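>
>(For illustration: with qemu's native gluster driver, a disk image on
>the volume below could be attached directly, e.g.
>
>    qemu-system-x86_64 -drive file=gluster://glusterfs1.example.com/DATA/vm1.img,if=virtio
>
>where vm1.img is a hypothetical image name.)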
>
>
>Currently I'm testing out GlusterFS with two nodes, both running as
>server and client.
>
>I have the following volume:
>
>Volume Name: DATA
>Type: Replicate
>Volume ID: eaa7746b-a1c1-4959-ad7d-743ac519f86a
>Status: Started
>Number of Bricks: 1 x 2 = 2
>Transport-type: tcp
>Bricks:
>Brick1: glusterfs1.example.com:/data
>Brick2: glusterfs2.example.com:/data
>
>
>I am mounting the volume locally on each server as /mnt/gluster.
>Replication works and everything, but as soon as I kill one node, the
>directory /mnt/gluster/ becomes unavailable for 30-40 seconds.
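>
>(For reference, the mount on each node is the native client pointed at
>the volume, something like:
>
>    mount -t glusterfs glusterfs1.example.com:/DATA /mnt/gluster
>
>with the second node mounting from itself or its peer.)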
>
>The log shows:
>
>[2013-10-22 11:55:48.055571] W [socket.c:514:__socket_rwv]
>0-DATA-client-0: readv failed (No data available)
>
>
>Thanks in advance!
>_______________________________________________
>Gluster-users mailing list
>Gluster-users at gluster.org
>http://supercolony.gluster.org/mailman/listinfo/gluster-users
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.