[Gluster-users] Client Handling of Elastic Clusters

Timothy Orme torme at ancestry.com
Wed Oct 16 18:13:54 UTC 2019

Yes, this makes the issue less likely, but it doesn't make it impossible for a fully elastic cluster.

For instance, if I had instead started with just A, B, C and then scaled out and in twice, all of the volfile servers could have been destroyed and replaced.  I think the problem is that the set of volfile servers is fixed at mount time, rather than updated as the cluster changes.  There are ways to greatly reduce the risk, such as listing more backup servers, but it remains a possibility.

More important, then, for me at least, is having the option to fail when no volfile servers remain, since a client in that state can produce incomplete views of the data.

From: Strahil <hunter86_bg at yahoo.com>
Sent: Tuesday, October 15, 2019 8:46 PM
To: Timothy Orme <torme at ancestry.com>; gluster-users <gluster-users at gluster.org>
Subject: [EXTERNAL] Re: [Gluster-users] Client Handling of Elastic Clusters

Hi Timothy,

Have you tried to mount on the client via all servers:

mount -t glusterfs -o backup-volfile-servers=B:C:D:E:F A:/volume  /destination
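If the mount should also survive reboots, the same option can be carried into /etc/fstab; a sketch using the same placeholder server names:

```
# /etc/fstab entry equivalent to the mount command above (sketch)
A:/volume  /destination  glusterfs  defaults,_netdev,backup-volfile-servers=B:C:D:E:F  0 0
```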

Best Regards,
Strahil Nikolov

On Oct 15, 2019 22:05, Timothy Orme <torme at ancestry.com> wrote:

I'm trying to setup an elastic gluster cluster and am running into a few odd edge cases that I'm unsure how to address.  I'll try and walk through the setup as best I can.

If I have a replica 3 distributed-replicated volume, with 2 replica sets to start:

   Replica 1: serverA, serverB, serverC
   Replica 2: serverD, serverE, serverF

And the client mounts the volume with serverA as the primary volfile server, and B & C as the backups.

Then, if I perform a scale-down event, it selects the first replica set as the one to remove.  So I end up with a configuration like:

   Replica 2: serverD, serverE, serverF
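For context, a scale-down like the one above maps to gluster's remove-brick flow; a sketch, assuming a hypothetical volume name `myvol` and brick path `/bricks/b1`:

```shell
# Start migrating data off the first replica set (one brick per server).
gluster volume remove-brick myvol \
    serverA:/bricks/b1 serverB:/bricks/b1 serverC:/bricks/b1 start

# Watch until the migration shows "completed", then commit the removal.
gluster volume remove-brick myvol \
    serverA:/bricks/b1 serverB:/bricks/b1 serverC:/bricks/b1 status
gluster volume remove-brick myvol \
    serverA:/bricks/b1 serverB:/bricks/b1 serverC:/bricks/b1 commit
```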

Everything rebalances and works great.  However, at this point, the client has lost any connection with a volfile server.  It knows about D, E, and F, so my data is all fine, but it can no longer retrieve a volfile.  In the logs I see:

[2019-10-15 17:21:59.232819] I [glusterfsd-mgmt.c:2463:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers

This becomes problematic when I try to scale back up and add a replica set back in:

   Replica 2: serverD, serverE, serverF
   Replica 3: serverG, serverH, serverI

And then rebalance the volume.  Now, I have all my data present, but the client only knows about D,E,F, so when I run an `ls` on a directory, only about half of the files are returned, since the other half live on G,H,I which the client doesn't know about.  The data is still there, but it would require a re-mount at one of the new servers.
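The scale-up step likewise maps to add-brick plus a rebalance; a sketch with the same hypothetical volume name and brick path:

```shell
# Add a new replica set (one brick on each new server).
gluster volume add-brick myvol \
    serverG:/bricks/b1 serverH:/bricks/b1 serverI:/bricks/b1

# Spread existing files across the enlarged volume.
gluster volume rebalance myvol start
gluster volume rebalance myvol status
```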

My question, then: is there a way to have a more dynamic set of volfile servers?  What would be great is if the mount could fall back on the servers returned in the volfile itself when the primary ones go away.

If there's not an easy way to do this, is there a flag on the mount helper that can cause the mount to die or error out when it is unable to retrieve volfiles?  The problem now is that it fails silently and returns incomplete file listings, which for my use cases can cause improper processing of that data.  Obviously I'd rather have it hard-error than silently return bad results.
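Until something like that exists, one workaround sketch is a periodic check (cron or a systemd timer) that watches the client log for the exact message quoted above and unmounts, so reads fail loudly instead of returning partial listings.  The mount point and log path below are placeholders:

```shell
#!/bin/sh
# Sketch: unmount the gluster client if it has exhausted its volfile
# servers, so applications see a hard error rather than partial data.
MNT=/mnt/gluster                          # placeholder mount point
LOG=/var/log/glusterfs/mnt-gluster.log    # placeholder client log file

if grep -q "Exhausted all volfile servers" "$LOG" 2>/dev/null; then
    umount -l "$MNT"
    echo "glusterfs: all volfile servers exhausted; unmounted $MNT" >&2
    exit 1
fi
```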

Hope that makes sense, if you need further clarity please let me know.

