[Gluster-users] detecting replication issues

Joseph Lorenzini jaloren at gmail.com
Fri Feb 24 13:50:19 UTC 2017


Hi Mohammed,

You are right that mounting it this way will do the appropriate
replication. However, there are problems with that for my use case:

1. I want the /etc/fstab mount to be able to fail over to any one of the
three servers that I have, so that if one server is down, the client can
still mount from servers 2 and 3.
2. I have configured SSL on the I/O path, and I need to be able to configure
the client to use TLS when it connects to the bricks. I was only able to get
that to work with transport.socket.ssl-enabled off in the configuration
file.

In other words, the only way I could get both HA at mount time and TLS to
work was by using the volume config file and referencing it in /etc/fstab
(roughly as shown below).

https://www.jamescoyle.net/how-to/439-mount-a-glusterfs-volume
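
For reference, my current /etc/fstab entry looks roughly like this, pointing
at a local client volfile rather than a server (the paths and names here are
just illustrative):

  # /etc/fstab: mount using a local client volfile that lists all three bricks
  /etc/glusterfs/gv0-client.vol  /mnt/gv0  glusterfs  defaults,_netdev  0 0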

Is there a better way to handle this?

Thanks,
Joe

On Fri, Feb 24, 2017 at 6:24 AM, Mohammed Rafi K C <rkavunga at redhat.com>
wrote:

> Hi Joseph,
>
> I think there is a gap in understanding your problem. Let me try to give a
> clearer picture of this.
>
> First, a couple of clarification points:
>
> 1) The client graph is an internally generated configuration file based on
> your volume; you don't need to create or edit your own. If you want a
> 3-way replicated volume, you have to specify that when you create the
> volume.
>
> 2) When you mount a gluster volume, you don't need to provide any client
> graph. You just need to give the server hostname and the volume name; the
> client automatically fetches the graph and starts working with it (so it
> does the replication based on the graph generated by the gluster management
> daemon).
>
>
> Now let me briefly describe the procedure for creating a 3-way replicated
> volume
>
> 1) gluster volume create <volname> replica 3 <hostname>:/<brick_path1>
> <hostname>:/<brick_path2> <hostname>:/<brick_path3>
>
>      Note: if you give 3 more bricks, it will create a 2-way distributed,
> 3-way replicated volume (you can increase the distribution by adding bricks
> in multiples of 3).
>
>      this step will automatically create the configuration file in
> /var/lib/glusterd/vols/<volname>/trusted-<volname>.tcp-fuse.vol
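>
>      If you want to confirm what was generated, you can look at the
> replicate section of that file, for example (the volume name here is
> illustrative):
>
>          grep -A 3 'cluster/replicate' /var/lib/glusterd/vols/gv0/trusted-gv0.tcp-fuse.vol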
>
> 2) Now start the volume using gluster volume start <volname>
>
> 3) FUSE-mount the volume on the client machine using the command mount -t
> glusterfs <server_hostname>:/<volname> /<mnt>
>
>     This will automatically fetch the configuration file and do the
> replication. You don't need to do anything else.
>
>
> Let me know if this helps.
>
>
> Regards
>
> Rafi KC
>
>
> On 02/24/2017 05:13 PM, Joseph Lorenzini wrote:
>
> Hi Mohammed,
>
> It's not a bug per se; it's a configuration and documentation issue. I
> searched the gluster documentation pretty thoroughly and did not find
> anything that discussed 1) the client's call graph and 2) how to
> specifically configure a native glusterfs client to specify that call
> graph so that replication happens across multiple bricks. If it's there,
> then there's a pretty severe organizational issue in the documentation (I
> am pretty sure I ended up reading almost every page, actually).
>
> As a result, because I was new to gluster, my initial setup really
> confused me. I would follow the instructions as documented in the official
> gluster docs (execute the mount command), write data to the mount... and
> then see it replicated to only a single brick. It was only after much
> furious googling that I managed to figure out that 1) I needed a client
> configuration file, which should be specified in /etc/fstab, and 2) the
> configuration block mentioned above was the key.
>
> I am actually planning on submitting a PR to the documentation to cover
> all this. To be clear, I am sure this is obvious to a seasoned gluster user
> -- but it is not at all obvious to someone who is new to gluster such as
> myself.
>
> So I am an operations engineer. I like reproducible deployments, and I like
> monitoring to alert me when something is wrong. Due to human error or a bug
> in our deployment code, it's possible that something like not setting the
> client call graph properly could happen. I wanted a way to detect this
> problem so that if it does happen, it can be remediated immediately.
>
> Your suggestion sounds promising. I shall definitely look into that,
> though that might be useful information to surface in a CLI command in a
> future gluster release, IMHO.
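>
> As a rough sketch of what I have in mind (the mount point, volume name, and
> expected count below are placeholders for my environment), a periodic check
> could be as simple as:
>
>     #!/bin/sh
>     # Alert if the active client graph exposes fewer replica subvolumes
>     # than expected for this volume.
>     EXPECTED=3
>     ACTUAL=$(ls /mnt/gv0/.meta/graphs/active/gv0-replicate-0/subvolumes 2>/dev/null | wc -l)
>     if [ "$ACTUAL" -ne "$EXPECTED" ]; then
>         echo "CRITICAL: replica subvolume count is $ACTUAL, expected $EXPECTED"
>         exit 2
>     fi
>     echo "OK: replica subvolume count is $EXPECTED"
>     exit 0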
>
> Joe
>
>
>
> On Thu, Feb 23, 2017 at 11:51 PM, Mohammed Rafi K C <rkavunga at redhat.com>
> wrote:
>
>>
>>
>> On 02/23/2017 11:12 PM, Joseph Lorenzini wrote:
>>
>> Hi all,
>>
>> I have a simple replicated volume with a replica count of 3. To ensure
>> any file changes (create/delete/modify) are replicated to all bricks, I
>> have this setting in my client configuration.
>>
>> volume gv0-replicate-0
>>     type cluster/replicate
>>     subvolumes gv0-client-0 gv0-client-1 gv0-client-2
>> end-volume
>>
>> And that works as expected. My question is how one could detect if this
>> was not happening, which would pose a severe problem with data consistency
>> and replication. For example, those settings could be omitted from the
>> client config, and then the client would only write data to one brick and
>> all kinds of terrible things would start happening. I have not found a way
>> with the gluster volume CLI to detect when that kind of problem is
>> occurring. For example, gluster volume heal <volname> info does not detect
>> this problem.
>>
>> Is there any programmatic way to detect when this problem is occurring?
>>
>>
>> I couldn't understand how you would end up in this situation. There is
>> only one possibility (assuming there is no bug :) ): you changed the
>> client graph in a way that leaves only one subvolume for the replicate
>> xlator.
>>
>> The simple way to check that: there is an xlator called meta, which
>> exposes metadata information through the mount point, similar to the
>> Linux proc file system. So you can check the active graph through meta
>> and see the number of subvolumes for the replicate xlator.
>>
>> For example, the directory <mount point>/.meta/graphs/active/<volname>-replicate-0/subvolumes
>> will have an entry for each replica client, so in your case you should see
>> 3 directories.
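>>
>> For example (the mount point and volume name here are just examples):
>>
>>     ls /mnt/gv0/.meta/graphs/active/gv0-replicate-0/subvolumes
>>     # a healthy replica-3 client graph should show three entries here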
>>
>>
>> Let me know if this helps.
>>
>> Regards
>> Rafi KC
>>
>>
>> Thanks,
>> Joe
>>
>>
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>