[Gluster-devel] Regarding Glusterfs file locking

Maaz Sheikh maaz.sheikh at impetus.com
Fri Feb 3 10:39:12 UTC 2023


Hi,
Greetings of the day,


We checked the GlusterFS documentation for two-way replication across three storage devices (nodes), but did not find any straightforward information for this scenario. Please suggest a solution.



As per the documentation, three storage devices (nodes) would use three-way replication, which does not match our scaling requirement.


Any help is highly appreciated.

Thanks,
Maaz Sheikh
________________________________
From: Strahil Nikolov <hunter86_bg at yahoo.com>
Sent: Friday, February 3, 2023 4:15 AM
To: gluster-devel at gluster.org <gluster-devel at gluster.org>; gluster-users at gluster.org <gluster-users at gluster.org>; Maaz Sheikh <maaz.sheikh at impetus.com>
Cc: Rahul Kumar Sharma <rrsharma at impetus.com>; Sweta Dwivedi <sweta.dwivedi at impetus.com>; Pushpendra Garg <pushpendra.garg at impetus.com>
Subject: Re: [Gluster-devel] Regarding Glusterfs file locking



As far as I remember, there are only two types of file locking in Linux:
- Advisory
- Mandatory

In order to use mandatory locking, you need to pass the "mand" mount option to the FUSE client (mount -o mand,<my other mount options> ...) and run chmod g+s,g-x /<FUSE PATH>/<Target file> on the target file.
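For illustration, a minimal sketch (the mount point and file name below are assumptions, not taken from this thread) of a process on node1 taking an exclusive POSIX lock on a file under the FUSE mount. With the default advisory semantics the lock only affects other processes that also request it; the "mand" mount option plus the chmod above is what asks the kernel to enforce it against ordinary read/write as well:

import fcntl
import os

# Assumptions: the volume is FUSE-mounted at /mnt/gluster (with "mand"
# in the mount options for the mandatory case, as suggested above), the
# shared file already exists, and for mandatory enforcement it has had
#   chmod g+s,g-x /mnt/gluster/shared.dat
PATH = "/mnt/gluster/shared.dat"

fd = os.open(PATH, os.O_RDWR)
try:
    # Exclusive POSIX record lock over the whole file; blocks until granted.
    fcntl.lockf(fd, fcntl.LOCK_EX)
    os.write(fd, b"node1 is writing under the lock\n")
finally:
    fcntl.lockf(fd, fcntl.LOCK_UN)
    os.close(fd)

Whether the mandatory variant is actually enforced over a FUSE/GlusterFS mount depends on the client kernel and mount support, so treat this as something to verify on your setup rather than a guarantee.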


Best Regards,
Strahil Nikolov
On Wednesday, 1 February 2023 at 13:22:59 GMT+2, Maaz Sheikh <maaz.sheikh at impetus.com> wrote:


Team, please let us know if you have any feedback.
________________________________
From: Maaz Sheikh
Sent: Wednesday, January 25, 2023 4:51 PM
To: gluster-devel at gluster.org <gluster-devel at gluster.org>; gluster-users at gluster.org <gluster-users at gluster.org>
Subject: Regarding Glusterfs file locking

Hi,
Greetings of the day,

Our configuration is like:
We have installed both the GlusterFS server and the GlusterFS client on node1 as well as node2, and we have mounted the node1 volume on both nodes.

Our use case is :
From GlusterFS node1, we have to open a file (which is shared between both nodes), take an exclusive lock on it, and read/write it.
From GlusterFS node2, we should then not be able to read/write that file.

Now the problem we are facing is:
From node1, we are able to take an exclusive lock, and the program has started writing to the shared file.
From node2, we are still able to read and write that file, which should not happen because node1 has already acquired the lock on it.

We therefore request that you provide us a solution as soon as possible.
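For context, with plain advisory locks the node1 lock only takes effect against processes that also ask for the same lock. A minimal sketch (the path is an assumption) of the check the node2 side would have to perform for the conflict to show up:

import errno
import fcntl
import os

PATH = "/mnt/gluster/shared.dat"  # assumed mount point and file name

fd = os.open(PATH, os.O_RDWR)
try:
    # Try to grab the same exclusive POSIX lock node1 holds, without blocking.
    fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
except OSError as err:
    if err.errno in (errno.EACCES, errno.EAGAIN):
        print("file is locked by another node; not reading/writing it")
    else:
        raise
else:
    # Lock granted, so node1 no longer holds it; safe to read/write here.
    fcntl.lockf(fd, fcntl.LOCK_UN)
finally:
    os.close(fd)

If mandatory enforcement (the "mand" mount option described earlier in the thread) works on the mount, plain read/write calls from node2 should also be blocked; with advisory locks alone, a cooperative check like the one above is required.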

Thanks,
Maaz Sheikh
Associate Software Engineer
Impetus Technologies India

-------

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

