[Gluster-users] Expecting to achieve atomic read in a FUSE mount of Gluster
Strahil Nikolov
hunter86_bg at yahoo.com
Sat May 30 15:04:46 UTC 2020
Hi Naranderan,
Actually, as you have a Distributed-Replicated volume (notice the distributed part here), the behaviour is expected.
Short story:
In order to avoid that, you should check whether you can scale up one replica set (increase the number of disks on the RAID controller) and, if that is possible, remove the second replica set (bricks 4-6).
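If you go that route, the removal itself would look roughly like this (untested sketch; the volume name "myvol" and the brick paths are placeholders, so adapt them to your setup; note that removing a whole replica set keeps the replica count at 3, so no "replica N" argument is needed):

import subprocess
import time

VOL = "myvol"                                            # placeholder volume name
BRICKS = [f"server{i}:/data/brick1" for i in (4, 5, 6)]  # placeholder bricks

# Start draining the second replica set; data migrates to the remaining one.
subprocess.run(["gluster", "volume", "remove-brick", VOL, *BRICKS, "start"],
               check=True)

# Wait until the migration reports completion before committing.
while True:
    status = subprocess.run(
        ["gluster", "volume", "remove-brick", VOL, *BRICKS, "status"],
        capture_output=True, text=True, check=True).stdout
    if "completed" in status:
        break
    time.sleep(30)

# Make the removal permanent.
subprocess.run(["gluster", "volume", "remove-brick", VOL, *BRICKS, "commit"],
               check=True)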
I think a workaround is also possible: create a script (on the main clients) that checks whether both replica sets are available and, in case of failure, unmounts the FUSE mount.
Of course, the script should also do the opposite: once both replica sets are available again (at least 2 of 3 bricks each), mount it back.
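Something along these lines could serve as a starting point (untested sketch; the hostnames, volume name and mount point are placeholders, and checking TCP reachability of the glusterd port 24007 is only a cheap liveness proxy, not a real brick health check):

import socket
import subprocess

MOUNTPOINT = "/mnt/gluster"    # placeholder mount point
VOLUME = "server1:/myvol"      # placeholder "host:/volname" mount source
REPLICA_SETS = [               # placeholder brick hosts for the 2x3 layout
    ["server1", "server2", "server3"],
    ["server4", "server5", "server6"],
]

def host_up(host, port=24007, timeout=3):
    # Cheap liveness proxy: can we reach glusterd's management port?
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def quorum_ok():
    # Client quorum 'auto' on replica 3 needs at least 2 of 3 bricks up
    # in *every* replica set, otherwise part of the namespace disappears.
    return all(sum(host_up(h) for h in rs) >= 2 for rs in REPLICA_SETS)

def mounted():
    with open("/proc/mounts") as f:
        return any(line.split()[1] == MOUNTPOINT for line in f)

if quorum_ok() and not mounted():
    subprocess.run(["mount", "-t", "glusterfs", VOLUME, MOUNTPOINT], check=True)
elif not quorum_ok() and mounted():
    # Lazy unmount, so the sub-clients fail fast instead of rsync-ing
    # a partial view of the namespace.
    subprocess.run(["umount", "-l", MOUNTPOINT], check=True)

Run it from cron (as root) on each main client, so rsync fails loudly instead of silently copying half the tree.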
Now the long story:
Gluster already has that functionality in Dispersed volumes (I am not talking about Distributed-Dispersed).
Dispersed volumes can be:
6 bricks with redundancy level 2 (4+2)
11 bricks with redundancy level 3 (8+3)
12 bricks with redundancy level 4 (8+4)
In your case, with redundancy level 2, the volume would stay available without any disruption even if 2 bricks fail. Sadly, there is no way to convert a replicated volume to a dispersed one, and depending on your workload, a dispersed volume might not be suitable.
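For reference, a 4+2 dispersed volume would be created roughly like this (again a sketch; the hostnames, brick paths and volume name are placeholders):

import subprocess

BRICKS = [f"server{i}:/data/brick1" for i in range(1, 7)]  # placeholders

# 6 bricks with redundancy 2 => 4 data + 2 redundancy; any 2 bricks may fail.
subprocess.run(["gluster", "volume", "create", "dispvol",
                "disperse", "6", "redundancy", "2", *BRICKS],
               check=True)
subprocess.run(["gluster", "volume", "start", "dispvol"], check=True)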
Best Regards,
Strahil Nikolov
On Saturday, 30 May 2020 at 13:33:34 GMT+3, Naranderan Ramakrishnan <rnaranbe at gmail.com> wrote:
Dear developers/users,
Please suggest a solution for this.
Regards,
Naranderan R
On Fri, 22 May 2020 at 21:46, Naranderan Ramakrishnan <rnaranbe at gmail.com> wrote:
> Dear team,
> We are using Gluster (v7.0) as our primary data storage system and recently faced an issue. Please find the details below.
>
> Simple Background:
> A 2x3 (DxR) volume is mounted on a few main-clients via FUSE. From these main-clients, many sub-clients consume the required subset of data (a folder) via rsync. These sub-clients also produce data to the main-clients via rsync, which is then propagated to Gluster. In simplified form,
> Gluster (Brick1, Brick2, ..., Brick6) --> Main-clients (FUSE mount of Gluster) --> Sub-clients (rsync from/to main-clients)
>
> Issue:
> Due to some network issues, 2 bricks belonging to the same replica sub-volume (say replica1) became unreachable from a main-client. This triggered 'client quorum is not met' (the client quorum policy is 'auto', under which the quorum-count is 2), so replica1 became unavailable for this main-client.
> As a result, dirs & files in replica1 were not listed, while replica2 dirs & files were still listed at the mount-point of the main-client. The sub-clients were not aware of these background issues; they read the listed files (of replica2 only), which resulted in undesired and unintentional behaviour.
>
> Expectation:
> It is totally unexpected that only a subset of dirs & files is available at a mount-point. A main-client should list either all the dirs & files or nothing. This is critical to the nature of our application: it prefers consistency and atomicity to HA.
> It would be much better if there were an option to enable atomic reads even during these kinds of unexpected issues. Please let us know how we can achieve this.
>
> Thanks in advance.
>
> Regards,
> Naranderan R
>