[Gluster-devel] Read-only option for a replicated (replication for fail-over) Gluster volume
Mackay, Michael
mackay at progeny.net
Fri Apr 7 11:57:54 UTC 2017
I’ve updated my patch to work for glusterfs 3.10.0. I thought that targeting the latest stable baseline would be best.
Could I ask for a starting point for submitting the change? I see a place to submit changes on git, but if you can point me to where the process begins, I believe I can take it from there. I want to make sure I'm following your process.
Thanks
Mike
From: Amar Tumballi [mailto:atumball at redhat.com]
Sent: Thursday, March 30, 2017 1:20 AM
To: Mackay, Michael
Cc: gluster-devel at gluster.org
Subject: (nwl) Re: [Gluster-devel] Read-only option for a replicated (replication for fail-over) Gluster volume
On Wed, Mar 22, 2017 at 2:31 AM, Mackay, Michael <mackay at progeny.net> wrote:
At the risk of repeating myself, the POSIX file system underpinnings are not a concern – that part is understood and handled.
To be clear, I'm also not asking for help to solve this problem. SELinux is not an option. To summarize the point of my post:
I’ve gotten what I want to work. I have a small list of code changes to make it work. I wish to find out if the Gluster community is interested in the changes.
We are happy to take the code changes in. Please submit the changes.
Regards,
Amar
Thanks
Mike
From: Dustin Black [mailto:dblack at redhat.com]
Sent: Tuesday, March 21, 2017 12:12 PM
To: Mackay, Michael
Cc: Saravanakumar Arumugam; Atin Mukherjee; gluster-devel at gluster.org
Subject: (nwl) Re: [Gluster-devel] Read-only option for a replicated (replication for fail-over) Gluster volume
I don't see how you could accomplish what you're describing purely through the gluster code. The bricks are mounted on the servers as standard local POSIX file systems, so there is always the chance that something could change the data outside of Gluster's control.
This all seems overly restrictive to me, given that your storage system should be locked down from an administrative perspective as a best practice in the first place, limiting the risk of any brick-side corruption or, in your case, even writes/changes.

But assuming that you have a compliance or other requirement that is forcing this configuration, why not simply mount the brick's local file system as read-only, and then also enable the existing Gluster read-only translator, providing two layers of protection against any writes? Of course this would also restrict any metadata actions on the Gluster side, which could be problematic for something like bitrot detection and could result in a lot of log noise, I'm guessing. And administratively someone could still get in and remount the bricks as r/w, so if you _really_ _really_ need it locked down you may also need SELinux.
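A minimal sketch of those two layers, assuming placeholder device, brick, and volume names (these require a live Gluster setup and root access):

```shell
# Layer 1: mount the brick's local filesystem read-only
# (/dev/vg0/brick1 and /bricks/brick1 are placeholder names)
mount -o ro /dev/vg0/brick1 /bricks/brick1

# Layer 2: enable the server-side read-only translator on the volume
# ("myvol" is a placeholder volume name)
gluster volume set myvol features.read-only on
```

Note the caveat above still applies: an administrator could `mount -o remount,rw` the brick, so this is defense in depth, not an absolute guarantee.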
Dustin Black, RHCA
Senior Architect, Software-Defined Storage
Red Hat, Inc.
On Tue, Mar 21, 2017 at 10:52 AM, Mackay, Michael <mackay at progeny.net> wrote:
Thanks for the help and advice so far. It’s difficult at times to describe what the use case is, so I’ll try here.
We need to make sure that no one can write to the physical volume in any way. We want to be sure that it can't be corrupted. We know from working with Gluster that we shouldn't access the brick directly, and that's part of the point. We want to make it impossible to write to the volume or the brick under any circumstances. At the same time, we like Gluster's recovery capability, so if one of two copies of the data becomes unavailable (due to failure of the host server or maintenance), the other copy will still be up and available.
Essentially, the filesystem under the brick is a physically read-only disk that is set up at integration time and delivered read-only. We won’t want to change it after delivery, and (in this case for security) we want it to be immutable so we know we can rely on that data to be the same always, no matter what.
All users will get data from the Gluster mount and use it, but from the beginning it would be read-only.
A new delivery might have new data, or changed data, but that's the only time it will change.
I want to repeat as well that we’ve identified changes in the code baseline that allow this to work, if interested.
I hope that provides the information you were looking for.
Mike
From: Saravanakumar Arumugam [mailto:sarumuga at redhat.com]
Sent: Tuesday, March 21, 2017 10:18 AM
To: Mackay, Michael; Atin Mukherjee
Cc: gluster-devel at gluster.org
Subject: Re: [Gluster-devel] Read-only option for a replicated (replication for fail-over) Gluster volume
On 03/21/2017 07:33 PM, Mackay, Michael wrote:
“read-only xlator is loaded at gluster server (brick) stack. so once the volume is in place, you'd need to enable read-only option using volume set and then you should be able to mount the volume which would provide you the read-only access.”
OK, so fair enough, but is the physical volume on which the brick resides allowed to be on a r/o filesystem?
Again, it’s not just whether Gluster considers the volume to be read-only to clients, but whether the gluster brick and its underlying medium can be read-only.
No, only Gluster considers it a read-only volume.
If you go and access the gluster brick directly, you will be able to write to it.
In general, you should avoid accessing the bricks directly.
Do you mean creating a gluster volume that is read-only from the very beginning?
Can you tell us about the use case?
As I understand it, the user writes some data and wishes all of it to be read-only, so the user sets the volume to read-only.
From: Atin Mukherjee [mailto:amukherj at redhat.com]
Sent: Tuesday, March 21, 2017 9:42 AM
To: Mackay, Michael
Cc: Samikshan Bairagya; gluster-devel at gluster.org
Subject: (nwl) Re: [Gluster-devel] Read-only option for a replicated (replication for fail-over) Gluster volume
On Tue, Mar 21, 2017 at 6:06 PM, Mackay, Michael <mackay at progeny.net> wrote:
Samikshan,
Thanks for your suggestion.
From what I understand, the read-only feature (which I had seen and researched) is a client option for mounting the filesystem. Unfortunately, we need the filesystem itself to be set up read-only, so that no one can modify it - in other words, we need to make sure that no client can mount it read/write. So, it has to be set up and started as r/o, and then the clients have no choice but to get a r/o copy.
The read-only xlator is loaded in the gluster server (brick) stack. So once the volume is in place, you'd need to enable the read-only option using volume set, and then you should be able to mount the volume, which would give you read-only access.
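As a sketch, that sequence might look like the following for a two-way replica; all host, brick, volume, and mount-point names here are placeholders:

```shell
# Create and start a 2-way replicated volume across two servers
gluster volume create myvol replica 2 server1:/bricks/b1 server2:/bricks/b1
gluster volume start myvol

# Enable the server-side read-only translator
gluster volume set myvol features.read-only on

# From here on, client mounts see a read-only volume regardless of
# the mount flags the client requests
mount -t glusterfs server1:/myvol /mnt/myvol
```

Because the translator sits in the brick stack, the restriction is enforced server-side for every client, which matches the requirement that no client can mount the volume read/write.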
Thanks
Mike
-----Original Message-----
From: Samikshan Bairagya [mailto:sbairagy at redhat.com]
Sent: Monday, March 20, 2017 3:52 PM
To: Mackay, Michael
Cc: gluster-devel at gluster.org
Subject: Re: [Gluster-devel] Read-only option for a replicated (replication for fail-over) Gluster volume
On 03/21/2017 12:38 AM, Mackay, Michael wrote:
> Gluster folks:
>
> Our group has a need for a distributed filesystem, like Gluster, that can be mounted by clients as read-only. We wish to have the ability to seamlessly switch over to an alternate source of this read-only data if the default source fails.
> The reason for this ROFS is so that every client can get access to applications and data that we want to ensure stay the same (unmodifiable) for an entire system delivery life cycle. To date we've used read-only NFS mounts reasonably well, but their failover performance is not at all great.
>
> We also don't want a "WORM" sort of arrangement - we want to prevent any and all writes to the volume once it's up and shared.
>
> So, under the "how hard could it be" mantra, we took version 3.7.15 and poked around a bit until we got it to do just what we want. It was a minor mod to 'xlators/features/index/src/index.c' and 'xlators/storage/posix/src/posix-helpers.c', along with a "force" option when doing the volume start command.
>
> We would happily share the specific changes, and they seem to fit in the 3.10.0 code base too; the question for the group is, would such a capability be of interest to the Gluster baseline? Possibly a precursor question (since I don't have much experience in gluster-devel at all, so please forgive my approach if it's wrong) is, to whom should I pose this question, if it's not to this group?
>
> Thanks for your time and I'd be happy to provide any further information.
>
Hi Mike,
Have you checked the 'features.read-only' volume option? Apparently you can set it to on/off depending on whether you want your volume to be read-only or not. By default it is set to 'off'. The following would make your volume read-only for all clients accessing it:
# gluster volume set <VOLNAME> features.read-only on
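To confirm the option took effect afterwards, the reconfigured options appear in the volume info output:

```shell
# Shows "features.read-only: on" under "Options Reconfigured"
gluster volume info <VOLNAME>
```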
Hope that helped.
~ Samikshan
_______________________________________________
Gluster-devel mailing list
Gluster-devel at gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel
--
~ Atin (atinm)
--
Amar Tumballi (amarts)