[Gluster-users] [Gluster-devel] Feature Request: Lock Volume Settings
Ravishankar N
ravishankar at redhat.com
Mon Nov 14 16:15:16 UTC 2016
On 11/14/2016 05:57 PM, Atin Mukherjee wrote:
> This would be a straightforward thing to implement at glusterd.
> Anyone up for it? If not, we will take this into consideration for
> GlusterD 2.0.
>
> On Mon, Nov 14, 2016 at 10:28 AM, Mohammed Rafi K C
> <rkavunga at redhat.com> wrote:
>
> I think it is worth implementing a lock option.
>
> +1
>
>
> Rafi KC
>
>
> On 11/14/2016 06:12 AM, David Gossage wrote:
>> On Sun, Nov 13, 2016 at 6:35 PM, Lindsay Mathieson
>> <lindsay.mathieson at gmail.com> wrote:
>>
>> As discussed recently, it is way too easy to make destructive
>> changes to a volume, e.g. changing the shard size. This can corrupt
>> the data with no warnings, and it's all too easy to make a typo or
>> access the wrong volume when doing 3am maintenance ...
>>
>> So I'd like to suggest something like the following:
>>
>> gluster volume lock <volname>
>>
I don't think this is a good idea. It would make more sense to give out
verbose warnings in the individual commands themselves. A volume lock
doesn't prevent users from unlocking and still inadvertently running
those commands without knowing the implications. The remove-brick set of
commands already provides verbose messages nicely:
$ gluster v remove-brick testvol 127.0.0.2:/home/ravi/bricks/brick{4..6} commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster
mount point before re-purposing the removed brick.
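
If "volume set" guarded dangerous options the same way, a shard size
change might look something like this (the command and option below are
real, but the confirmation prompt is hypothetical; no such warning
exists today):

$ gluster volume set testvol features.shard-block-size 64MB
Changing the shard block size on a volume with existing data can corrupt
sharded files. Do you want to Continue? (y/n) n
volume set: aborted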
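
With the proposed lock, by contrast, a sleepy admin just runs one extra
command first. A hypothetical 3am session, assuming the proposed
lock/unlock commands existed:

$ gluster volume set testvol features.shard-block-size 64MB
volume set: failed: volume testvol is locked
$ gluster volume unlock testvol
volume unlock: success
$ gluster volume set testvol features.shard-block-size 64MB
volume set: success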
My 2 cents,
Ravi
>>
>> Setting this would cause all of the following to fail:
>> - setting changes
>> - add bricks
>> - remove bricks
>> - delete volume
>>
>> gluster volume unlock <volname>
>>
>> would allow all changes to be made.
>>
>> Just a thought, open to alternate suggestions.
>>
>> Thanks
>>
>> +1, sounds handy
>>
>> --
>> Lindsay
> --
> ~ Atin (atinm)
>