[Gluster-users] [Gluster-devel] Feature Request: Lock Volume Settings
Joe Julian
joe at julianfamily.org
Mon Nov 14 18:28:36 UTC 2016
IMHO, if a command will result in data loss, fail it. Period.
It should never be OK for a filesystem to lose data. If someone wanted to do that with ext or xfs, they would have to format.
On November 14, 2016 8:15:16 AM PST, Ravishankar N <ravishankar at redhat.com> wrote:
>On 11/14/2016 05:57 PM, Atin Mukherjee wrote:
>> This would be a straight forward thing to implement at glusterd,
>> anyone up for it? If not, we will take this into consideration for
>> GlusterD 2.0.
>>
>> On Mon, Nov 14, 2016 at 10:28 AM, Mohammed Rafi K C
>> <rkavunga at redhat.com> wrote:
>>
>> I think it is worth implementing a lock option.
>>
>> +1
>>
>>
>> Rafi KC
>>
>>
>> On 11/14/2016 06:12 AM, David Gossage wrote:
>>> On Sun, Nov 13, 2016 at 6:35 PM, Lindsay Mathieson
>>> <lindsay.mathieson at gmail.com> wrote:
>>>
>>> As discussed recently, it is way too easy to make destructive
>>> changes to a volume, e.g. change the shard size. This can corrupt
>>> the data with no warning, and it's all too easy to make a typo or
>>> access the wrong volume when doing 3 am maintenance ...
>>>
>>> So I'd like to suggest something like the following:
>>>
>>> gluster volume lock <volname>
>>>
>
>
>I don't think this is a good idea. It would make more sense to give out
>verbose warnings in the individual commands themselves. A volume lock
>doesn't prevent users from unlocking and still inadvertently running
>those commands without knowing the implications. The remove-brick set
>of commands provides verbose messages nicely:
>
>$ gluster v remove-brick testvol 127.0.0.2:/home/ravi/bricks/brick{4..6} commit
>Removing brick(s) can result in data loss. Do you want to Continue?
>(y/n) y
>volume remove-brick commit: success
>Check the removed bricks to ensure all files are migrated.
>If files with data are found on the brick path, copy them via a gluster
>mount point before re-purposing the removed brick.
>
>My 2 cents,
>Ravi
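
For concreteness, the warning-first approach could be bolted onto the
existing CLI with a thin wrapper script. This is a hypothetical sketch,
not an existing gluster feature: the wrapper name and prompt text are
invented, and only "gluster volume set" itself is the real command:

    #!/bin/sh
    # gluster-set-confirm: hypothetical wrapper, not part of gluster.
    # Forces an explicit y/n confirmation before forwarding a
    # "volume set" change, mirroring the remove-brick prompt above.
    # Usage: gluster-set-confirm <volname> <option> <value>
    vol="$1"; opt="$2"; val="$3"
    printf 'Changing %s on volume %s can be destructive. Continue? (y/n) ' "$opt" "$vol"
    read -r answer
    [ "$answer" = "y" ] || { echo "Change aborted."; exit 1; }
    exec gluster volume set "$vol" "$opt" "$val"

The same pattern would extend to any other destructive operation, which
is essentially what the remove-brick commit path already does in-process.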
>
>
>>>
>>> Setting this would cause all of the following to fail:
>>> - setting changes
>>> - add bricks
>>> - remove bricks
>>> - delete volume
>>>
>>> gluster volume unlock <volname>
>>>
>>> would allow all changes to be made.
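
(For concreteness, a session with the proposed commands might look like
the transcript below. This is purely hypothetical: neither "volume lock"
nor "volume unlock" exists in the current CLI, and the error messages
are invented. "features.shard-block-size" is the real option whose
accidental change motivated this thread.)

    $ gluster volume lock testvol
    volume lock: success

    $ gluster volume set testvol features.shard-block-size 128MB
    volume set: failed: volume testvol is locked against configuration changes

    $ gluster volume add-brick testvol 127.0.0.2:/home/ravi/bricks/brick7
    volume add-brick: failed: volume testvol is locked against configuration changes

    $ gluster volume unlock testvol
    volume unlock: success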
>>>
>>> Just a thought, open to alternate suggestions.
>>>
>>> Thanks
>>>
>>> +1, sounds handy
>>>
>>> --
>>> Lindsay
>>
>> --
>> ~ Atin (atinm)
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.