[Gluster-Maintainers] [Gluster-devel] [Gluster-users] Don't allow data loss via add-brick (was Re: Add single server)
Atin Mukherjee
amukherj at redhat.com
Wed May 3 12:22:26 UTC 2017
On Wed, May 3, 2017 at 3:41 PM, Raghavendra Talur <rtalur at redhat.com> wrote:
> On Tue, May 2, 2017 at 8:46 PM, Nithya Balachandran <nbalacha at redhat.com>
> wrote:
> >
> >
> > On 2 May 2017 at 16:59, Shyam <srangana at redhat.com> wrote:
> >>
> >> Talur,
> >>
> >> Please wait for this fix before releasing 3.10.2.
> >>
> >> We will take in the change to either prevent add-brick in
> >> sharded+distributed volumes, or throw a warning and force the use of
> >> --force to execute this.
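For illustration, the guarded add-brick could look something like this
(hypothetical output and volume/brick names, not the final wording):

  # gluster volume add-brick testvol server5:/bricks/b1
  volume add-brick: failed: add-brick on a sharded volume is known to
  cause data corruption (bug 1447608). Use 'force' if you understand
  the risk.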
>
> Agreed, I have filed a bug and marked it as a blocker for 3.10.2.
> https://bugzilla.redhat.com/show_bug.cgi?id=1447608
>
>
> >>
> > IIUC, the problem is less the add-brick operation and more the
> > rebalance/fix-layout. It is those that need to be prevented (as someone
> > could trigger those without an add-brick).
>
> Yes, the problem seems to be with fix-layout/rebalance and not add-brick.
> However, depending on how users have arranged their dir structure, an
> add-brick without a fix-layout might be useless for them.
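To make the distinction concrete, these are the operations in question
(a sketch; "testvol" and the brick path are placeholders):

  gluster volume add-brick testvol server5:/bricks/b1   # safe by itself
  gluster volume rebalance testvol fix-layout start     # problematic on sharded volumes
  gluster volume rebalance testvol start                # does a fix-layout too

The latter two are where sharded files get corrupted, which is why the
guard has to cover rebalance/fix-layout and not just add-brick.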
>
> I also had a look at the code to see if I could make the cli/glusterd
> change myself. However, sharding is enabled just as an xlator option and
> is not tracked in glusterd_volinfo_t.
> If someone from the dht team could work with the glusterd team here, it
> would fix the issue faster.
>
> Action item on Nithya/Atin to assign bug 1447608 to someone. I will
> wait for the fix for 3.10.2.
>
The fix is up at https://review.gluster.org/#/c/17160/ . The only thing we
still need to decide (and are debating) is whether 'rebalance start force'
should bypass this validation. What do others think?
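For the list's benefit, the shape of the check (and of the force bypass
being debated) is roughly the following. This is a self-contained sketch,
not the actual patch: real glusterd would read the shard setting from the
volume's options dict, since (as Talur notes above) sharding has no field
in glusterd_volinfo_t, and the struct/function names here are
illustrative stubs.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for the relevant bits of a volume's state. */
struct volinfo_stub {
        const char *name;
        const char *shard_opt;  /* value of "features.shard": "on"/"off" */
};

static bool
shard_enabled(const struct volinfo_stub *vol)
{
        return vol->shard_opt && strcmp(vol->shard_opt, "on") == 0;
}

/* Return 0 to let the operation proceed, -1 to refuse it. */
static int
validate_rebalance(const struct volinfo_stub *vol, bool force)
{
        if (shard_enabled(vol) && !force) {
                fprintf(stderr,
                        "volume %s: rebalance/fix-layout on a sharded "
                        "volume is known to corrupt data (bug 1447608); "
                        "use 'force' to override at your own risk\n",
                        vol->name);
                return -1;
        }
        return 0;
}

int
main(void)
{
        struct volinfo_stub vol = {"testvol", "on"};

        /* Refused without force ... */
        printf("start:       %d\n", validate_rebalance(&vol, false));
        /* ... and whether this bypass should exist is the open question. */
        printf("start force: %d\n", validate_rebalance(&vol, true));
        return 0;
}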
> Thanks,
> Raghavendra Talur
>
> >
> > Nithya
> >>
> >> Let's get a bug going, and not wait for someone to report it in
> >> bugzilla, and also mark it as blocking the 3.10.2 release tracker bug.
> >>
> >> Thanks,
> >> Shyam
> >>
> >> On 05/02/2017 06:20 AM, Pranith Kumar Karampuri wrote:
> >>>
> >>>
> >>>
> >>> On Tue, May 2, 2017 at 9:16 AM, Pranith Kumar Karampuri
> >>> <pkarampu at redhat.com> wrote:
> >>>
> >>> Yeah it is a good idea. I asked him to raise a bug and we can move
> >>> forward with it.
> >>>
> >>>
> >>> +Raghavendra/Nithya who can help with the fix.
> >>>
> >>>
> >>>
> >>> On Mon, May 1, 2017 at 9:07 PM, Joe Julian
> >>> <joe at julianfamily.org> wrote:
> >>>
> >>>
> >>> On 04/30/2017 01:13 AM, lemonnierk at ulrar.net wrote:
> >>>
> >>> So I was a little bit lucky. If I had all the hardware parts,
> >>> probably I would have been fired after causing data loss by
> >>> using software marked as stable.
> >>>
> >>> Yes, we lost our data last year to this bug, and it wasn't a
> >>> test cluster.
> >>> We still hear about it from our clients to this day.
> >>>
> >>> It is known that this feature causes data loss, and there is
> >>> no mention or warning in the official docs.
> >>>
> >>> I was (I believe) the first one to run into the bug; it
> >>> happens, and I knew it was a risk when installing gluster.
> >>> But since then I haven't seen any warnings anywhere except
> >>> here, so I agree with you that it should be mentioned in big
> >>> bold letters on the site.
> >>>
> >>> It might even be worth adding a warning directly in the cli
> >>> when trying to add bricks while sharding is enabled, to make
> >>> sure no one destroys a whole cluster through a known bug.
> >>>
> >>>
> >>> I absolutely agree - or, just disable the ability to add-brick
> >>> with sharding enabled. Losing data should never be allowed.
> >>>
> >>> --
> >>> Pranith
> >>>
> >>> --
> >>> Pranith
> >>>
> >
> >
>