[Gluster-users] cannot create a new volume with a brick that used to be part of a deleted volume?

Anand Avati anand.avati at gmail.com
Wed Sep 19 20:05:32 UTC 2012


The current behavior is intentional. There have been far too many
instances where users delete a volume but fail to realize that the
brick directories still contain their data (and all the associated
book-keeping metadata, such as pending self-heal changelogs, partial
DHT hash ranges, etc.), and are then unhappy when a new volume created
on top of those stale brick directories starts misbehaving. This
typically happens when someone is trying out gluster for the first time
(when volumes are created and deleted frequently while getting the hang
of things), and it makes for an ugly first experience.
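
If you're curious, you can see what gets left behind on a "deleted"
brick with something along these lines (untested here, run as root,
using the /mnt/sdb1 path from the thread below):

  getfattr -d -m . -e hex /mnt/sdb1   # dumps the stale trusted.* xattrs
  ls -a /mnt/sdb1                     # the hidden .glusterfs dir is still there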

For all you know, you might well have ended up in a situation where a
new volume was created with all the staleness from the previous volume
(including the hidden .glusterfs directory) silently carried over,
causing unintended behavior. The way I see it, your email is a positive
result of the stale brick check having served its purpose :-)

Avati

On Tue, Sep 18, 2012 at 11:29 AM, Lonni J Friedman <netllama at gmail.com> wrote:

> Hrmm, ok.  Shouldn't 'gluster volume delete ...' be smart enough to
> clean this up so that I don't have to do it manually?  Or,
> alternatively, shouldn't 'gluster volume create ...' be able to figure
> out whether the path to a brick is really in use?
>
> As things stand now, the process is rather hacky when I have to issue
> the 'gluster volume delete ...' command, then manually clean up
> afterwards.  Hopefully this is something that will be addressed in a
> future release?
>
> thanks
>
> On Tue, Sep 18, 2012 at 11:26 AM, Kaleb Keithley <kkeithle at redhat.com>
> wrote:
> >
> > There are xattrs on the top-level directory of the old brick that
> > gluster is detecting; that is what is causing this.
> >
> > I personally always create my bricks in a subdirectory. If you do
> > that, you can simply remove and recreate that subdirectory when you
> > want to delete a gluster volume.
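> >
> > Roughly (untested; the 'brick' subdirectory name here is just an
> > example, and the paths and addresses are the ones from this thread):
> >
> >   mkdir /mnt/sdb1/brick                     # on both nodes
> >   gluster volume create gv0 replica 2 transport tcp \
> >       10.31.99.165:/mnt/sdb1/brick 10.31.99.166:/mnt/sdb1/brick
> >   # ...later, to throw the volume away...
> >   gluster volume stop gv0
> >   gluster volume delete gv0
> >   rm -rf /mnt/sdb1/brick && mkdir /mnt/sdb1/brick   # on both nodes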
> >
> > You can clear the xattrs, or "nuke it from orbit" with mkfs on the
> > brick's underlying block device.
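> >
> > If memory serves, clearing them looks something like this, run as
> > root on each node against the brick path from your mail (double-check
> > the path before removing anything):
> >
> >   setfattr -x trusted.glusterfs.volume-id /mnt/sdb1   # stale volume id
> >   setfattr -x trusted.gfid /mnt/sdb1                  # stale root gfid
> >   rm -rf /mnt/sdb1/.glusterfs                         # leftover metadata
> >
> > After that, 'gluster volume create' on the same path should go through.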
> >
> >
> > ----- Original Message -----
> > From: "Lonni J Friedman" <netllama at gmail.com>
> > To: gluster-users at gluster.org
> > Sent: Tuesday, September 18, 2012 2:03:35 PM
> > Subject: [Gluster-users] cannot create a new volume with a brick that
> > used to be part of a deleted volume?
> >
> > Greetings,
> > I'm running v3.3.0 on Fedora16-x86_64.  I used to have a replicated
> > volume on two bricks.  This morning I deleted it successfully:
> > ########
> > [root@farm-ljf0 ~]# gluster volume stop gv0
> > Stopping volume will make its data inaccessible. Do you want to
> > continue? (y/n) y
> > Stopping volume gv0 has been successful
> > [root@farm-ljf0 ~]# gluster volume delete gv0
> > Deleting volume will erase all information about the volume. Do you
> > want to continue? (y/n) y
> > Deleting volume gv0 has been successful
> > [root@farm-ljf0 ~]# gluster volume info all
> > No volumes present
> > ########
> >
> > I then attempted to create a new volume using the same bricks that
> > were part of the (now deleted) volume, but it keeps failing,
> > claiming that the brick is already part of a volume:
> > ########
> > [root@farm-ljf1 ~]# gluster volume create gv0 rep 2 transport tcp
> > 10.31.99.165:/mnt/sdb1 10.31.99.166:/mnt/sdb1
> > /mnt/sdb1 or a prefix of it is already part of a volume
> > [root@farm-ljf1 ~]# gluster volume info all
> > No volumes present
> > ########
> >
> > Note farm-ljf0 is 10.31.99.165 and farm-ljf1 is 10.31.99.166.  I also
> > tried restarting glusterd (and glusterfsd) hoping that might clear
> > things up, but it had no impact.
> >
> > How can /mnt/sdb1 be part of a volume when there are no volumes present?
> > Is this a bug, or am I just missing something obvious?
> >
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>

