[Gluster-devel] Regarding volume store cleanup at the time of 'volume delete'
Niels de Vos
ndevos at redhat.com
Thu Feb 14 08:38:24 UTC 2013
On Wed, Feb 13, 2013 at 09:47:20PM -0500, Krutika Dhananjay wrote:
> How about "marking" a volume as deleted by setting an
> extended attribute on the /var/lib/glusterd/vols/<volname>
> directory, effectively making glusterd perform the following actions for deleting a volume:
>
> a. Setting the extended attribute on the volume directory;
> b. Cleaning up the in-memory volume metadata; and
> c. Cleaning up the volume store.
>
> Of course, this means that glusterd must check for the presence/absence of this extended
> attribute on the volume directories during initialisation, before reconstructing the in-memory volume info.
> This way, even if step (b) fails, glusterd restart won't fail.
>
> But then, this change has the following two problems:
> 1. the need to have support for extended attributes on "/var"; and
> 2. the name of a deleted volume cannot be re-used to create another volume if step b fails.
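
[Editor's note: the marking in step (a) could be sketched roughly as below. This is a minimal Python illustration, not glusterd code (glusterd is written in C); the attribute name and function names are hypothetical. A filesystem without xattr support on "/var" (problem 1) would surface as ENOTSUP here.]

```python
import errno
import os

# Hypothetical attribute name; anything in the user. namespace works
# for illustration.
DELETED_XATTR = "user.glusterd.volume-deleted"

def mark_volume_deleted(voldir):
    """Step (a): persistently mark the volume directory as deleted.

    Returns False when the filesystem backing the directory does not
    support extended attributes (problem 1 above)."""
    try:
        os.setxattr(voldir, DELETED_XATTR, b"1")
        return True
    except OSError as err:
        if err.errno in (errno.ENOTSUP, errno.EOPNOTSUPP):
            return False
        raise

def is_marked_deleted(voldir):
    """Checked during glusterd initialisation, before reconstructing
    the in-memory volume info for a directory."""
    try:
        return DELETED_XATTR in os.listxattr(voldir)
    except OSError:
        return False
```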
>
> OR
>
> How about replacing step (a) above with the following:
>
> a2. Renaming the volume directory to carry a ".deleted" extension;
This sounds good to me. Maybe you can use .deleted~1 and so on for
subsequent deletes of a volume with the same name (similar to 'cp
--backup').
Cheers,
Niels
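
[Editor's note: the rename in step (a2), with Niels's ".deleted~1" suggestion for repeated deletes, could be sketched as below. This is an illustrative Python sketch; the function names are hypothetical, and os.rename() is atomic here because source and destination stay inside the same vols/ directory.]

```python
import os

def deleted_name(vols_dir, volname):
    """Pick '<volname>.deleted', or '<volname>.deleted~1' and so on
    when earlier deletes of a same-named volume left a directory
    behind (similar to 'cp --backup')."""
    candidate = os.path.join(vols_dir, volname + ".deleted")
    n = 0
    while os.path.exists(candidate):
        n += 1
        candidate = os.path.join(vols_dir, "%s.deleted~%d" % (volname, n))
    return candidate

def rename_volume_deleted(vols_dir, volname):
    """Step (a2): atomically move the volume directory out of the way
    so glusterd's initialisation no longer picks it up."""
    src = os.path.join(vols_dir, volname)
    dst = deleted_name(vols_dir, volname)
    os.rename(src, dst)  # same filesystem, hence atomic
    return dst
```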
>
> and performing (b) and (c) as stated above, thereby overcoming problems (1) and (2) as well as the original problem?
>
> ----- Original Message -----
> From: "Niels de Vos" <ndevos at redhat.com>
> To: "Krutika Dhananjay" <kdhananj at redhat.com>
> Cc: "gluster-devel" <gluster-devel at nongnu.org>, "Amar Tumballi" <atumball at redhat.com>, "Anand Avati" <aavati at redhat.com>
> Sent: Wednesday, January 30, 2013 4:07:03 PM
> Subject: Re: [Gluster-devel] Regarding volume store cleanup at the time of 'volume delete'
>
> On Wed, Jan 30, 2013 at 03:33:20AM -0500, Krutika Dhananjay wrote:
> > I stand corrected. Point (ii) under "Possible solutions" is NOT a solution as the in-memory volume info
> > still remains despite removing the volume metadata in the backend manually.
> >
> > I would like to know your thoughts on the only solution I have at hand right now.
>
> How about adding a step before (a) and check if there are any unexpected
> files? If files exist, the user should be informed about them so that
> they can verify that they have reconfigured swift, for example. The
> whole procedure would then be aborted before any action took place that
> can render the configuration incomplete. When 'volume delete .. force'
> is executed, the files should be removed unconditionally.
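
[Editor's note: the pre-check suggested here could look roughly like the Python sketch below. The function name and the idea of passing in an explicit set of expected names are illustrative assumptions, not glusterd's actual interface.]

```python
import os

def check_unexpected_files(voldir, expected, force=False):
    """Pre-step for 'volume delete': scan the run/ directory and report
    anything glusterd did not create itself, so the user can verify
    (e.g. that they have reconfigured swift) before any cleanup runs.

    'expected' is the set of file names glusterd owns, e.g. the brick
    pidfiles.  With force=True the check is skipped, matching
    'volume delete .. force' removing files unconditionally."""
    rundir = os.path.join(voldir, "run")
    try:
        present = set(os.listdir(rundir))
    except FileNotFoundError:
        return []
    unexpected = sorted(present - set(expected))
    if unexpected and not force:
        return unexpected   # caller aborts before touching anything
    return []               # safe to proceed
```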
>
> I think it is important that gluster commands cannot break the
> configuration (of a volume or otherwise) without aborting first when
> 'force' is not passed.
>
> Cheers,
> Niels
>
>
> >
> > ----- Original Message -----
> > From: "Krutika Dhananjay" <kdhananj at redhat.com>
> > To: "gluster-devel" <gluster-devel at nongnu.org>
> > Cc: "Amar Tumballi" <atumball at redhat.com>, "Anand Avati" <aavati at redhat.com>
> > Sent: Wednesday, January 30, 2013 11:57:42 AM
> > Subject: [Gluster-devel] Regarding volume store cleanup at the time of 'volume delete'
> >
> > Problem:
> > -------
> >
> > During 'volume delete' operation, the way glusterd cleans up the store for the volume being deleted is the following:
> >
> > a. removing brick files under /var/lib/glusterd/vols/<volname>/bricks/ ;
> > b. removing other files, if any, under /var/lib/glusterd/vols/<volname>/bricks/ ;
> > c. removing /var/lib/glusterd/vols/<volname>/bricks using rmdir(), now that it is empty;
> > d. removing any files under /var/lib/glusterd/vols/<volname>/ ;
> > e. removing (only) empty directories under /var/lib/glusterd/vols/<volname>/ (ex: run); and
> > f. eventually removing /var/lib/glusterd/vols/<volname> using rmdir(), now that it is (assumed to be) empty.
> >
> > In a scenario where the 'run' directory contains files other than the brick pidfiles (think swift.pid, or
> > any file that may in future be made to reside in the 'run' directory), step (e)
> > fails for 'run', thereby causing (f) to fail as well.
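
[Editor's note: steps (a)-(f) can be sketched with plain filesystem primitives as below, which also shows the failure mode: rmdir() on a non-empty 'run' raises ENOTEMPTY. This is an illustrative Python sketch of the described sequence, not the actual glusterd C code.]

```python
import os

def cleanup_volume_store(voldir):
    """Steps (a)-(f) from the mail, sketched with os primitives.
    Raises OSError (ENOTEMPTY) at step (e)/(f) when 'run' still
    contains files such as swift.pid."""
    bricks = os.path.join(voldir, "bricks")
    for name in os.listdir(bricks):            # (a), (b)
        os.unlink(os.path.join(bricks, name))
    os.rmdir(bricks)                           # (c) - now empty
    for entry in os.listdir(voldir):
        path = os.path.join(voldir, entry)
        if os.path.isfile(path):
            os.unlink(path)                    # (d)
        else:
            os.rmdir(path)                     # (e) - fails if non-empty
    os.rmdir(voldir)                           # (f)
```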
> >
> > This means that glusterd does not clean up the volume store fully, causing
> > 1. volume delete to fail in commit phase, and
> > 2. subsequent attempts to start glusterd to fail because the volume metadata in the backend is incomplete.
> >
> > Possible solutions:
> > ------------------
> >
> > i. One way to fix this problem is to make glusterd remove all files under 'run' before step (e), similar to the way
> > it removes all files under 'bricks' in (b); or
> >
> > ii. The cli could be made to inform the user about 'run' directory being non-empty and perhaps even ask him/her
> > to remove all files under it and issue 'volume delete' again.
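
[Editor's note: solution (i) amounts to one extra pass before step (e), sketched below in illustrative Python with a hypothetical function name.]

```python
import os

def cleanup_run_dir(voldir):
    """Solution (i): empty 'run' the same way 'bricks' is emptied in
    step (b), so the later rmdir() calls in (e) and (f) cannot fail
    on leftover files such as swift.pid."""
    rundir = os.path.join(voldir, "run")
    if os.path.isdir(rundir):
        for name in os.listdir(rundir):
            os.unlink(os.path.join(rundir, name))
```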
> >
> > Question:
> > --------
> >
> > Do you foresee any problems with doing things the way it is done in solution (i)?
> >
> > _______________________________________________
> > Gluster-devel mailing list
> > Gluster-devel at nongnu.org
> > https://lists.nongnu.org/mailman/listinfo/gluster-devel
> >
>
--
Niels de Vos
Sr. Software Maintenance Engineer
Support Engineering Group
Red Hat Global Support Services