[Gluster-devel] Regarding volume store cleanup at the time of 'volume delete'

Krutika Dhananjay kdhananj at redhat.com
Wed Jan 30 06:27:42 UTC 2013


Problem:
-------

During 'volume delete' operation, the way glusterd cleans up the store for the volume being deleted is the following:

a. removing brick files under /var/lib/glusterd/vols/<volname>/bricks/ ;
b. removing other files, if any, under /var/lib/glusterd/vols/<volname>/bricks/ ;
c. removing /var/lib/glusterd/vols/<volname>/bricks using rmdir(), now that it is empty;
d. removing any files under /var/lib/glusterd/vols/<volname>/ ;
e. removing (only) empty directories under /var/lib/glusterd/vols/<volname>/ (ex: run); and
f. eventually removing /var/lib/glusterd/vols/<volname> using rmdir(), now that it is (assumed to be) empty.
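The failure mode in steps (e) and (f) comes straight from rmdir(2) semantics: it refuses to remove a non-empty directory. A minimal sketch of the problem, using a hypothetical /tmp path as a stand-in for /var/lib/glusterd/vols/<volname>/run:

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Reproduce steps (e)/(f) in miniature: rmdir(2) refuses to remove a
 * directory that still contains an entry.  'dir' is a hypothetical
 * stand-in for /var/lib/glusterd/vols/<volname>/run.  Returns 0 if the
 * expected ENOTEMPTY failure (POSIX also permits EEXIST) was observed. */
static int demo_rmdir_nonempty(const char *dir)
{
    char pidfile[4096];
    snprintf(pidfile, sizeof(pidfile), "%s/swift.pid", dir);

    mkdir(dir, 0755);
    int fd = open(pidfile, O_CREAT | O_WRONLY, 0644); /* the stray file */
    if (fd >= 0)
        close(fd);

    /* Step (e): rmdir() on the non-empty 'run' directory fails ... */
    int ret = rmdir(dir);
    int err = errno;

    /* ... and succeeds only once the stray file has been unlinked. */
    unlink(pidfile);
    rmdir(dir);

    return (ret == -1 && (err == ENOTEMPTY || err == EEXIST)) ? 0 : -1;
}
```

So a single leftover file under 'run' is enough to make (e), and therefore (f), fail.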

In a scenario where the 'run' directory contains files other than the brick pidfiles (for example swift.pid, or
for that matter any file that may in future be made to reside in the 'run' directory), step (e)
fails for 'run', which in turn causes (f) to fail as well.

This means that glusterd does not clean up the volume store fully, causing 
1. volume delete to fail in commit phase, and
2. subsequent attempts to start glusterd to fail because the volume metadata in the backend is incomplete.

Possible solutions:
------------------

i. One way to fix this problem is to make glusterd remove all files under 'run' before step (e), similar to the way
   it removes all files under 'bricks' in (b); or

ii. The cli could be made to inform the user that the 'run' directory is non-empty, and perhaps even ask them
   to remove all files under it and issue 'volume delete' again.
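Solution (i) could be sketched roughly as below, mirroring what glusterd already does for 'bricks' in step (b). cleanup_dir() is a hypothetical helper name, not an existing glusterd function, and subdirectories are deliberately not handled here:

```c
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Sketch of solution (i): unlink every file in a directory, then
 * rmdir() it, so the rmdir() cannot fail with ENOTEMPTY because of a
 * stray file.  cleanup_dir() is a hypothetical name; only regular
 * files are handled, not subdirectories. */
static int cleanup_dir(const char *path)
{
    DIR *d = opendir(path);
    if (!d)
        return -1;

    struct dirent *entry;
    char filepath[4096];

    while ((entry = readdir(d)) != NULL) {
        if (!strcmp(entry->d_name, ".") || !strcmp(entry->d_name, ".."))
            continue;
        snprintf(filepath, sizeof(filepath), "%s/%s", path, entry->d_name);
        unlink(filepath); /* e.g. swift.pid or any other stray file */
    }
    closedir(d);

    return rmdir(path); /* the directory is now empty of files */
}
```

With something like this applied to 'run' before step (e), a file dropped there after volume creation can no longer make the volume delete's commit phase fail.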

Question:
--------

Do you foresee any problems with doing things the way it is done in solution (i)?



