[Gluster-users] Using volumes during fix-layout after add/remove-brick

Dan Bretherton d.a.bretherton at reading.ac.uk
Sun Aug 7 12:20:20 UTC 2011


Hello All-
I regularly increase the size of volumes using "add-brick" followed by 
"rebalance VOLNAME fix-layout".  I usually allow normal use of an 
expanded volume (i.e. reading and writing files) to continue while 
"fix-layout" is taking place, and I have not experienced any apparent 
problems as a result.  The documentation does not say that volumes 
cannot be used during "fix-layout" after "add-brick", but I would like 
to know for certain that this practice is safe.
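
For reference, the sequence of commands I use is roughly the following 
(the server and brick names are just placeholders for our real ones):

    # grow the volume with an additional brick
    gluster volume add-brick VOLNAME server3:/data/brick1
    # recompute the directory layouts so new files can be placed on the new brick
    gluster volume rebalance VOLNAME fix-layout start
    # check progress periodically
    gluster volume rebalance VOLNAME status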

I have a similar question about using volumes during "fix-layout" after 
"remove-brick"; is this a safe thing to do?  Being cautious I assume 
not, but it would be very useful to know if files can actually be safely 
written to a volume in this situation.  This weekend I had to shrink a 
volume using "remove-brick", and I am currently waiting for "fix-layout" 
to complete before copying the files from the removed brick into the 
shrunk volume.  The problem is that there are 2.5TB of data to copy, but 
fix-layout is still going after two days.  I was banking on completing 
the shrinking operation over the weekend and making the volume available 
for use again on Monday morning.  Therefore I would really like to know 
if I can start the copy now while fix-layout is still going on.
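
The procedure I am part way through looks roughly like this (again with 
placeholder names; /data/brick2 is the removed brick and /mnt/VOLNAME is 
a client mount of the shrunk volume):

    # drop the brick from the volume, then fix the layout on the remaining bricks
    gluster volume remove-brick VOLNAME server2:/data/brick2
    gluster volume rebalance VOLNAME fix-layout start
    # after fix-layout completes, copy the stranded files back in through a client mount
    rsync -a /data/brick2/ /mnt/VOLNAME/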

Incidentally, is there a way to estimate how long fix-layout will take 
for a particular volume?  I don't understand why fix-layout is taking so 
long for my shrunk volume.  According to the master_list.txt file I 
created recently during the GFID error fixing process, the volume in 
question has ~1.2 million paths, but "fix-layout VOLNAME status" shows 
that twice this number of layouts have been fixed already.
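
(For what it is worth, the layout count above comes from running 
something like

    gluster volume rebalance VOLNAME status

every few hours and noting the number of layouts reported as fixed.)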

Regards
Dan.

-- 
Mr. D.A. Bretherton
Computer System Manager
Environmental Systems Science Centre
Harry Pitt Building
3 Earley Gate
University of Reading
Reading, RG6 6AL
UK

Tel. +44 118 378 5205
Fax: +44 118 378 6413
