[Gluster-users] Using volumes during fix-layout after add/remove-brick
Dan Bretherton
d.a.bretherton at reading.ac.uk
Tue Sep 6 09:53:35 UTC 2011
Hello Amar,
Thanks for your reply. That makes things a bit clearer.
I discovered that if you remove a pair of bricks and then immediately
add a new pair of bricks on another server, the subsequent fix-layout
operation continues indefinitely. I didn't mention this before because
I didn't want to muddy the waters with extra detail. However, it turns
out that doing a remove-brick followed by an add-brick does affect
fix-layout. This isn't something that many users will want to do, but I
thought you might want to investigate in case the problem crops up again.
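For the record, the sequence of commands was along these lines (the
volume and brick names here are illustrative, not the real ones):

```shell
# Remove a pair of bricks, then immediately add a new pair on another
# server, then start fix-layout -- this is the combination that left
# fix-layout running indefinitely (gluster 3.x command syntax)
gluster volume remove-brick VOLNAME serverA:/export/brick1 serverA:/export/brick2
gluster volume add-brick VOLNAME serverB:/export/brick1 serverB:/export/brick2
gluster volume rebalance VOLNAME fix-layout start
```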
I let the fix-layout carry on until it had fixed ten times as many
layouts as there were paths in the volume before stopping it. I ran a
test where I created a test file in every existing directory, checking
for file write errors and error messages in the log files, and then
began using the volume normally without any apparent problems.
Therefore the layout fix does seem to have worked even though the
fix-layout operation didn't stop.
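The test itself was nothing fancy; a sketch of it, assuming the volume
is FUSE-mounted at /mnt/VOLNAME (a placeholder path):

```shell
#!/bin/sh
# Touch a test file in every existing directory on the mounted volume
# and report any directory where the write fails.
MOUNT="${MOUNT:-/mnt/VOLNAME}"

find "$MOUNT" -type d | while read -r dir; do
    touch "$dir/.fixlayout-test" 2>/dev/null || echo "write failed in: $dir"
done

# Clean up the test files afterwards
find "$MOUNT" -name '.fixlayout-test' -delete 2>/dev/null
```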
Regards
Dan.
--
Mr. D.A. Bretherton
Computer System Manager
Environmental Systems Science Centre
Harry Pitt Building
3 Earley Gate
University of Reading
Reading, RG6 6AL
UK
Tel. +44 118 378 5205
Fax: +44 118 378 6413
On 08/08/11 06:17, Amar Tumballi wrote:
> Hi Dan,
>
> It should be completely safe to use a volume while fix-layout is going
> on after add-brick.
>
> Even in the case of fix-layout after remove-brick, there is no harm
> in using the volume while fix-layout is going on, but the issue is
> that new entry creations (such as creat(), mkdir(), mknod(),
> symlink(), etc.) would fail if their parent path has not yet
> undergone 'fix-layout'. If your
> application is creating a new directory for its operation, you can
> surely have a working volume while fix-layout operations are going on.
>
> About the number of entries shown as extra: it is an issue with the
> rebalance operation itself. Because 'readdir()' can happen on a
> directory while its layout is being fixed, the 'offset' calculated
> internally may differ, and so we can end up reading the same entry
> again sometimes.
>
> Regards,
> Amar
>
> On Sun, Aug 7, 2011 at 5:50 PM, Dan Bretherton
> <d.a.bretherton at reading.ac.uk>
> wrote:
>
> Hello All-
> I regularly increase the size of volumes using "add-brick"
> followed by "rebalance VOLNAME fix-layout". I usually allow
> normal use of an expanded volume (i.e reading and writing files)
> to continue while "fix-layout" is taking place, and I have not
> experienced any apparent problems as a result. The documentation
> does not say that volumes cannot be used during "fix-layout" after
> "add-brick", but I would like to know for certain that this
> practice is safe.
>
> I have a similar question about using volumes during "fix-layout"
> after "remove-brick"; is this a safe thing to do? Being cautious
> I assume not, but it would be very useful to know if files can
> actually be safely written to a volume in this situation. This
> weekend I had to shrink a volume using "remove-brick", and I am
> currently waiting for "fix-layout" to complete before copying the
> files from the removed brick into the shrunk volume. The problem
> is that there are 2.5TB of data to copy, but fix-layout is still
> going after two days. I was banking on completing the shrinking
> operation over the weekend and making the volume available for use
> again on Monday morning. Therefore I would really like to know if
> I can start the copy now while fix-layout is still going on.
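In case it helps anyone else, the shrink procedure I am following is
roughly this (volume name, brick path and mount point are placeholders
for the real ones):

```shell
# Shrink the volume, fix the layouts, then repopulate via the mount.
# (remove-brick in this version does not migrate data off the brick,
# hence the manual copy back in afterwards.)
gluster volume remove-brick VOLNAME server2:/export/brick1
gluster volume rebalance VOLNAME fix-layout start
# ...wait for fix-layout to finish, then copy the stranded files back:
rsync -a /export/brick1/ /mnt/VOLNAME/
```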
>
> Incidentally, is there a way to estimate how long fix-layout will
> take for a particular volume? I don't understand why fix-layout
> is taking so long for my shrunk volume. According to the
> master_list.txt file I created recently during the GFID error
> fixing process, the volume in question has ~1.2 million paths, but
> "fix-layout VOLNAME status" shows that twice this number of
> layouts have been fixed already.
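(A crude cross-check I have been using: fix-layout works per directory,
so counting directories through the mount point gives a rough idea of
how many layouts there are to fix. /mnt/VOLNAME is a placeholder.)

```shell
# Count directories on the mounted volume; the "layouts fixed" figure
# reported by the status command should be of this order
find /mnt/VOLNAME -type d 2>/dev/null | wc -l
```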
>
> Regards
> Dan.
>
>