[Gluster-users] Brick Preference
phil cryer
phil at cryer.us
Mon Jun 28 17:48:56 UTC 2010
On Thu, Jun 24, 2010 at 11:20 AM, Jeff Darcy <jdarcy at redhat.com> wrote:
> The setfattrs are actually attribute deletes, so deleting what already
> didn't exist is a no-op. I was working with another user on the IRC
> channel yesterday who was seeing the same thing. The approach we came
> up with was:
That was me, and scale-n-defrag.sh is still running, going on about
day #5 now, which probably tells you how far out of balance my setup
was. Some quick background: we built 2 servers and started loading our
data, about 50TB of it, by downloading it cross-country (US). This was
far from ideal, but it was the only option we had at the time, plus we
still had 4 more servers to build, so we had time. When we got around
to adding the new servers, the 'ls -lR' trick (walking the whole mount
to trigger redistribution) would fail because some of the initial
servers' bricks were at 100% capacity and would not play nice and
share. The best thing would have been to delete everything and start
over with all 6 servers running so the data would be distributed
correctly from the start, but again, we had to download all the data,
and even over Internet2 that was taking months. Fast forward: I tried
'manually' moving some of the files between bricks, and after talking
to you I understand how that got things so out of whack. Looking at
the drives now, bricks that were 100% full, and that my manual moves
only got down to 99%, have hundreds of GB free, so I can tell
scale-n-defrag.sh is helping to balance things.
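(For reference, a rough way to watch the rebalance is just to compare
per-brick disk usage on each server; the hostnames and brick path
below are only examples, so adjust them for your own layout:)

  # hostnames and brick path are placeholders -- substitute your own
  for h in server1 server2 server3 server4 server5 server6; do
      echo "== $h =="
      ssh $h df -h /data/export
  done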
> I haven't heard back about the results yet, but a test on one directory
> seemed to work correctly so he seemed to feel comfortable doing it
> across the whole data set
I'll certainly post results, and I think this would be a good thing to
add to the wiki, since a lot of people will 'play' with GlusterFS
before really using it and may end up in a similar state.
> (personally I would have sought more input from the real devs first).
Actually, Jeff, I had been posting my questions to the user (and then
dev) list since May 11th; that's how long I've been having this
issue:
[Gluster-users] How can I scale and defrag this setup? (May 11)
http://gluster.org/pipermail/gluster-users/2010-May/004633.html
[Gluster-users] Input/output error when running `ls` and `cd` on
directories (May 14)
http://gluster.org/pipermail/gluster-users/2010-May/004692.html
(NOTE: Lakshmipathi replied to this one asking for more logs, which I sent)
[Gluster-users] Transport endpoint is not connected - getfattr (June 17 and 18)
http://gluster.org/pipermail/gluster-users/2010-June/004870.html
[Gluster-devel] Fwd: Can create directories, but cannot delete them (June 22)
http://lists.gnu.org/archive/html/gluster-devel/2010-06/msg00004.html
I didn't know they were all related, but I suspected they were.
Working with you over IRC helped me understand the problem, how it
likely happened, and how (I hope) it will be fixed. I was getting
desperate enough that I was looking at other distributed filesystems
in case I had to punt completely and start over, which would have been
a big failure on my part, since this project is already way behind
schedule, mainly due to downloading/verification issues.
I sincerely appreciate your assistance, and I want to write up what we
learned and did on the wiki to help others in similar binds.
P
On Thu, Jun 24, 2010 at 11:20 AM, Jeff Darcy <jdarcy at redhat.com> wrote:
> On 06/24/2010 10:57 AM, Andy Pace wrote:
>> Good call.
>>
>>
>> However, when running scale-n-defrag.sh (you're not supposed to run
>> defrag.sh standalone, apparently), I get a lot of errors:
>>
>> find: `setfattr': No such file or directory
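(Side note for anyone who hits the same message: when find -exec
prints "No such file or directory" for the command itself, it usually
just means the setfattr binary isn't installed or isn't in the PATH;
on most distros it comes from the 'attr' package. A quick check, with
the install lines as examples for whichever distro you run:)

  which setfattr || echo "setfattr is not installed"
  # yum install attr        # RHEL/CentOS
  # apt-get install attr    # Debian/Ubuntu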
>
> The setfattrs are actually attribute deletes, so deleting what already
> didn't exist is a no-op. I was working with another user on the IRC
> channel yesterday who was seeing the same thing. The approach we came
> up with was:
>
> (1) Remove the xattrs *on the server side* to make sure they're well and
> truly gone and there won't be any inconsistent remnants to cause
> problems later.
>
> (2) Mount with lookup-unhashed and unhashed-sticky-bit enabled.
>
> (3) Run scale-n-defrag.sh on the client to redistribute *and* make sure
> all of the maps/links are consistent.
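(To make those three steps concrete, here is roughly what they look
like on my setup; the brick path, subvolume names, and the xattr name
are assumptions, so check them against your own volfiles and the
comments in the scripts before running anything:)

  # step 1 -- on each server, against the brick directory itself,
  # not through the client mount: drop the DHT layout xattrs
  find /data/export -type d -exec setfattr -x trusted.glusterfs.dht {} \;

  # step 2 -- in the client volfile, enable the two options on the
  # distribute translator, then remount the client
  volume distribute
    type cluster/distribute
    option lookup-unhashed yes
    option unhashed-sticky-bit yes
    subvolumes server1-brick server2-brick server3-brick
  end-volume

  # step 3 -- from a client, point the script at the mount
  # (check the script header for the exact arguments it expects)
  bash scale-n-defrag.sh /mnt/glusterfs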
>
> I haven't heard back about the results yet, but a test on one directory
> seemed to work correctly so he seemed to feel comfortable doing it
> across the whole data set (personally I would have sought more input
> from the real devs first).
>
>> Is it safe to ignore those? Because it seems to have defragged
>> anyway:
>>
>> Defragmenting directory /distributed//29150 (/root/defrag-store-29150.log)
>> Completed directory
>
> Seems promising, but the real question is whether examining disk usage
> across the bricks shows improved distribution.
>
--
http://philcryer.com