[Gluster-devel] rm -r problem on FreeBSD port

Sakshi Bansal sabansal at redhat.com
Thu Jan 21 04:48:16 UTC 2016


I would expect cluster.lookup-optimize to be causing the problem here, so maybe you could first try with this option turned off. It would also be helpful to get an strace of rm when it fails with "No such file or directory", as that would tell us whether readdir is not returning the entry or the unlink itself is failing.
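A sketch of how this could be checked (the volume name `gvol` and mount path are placeholders; note that FreeBSD does not ship strace, so `truss` would be the equivalent there):

```shell
# Turn off lookup-optimize on the volume ("gvol" is a placeholder name)
gluster volume set gvol cluster.lookup-optimize off

# Trace the failing rm, limited to the directory-read and unlink syscalls,
# to see whether readdir omits the entry or unlink returns ENOENT (Linux)
strace -f -e trace=getdents64,unlink,unlinkat,rmdir rm -r /mnt/gvol/problem-dir

# On FreeBSD, the rough equivalent would be:
# truss -f rm -r /mnt/gvol/problem-dir
```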

----- Original Message -----
From: "Rick Macklem" <rmacklem at uoguelph.ca>
To: "Sakshi Bansal" <sabansal at redhat.com>
Cc: "Raghavendra Gowdappa" <rgowdapp at redhat.com>, "Gluster Devel" <gluster-devel at gluster.org>
Sent: Thursday, January 21, 2016 10:03:37 AM
Subject: Re: [Gluster-devel] rm -r problem on FreeBSD port

Sakshi Bansal wrote:
> The directory deletion is failing with ENOTEMPTY since not all the files
> inside it have been deleted. Looks like lookup is not listing all the files.
> It is possible that cluster.lookup-optimize could be the culprit here. When
> did you turn this option 'on'? Was it during the untaring of the source
> tree?
> Also, once this option is turned 'off', does explicitly doing an ls on the
> problematic files still throw an error?
> 
Good suggestion. I had disabled it but after I had created the tree
(unrolled the tarball and created the directory tree that the build goes in).

I ran a test where I disabled all three of:
performance.readdir-ahead
cluster.lookup-optimize
cluster.readdir-optimize
right after I created the volume with 2 bricks.
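For reference, the setup described above would look roughly like this (host names, brick paths, and the volume name are placeholders, not the actual test configuration):

```shell
# Create and start a 2-brick volume (names/paths are hypothetical)
gluster volume create gvol server1:/data/brick1 server2:/data/brick2
gluster volume start gvol

# Disable all three options before any files/directories are created
gluster volume set gvol performance.readdir-ahead off
gluster volume set gvol cluster.lookup-optimize off
gluster volume set gvol cluster.readdir-optimize off
```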

With that setup everything worked: no directory was missing files when doing an "ls", and the "rm -r" worked too.
So it looks like one or more of these settings is responsible, and they have to be
disabled when the files/directories are created to avoid the problem.

It will take a while, but I will run tests with each of them disabled individually
to see which one(s) need to be disabled. Once I know that, I'll email and
try to get the other information you requested, so we can isolate the problem further.

Thanks, I feel this is progress, rick
