[Gluster-devel] rm -r problem on FreeBSD port

Rick Macklem rmacklem at uoguelph.ca
Wed Jan 27 01:43:11 UTC 2016


Sakshi Bansal wrote:
> I would expect cluster.lookup-optimize to be creating the problem here, so
> maybe you could first try with this option off. Another thing that would be
> helpful is to get the strace when rm fails with "no such file", as this would
> help us identify whether readdir is not returning the entry or whether it is
> the unlink that is failing.
> 
Actually it seems that disabling all three of these options:
cluster.lookup-optimize
cluster.readdir-optimize
performance.readdir-ahead
doesn't stop this from happening.
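
For reference, I turned them off with the usual volume-set commands
("testvol" here stands in for my actual volume name):

    gluster volume set testvol cluster.lookup-optimize off
    gluster volume set testvol cluster.readdir-optimize off
    gluster volume set testvol performance.readdir-ahead off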

Here's what I have learned so far:
- Doing the simple case of "rm -rf" after creating the tree works fine,
  whether or not the above options are on.
- To get the "Directory not empty" failure (with two instances of the same
  file name, one in each brick, one of which has 0 mode and 0 size) I have to
  do an "rm -r" without the "f" option. This generates prompts like:
    Override --------- root wheel ... ?
  - If you answer "y" to all of these, then it succeeds as above (which
    makes sense, since "-f" just disables these prompts).
  - However, if you answer "n" and then:
    - <ctrl>C out of the "rm -r"
    - unmount/remount the fuse mount point
    - do another "rm -r"
    (I'm not 100% certain that both of these steps are needed, but it's the
    only way I've been able to reproduce it.)
    --> then you get the "Directory not empty" failures.
    (See the shell-session sketch below.)
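
Roughly, the sequence that triggers it looks like this (a sketch only; the
paths, names, and prompt/output text are made up for illustration, and the
remount is shown with the glusterfs fuse client):

    # rm -r somedir
    override ... root wheel for somedir/somefile? n
    ^C
    # umount /mnt/glusterfs
    # glusterfs --volfile-server=<server> --volfile-id=<volname> /mnt/glusterfs
    # rm -r somedir
    rm: somedir: Directory not empty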
I have also found that sometimes, after an unmount/remount, I can "rm <file>"
the file left in the directory and then "cd ..; rmdir <dir>" successfully.
However, other times the "rm" doesn't report an error, but the file remains
visible via "ls" and the rmdir fails.
Also, I've seen cases where I type "ls -l" twice and see one entry for the
file in reply to the first "ls", but two entries for the file in a second
"ls -l" done about a second later.

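To confirm the duplicate, it helps to look at the backend bricks directly
(a sketch; /data/brick1 and /data/brick2 stand in for the real brick paths,
and "somedir/somefile" for the file in question):

    ls -l /data/brick1/somedir/somefile /data/brick2/somedir/somefile

One brick shows the normal file and the other shows the 0-mode, 0-size entry
mentioned above.
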
I've looked at the logs (although I didn't have debug enabled) and all I
found that looked slightly suspicious were messages about the client and
server having different lock versions and reopening fds.

When I looked at the system calls via ktrace (I'm guessing it's similar to
strace under Linux?), the unlink syscalls always succeed (return 0) and the
getdirentries() calls find entries in the directories. (I'd guess that means
the "lookup" done as part of the "unlink" succeeds?)
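
For reference, I captured the trace with the stock FreeBSD tools (the
directory name here is just an example):

    # -i also traces children of rm; the trace goes to ktrace.out
    ktrace -i rm -r somedir
    kdump -f ktrace.out | egrep 'unlink|getdirentries'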

It does seem that unmount/remount of the fuse mount has some effect on what is
returned.

Sorry this isn't particularly useful, but it's all I have, rick

> ----- Original Message -----
> From: "Rick Macklem" <rmacklem at uoguelph.ca>
> To: "Sakshi Bansal" <sabansal at redhat.com>
> Cc: "Raghavendra Gowdappa" <rgowdapp at redhat.com>, "Gluster Devel"
> <gluster-devel at gluster.org>
> Sent: Thursday, January 21, 2016 10:03:37 AM
> Subject: Re: [Gluster-devel] rm -r problem on FreeBSD port
> 
> Sakshi Bansal wrote:
> The directory deletion is failing with ENOTEMPTY since not all of the files
> inside it have been deleted. It looks like lookup is not listing all the
> files.
> It is possible that cluster.lookup-optimize could be the culprit here. When
> did you turn this option 'on'? Was it during the untarring of the source
> tree?
> Also, once this option is turned 'off', does explicitly doing an ls on the
> problematic files still throw an error?
> > 
> Good suggestion. I had disabled it, but only after I had created the tree
> (unrolled the tarball and created the directory tree that the build goes in).
> 
> I ran a test where I disabled all three of:
> performance.readdir-ahead
> cluster.lookup-optimize
> cluster.readdir-optimize
> right after I created the volume with 2 bricks.
> 
> Then I ran a test and everything worked. I didn't get any directory with
> files missing when doing an "ls", and the "rm -r" worked too.
> So it looks like the problem is one or more of these settings, and they
> have to be disabled when the files/directories are created to avoid it.
> 
> It will take a while, but I will run tests with them individually disabled
> to see which one(s) need to be disabled. Once I know that I'll email and
> try to get the other information you requested to see if we can isolate the
> problem further.
> 
> Thanks, I feel this is progress, rick
> 

