[Gluster-users] rm -rf some_dir results in "Directory not empty"

Ravishankar N ravishankar at redhat.com
Mon Feb 23 14:36:17 UTC 2015


On 02/23/2015 07:04 PM, Alessandro Ipe wrote:
>
> Hi Ravi,
>
> gluster volume status md1 returns
>
> Status of volume: md1
>
> Gluster process                                 Port    Online  Pid
> ------------------------------------------------------------------------------
> Brick tsunami1:/data/glusterfs/md1/brick1       49157   Y       2260
> Brick tsunami2:/data/glusterfs/md1/brick1       49152   Y       2320
> Brick tsunami3:/data/glusterfs/md1/brick1       49156   Y       20715
> Brick tsunami4:/data/glusterfs/md1/brick1       49156   Y       10544
> Brick tsunami5:/data/glusterfs/md1/brick1       49152   Y       12588
> Brick tsunami6:/data/glusterfs/md1/brick1       49152   Y       12242
> Self-heal Daemon on localhost                   N/A     Y       2336
> Self-heal Daemon on tsunami2                    N/A     Y       2359
> Self-heal Daemon on tsunami5                    N/A     Y       27619
> Self-heal Daemon on tsunami4                    N/A     Y       12318
> Self-heal Daemon on tsunami3                    N/A     Y       19118
> Self-heal Daemon on tsunami6                    N/A     Y       27650
>
> Task Status of Volume md1
> ------------------------------------------------------------------------------
> Task       : Rebalance
> ID         : 9dfee1a2-49ac-4766-bdb6-00de5e5883f6
> Status     : completed
>
> so it seems that all brick servers are up.
>
> gluster volume heal md1 info returns
>
> Brick tsunami1.oma.be:/data/glusterfs/md1/brick1/
> Number of entries: 0
>
> Brick tsunami2.oma.be:/data/glusterfs/md1/brick1/
> Number of entries: 0
>
> Brick tsunami3.oma.be:/data/glusterfs/md1/brick1/
> Number of entries: 0
>
> Brick tsunami4.oma.be:/data/glusterfs/md1/brick1/
> Number of entries: 0
>
> Brick tsunami5.oma.be:/data/glusterfs/md1/brick1/
> Number of entries: 0
>
> Brick tsunami6.oma.be:/data/glusterfs/md1/brick1/
> Number of entries: 0
>
> Should I run "gluster volume heal md1 full"?
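>
> (If it is needed, I suppose the full sweep would just be the standard CLI
> sequence below, with the volume name from above, followed by heal info again
> to check progress:
>
>     gluster volume heal md1 full
>     gluster volume heal md1 info
> )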
>
Hi Alessandro,

Looks like there are no pending self-heals, so there is no need to run the
heal command. Can you share the output from the client (mount) log when you
get the ENOTEMPTY during the rm -rf?
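
The fuse client writes its log under /var/log/glusterfs/ on the client, with
the mount point encoded in the file name (slashes replaced by dashes). A rough
sketch, assuming the volume is mounted at /home/.md1; the exact log file name
on your machine may differ:

    # on the client, around the time of the failing rm -rf:
    tail -n 200 /var/log/glusterfs/home-.md1.log
    # or filter for the rmdir / "not empty" related lines:
    grep -iE 'rmdir|not empty' /var/log/glusterfs/home-.md1.log | tail -n 50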

What version of gluster are you using?
Thanks,
Ravi

> Thanks,
>
> A.
>
> On Monday 23 February 2015 18:12:43 Ravishankar N wrote:
>
>
> On 02/23/2015 05:42 PM, Alessandro Ipe wrote:
>
> Hi,
>
> We have an "md1" volume under gluster 3.5.3, spread over 6 servers and
> configured as distributed and replicated. When trying, on a client through a
> fuse mount (which also happens to be a brick server), to recursively delete
> (as root) a directory with "rm -rf /home/.md1/linux/suse/12.1", I get the
> error messages
>
> rm: cannot remove ‘/home/.md1/linux/suse/12.1/KDE4.7.4/i586’: Directory not empty
> rm: cannot remove ‘/home/.md1/linux/suse/12.1/src-oss/suse/src’: Directory not empty
> rm: cannot remove ‘/home/.md1/linux/suse/12.1/oss/suse/noarch’: Directory not empty
> rm: cannot remove ‘/home/.md1/linux/suse/12.1/oss/suse/i586’: Directory not empty
>
> (The same occurs as an unprivileged user, but with "Permission denied".)
>
> while "ls -Ral /home/.md1/linux/suse/12.1" gives me
>
> /home/.md1/linux/suse/12.1:
> total 0
> drwxrwxrwx 5 gerb users    151 Feb 20 16:22 .
> drwxr-xr-x 6 gerb users    245 Feb 23 12:55 ..
> drwxrwxrwx 3 gerb users     95 Feb 23 13:03 KDE4.7.4
> drwxrwxrwx 3 gerb users    311 Feb 20 16:57 oss
> drwxrwxrwx 3 gerb users     86 Feb 20 16:20 src-oss
>
> /home/.md1/linux/suse/12.1/KDE4.7.4:
> total 28
> drwxrwxrwx 3 gerb users     95 Feb 23 13:03 .
> drwxrwxrwx 5 gerb users    151 Feb 20 16:22 ..
> d--------- 2 root root   61452 Feb 23 13:03 i586
>
> /home/.md1/linux/suse/12.1/KDE4.7.4/i586:
> total 28
> d--------- 2 root root   61452 Feb 23 13:03 .
> drwxrwxrwx 3 gerb users     95 Feb 23 13:03 ..
>
> /home/.md1/linux/suse/12.1/oss:
> total 0
> drwxrwxrwx 3 gerb users    311 Feb 20 16:57 .
> drwxrwxrwx 5 gerb users    151 Feb 20 16:22 ..
> drwxrwxrwx 4 gerb users     90 Feb 23 13:03 suse
>
> /home/.md1/linux/suse/12.1/oss/suse:
> total 536
> drwxrwxrwx 4 gerb users     90 Feb 23 13:03 .
> drwxrwxrwx 3 gerb users    311 Feb 20 16:57 ..
> d--------- 2 root root  368652 Feb 23 13:03 i586
> d--------- 2 root root  196620 Feb 23 13:03 noarch
>
> /home/.md1/linux/suse/12.1/oss/suse/i586:
> total 360
> d--------- 2 root root  368652 Feb 23 13:03 .
> drwxrwxrwx 4 gerb users     90 Feb 23 13:03 ..
>
> /home/.md1/linux/suse/12.1/oss/suse/noarch:
> total 176
> d--------- 2 root root  196620 Feb 23 13:03 .
> drwxrwxrwx 4 gerb users     90 Feb 23 13:03 ..
>
> /home/.md1/linux/suse/12.1/src-oss:
> total 0
> drwxrwxrwx 3 gerb users     86 Feb 20 16:20 .
> drwxrwxrwx 5 gerb users    151 Feb 20 16:22 ..
> drwxrwxrwx 3 gerb users     48 Feb 23 13:03 suse
>
> /home/.md1/linux/suse/12.1/src-oss/suse:
> total 220
> drwxrwxrwx 3 gerb users     48 Feb 23 13:03 .
> drwxrwxrwx 3 gerb users     86 Feb 20 16:20 ..
> d--------- 2 root root  225292 Feb 23 13:03 src
>
> /home/.md1/linux/suse/12.1/src-oss/suse/src:
> total 220
> d--------- 2 root root  225292 Feb 23 13:03 .
> drwxrwxrwx 3 gerb users     48 Feb 23 13:03 ..
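>
> (For what it is worth, I guess those "d---------" directories could also be
> inspected directly on the brick backends. A rough sketch, assuming /home/.md1
> is the mount of the volume root so the same relative path exists under each
> brick, with the hostnames and brick paths from the volume info below:
>
>     for h in tsunami1 tsunami2 tsunami3 tsunami4 tsunami5 tsunami6; do
>         echo "== $h =="
>         ssh "$h" 'ls -la /data/glusterfs/md1/brick1/linux/suse/12.1/KDE4.7.4/i586 | head'
>     done
> )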
>
> Is there a cure, such as manually forcing a heal on that directory?
>
>
> Are all bricks up? Are there any pending self-heals? Does `gluster volume
> heal md1 info` show any output? If it does, run `gluster volume heal md1`
> to manually trigger the heal.
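>
> (Concretely, something along the lines of:
>
>     gluster volume status md1
>     gluster volume heal md1 info
>     gluster volume heal md1
> )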
> -Ravi
>
> Many thanks,
>
> Alessandro.
>
> gluster volume info md1 outputs:
>
> Volume Name: md1
> Type: Distributed-Replicate
> Volume ID: 6da4b915-1def-4df4-a41c-2f3300ebf16b
> Status: Started
> Number of Bricks: 3 x 2 = 6
> Transport-type: tcp
> Bricks:
> Brick1: tsunami1:/data/glusterfs/md1/brick1
> Brick2: tsunami2:/data/glusterfs/md1/brick1
> Brick3: tsunami3:/data/glusterfs/md1/brick1
> Brick4: tsunami4:/data/glusterfs/md1/brick1
> Brick5: tsunami5:/data/glusterfs/md1/brick1
> Brick6: tsunami6:/data/glusterfs/md1/brick1
> Options Reconfigured:
> performance.write-behind: on
> performance.write-behind-window-size: 4MB
> performance.flush-behind: off
> performance.io-thread-count: 64
> performance.cache-size: 512MB
> nfs.disable: on
> features.quota: off
> cluster.read-hash-mode: 2
> server.allow-insecure: on
> cluster.lookup-unhashed: off
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
>
>
>
