[Gluster-users] rm -rf some_dir results in "Directory not empty"

Alessandro Ipe Alessandro.Ipe at meteo.be
Mon Feb 23 14:45:29 UTC 2015


Hi,


Gluster version is 3.5.3-1.
During the rm -rf, /var/log/gluster.log (the client log) shows the following entries:
[2015-02-23 14:42:50.180091] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-2: remote operation failed: Directory not empty
[2015-02-23 14:42:50.180134] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-3: remote operation failed: Directory not empty
[2015-02-23 14:42:50.180740] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-5: remote operation failed: File exists. Path: /linux/suse/12.1/KDE4.7.4/i586
[2015-02-23 14:42:50.180772] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-4: remote operation failed: File exists. Path: /linux/suse/12.1/KDE4.7.4/i586
[2015-02-23 14:42:50.181129] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-3: remote operation failed: File exists. Path: /linux/suse/12.1/KDE4.7.4/i586
[2015-02-23 14:42:50.181160] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-2: remote operation failed: File exists. Path: /linux/suse/12.1/KDE4.7.4/i586
[2015-02-23 14:42:50.319213] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-3: remote operation failed: Directory not empty
[2015-02-23 14:42:50.319762] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-2: remote operation failed: Directory not empty
[2015-02-23 14:42:50.320501] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-0: remote operation failed: File exists. Path: /linux/suse/12.1/src-oss/suse/src
[2015-02-23 14:42:50.320552] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-1: remote operation failed: File exists. Path: /linux/suse/12.1/src-oss/suse/src
[2015-02-23 14:42:50.320842] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-2: remote operation failed: File exists. Path: /linux/suse/12.1/src-oss/suse/src
[2015-02-23 14:42:50.320884] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-3: remote operation failed: File exists. Path: /linux/suse/12.1/src-oss/suse/src
[2015-02-23 14:42:50.438982] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-3: remote operation failed: Directory not empty
[2015-02-23 14:42:50.439347] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-2: remote operation failed: Directory not empty
[2015-02-23 14:42:50.440235] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-0: remote operation failed: File exists. Path: /linux/suse/12.1/oss/suse/noarch
[2015-02-23 14:42:50.440344] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-1: remote operation failed: File exists. Path: /linux/suse/12.1/oss/suse/noarch
[2015-02-23 14:42:50.440603] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-2: remote operation failed: File exists. Path: /linux/suse/12.1/oss/suse/noarch
[2015-02-23 14:42:50.440665] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-3: remote operation failed: File exists. Path: /linux/suse/12.1/oss/suse/noarch
[2015-02-23 14:42:50.680827] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-2: remote operation failed: Directory not empty
[2015-02-23 14:42:50.681721] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-3: remote operation failed: Directory not empty
[2015-02-23 14:42:50.682482] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-3: remote operation failed: File exists. Path: /linux/suse/12.1/oss/suse/i586
[2015-02-23 14:42:50.682517] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-2: remote operation failed: File exists. Path: /linux/suse/12.1/oss/suse/i586
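For what it is worth, what is left behind on the individual bricks can be checked directly with something like the sketch below (hostnames and the brick path are taken from the "gluster volume status md1" output quoted further down, the subdirectory is just one of the paths from the log above, and it assumes ssh access to the brick servers):

#!/bin/bash
# Sketch only: list what is still present in one of the failing directories on every brick.
DIR="linux/suse/12.1/oss/suse/i586"            # example path taken from the client log above
for host in tsunami1 tsunami2 tsunami3 tsunami4 tsunami5 tsunami6; do
    echo "== ${host} =="
    ssh "${host}" "ls -la /data/glusterfs/md1/brick1/${DIR}"
done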


Thanks,


A.
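PS: if a more verbose client log would help, I can raise the client log level before reproducing the rm -rf and lower it again afterwards, roughly as below (the mount point /mnt/md1 is only a placeholder for the actual fuse mount):

# Sketch: capture a more detailed client log while reproducing the failure.
gluster volume set md1 diagnostics.client-log-level DEBUG
rm -rf /mnt/md1/linux/suse/12.1        # /mnt/md1 stands in for the real mount point
gluster volume set md1 diagnostics.client-log-level INFO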


On Monday 23 February 2015 20:06:17 Ravishankar N wrote:


        
On 02/23/2015 07:04 PM, Alessandro Ipe wrote:
                  
Hi Ravi,      
      
      
gluster volume status md1 returns
Status of volume: md1
Gluster process                                  Port   Online  Pid
------------------------------------------------------------------------------
Brick tsunami1:/data/glusterfs/md1/brick1        49157  Y       2260
Brick tsunami2:/data/glusterfs/md1/brick1        49152  Y       2320
Brick tsunami3:/data/glusterfs/md1/brick1        49156  Y       20715
Brick tsunami4:/data/glusterfs/md1/brick1        49156  Y       10544
Brick tsunami5:/data/glusterfs/md1/brick1        49152  Y       12588
Brick tsunami6:/data/glusterfs/md1/brick1        49152  Y       12242
Self-heal Daemon on localhost                    N/A    Y       2336
Self-heal Daemon on tsunami2                     N/A    Y       2359
Self-heal Daemon on tsunami5                     N/A    Y       27619
Self-heal Daemon on tsunami4                     N/A    Y       12318
Self-heal Daemon on tsunami3                     N/A    Y       19118
Self-heal Daemon on tsunami6                     N/A    Y       27650

Task Status of Volume md1
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 9dfee1a2-49ac-4766-bdb6-00de5e5883f6
Status               : completed
so it seems that all brick servers are up.
      
gluster volume heal md1 info returns
Brick tsunami1.oma.be:/data/glusterfs/md1/brick1/
Number of entries: 0

Brick tsunami2.oma.be:/data/glusterfs/md1/brick1/
Number of entries: 0

Brick tsunami3.oma.be:/data/glusterfs/md1/brick1/
Number of entries: 0

Brick tsunami4.oma.be:/data/glusterfs/md1/brick1/
Number of entries: 0

Brick tsunami5.oma.be:/data/glusterfs/md1/brick1/
Number of entries: 0

Brick tsunami6.oma.be:/data/glusterfs/md1/brick1/
Number of entries: 0

Should I run "gluster volume heal md1 full"?
    
Hi Alessandro,

Looks like there are no pending self-heals, so there is no need to run the heal command. Can you share the output of the client (mount) log when you get the ENOTEMPTY during the rm -rf? What version of gluster are you using?

Thanks,
Ravi
      
      

