[Gluster-users] rm -rf some_dir results in "Directory not empty"

Alessandro Ipe Alessandro.Ipe at meteo.be
Mon Feb 23 17:04:58 UTC 2015


gluster volume rebalance md1 status gives:
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost             3837         6.3GB        163881             0             0            completed             365.00
                                tsunami5              179       343.8MB        163882             0             0            completed             353.00
                                tsunami3             6786         4.7GB        163882             0             0            completed             416.00
                                tsunami6                0        0Bytes        163882             0             0            completed             353.00
                                tsunami4                0        0Bytes        163882             0             0            completed             353.00
                                tsunami2                0        0Bytes        163882             0             0            completed             353.00
volume rebalance: md1: success:

but there is no change on the bricks for that directory: it is still empty except on 2 bricks.
Should I remove the files in the .glusterfs directory on the 2 bricks associated with these "---T" files?
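
Before deleting anything by hand, it may help to confirm what those entries actually are. A minimal sketch (assuming the brick path shown in this thread and the standard GlusterFS xattr names; please verify on your version before removing anything):

    # run as root on tsunami3/tsunami4, directly on the brick
    BRICK=/data/glusterfs/md1/brick1
    F="$BRICK/linux/suse/12.1/KDE4.7.4/i586/bovo-4.7.4-3.12.7.i586.rpm"

    # show the file's gfid and the DHT link-to target (trusted.glusterfs.dht.linkto)
    getfattr -d -m . -e hex "$F"

    # the second hard link of a link-to file normally lives under .glusterfs/,
    # so locating it by inode shows which .glusterfs entry belongs to it
    find "$BRICK/.glusterfs" -samefile "$F"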


Thanks,


A.  



On Monday 23 February 2015 21:40:41 Ravishankar N wrote:


        
On 02/23/2015 09:19 PM, Alessandro Ipe wrote:
                  
On 4 of the 6 bricks, it is empty. However, on tsunami 3-4, ls -lsa gives
total 16
d--------- 2 root root 61440 Feb 23 15:42 .
drwxrwxrwx 3 gerb users 61 Feb 22 21:10 ..
---------T 2 gerb users 0 Apr 16 2014 akonadi-googledata-1.2.0-2.5.2.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 bluedevil-debugsource-1.2.2-1.8.3.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 bovo-4.7.4-3.12.7.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 digikam-debugsource-2.2.0-3.12.9.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 dolphin-debuginfo-4.7.4-4.22.6.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 freetds-doc-0.91-2.5.1.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 kanagram-debuginfo-4.7.4-2.10.2.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 kdebase4-runtime-4.7.4-3.17.7.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 kdebindings-smokegen-debuginfo-4.7.4-2.9.1.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 kdesdk4-strigi-debuginfo-4.7.4-3.12.5.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 kradio-4.0.2-9.9.7.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 kremotecontrol-4.7.4-2.12.9.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 kreversi-debuginfo-4.7.4-3.12.7.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 krfb-4.7.4-2.13.6.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 krusader-doc-2.0.0-23.9.7.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 libalkimia-devel-4.3.1-2.5.1.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 libdmtx0-0.7.4-2.1.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 libdmtx0-debuginfo-0.7.4-2.1.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 libkdegames4-debuginfo-4.7.4-3.12.7.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 libksane0-4.7.4-2.10.1.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 libkvkontakte-debugsource-1.0.0-2.2.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 libmediawiki-debugsource-2.5.0-4.6.1.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 libsmokeqt-4.7.4-2.10.2.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 NetworkManager-vpnc-kde4-0.9.1git20111027-1.11.5.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 qtcurve-kde4-1.8.8-3.6.2.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 QtZeitgeist-devel-0.7.0-7.4.2.i586.rpm
---------T 2 gerb users 0 Apr 16 2014 umbrello-4.7.4-3.12.5.i586.rpm
      
so that might be the reason for the error. How can I fix this?
      
    
The '----T' files are DHT link-to files. The actual files must be present on the other distribute subvolumes (tsunami 1-2 or tsunami 5-6) in the same path. Since that doesn't seem to be the case, something went wrong with the rebalance process. You could run `gluster volume rebalance <volname> start` followed by `status` again and see if they disappear.
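
In command form, that is roughly the following (volume name taken from this thread; run from any of the servers):

    gluster volume rebalance md1 start
    # then poll until every node reports "completed"
    gluster volume rebalance md1 status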
      
Thanks,      
      
      
A.      
      
      
On Monday 23 February 2015 21:06:58 Ravishankar N wrote:
Just noticed that your `gluster volume status` shows that rebalance was triggered. Maybe the DHT developers can help out. I see a similar bug[1] has been fixed some time back. FWIW, can you check if "/linux/suse/12.1/KDE4.7.4/i586" on all 6 bricks is indeed empty?
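
One quick way to check that (a sketch, assuming root ssh access to the six servers and the brick path used elsewhere in this thread):

    for h in tsunami1 tsunami2 tsunami3 tsunami4 tsunami5 tsunami6; do
        echo "== $h =="
        ssh root@"$h" 'ls -la /data/glusterfs/md1/brick1/linux/suse/12.1/KDE4.7.4/i586'
    done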
On 02/23/2015 08:15 PM, Alessandro Ipe wrote:            
       
Hi,       
       
       
Gluster version is 3.5.3-1.       
/var/log/gluster.log (the client log) gives the following entries during the rm -rf:
[2015-02-23 14:42:50.180091] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-2: remote operation failed: Directory not empty
[2015-02-23 14:42:50.180134] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-3: remote operation failed: Directory not empty
[2015-02-23 14:42:50.180740] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-5: remote operation failed: File exists. Path: /linux/suse/12.1/KDE4.7.4/i586
[2015-02-23 14:42:50.180772] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-4: remote operation failed: File exists. Path: /linux/suse/12.1/KDE4.7.4/i586
[2015-02-23 14:42:50.181129] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-3: remote operation failed: File exists. Path: /linux/suse/12.1/KDE4.7.4/i586
[2015-02-23 14:42:50.181160] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-2: remote operation failed: File exists. Path: /linux/suse/12.1/KDE4.7.4/i586
[2015-02-23 14:42:50.319213] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-3: remote operation failed: Directory not empty
[2015-02-23 14:42:50.319762] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-2: remote operation failed: Directory not empty
[2015-02-23 14:42:50.320501] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-0: remote operation failed: File exists. Path: /linux/suse/12.1/src-oss/suse/src
[2015-02-23 14:42:50.320552] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-1: remote operation failed: File exists. Path: /linux/suse/12.1/src-oss/suse/src
[2015-02-23 14:42:50.320842] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-2: remote operation failed: File exists. Path: /linux/suse/12.1/src-oss/suse/src
[2015-02-23 14:42:50.320884] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-3: remote operation failed: File exists. Path: /linux/suse/12.1/src-oss/suse/src
[2015-02-23 14:42:50.438982] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-3: remote operation failed: Directory not empty
[2015-02-23 14:42:50.439347] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-2: remote operation failed: Directory not empty
[2015-02-23 14:42:50.440235] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-0: remote operation failed: File exists. Path: /linux/suse/12.1/oss/suse/noarch
[2015-02-23 14:42:50.440344] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-1: remote operation failed: File exists. Path: /linux/suse/12.1/oss/suse/noarch
[2015-02-23 14:42:50.440603] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-2: remote operation failed: File exists. Path: /linux/suse/12.1/oss/suse/noarch
[2015-02-23 14:42:50.440665] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-3: remote operation failed: File exists. Path: /linux/suse/12.1/oss/suse/noarch
[2015-02-23 14:42:50.680827] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-2: remote operation failed: Directory not empty
[2015-02-23 14:42:50.681721] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-3: remote operation failed: Directory not empty
[2015-02-23 14:42:50.682482] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-3: remote operation failed: File exists. Path: /linux/suse/12.1/oss/suse/i586
[2015-02-23 14:42:50.682517] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-2: remote operation failed: File exists. Path: /linux/suse/12.1/oss/suse/i586
       
       
Thanks,       
       
       
A.       
       
       
On Monday 23 February 2015 20:06:17 Ravishankar N wrote:
             
On 02/23/2015 07:04 PM, Alessandro Ipe wrote:            
       
Hi Ravi,       
       
       
gluster volume status md1 returns       
Status of volume: md1       
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick tsunami1:/data/glusterfs/md1/brick1       49157   Y       2260
Brick tsunami2:/data/glusterfs/md1/brick1       49152   Y       2320
Brick tsunami3:/data/glusterfs/md1/brick1       49156   Y       20715
Brick tsunami4:/data/glusterfs/md1/brick1       49156   Y       10544
Brick tsunami5:/data/glusterfs/md1/brick1       49152   Y       12588
Brick tsunami6:/data/glusterfs/md1/brick1       49152   Y       12242
Self-heal Daemon on localhost                   N/A     Y       2336
Self-heal Daemon on tsunami2                    N/A     Y       2359
Self-heal Daemon on tsunami5                    N/A     Y       27619
Self-heal Daemon on tsunami4                    N/A     Y       12318
Self-heal Daemon on tsunami3                    N/A     Y       19118
Self-heal Daemon on tsunami6                    N/A     Y       27650

Task Status of Volume md1
------------------------------------------------------------------------------
Task   : Rebalance
ID     : 9dfee1a2-49ac-4766-bdb6-00de5e5883f6
Status : completed
so it seems that all brick servers are up.
       
gluster volume heal md1 info returns       
Brick tsunami1.oma.be:/data/glusterfs/md1/brick1/
Number of entries: 0

Brick tsunami2.oma.be:/data/glusterfs/md1/brick1/
Number of entries: 0

Brick tsunami3.oma.be:/data/glusterfs/md1/brick1/
Number of entries: 0

Brick tsunami4.oma.be:/data/glusterfs/md1/brick1/
Number of entries: 0

Brick tsunami5.oma.be:/data/glusterfs/md1/brick1/
Number of entries: 0

Brick tsunami6.oma.be:/data/glusterfs/md1/brick1/
Number of entries: 0
       
Should I run "gluster volume heal md1 full"?
       
Hi Alessandro,

Looks like there are no pending self-heals, so no need to run the heal command. Can you share the output of the client (mount) log from when you get the ENOTEMPTY during the rm -rf?

What version of gluster are you using?

Thanks,
Ravi
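
For reference, one way to capture that (assuming the client log path /var/log/gluster.log quoted elsewhere in this thread):

    # terminal 1: watch the FUSE client log
    tail -f /var/log/gluster.log
    # terminal 2: reproduce the failure on the mount point
    rm -rf /home/.md1/linux/suse/12.1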
       
       
Thanks,       
       
       
A.       
       
       
On Monday 23 February 2015 18:12:43 Ravishankar N wrote:
             
On 02/23/2015 05:42 PM, Alessandro Ipe wrote:            
       
Hi,       
       
       
We have an "md1" volume under gluster 3.5.3 spread over 6 servers, configured as distributed and replicated. When trying on a client, through a fuse mount (the client also happens to be a brick server), to recursively delete a directory as root with "rm -rf /home/.md1/linux/suse/12.1", I get the error messages
       
rm: cannot remove ‘/home/.md1/linux/suse/12.1/KDE4.7.4/i586’: Directory not empty
rm: cannot remove ‘/home/.md1/linux/suse/12.1/src-oss/suse/src’: Directory not empty
rm: cannot remove ‘/home/.md1/linux/suse/12.1/oss/suse/noarch’: Directory not empty
rm: cannot remove ‘/home/.md1/linux/suse/12.1/oss/suse/i586’: Directory not empty
(the same occurs as an unprivileged user, but with "Permission denied".)
       
while a "ls -Ral /home/.md1/linux/suse/12.1" gives me
/home/.md1/linux/suse/12.1:
total 0
drwxrwxrwx 5 gerb users 151 Feb 20 16:22 .
drwxr-xr-x 6 gerb users 245 Feb 23 12:55 ..
drwxrwxrwx 3 gerb users 95 Feb 23 13:03 KDE4.7.4
drwxrwxrwx 3 gerb users 311 Feb 20 16:57 oss
drwxrwxrwx 3 gerb users 86 Feb 20 16:20 src-oss

/home/.md1/linux/suse/12.1/KDE4.7.4:
total 28
drwxrwxrwx 3 gerb users 95 Feb 23 13:03 .
drwxrwxrwx 5 gerb users 151 Feb 20 16:22 ..
d--------- 2 root root 61452 Feb 23 13:03 i586

/home/.md1/linux/suse/12.1/KDE4.7.4/i586:
total 28
d--------- 2 root root 61452 Feb 23 13:03 .
drwxrwxrwx 3 gerb users 95 Feb 23 13:03 ..

/home/.md1/linux/suse/12.1/oss:
total 0
drwxrwxrwx 3 gerb users 311 Feb 20 16:57 .
drwxrwxrwx 5 gerb users 151 Feb 20 16:22 ..
drwxrwxrwx 4 gerb users 90 Feb 23 13:03 suse

/home/.md1/linux/suse/12.1/oss/suse:
total 536
drwxrwxrwx 4 gerb users 90 Feb 23 13:03 .
drwxrwxrwx 3 gerb users 311 Feb 20 16:57 ..
d--------- 2 root root 368652 Feb 23 13:03 i586
d--------- 2 root root 196620 Feb 23 13:03 noarch

/home/.md1/linux/suse/12.1/oss/suse/i586:
total 360
d--------- 2 root root 368652 Feb 23 13:03 .
drwxrwxrwx 4 gerb users 90 Feb 23 13:03 ..

/home/.md1/linux/suse/12.1/oss/suse/noarch:
total 176
d--------- 2 root root 196620 Feb 23 13:03 .
drwxrwxrwx 4 gerb users 90 Feb 23 13:03 ..

/home/.md1/linux/suse/12.1/src-oss:
total 0
drwxrwxrwx 3 gerb users 86 Feb 20 16:20 .
drwxrwxrwx 5 gerb users 151 Feb 20 16:22 ..
drwxrwxrwx 3 gerb users 48 Feb 23 13:03 suse

/home/.md1/linux/suse/12.1/src-oss/suse:
total 220
drwxrwxrwx 3 gerb users 48 Feb 23 13:03 .
drwxrwxrwx 3 gerb users 86 Feb 20 16:20 ..
d--------- 2 root root 225292 Feb 23 13:03 src

/home/.md1/linux/suse/12.1/src-oss/suse/src:
total 220
d--------- 2 root root 225292 Feb 23 13:03 .
drwxrwxrwx 3 gerb users 48 Feb 23 13:03 ..
       
       
Is there a cure, such as manually forcing a heal on that directory?
Are all bricks up? Are there any pending self-heals? Does `gluster volume heal md1 info` show any output? If it does, run `gluster volume heal md1` to manually trigger the heal.
-Ravi
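
In command form, roughly (volume name taken from this thread):

    gluster volume status md1        # are all bricks online?
    gluster volume heal md1 info     # any pending self-heals?
    gluster volume heal md1          # only needed if "heal info" lists entries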
       
       
Many thanks,       
       
       
Alessandro.       
       
       
gluster volume info md1 outputs:       
Volume Name: md1       
Type: Distributed-Replicate       
Volume ID: 6da4b915-1def-4df4-a41c-2f3300ebf16b
Status: Started       
Number of Bricks: 3 x 2 = 6       
Transport-type: tcp       
Bricks:       
Brick1: tsunami1:/data/glusterfs/md1/brick1       
Brick2: tsunami2:/data/glusterfs/md1/brick1       
Brick3: tsunami3:/data/glusterfs/md1/brick1       
Brick4: tsunami4:/data/glusterfs/md1/brick1       
Brick5: tsunami5:/data/glusterfs/md1/brick1       
Brick6: tsunami6:/data/glusterfs/md1/brick1       
Options Reconfigured:       
performance.write-behind: on       
performance.write-behind-window-size: 4MB       
performance.flush-behind: off       
performance.io-thread-count: 64       
performance.cache-size: 512MB       
nfs.disable: on       
features.quota: off       
cluster.read-hash-mode: 2       
server.allow-insecure: on       
cluster.lookup-unhashed: off       
                     
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org[2]
http://www.gluster.org/mailman/listinfo/gluster-users[3]







--------
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1125824
[2] mailto:Gluster-users at gluster.org
[3] http://www.gluster.org/mailman/listinfo/gluster-users

