[Gluster-users] cannot delete non-empty directory

David F. Robinson david.robinson at corvidtec.com
Sun Feb 8 17:19:15 UTC 2015


I am seeing these messages after I delete large amounts of data using
gluster 3.6.2:
cannot delete non-empty directory: 
old_shelf4/Aegis/!!!Programs/RavenCFD/Storage/Jimmy_Old/src_vj1.5_final

From the FUSE mount (as root), the directory shows up as empty:

# pwd
/backup/homegfs/backup.0/old_shelf4/Aegis/!!!Programs/RavenCFD/Storage/Jimmy_Old/src_vj1.5_final

# ls -al
total 5
d--------- 2 root root    4106 Feb  6 13:55 .
drwxrws--- 3  601 dmiller   72 Feb  6 13:55 ..

However, when you look at the bricks, the files are still there (none on
brick01bkp; all of them on brick02bkp).  All of the files are 0-length
and have ---------T permissions.
Any suggestions on how to fix this, and how to prevent it from happening?

#  ls -al 
/data/brick*/homegfs_bkp/backup.0/old_shelf4/Aegis/\!\!\!Programs/RavenCFD/Storage/Jimmy_Old/src_vj1.5_final
/data/brick01bkp/homegfs_bkp/backup.0/old_shelf4/Aegis/!!!Programs/RavenCFD/Storage/Jimmy_Old/src_vj1.5_final:
total 4
d---------+ 2 root root  10 Feb  6 13:55 .
drwxrws---+ 3  601 raven 36 Feb  6 13:55 ..

/data/brick02bkp/homegfs_bkp/backup.0/old_shelf4/Aegis/!!!Programs/RavenCFD/Storage/Jimmy_Old/src_vj1.5_final:
total 8
d---------+ 3 root root  4096 Dec 31  1969 .
drwxrws---+ 3  601 raven   36 Feb  6 13:55 ..
---------T  5  601 raven    0 Nov 20 00:08 read_inset.f.gz
---------T  5  601 raven    0 Nov 20 00:08 readbc.f.gz
---------T  5  601 raven    0 Nov 20 00:08 readcn.f.gz
---------T  5  601 raven    0 Nov 20 00:08 readinp.f.gz
---------T  5  601 raven    0 Nov 20 00:08 readinp_v1_2.f.gz
---------T  5  601 raven    0 Nov 20 00:08 readinp_v1_3.f.gz
---------T  5  601 raven    0 Nov 20 00:08 rotatept.f.gz
d---------+ 2 root root   118 Feb  6 13:54 save1
---------T  5  601 raven    0 Nov 20 00:08 sepvec.f.gz
---------T  5  601 raven    0 Nov 20 00:08 shadow.f.gz
---------T  5  601 raven    0 Nov 20 00:08 snksrc.f.gz
---------T  5  601 raven    0 Nov 20 00:08 source.f.gz
---------T  5  601 raven    0 Nov 20 00:08 step.f.gz
---------T  5  601 raven    0 Nov 20 00:08 stoprog.f.gz
---------T  5  601 raven    0 Nov 20 00:08 summer6.f.gz
---------T  5  601 raven    0 Nov 20 00:08 totforc.f.gz
---------T  5  601 raven    0 Nov 20 00:08 tritet.f.gz
---------T  5  601 raven    0 Nov 20 00:08 wallrsd.f.gz
---------T  5  601 raven    0 Nov 20 00:08 wheat.f.gz
---------T  5  601 raven    0 Nov 20 00:08 write_inset.f.gz
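For what it's worth, zero-length files with only the sticky bit set (mode 1000, shown as ---------T) look like DHT link-to pointer files. A sketch of how one might confirm that from the brick side, assuming getfattr is installed and BRICK_DIR is adjusted to the actual brick path (both are assumptions, not from the listing above):

```shell
# Hypothetical brick path -- substitute your own.
BRICK_DIR=/data/brick02bkp/homegfs_bkp

# DHT linkto files are zero-length regular files whose mode is exactly
# 1000 (sticky bit only).  List candidates without deleting anything:
find "$BRICK_DIR" -type f -perm 1000 -size 0

# Before touching a candidate, check for the linkto xattr that marks it
# as a DHT pointer rather than real data:
getfattr -n trusted.glusterfs.dht.linkto -e text /path/to/candidate/file
```

If the xattr is present, the file is a pointer, not data; if it is absent, treat the file as suspect and investigate before removing anything.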


This is gluster 3.6.2 on a distributed gluster volume that resides on a
single machine: both bricks are on the same host, one per RAID-6 array.

# df -h | grep brick
/dev/mapper/vg01-lvol1                       88T   22T   66T  25% /data/brick01bkp
/dev/mapper/vg02-lvol1                       88T   22T   66T  26% /data/brick02bkp

# gluster volume info homegfs_bkp
Volume Name: homegfs_bkp
Type: Distribute
Volume ID: 96de8872-d957-4205-bf5a-076e3f35b294
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gfsib01bkp.corvidtec.com:/data/brick01bkp/homegfs_bkp
Brick2: gfsib01bkp.corvidtec.com:/data/brick02bkp/homegfs_bkp
Options Reconfigured:
storage.owner-gid: 100
performance.io-thread-count: 32
server.allow-insecure: on
network.ping-timeout: 10
performance.cache-size: 128MB
performance.write-behind-window-size: 128MB
server.manage-gids: on
changelog.rollover-time: 15
changelog.fsync-interval: 3



===============================
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
David.Robinson at corvidtec.com
http://www.corvidtechnologies.com


