[Gluster-devel] [Gluster-users] gluster 3.7.9 permission denied and mv errors

Glomski, Patrick patrick.glomski at corvidtec.com
Fri Apr 29 14:21:17 UTC 2016


Raghavendra,

This error is occurring in a shell script moving files between directories
on a FUSE mount when overwriting an old file with a newer file (it's a
backup script, moving an incremental backup of a file into a 'rolling full
backup' directory).

As a temporary workaround, we parse the output of this shell script for
move errors and handle them as they occur. Simply re-running the move
fails, so we stat the destination (to see whether we can learn anything
about the kind of file that triggers this behavior), delete the
destination, and try the move again (success!). Typical output is as
follows:

/bin/mv: cannot move `./homegfs/hpc_shared/motorsports/gmics/Raven/p11/149/data_collected4' to `../bkp00/./homegfs/hpc_shared/motorsports/gmics/Raven/p11/149/data_collected4': File exists
  File: `../bkp00/./homegfs/hpc_shared/motorsports/gmics/Raven/p11/149/data_collected4'
  Size: 1714            Blocks: 4          IO Block: 131072 regular file
Device: 13h/19d Inode: 11051758947722304158  Links: 1
Access: (0660/-rw-rw----)  Uid: (  628/pkeistler)   Gid: ( 2020/   gmirl)
Access: 2016-01-20 17:20:45.000000000 -0500
Modify: 2015-11-06 15:20:41.000000000 -0500
Change: 2016-01-27 03:35:00.434712146 -0500
retry: renaming ./homegfs/hpc_shared/motorsports/gmics/Raven/p11/149/data_collected4 -> ../bkp00/./homegfs/hpc_shared/motorsports/gmics/Raven/p11/149/data_collected4

Not sure if that description rings any bells as to what the problem might
be. If not, I've added some code to print the getfattr output for the
source and destination files on all of the bricks (before we delete the
destination), and I'll post the results to this thread the next time we
hit the issue.
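
For reference, the retry logic amounts to something like this (a minimal
sketch, not our production script; $src and $dst stand for the incremental
file and its destination in the rolling full backup):

    if ! /bin/mv -f "$src" "$dst"; then
        stat "$dst"                 # record what is blocking the move
        /bin/rm -f "$dst"           # remove the stale destination
        echo "retry: renaming $src -> $dst"
        /bin/mv -f "$src" "$dst"    # second attempt succeeds
    fi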

Thanks,
Patrick


On Fri, Apr 29, 2016 at 8:15 AM, Raghavendra G <raghavendra at gluster.com>
wrote:

>
>
> On Wed, Apr 13, 2016 at 10:00 PM, David F. Robinson <
> david.robinson at corvidtec.com> wrote:
>
>> I am running into two problems (possibly related?).
>>
>> 1) Every once in a while, when I do an 'rm -rf DIRNAME', it comes back
>> with an error:
>>         rm: cannot remove `DIRNAME': Directory not empty
>>
>>         If I retry the 'rm -rf' after the error, it deletes the
>> directory.  The issue is that I have scripts that clean up directories,
>> and they fail unless I go through the deletes a second time.
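>>
>> For illustration, working around it amounts to this (a sketch; DIRNAME
>> stands for whatever directory is being cleaned up):
>>
>>     rm -rf "$DIRNAME" || rm -rf "$DIRNAME"   # second pass removes it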
>>
>
> What kind of mount are you using? Is it a FUSE or NFS mount? We recently
> saw a similar issue on RHEL6 NFS clients, where rm -rf failed with
> ENOTEMPTY in some specific cases.
>
>
>>
>> 2) I have different scripts that move large numbers of files (5-25k)
>> from one directory to another.  Sometimes I receive an error:
>>     /bin/mv: cannot move `xyz' to `../bkp00/xyz': File exists
>>
>
> Does ./bkp00/xyz exist on the backend? If so, what is the value of the
> gfid xattr (key: "trusted.gfid") for "xyz" and "./bkp00/xyz" on the
> backend bricks (I need the gfid from all of the bricks) when this issue
> happens?
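>
> For example, on each brick server something like the following should
> show it (a sketch; the brick path is illustrative, taken from your
> volume info):
>
>     getfattr -n trusted.gfid -e hex \
>         /data/brick01bkp/gfsbackup/bkp00/xyz
>
> If the same path reports different gfids on different bricks, that could
> point to a stale dht linkto entry left behind by an earlier rename or
> rebalance.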
>
>
>>     The move is done using '/bin/mv -f', so it should overwrite the
>> file if it exists.  I have tested this with hundreds of files, and it
>> works as expected.  However, every few days the script that moves the
>> files has problems with 1 or 2 files.  That is one or two failures out
>> of roughly 10,000 files moved, and I cannot find any reason for the
>> intermittent problem.
>>
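>> To illustrate the expectation, the trivial case always works (a sketch,
>> not the actual script):
>>
>>     touch a b
>>     /bin/mv -f a b    # rename(2) replaces b in place; no "File exists"
>>
>> so whatever happens on those 1 or 2 files must be making the rename
>> itself fail.
>>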
>> Setup details for my gluster configuration shown below.
>>
>> [root at gfs01bkp logs]# gluster volume info
>>
>> Volume Name: gfsbackup
>> Type: Distribute
>> Volume ID: e78d5123-d9bc-4d88-9c73-61d28abf0b41
>> Status: Started
>> Number of Bricks: 7
>> Transport-type: tcp
>> Bricks:
>> Brick1: gfsib01bkp.corvidtec.com:/data/brick01bkp/gfsbackup
>> Brick2: gfsib01bkp.corvidtec.com:/data/brick02bkp/gfsbackup
>> Brick3: gfsib02bkp.corvidtec.com:/data/brick01bkp/gfsbackup
>> Brick4: gfsib02bkp.corvidtec.com:/data/brick02bkp/gfsbackup
>> Brick5: gfsib02bkp.corvidtec.com:/data/brick03bkp/gfsbackup
>> Brick6: gfsib02bkp.corvidtec.com:/data/brick04bkp/gfsbackup
>> Brick7: gfsib02bkp.corvidtec.com:/data/brick05bkp/gfsbackup
>> Options Reconfigured:
>> nfs.disable: off
>> server.allow-insecure: on
>> storage.owner-gid: 100
>> server.manage-gids: on
>> cluster.lookup-optimize: on
>> server.event-threads: 8
>> client.event-threads: 8
>> changelog.changelog: off
>> storage.build-pgfid: on
>> performance.readdir-ahead: on
>> diagnostics.brick-log-level: WARNING
>> diagnostics.client-log-level: WARNING
>> cluster.rebal-throttle: aggressive
>> performance.cache-size: 1024MB
>> performance.write-behind-window-size: 10MB
>>
>>
>> [root at gfs01bkp logs]# rpm -qa | grep gluster
>> glusterfs-server-3.7.9-1.el6.x86_64
>> glusterfs-debuginfo-3.7.9-1.el6.x86_64
>> glusterfs-api-3.7.9-1.el6.x86_64
>> glusterfs-resource-agents-3.7.9-1.el6.noarch
>> gluster-nagios-common-0.1.1-0.el6.noarch
>> glusterfs-libs-3.7.9-1.el6.x86_64
>> glusterfs-fuse-3.7.9-1.el6.x86_64
>> glusterfs-extra-xlators-3.7.9-1.el6.x86_64
>> glusterfs-geo-replication-3.7.9-1.el6.x86_64
>> glusterfs-3.7.9-1.el6.x86_64
>> glusterfs-cli-3.7.9-1.el6.x86_64
>> glusterfs-devel-3.7.9-1.el6.x86_64
>> glusterfs-rdma-3.7.9-1.el6.x86_64
>> samba-vfs-glusterfs-4.1.11-2.el6.x86_64
>> glusterfs-client-xlators-3.7.9-1.el6.x86_64
>> glusterfs-api-devel-3.7.9-1.el6.x86_64
>> python-gluster-3.7.9-1.el6.noarch
>>
>>
>
> --
> Raghavendra G