[Bugs] [Bug 1368639] New: Cannot create regular file or cannot create directory when copying files to glusterfs using fuse

bugzilla at redhat.com
Sat Aug 20 09:34:09 UTC 2016


https://bugzilla.redhat.com/show_bug.cgi?id=1368639

            Bug ID: 1368639
           Summary: Cannot create regular file or cannot create directory
                    when copying files to glusterfs using fuse
           Product: GlusterFS
           Version: 3.7.13
         Component: stripe
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: forbee at yeah.net
                CC: bugs at gluster.org



Description of problem:
  Recently I upgraded from version 3.6.4 to version 3.7.13 on four
CentOS-7-x86_64-1511 nodes. I created a striped replicated volume with the
command 'gluster volume create vol0 stripe 2 replica 2
172.16.52.{115,108,100,117}:/data/gluster/brick0/brick'. Copying directories
and files into the volume through the FUSE mount does not work correctly: cp
fails with "cannot create regular file" and "cannot create directory" errors.
  Glusterfs version 3.6.4 works fine with the same configuration.
  Glusterfs version 3.7.13 works fine when using a distributed replicated
volume.
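For comparison, a distributed replicated volume over the same four nodes
(the layout that works for me on 3.7.13) can be created by dropping the
stripe count. The vol1 name and brick1 path below are only placeholders;
each volume needs its own brick directories:

  gluster volume create vol1 replica 2 \
      172.16.52.{115,108,100,117}:/data/gluster/brick1/brick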

Version-Release number of selected component (if applicable):
I installed glusterfs from the source tarball glusterfs-3.7.13.tar.gz.
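The build itself is the usual autotools sequence (a sketch; any
site-specific configure flags are omitted here):

  tar xzf glusterfs-3.7.13.tar.gz
  cd glusterfs-3.7.13
  ./configure
  make && make install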

How reproducible:


Steps to Reproduce:
1. Prepare four Linux nodes (mine run CentOS-7-x86_64-1511).
2. Install glusterfs from the source tarball glusterfs-3.7.13.tar.gz.
3. Create a striped replicated volume with a command like
'gluster volume create vol0 stripe 2 replica 2
172.16.52.{115,108,100,117}:/data/gluster/brick0/brick'.
4. Start the volume: gluster volume start vol0
5. Mount the glusterfs volume: mount -t glusterfs localhost:/vol0 /mnt/vol0
6. Copy a non-empty directory to the volume (a consolidated script follows
these steps):
cp -r /home/admin/Downloads /mnt/vol0/
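Putting the steps together, a minimal reproduction sketch, run as root on the
first node. It assumes /data/gluster/brick0/brick already exists on every
node; the mkdir for the mount point is my addition and may be unnecessary:

  gluster peer probe 172.16.52.108
  gluster peer probe 172.16.52.100
  gluster peer probe 172.16.52.117
  gluster volume create vol0 stripe 2 replica 2 \
      172.16.52.{115,108,100,117}:/data/gluster/brick0/brick
  gluster volume start vol0
  mkdir -p /mnt/vol0
  mount -t glusterfs localhost:/vol0 /mnt/vol0
  cp -r /home/admin/Downloads /mnt/vol0/    # fails as described below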

Actual results:
cp fails partway through with errors such as "cannot create regular file ...:
No such file or directory" and "cannot create directory ...: No data
available"; the partially copied entries then cannot be listed or removed
(ls and rm also report "No data available").

Expected results:
The copy completes without errors and the files are readable from the mount.

Additional info:
1. glusterfs version 3.6.4 works fine with the same configuration.
2. glusterfs version 3.7.13 works fine when using a distributed replicated
   volume.
3. my steps:
  1) installed glusterfs from the source tarball glusterfs-3.7.13.tar.gz on
     all 4 nodes.
  2) on node 172.16.52.115, executed the following commands:
     [root@localhost vol0]# gluster peer probe 172.16.52.108
     [root@localhost vol0]# gluster peer probe 172.16.52.100
     [root@localhost vol0]# gluster peer probe 172.16.52.117
     [root@localhost vol0]# gluster volume create vol0 stripe 2 replica 2
172.16.52.{115,108,100,117}:/data/gluster/brick0/brick
     [root@localhost vol0]# gluster volume start vol0
     [root@localhost vol0]# mount -t glusterfs localhost:/vol0 /mnt/vol0
     [root@localhost vol0]# cp -r /home/admin/Downloads ./
       ......
       cp: cannot create regular file
‘./Downloads/install/glusterfs-3.7.13.tar.gz’: No such file or directory
       cp: cannot create directory ‘./Downloads/install/glusterfs-3.7.13’: No
data available
       ......
     [root@localhost vol0]# ls
       ls: cannot access Downloads: No data available
       Downloads

     [root@localhost vol0]# rm -rf Downloads
       rm: cannot remove ‘Downloads’: No data available
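On Linux, "No data available" is the message for errno ENODATA, which
getxattr(2) returns when a requested extended attribute does not exist. A
diagnostic sketch to see which trusted.* xattrs the bricks actually hold on
the half-created directory (getfattr is in the attr package; the path assumes
the brick layout above):

  getfattr -d -m . -e hex /data/gluster/brick0/brick/Downloads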

     [root@localhost vol0]# gluster volume info

       Volume Name: vol0
       Type: Striped-Replicate
       Volume ID: e17a8660-2eb4-45cb-9cdf-8b1785893f00
       Status: Started
       Number of Bricks: 1 x 2 x 2 = 4
       Transport-type: tcp
       Bricks:
       Brick1: 172.16.52.115:/data/gluster/brick0/brick
       Brick2: 172.16.52.108:/data/gluster/brick0/brick
       Brick3: 172.16.52.100:/data/gluster/brick0/brick
       Brick4: 172.16.52.117:/data/gluster/brick0/brick

     [root@localhost vol0]# gluster volume heal vol0 info
       Brick 172.16.52.115:/data/gluster/brick0/brick
       Status: Connected
       Number of entries: 0

       Brick 172.16.52.108:/data/gluster/brick0/brick
       Status: Connected
       Number of entries: 0

       Brick 172.16.52.100:/data/gluster/brick0/brick
       Status: Connected
       Number of entries: 0

       Brick 172.16.52.117:/data/gluster/brick0/brick
       Status: Connected
       Number of entries: 0

     [root@localhost vol0]# gluster volume status
Status of volume: vol0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.16.52.115:/data/gluster/brick0/brick  49152     0          Y       2698
Brick 172.16.52.108:/data/gluster/brick0/brick  49152     0          Y       2622
Brick 172.16.52.100:/data/gluster/brick0/brick  49152     0          Y       2300
Brick 172.16.52.117:/data/gluster/brick0/brick  49152     0          Y       2466
NFS Server on localhost                     2049      0          Y       2720 
Self-heal Daemon on localhost               N/A       N/A        Y       2727 
NFS Server on 172.16.52.108                 2049      0          Y       2644 
Self-heal Daemon on 172.16.52.108           N/A       N/A        Y       2651 
NFS Server on 172.16.52.100                 2049      0          Y       2323 
Self-heal Daemon on 172.16.52.100           N/A       N/A        Y       2329 
NFS Server on 172.16.52.117                 2049      0          Y       2488 
Self-heal Daemon on 172.16.52.117           N/A       N/A        Y       2495 

Task Status of Volume vol0
------------------------------------------------------------------------------
There are no active volume tasks

     [root@localhost vol0]# cat /var/log/glusterfs/mnt-vol0.log | grep " E "
[2016-08-20 08:11:28.347465] E [MSGID: 114031]
[client-rpc-fops.c:321:client3_3_mkdir_cbk] 0-vol0-client-3: remote operation
failed. Path: /Downloads [No data available]
[2016-08-20 08:11:28.347488] E [MSGID: 114031]
[client-rpc-fops.c:321:client3_3_mkdir_cbk] 0-vol0-client-2: remote operation
failed. Path: /Downloads [No data available]
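For reference, the bracketed "[No data available]" in these client-side mkdir
callbacks is the strerror text for ENODATA (61 on Linux); the same string
appears in the kernel headers:

  $ grep ENODATA /usr/include/asm-generic/errno.h
  #define ENODATA         61      /* No data available */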
