[Bugs] [Bug 1320892] New: Over some time Files which were accessible become inaccessible(music files)

bugzilla at redhat.com bugzilla at redhat.com
Thu Mar 24 09:39:31 UTC 2016


https://bugzilla.redhat.com/show_bug.cgi?id=1320892

            Bug ID: 1320892
           Summary: Over some time Files which were accessible become
                    inaccessible(music files)
           Product: GlusterFS
           Version: 3.7.9
         Component: posix
          Keywords: ZStream
          Assignee: bugs at gluster.org
          Reporter: vmallika at redhat.com
                CC: bugs at gluster.org, byarlaga at redhat.com,
                    nbalacha at redhat.com, nchilaka at redhat.com,
                    pkarampu at redhat.com, rabhat at redhat.com,
                    rhs-bugs at redhat.com, sasundar at redhat.com,
                    skoduri at redhat.com, vmallika at redhat.com
        Depends On: 1302355, 1320818
            Blocks: 1320887



+++ This bug was initially created as a clone of Bug #1320818 +++

On a tiered volume (dist-rep hot tier over an EC cold tier), I mounted the
volume on my desktop using NFS and copied some mp3 files into it.
Then, using the VLC player, I played the files in shuffle mode (about 30
songs).

Initially about 10 mp3 files played to completion, and then we started
getting permission denied errors as below:



VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Baadshah
(2013) ~320Kbps/04 - Banthi Poola Janaki [www.AtoZmp3.Net].mp3" (Permission
denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Baadshah%20%282013%29%20~320Kbps/04%20-%20Banthi%20Poola%20Janaki%20%5Bwww.AtoZmp3.Net%5D.mp3'.
Check the log for details.
File reading failed:





gluster version: 3.7.5-17

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-01-27
10:20:41 EST ---

This bug is automatically being proposed for the current z-stream release of
Red Hat Gluster Storage 3 by setting the release flag 'rhgs-3.1.z' to '?'.

If this bug should be proposed for a different release, please manually change
the proposed release flag.

--- Additional comment from nchilaka on 2016-01-28 00:20:54 EST ---

gluster v info:
[root at dhcp37-202 ~]# gluster v info nagvol

Volume Name: nagvol
Type: Tier
Volume ID: 5972ca44-130a-4543-8cc0-abf76a133a34
Status: Started
Number of Bricks: 36
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 6 x 2 = 12
Brick1: 10.70.37.120:/rhs/brick7/nagvol_hot
Brick2: 10.70.37.60:/rhs/brick7/nagvol_hot
Brick3: 10.70.37.69:/rhs/brick7/nagvol_hot
Brick4: 10.70.37.101:/rhs/brick7/nagvol_hot
Brick5: 10.70.35.163:/rhs/brick7/nagvol_hot
Brick6: 10.70.35.173:/rhs/brick7/nagvol_hot
Brick7: 10.70.35.232:/rhs/brick7/nagvol_hot
Brick8: 10.70.35.176:/rhs/brick7/nagvol_hot
Brick9: 10.70.35.222:/rhs/brick7/nagvol_hot
Brick10: 10.70.35.155:/rhs/brick7/nagvol_hot
Brick11: 10.70.37.195:/rhs/brick7/nagvol_hot
Brick12: 10.70.37.202:/rhs/brick7/nagvol_hot
Cold Tier:
Cold Tier Type : Distributed-Disperse
Number of Bricks: 2 x (8 + 4) = 24
Brick13: 10.70.37.202:/rhs/brick1/nagvol
Brick14: 10.70.37.195:/rhs/brick1/nagvol
Brick15: 10.70.35.155:/rhs/brick1/nagvol
Brick16: 10.70.35.222:/rhs/brick1/nagvol
Brick17: 10.70.35.108:/rhs/brick1/nagvol
Brick18: 10.70.35.44:/rhs/brick1/nagvol
Brick19: 10.70.35.89:/rhs/brick1/nagvol
Brick20: 10.70.35.231:/rhs/brick1/nagvol
Brick21: 10.70.35.176:/rhs/brick1/nagvol
Brick22: 10.70.35.232:/rhs/brick1/nagvol
Brick23: 10.70.35.173:/rhs/brick1/nagvol
Brick24: 10.70.35.163:/rhs/brick1/nagvol
Brick25: 10.70.37.101:/rhs/brick1/nagvol
Brick26: 10.70.37.69:/rhs/brick1/nagvol
Brick27: 10.70.37.60:/rhs/brick1/nagvol
Brick28: 10.70.37.120:/rhs/brick1/nagvol
Brick29: 10.70.37.202:/rhs/brick2/nagvol
Brick30: 10.70.37.195:/rhs/brick2/nagvol
Brick31: 10.70.35.155:/rhs/brick2/nagvol
Brick32: 10.70.35.222:/rhs/brick2/nagvol
Brick33: 10.70.35.108:/rhs/brick2/nagvol
Brick34: 10.70.35.44:/rhs/brick2/nagvol
Brick35: 10.70.35.89:/rhs/brick2/nagvol
Brick36: 10.70.35.231:/rhs/brick2/nagvol
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
features.quota-deem-statfs: off
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on



[root at dhcp37-202 ~]# gluster v status nagvol
Status of volume: nagvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.37.120:/rhs/brick7/nagvol_hot   49156     0          Y       32513
Brick 10.70.37.60:/rhs/brick7/nagvol_hot    49156     0          Y       4060 
Brick 10.70.37.69:/rhs/brick7/nagvol_hot    49156     0          Y       32442
Brick 10.70.37.101:/rhs/brick7/nagvol_hot   49156     0          Y       4199 
Brick 10.70.35.163:/rhs/brick7/nagvol_hot   49156     0          Y       617  
Brick 10.70.35.173:/rhs/brick7/nagvol_hot   49156     0          Y       32751
Brick 10.70.35.232:/rhs/brick7/nagvol_hot   49156     0          Y       32361
Brick 10.70.35.176:/rhs/brick7/nagvol_hot   49156     0          Y       32383
Brick 10.70.35.222:/rhs/brick7/nagvol_hot   49155     0          Y       22713
Brick 10.70.35.155:/rhs/brick7/nagvol_hot   49155     0          Y       22505
Brick 10.70.37.195:/rhs/brick7/nagvol_hot   49156     0          Y       25832
Brick 10.70.37.202:/rhs/brick7/nagvol_hot   49156     0          Y       26275
Cold Bricks:
Brick 10.70.37.202:/rhs/brick1/nagvol       49152     0          Y       16950
Brick 10.70.37.195:/rhs/brick1/nagvol       49152     0          Y       16702
Brick 10.70.35.155:/rhs/brick1/nagvol       49152     0          Y       13578
Brick 10.70.35.222:/rhs/brick1/nagvol       49152     0          Y       13546
Brick 10.70.35.108:/rhs/brick1/nagvol       49152     0          Y       4675 
Brick 10.70.35.44:/rhs/brick1/nagvol        49152     0          Y       12288
Brick 10.70.35.89:/rhs/brick1/nagvol        49152     0          Y       12261
Brick 10.70.35.231:/rhs/brick1/nagvol       49152     0          Y       22810
Brick 10.70.35.176:/rhs/brick1/nagvol       49152     0          Y       22781
Brick 10.70.35.232:/rhs/brick1/nagvol       49152     0          Y       22783
Brick 10.70.35.173:/rhs/brick1/nagvol       49152     0          Y       22795
Brick 10.70.35.163:/rhs/brick1/nagvol       49152     0          Y       22805
Brick 10.70.37.101:/rhs/brick1/nagvol       49152     0          Y       22847
Brick 10.70.37.69:/rhs/brick1/nagvol        49152     0          Y       22847
Brick 10.70.37.60:/rhs/brick1/nagvol        49152     0          Y       22895
Brick 10.70.37.120:/rhs/brick1/nagvol       49152     0          Y       22916
Brick 10.70.37.202:/rhs/brick2/nagvol       49153     0          Y       16969
Brick 10.70.37.195:/rhs/brick2/nagvol       49153     0          Y       16721
Brick 10.70.35.155:/rhs/brick2/nagvol       49153     0          Y       13597
Brick 10.70.35.222:/rhs/brick2/nagvol       49153     0          Y       13565
Brick 10.70.35.108:/rhs/brick2/nagvol       49153     0          Y       4694 
Brick 10.70.35.44:/rhs/brick2/nagvol        49153     0          Y       12307
Brick 10.70.35.89:/rhs/brick2/nagvol        49153     0          Y       12280
Brick 10.70.35.231:/rhs/brick2/nagvol       49153     0          Y       22829
NFS Server on localhost                     2049      0          Y       26295
Self-heal Daemon on localhost               N/A       N/A        Y       26303
Quota Daemon on localhost                   N/A       N/A        Y       26311
NFS Server on 10.70.37.101                  2049      0          Y       4219 
Self-heal Daemon on 10.70.37.101            N/A       N/A        Y       4227 
Quota Daemon on 10.70.37.101                N/A       N/A        Y       4235 
NFS Server on 10.70.37.69                   2049      0          Y       32462
Self-heal Daemon on 10.70.37.69             N/A       N/A        Y       32470
Quota Daemon on 10.70.37.69                 N/A       N/A        Y       32478
NFS Server on 10.70.37.195                  2049      0          Y       25852
Self-heal Daemon on 10.70.37.195            N/A       N/A        Y       25860
Quota Daemon on 10.70.37.195                N/A       N/A        Y       25868
NFS Server on 10.70.37.60                   2049      0          Y       4080 
Self-heal Daemon on 10.70.37.60             N/A       N/A        Y       4088 
Quota Daemon on 10.70.37.60                 N/A       N/A        Y       4096 
NFS Server on 10.70.37.120                  2049      0          Y       32533
Self-heal Daemon on 10.70.37.120            N/A       N/A        Y       32541
Quota Daemon on 10.70.37.120                N/A       N/A        Y       32549
NFS Server on 10.70.35.173                  2049      0          Y       303  
Self-heal Daemon on 10.70.35.173            N/A       N/A        Y       311  
Quota Daemon on 10.70.35.173                N/A       N/A        Y       319  
NFS Server on 10.70.35.232                  2049      0          Y       32381
Self-heal Daemon on 10.70.35.232            N/A       N/A        Y       32389
Quota Daemon on 10.70.35.232                N/A       N/A        Y       32397
NFS Server on 10.70.35.176                  2049      0          Y       32403
Self-heal Daemon on 10.70.35.176            N/A       N/A        Y       32411
Quota Daemon on 10.70.35.176                N/A       N/A        Y       32419
NFS Server on 10.70.35.231                  2049      0          Y       32446
Self-heal Daemon on 10.70.35.231            N/A       N/A        Y       32455
Quota Daemon on 10.70.35.231                N/A       N/A        Y       32463
NFS Server on 10.70.35.163                  2049      0          Y       637  
Self-heal Daemon on 10.70.35.163            N/A       N/A        Y       645  
Quota Daemon on 10.70.35.163                N/A       N/A        Y       653  
NFS Server on 10.70.35.222                  2049      0          Y       22733
Self-heal Daemon on 10.70.35.222            N/A       N/A        Y       22742
Quota Daemon on 10.70.35.222                N/A       N/A        Y       22750
NFS Server on 10.70.35.108                  2049      0          Y       13877
Self-heal Daemon on 10.70.35.108            N/A       N/A        Y       13885
Quota Daemon on 10.70.35.108                N/A       N/A        Y       13893
NFS Server on 10.70.35.155                  2049      0          Y       22525
Self-heal Daemon on 10.70.35.155            N/A       N/A        Y       22533
Quota Daemon on 10.70.35.155                N/A       N/A        Y       22541
NFS Server on 10.70.35.44                   2049      0          Y       21479
Self-heal Daemon on 10.70.35.44             N/A       N/A        Y       21487
Quota Daemon on 10.70.35.44                 N/A       N/A        Y       21495
NFS Server on 10.70.35.89                   2049      0          Y       20671
Self-heal Daemon on 10.70.35.89             N/A       N/A        Y       20679
Quota Daemon on 10.70.35.89                 N/A       N/A        Y       20687

Task Status of Volume nagvol
------------------------------------------------------------------------------
Task                 : Tier migration      
ID                   : 0870550a-70ba-4cd1-98da-b456059bd6cc
Status               : in progress         







Later, I changed some performance settings as per Rafi's instructions to
debug the issue:
[root at dhcp37-202 glusterfs]# gluster v info nagvol

Volume Name: nagvol
Type: Tier
Volume ID: 5972ca44-130a-4543-8cc0-abf76a133a34
Status: Started
Number of Bricks: 36
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 6 x 2 = 12
Brick1: 10.70.37.120:/rhs/brick7/nagvol_hot
Brick2: 10.70.37.60:/rhs/brick7/nagvol_hot
Brick3: 10.70.37.69:/rhs/brick7/nagvol_hot
Brick4: 10.70.37.101:/rhs/brick7/nagvol_hot
Brick5: 10.70.35.163:/rhs/brick7/nagvol_hot
Brick6: 10.70.35.173:/rhs/brick7/nagvol_hot
Brick7: 10.70.35.232:/rhs/brick7/nagvol_hot
Brick8: 10.70.35.176:/rhs/brick7/nagvol_hot
Brick9: 10.70.35.222:/rhs/brick7/nagvol_hot
Brick10: 10.70.35.155:/rhs/brick7/nagvol_hot
Brick11: 10.70.37.195:/rhs/brick7/nagvol_hot
Brick12: 10.70.37.202:/rhs/brick7/nagvol_hot
Cold Tier:
Cold Tier Type : Distributed-Disperse
Number of Bricks: 2 x (8 + 4) = 24
Brick13: 10.70.37.202:/rhs/brick1/nagvol
Brick14: 10.70.37.195:/rhs/brick1/nagvol
Brick15: 10.70.35.155:/rhs/brick1/nagvol
Brick16: 10.70.35.222:/rhs/brick1/nagvol
Brick17: 10.70.35.108:/rhs/brick1/nagvol
Brick18: 10.70.35.44:/rhs/brick1/nagvol
Brick19: 10.70.35.89:/rhs/brick1/nagvol
Brick20: 10.70.35.231:/rhs/brick1/nagvol
Brick21: 10.70.35.176:/rhs/brick1/nagvol
Brick22: 10.70.35.232:/rhs/brick1/nagvol
Brick23: 10.70.35.173:/rhs/brick1/nagvol
Brick24: 10.70.35.163:/rhs/brick1/nagvol
Brick25: 10.70.37.101:/rhs/brick1/nagvol
Brick26: 10.70.37.69:/rhs/brick1/nagvol
Brick27: 10.70.37.60:/rhs/brick1/nagvol
Brick28: 10.70.37.120:/rhs/brick1/nagvol
Brick29: 10.70.37.202:/rhs/brick2/nagvol
Brick30: 10.70.37.195:/rhs/brick2/nagvol
Brick31: 10.70.35.155:/rhs/brick2/nagvol
Brick32: 10.70.35.222:/rhs/brick2/nagvol
Brick33: 10.70.35.108:/rhs/brick2/nagvol
Brick34: 10.70.35.44:/rhs/brick2/nagvol
Brick35: 10.70.35.89:/rhs/brick2/nagvol
Brick36: 10.70.35.231:/rhs/brick2/nagvol
Options Reconfigured:
cluster.watermark-hi: 50
cluster.watermark-low: 30
cluster.tier-mode: cache
features.ctr-enabled: on
features.quota-deem-statfs: off
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
[root at dhcp37-202 glusterfs]# gluster v status nagvol
Status of volume: nagvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.37.120:/rhs/brick7/nagvol_hot   49156     0          Y       32513
Brick 10.70.37.60:/rhs/brick7/nagvol_hot    49156     0          Y       4060 
Brick 10.70.37.69:/rhs/brick7/nagvol_hot    49156     0          Y       32442
Brick 10.70.37.101:/rhs/brick7/nagvol_hot   49156     0          Y       4199 
Brick 10.70.35.163:/rhs/brick7/nagvol_hot   49156     0          Y       617  
Brick 10.70.35.173:/rhs/brick7/nagvol_hot   49156     0          Y       32751
Brick 10.70.35.232:/rhs/brick7/nagvol_hot   49156     0          Y       32361
Brick 10.70.35.176:/rhs/brick7/nagvol_hot   49156     0          Y       32383
Brick 10.70.35.222:/rhs/brick7/nagvol_hot   49155     0          Y       22713
Brick 10.70.35.155:/rhs/brick7/nagvol_hot   49155     0          Y       22505
Brick 10.70.37.195:/rhs/brick7/nagvol_hot   49156     0          Y       25832
Brick 10.70.37.202:/rhs/brick7/nagvol_hot   49156     0          Y       26275
Cold Bricks:
Brick 10.70.37.202:/rhs/brick1/nagvol       49152     0          Y       16950
Brick 10.70.37.195:/rhs/brick1/nagvol       49152     0          Y       16702
Brick 10.70.35.155:/rhs/brick1/nagvol       49152     0          Y       13578
Brick 10.70.35.222:/rhs/brick1/nagvol       49152     0          Y       13546
Brick 10.70.35.108:/rhs/brick1/nagvol       49152     0          Y       4675 
Brick 10.70.35.44:/rhs/brick1/nagvol        49152     0          Y       12288
Brick 10.70.35.89:/rhs/brick1/nagvol        49152     0          Y       2668 
Brick 10.70.35.231:/rhs/brick1/nagvol       49152     0          Y       22810
Brick 10.70.35.176:/rhs/brick1/nagvol       49152     0          Y       22781
Brick 10.70.35.232:/rhs/brick1/nagvol       49152     0          Y       22783
Brick 10.70.35.173:/rhs/brick1/nagvol       49152     0          Y       22795
Brick 10.70.35.163:/rhs/brick1/nagvol       49152     0          Y       22805
Brick 10.70.37.101:/rhs/brick1/nagvol       49152     0          Y       22847
Brick 10.70.37.69:/rhs/brick1/nagvol        49152     0          Y       22847
Brick 10.70.37.60:/rhs/brick1/nagvol        49152     0          Y       22895
Brick 10.70.37.120:/rhs/brick1/nagvol       49152     0          Y       22916
Brick 10.70.37.202:/rhs/brick2/nagvol       49153     0          Y       16969
Brick 10.70.37.195:/rhs/brick2/nagvol       49153     0          Y       16721
Brick 10.70.35.155:/rhs/brick2/nagvol       49153     0          Y       13597
Brick 10.70.35.222:/rhs/brick2/nagvol       49153     0          Y       13565
Brick 10.70.35.108:/rhs/brick2/nagvol       49153     0          Y       4694 
Brick 10.70.35.44:/rhs/brick2/nagvol        49153     0          Y       12307
Brick 10.70.35.89:/rhs/brick2/nagvol        49153     0          Y       2683 
Brick 10.70.35.231:/rhs/brick2/nagvol       49153     0          Y       22829
NFS Server on localhost                     2049      0          Y       3356 
Self-heal Daemon on localhost               N/A       N/A        Y       3364 
Quota Daemon on localhost                   N/A       N/A        Y       3372 
NFS Server on 10.70.37.195                  2049      0          Y       2354 
Self-heal Daemon on 10.70.37.195            N/A       N/A        Y       2362 
Quota Daemon on 10.70.37.195                N/A       N/A        Y       2370 
NFS Server on 10.70.37.120                  2049      0          Y       9573 
Self-heal Daemon on 10.70.37.120            N/A       N/A        Y       9581 
Quota Daemon on 10.70.37.120                N/A       N/A        Y       9589 
NFS Server on 10.70.37.101                  2049      0          Y       13331
Self-heal Daemon on 10.70.37.101            N/A       N/A        Y       13339
Quota Daemon on 10.70.37.101                N/A       N/A        Y       13347
NFS Server on 10.70.37.60                   2049      0          Y       13071
Self-heal Daemon on 10.70.37.60             N/A       N/A        Y       13079
Quota Daemon on 10.70.37.60                 N/A       N/A        Y       13087
NFS Server on 10.70.37.69                   2049      0          Y       9368 
Self-heal Daemon on 10.70.37.69             N/A       N/A        Y       9376 
Quota Daemon on 10.70.37.69                 N/A       N/A        Y       9384 
NFS Server on 10.70.35.176                  2049      0          Y       9438 
Self-heal Daemon on 10.70.35.176            N/A       N/A        Y       9446 
Quota Daemon on 10.70.35.176                N/A       N/A        Y       9454 
NFS Server on 10.70.35.155                  2049      0          Y       31698
Self-heal Daemon on 10.70.35.155            N/A       N/A        Y       31706
Quota Daemon on 10.70.35.155                N/A       N/A        Y       31714
NFS Server on 10.70.35.232                  2049      0          Y       9301 
Self-heal Daemon on 10.70.35.232            N/A       N/A        Y       9309 
Quota Daemon on 10.70.35.232                N/A       N/A        Y       9317 
NFS Server on 10.70.35.163                  2049      0          Y       9935 
Self-heal Daemon on 10.70.35.163            N/A       N/A        Y       9943 
Quota Daemon on 10.70.35.163                N/A       N/A        Y       9951 
NFS Server on 10.70.35.89                   2049      0          Y       2483 
Self-heal Daemon on 10.70.35.89             N/A       N/A        Y       2560 
Quota Daemon on 10.70.35.89                 N/A       N/A        Y       2597 
NFS Server on 10.70.35.222                  2049      0          Y       32079
Self-heal Daemon on 10.70.35.222            N/A       N/A        Y       32087
Quota Daemon on 10.70.35.222                N/A       N/A        Y       32095
NFS Server on 10.70.35.173                  2049      0          Y       9724 
Self-heal Daemon on 10.70.35.173            N/A       N/A        Y       9732 
Quota Daemon on 10.70.35.173                N/A       N/A        Y       9740 
NFS Server on 10.70.35.231                  2049      0          Y       9171 
Self-heal Daemon on 10.70.35.231            N/A       N/A        Y       9179 
Quota Daemon on 10.70.35.231                N/A       N/A        Y       9187 
NFS Server on 10.70.35.44                   2049      0          Y       30600
Self-heal Daemon on 10.70.35.44             N/A       N/A        Y       30608
Quota Daemon on 10.70.35.44                 N/A       N/A        Y       30616
NFS Server on 10.70.35.108                  2049      0          Y       23000
Self-heal Daemon on 10.70.35.108            N/A       N/A        Y       23008
Quota Daemon on 10.70.35.108                N/A       N/A        Y       23016

Task Status of Volume nagvol
------------------------------------------------------------------------------
Task                 : Tier migration      
ID                   : 0870550a-70ba-4cd1-98da-b456059bd6cc
Status               : in progress         

[root at dhcp37-202 glusterfs]# 
[root at dhcp37-202 glusterfs]# 




[root at dhcp37-202 ~]# rpm -qa|grep gluster
glusterfs-client-xlators-3.7.5-17.el7rhgs.x86_64
glusterfs-server-3.7.5-17.el7rhgs.x86_64
gluster-nagios-addons-0.2.5-1.el7rhgs.x86_64
vdsm-gluster-4.16.30-1.3.el7rhgs.noarch
glusterfs-3.7.5-17.el7rhgs.x86_64
glusterfs-api-3.7.5-17.el7rhgs.x86_64
glusterfs-cli-3.7.5-17.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-17.el7rhgs.x86_64
glusterfs-debuginfo-3.7.5-17.el7rhgs.x86_64
gluster-nagios-common-0.2.3-1.el7rhgs.noarch
python-gluster-3.7.5-16.el7rhgs.noarch
glusterfs-libs-3.7.5-17.el7rhgs.x86_64
glusterfs-fuse-3.7.5-17.el7rhgs.x86_64
glusterfs-rdma-3.7.5-17.el7rhgs.x86_64




--> Mounted the volume on 4 different clients (all NFS mounts): 3 were RHEL
machines and 1 was the Fedora personal laptop where I had seen the problem.
--> Total of 16 gluster nodes.





================= Errors thrown by the VLC application ================
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Balupu
(2013)~128Kbps/02 - Yaevaindho [www.AtoZmp3.in].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Balupu%20(2013)~128Kbps/02%20-%20Yaevaindho%20%5Bwww.AtoZmp3.in%5D.mp3'.
Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/Laahiri Laahiri
Laahiri Lo/OHOHO_CHILAKAMMA at bugnine.net.mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/Laahiri%20Laahiri%20Laahiri%20Lo/OHOHO_CHILAKAMMA@bugnine.net.mp3'.
Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Baadshah
(2013) ~320Kbps/04 - Banthi Poola Janaki [www.AtoZmp3.Net].mp3" (Permission
denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Baadshah%20%282013%29%20~320Kbps/04%20-%20Banthi%20Poola%20Janaki%20%5Bwww.AtoZmp3.Net%5D.mp3'.
Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Balupu
(2013)~128Kbps/03 - Lucky Lucky Rai [www.AtoZmp3.in].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Balupu%20(2013)~128Kbps/03%20-%20Lucky%20Lucky%20Rai%20%5Bwww.AtoZmp3.in%5D.mp3'.
Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Baadshah
(2013) ~320Kbps/02 - Diamond Girl [www.AtoZmp3.Net].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Baadshah%20%282013%29%20~320Kbps/02%20-%20Diamond%20Girl%20%5Bwww.AtoZmp3.Net%5D.mp3'.
Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Sarocharu
(2012) ~320Kbps/02.Jaga Jaga Jagadeka Veera [www.AtoZmp3.Net].mp3" (Permission
denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Sarocharu%20(2012)%20~320Kbps/02.Jaga%20Jaga%20Jagadeka%20Veera%20%5Bwww.AtoZmp3.Net%5D.mp3'.
Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Sarocharu
(2012) ~320Kbps/01.Made For Each Other [www.AtoZmp3.Net].mp3" (Permission
denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Sarocharu%20(2012)%20~320Kbps/01.Made%20For%20Each%20Other%20%5Bwww.AtoZmp3.Net%5D.mp3'.
Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Yevadu
(2013) ~320 Kbps/06 - Pimple Dimple [www.AtoZmp3.in].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Yevadu%20(2013)%20~320%20Kbps/06%20-%20Pimple%20Dimple%20%5Bwww.AtoZmp3.in%5D.mp3'.
Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_sa
rajkumar/Yavvana Veena.mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/01_sa%20rajkumar/Yavvana%20Veena.mp3'.
Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Balupu
(2013)~128Kbps/04 - Padipoyaanila [www.AtoZmp3.in].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Balupu%20(2013)~128Kbps/04%20-%20Padipoyaanila%20%5Bwww.AtoZmp3.in%5D.mp3'.
Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Yevadu
(2013) ~320 Kbps/05 - Oye Oye [www.AtoZmp3.in].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Yevadu%20(2013)%20~320%20Kbps/05%20-%20Oye%20Oye%20%5Bwww.AtoZmp3.in%5D.mp3'.
Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Sarocharu
(2012) ~320Kbps/05.Kaatuka Kallu [www.AtoZmp3.Net].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Sarocharu%20(2012)%20~320Kbps/05.Kaatuka%20Kallu%20%5Bwww.AtoZmp3.Net%5D.mp3'.
Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/DADDY/03 VANA
VANA.MP3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/DADDY/03%20VANA%20VANA.MP3'. Check the
log for details.
File reading failed:
VLC could not open the file
"/mnt/nagvol/my_lappy/01_Telugu/Deviputhrudu/03___OKATA_RENDA.MP3" (Permission
denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/Deviputhrudu/03___OKATA_RENDA.MP3'.
Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_sa
rajkumar/Panchadara_Chilaka-Anukunnana.mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/01_sa%20rajkumar/Panchadara_Chilaka-Anukunnana.mp3'.
Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/DADDY/01 LKKI.MP3"
(Permission denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/DADDY/01%20LKKI.MP3'. Check the log for
details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/001_2015/Beeruva
(2014) ~320Kbps/04 - Pisthol Bava [www.AtoZmp3.in].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/001_2015/Beeruva%20(2014)%20~320Kbps/04%20-%20Pisthol%20Bava%20%5Bwww.AtoZmp3.in%5D.mp3'.
Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/001_2015/Beeruva
(2014) ~320Kbps/02 - Chinnadana Chinnadana [www.AtoZmp3.in].mp3" (Permission
denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/001_2015/Beeruva%20(2014)%20~320Kbps/02%20-%20Chinnadana%20Chinnadana%20%5Bwww.AtoZmp3.in%5D.mp3'.
Check the log for details.
File reading failed:
VLC could not open the file "/mnt/nagvol/my_lappy/01_Telugu/01_2013/Yevadu
(2013) ~320 Kbps/02 - Nee Jathaga  [www.AtoZmp3.in].mp3" (Permission denied).
Your input can't be opened:
VLC is unable to open the MRL
'file:///mnt/nagvol/my_lappy/01_Telugu/01_2013/Yevadu%20(2013)%20~320%20Kbps/02%20-%20Nee%20Jathaga%20%20%5Bwww.AtoZmp3.in%5D.mp3'.
Check the log for details.


============================================================================
How reproducible:
Reproduced it at least 3 times.

--- Additional comment from nchilaka on 2016-01-28 00:37:44 EST ---

sosreports:
[nchilaka at rhsqe-repo nchilaka]$ /home/repo/sosreports/nchilaka/bug.1302355

[nchilaka at rhsqe-repo nchilaka]$ hostname
rhsqe-repo.lab.eng.blr.redhat.com

--- Additional comment from Nithya Balachandran on 2016-01-28 03:42:03 EST ---

Which NFS server did you use to mount the volume?

--- Additional comment from nchilaka on 2016-01-29 05:21:49 EST ---

I saw the problem with at least 2 different servers at different times:
10.70.37.202 and 10.70.37.120.

--- Additional comment from Soumya Koduri on 2016-02-02 05:01:21 EST ---

Here is our analysis so far:

NFS clients send an ACCESS fop to check the permissions of a file. The
GlusterFS server encapsulates the file's permission bits in op_errno before
sending the reply to the gluster-NFS server. In the packet trace collected,
we saw a few ACCESS fops with op_errno set to zero (and that too for the
root gfid/inode), which means the brick processes have been sending NULL
permissions. We have seen this issue hit on cold-tier bricks.


When we checked the brick processes, we found that the posix-acl xlator is
at times checking against NULL perms. The posix-acl xlator (when no ACL is
set) stores the permissions of each inode in its 'ctx->perm' field and uses
those bits to decide the access permission for any user. This 'ctx->perm'
is updated by posix_acl_ctx_update(), which gets called as part of a
variety of fops (lookup, stat, readdirp, etc.).

gluster-nfs server:
(gdb)
1453                    gf_msg (this->name, GF_LOG_WARNING,
(gdb) c
Continuing.

Breakpoint 1, client3_3_access (frame=0x7f9885331570, this=0x7f9874031150,
    data=0x7f986bffe7a0) at client-rpc-fops.c:3520
3520    {
(gdb) p this->name
$10 = 0x7f9874030ae0 "nagvol-client-20"
(gdb)

volfile:
volume nagvol-client-20
    type protocol/client
    option send-gids true
    option password 492e0b7f-255f-469d-8b9c-6982079dbcd1
    option username 727545cc-cd4f-4d92-9fed-81763b6d3d29
    option transport-type tcp
    option remote-subvolume /rhs/brick2/nagvol
    option remote-host 10.70.35.108
    option ping-timeout 42
end-volume 


Brick process:

Breakpoint 3, posix_acl_ctx_update (inode=0x7f7b1a1b306c,
    this=this at entry=0x7f7b340106b0, buf=buf at entry=0x7f7acc4d9148)
    at posix-acl.c:734
734    {
(gdb) p inode>gfid
No symbol "gfid" in current context.
(gdb) p inode->gfid
$23 = '\000' <repeats 15 times>, "\001"
(gdb) n
743            ctx = posix_acl_ctx_get (inode, this);
(gdb)
744            if (!ctx) {
(gdb)
743            ctx = posix_acl_ctx_get (inode, this);
(gdb)
744            if (!ctx) {
(gdb)
749            LOCK(&inode->lock);
(gdb)
751                    ctx->uid   = buf->ia_uid;
(gdb)
753                    ctx->perm  = st_mode_from_ia (buf->ia_prot, buf->ia_type);
(gdb)
751                    ctx->uid   = buf->ia_uid;
(gdb)
752                    ctx->gid   = buf->ia_gid;
(gdb)
753                    ctx->perm  = st_mode_from_ia (buf->ia_prot, buf->ia_type);
(gdb)
753                    ctx->perm  = st_mode_from_ia (buf->ia_prot, buf->ia_type);
(gdb)
755            acl = ctx->acl_access;
(gdb) p/x ctx->perm
$24 = 0x4000
(gdb) p buf->ia_prot
$25 = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000',
  owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'},
  group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'},
  other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}
(gdb) bt
#0  posix_acl_ctx_update (inode=0x7f7b1a1b306c,
    this=this at entry=0x7f7b340106b0, buf=buf at entry=0x7f7acc4d9148)
    at posix-acl.c:755
#1  0x00007f7b39932af5 in posix_acl_readdirp_cbk (frame=0x7f7b46619a14,
    cookie=<optimized out>, this=0x7f7b340106b0, op_ret=7, op_errno=0,
    entries=0x7f7ab008da50, xdata=0x0) at posix-acl.c:1625
#2  0x00007f7b39b46bef in br_stub_readdirp_cbk (frame=0x7f7b46627674,
    cookie=<optimized out>, this=0x7f7b3400f270, op_ret=7, op_errno=0,
    entries=0x7f7ab008da50, dict=0x0) at bit-rot-stub.c:2546
#3  0x00007f7b3b0af163 in posix_readdirp (frame=0x7f7b46618aa0,
    this=<optimized out>, fd=<optimized out>, size=<optimized out>,
    off=<optimized out>, dict=<optimized out>) at posix.c:6022
#4  0x00007f7b48b10535 in default_readdirp (frame=0x7f7b46618aa0,
    this=0x7f7b34009240, fd=0x7f7b2c005408, size=0, off=0,
    xdata=0x7f7b48ddd5b8) at defaults.c:2101
#5  0x00007f7b48b10535 in default_readdirp (frame=0x7f7b46618aa0,
    this=0x7f7b3400a880, fd=0x7f7b2c005408, size=0, off=0,
    xdata=0x7f7b48ddd5b8) at defaults.c:2101
#6  0x00007f7b48b10535 in default_readdirp (frame=0x7f7b46618aa0,
    this=0x7f7b3400d2e0, fd=0x7f7b2c005408, size=0, off=0,
    xdata=0x7f7b48ddd5b8) at defaults.c:2101
#7  0x00007f7b39b404db in br_stub_readdirp (frame=0x7f7b46627674,
    this=0x7f7b3400f270, fd=0x7f7b2c005408, size=0, offset=0,
    dict=0x7f7b48ddd5b8) at bit-rot-stub.c:2581
#8  0x00007f7b39930949 in posix_acl_readdirp (frame=0x7f7b46619a14,
    this=0x7f7b340106b0, fd=0x7f7b2c005408, size=0, offset=0,
    dict=0x7f7b48ddd5b8) at posix-acl.c:1674
#9  0x00007f7b39718817 in pl_readdirp (frame=0x7f7b4661b0ec,
    this=0x7f7b34011ac0, fd=0x7f7b2c005408, size=0, offset=0,
    xdata=0x7f7b48ddd5b8) at posix.c:2213
#10 0x00007f7b39506c85 in up_readdirp (frame=0x7f7b46631504,
    this=0x7f7b34012e60, fd=0x7f7b2c005408, size=0, off=0,
    dict=0x7f7b48ddd5b8) at upcall.c:1342
#11 0x00007f7b48b1e19d in default_readdirp_resume (frame=0x7f7b4662b4f0,
    this=0x7f7b340142d0, fd=0x7f7b2c005408, size=0, off=0,
    xdata=0x7f7b48ddd5b8) at defaults.c:1657
#12 0x00007f7b48b3b17d in call_resume (stub=0x7f7b46113684) at call-stub.c:2576
#13 0x00007f7b392f6363 in iot_worker (data=0x7f7b340546a0) at io-threads.c:215
#14 0x00007f7b47973dc5 in start_thread () from /lib64/libpthread.so.0
#15 0x00007f7b472ba21d in clone () from /lib64/libc.so.6
(gdb) f 1
#1  0x00007f7b39932af5 in posix_acl_readdirp_cbk (frame=0x7f7b46619a14,
    cookie=<optimized out>, this=0x7f7b340106b0, op_ret=7, op_errno=0,
    entries=0x7f7ab008da50, xdata=0x0) at posix-acl.c:1625
1625                    posix_acl_ctx_update (entry->inode, this, &entry->d_stat);
(gdb) p entry
$26 = (gf_dirent_t *) 0x7f7acc4d9120
(gdb) p entry->d_stat
$27 = {ia_ino = 0, ia_gfid = '\000' <repeats 15 times>, "\001", ia_dev = 0,
  ia_type = IA_IFDIR, ia_prot = {suid = 0 '\000', sgid = 0 '\000',
    sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000',
      exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000',
      exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000',
      exec = 0 '\000'}}, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_rdev = 0,
  ia_size = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_atime_nsec = 0,
  ia_mtime = 0, ia_mtime_nsec = 0, ia_ctime = 0, ia_ctime_nsec = 0}
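
Note the value 0x4000 in $24 above: that is S_IFDIR with every permission
bit cleared, which is exactly what an st_mode_from_ia()-style conversion
produces from the all-zero ia_prot in entry->d_stat. A small standalone
sketch (the structs below merely mirror the fields visible in the gdb dump;
the real definitions live in libglusterfs):

#include <stdio.h>
#include <stdint.h>
#include <sys/stat.h>

/* Mirrors of the ia_prot fields printed by gdb above (illustrative,
 * not the GlusterFS definitions). */
struct rwx { uint8_t read, write, exec; };
struct ia_prot_sketch {
        uint8_t suid, sgid, sticky;
        struct rwx owner, group, other;
};

/* Simplified st_mode_from_ia()-style conversion: the file-type bits
 * plus whichever permission bits are set. */
static mode_t
mode_from_prot (struct ia_prot_sketch p, int is_dir)
{
        mode_t m = is_dir ? S_IFDIR : S_IFREG;

        if (p.owner.read)  m |= S_IRUSR;
        if (p.owner.write) m |= S_IWUSR;
        if (p.owner.exec)  m |= S_IXUSR;
        if (p.group.read)  m |= S_IRGRP;
        if (p.group.write) m |= S_IWGRP;
        if (p.group.exec)  m |= S_IXGRP;
        if (p.other.read)  m |= S_IROTH;
        if (p.other.write) m |= S_IWOTH;
        if (p.other.exec)  m |= S_IXOTH;
        return m;
}

int
main (void)
{
        struct ia_prot_sketch zeroed = { 0 };   /* the d_stat in the trace */

        /* Prints 0x4000: a directory with no rwx bits at all. */
        printf ("0x%x\n", (unsigned) mode_from_prot (zeroed, 1));
        return 0;
}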


As we can see above, as part of readdirp we ended up with a root inode entry
that never got updated with the right stat (entry->d_stat). Looking at the
code:

int
posix_make_ancestryfromgfid (xlator_t *this, char *path, int pathsize,
                             gf_dirent_t *head, int type, uuid_t gfid,
                             const size_t handle_size,
                             const char *priv_base_path, inode_table_t *itable,
                             inode_t **parent, dict_t *xdata,
                             int32_t *op_errno)
{
        char        *linkname   = NULL; /* "../../<gfid[0]>/<gfid[1]/"
                                         "<gfidstr>/<NAME_MAX>" */ 

...........
...........
        if (__is_root_gfid (gfid)) {
                if (parent) {
                        if (*parent) {
                                inode_unref (*parent);
                        }

                        *parent = inode_ref (itable->root);
                }

                inode = itable->root;

                memset (&iabuf, 0, sizeof (iabuf));
                gf_uuid_copy (iabuf.ia_gfid, inode->gfid);
                iabuf.ia_type = inode->ia_type;

                ret = posix_make_ancestral_node (priv_base_path, path,
                                                 pathsize, head, "/", &iabuf,
                                                 inode, type, xdata);
                if (ret < 0)
                        *op_errno = ENOMEM;
                return ret;
        } 
.............

}

For the root inode entry we do not seem to fetch the stat (for other entries
a 'posix_resolve()' call is made, which updates the stat). So we suspect
this results in a root entry with NULL perms, which in turn gets cached in
the posix-acl xlator's 'ctx->perm', resulting in the EPERM error for the
ACCESS fop. It is not yet clear why this issue has not been hit until now.
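
For reference, a sketch of the direction the fix takes, going by the title
of the patch cited below ("storage/posix: send proper iatt attributes for
the root inode"). This is not a quote from the actual patch, and the
posix_pstat() call is an assumption modelled on the helper of that name in
storage/posix: in the __is_root_gfid() branch of the excerpt above, iabuf
would be filled from a real stat of the brick root instead of carrying only
the gfid and type.

        if (__is_root_gfid (gfid)) {
                ...
                inode = itable->root;

                memset (&iabuf, 0, sizeof (iabuf));
                /* Fetch the real attributes of the brick root so that
                 * ia_prot carries actual permission bits down to
                 * posix_acl_ctx_update(), instead of all zeroes. */
                ret = posix_pstat (this, inode->gfid, priv_base_path,
                                   &iabuf);
                if (ret < 0) {
                        *op_errno = errno;
                        return ret;
                }

                ret = posix_make_ancestral_node (priv_base_path, path,
                                                 pathsize, head, "/", &iabuf,
                                                 inode, type, xdata);
                ...
        }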

--- Additional comment from Vijay Bellur on 2016-03-24 00:04:58 EDT ---

REVIEW: http://review.gluster.org/13730 (storage/posix: send proper iatt
attributes for the root inode) posted (#5) for review on master by Raghavendra
Bhat (raghavendra at redhat.com)

--- Additional comment from Vijay Bellur on 2016-03-24 03:26:02 EDT ---

REVIEW: http://review.gluster.org/13730 (storage/posix: send proper iatt
attributes for the root inode) posted (#6) for review on master by Vijaikumar
Mallikarjuna (vmallika at redhat.com)

--- Additional comment from Vijaikumar Mallikarjuna on 2016-03-24 05:27:52 EDT
---

upstream patch: http://review.gluster.org/#/c/13730/


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1302355
[Bug 1302355] Over some time Files which were accessible become
inaccessible(music files)
https://bugzilla.redhat.com/show_bug.cgi?id=1320818
[Bug 1320818] Over some time Files which were accessible become
inaccessible(music files)
https://bugzilla.redhat.com/show_bug.cgi?id=1320887
[Bug 1320887] Over some time Files which were accessible become
inaccessible(music files)