[Gluster-users] forrtl: severe (51): inconsistent file organization error on Gluster

Neil Van Lysel van-lyse at cs.wisc.edu
Thu Jun 27 23:42:57 UTC 2013


Hello,

I recently set up a SLURM cluster with a shared filesystem using Gluster. 
The Gluster nodes are connected to the rest of the cluster over a 56 Gb/s 
InfiniBand interconnect.

Some of our users are receiving the following error when they run VASP 
jobs that access files on Gluster:

     forrtl: severe (51): inconsistent file organization, unit 12 /path/to/file/WAVECAR

Is this a VASP error or a Gluster error? If it is a Gluster error, how 
do I fix it? I do not know much about Gluster, so any help is appreciated.

Here are some relevant specs:
     [root@aci-storage-1 ~]# gluster --version
     glusterfs 3.4.0beta2 built on May 24 2013 14:11:16

     [root@aci-storage-1 ~]# gluster volume info
     Volume Name: scratch
     Type: Distribute
     Volume ID: 2d30a015-0452-45a3-9a1d-42cee619d35f
     Status: Started
     Number of Bricks: 8
     Transport-type: tcp
     Bricks:
     Brick1: 10.129.40.21:/data/glusterfs/brick1/scratch
     Brick2: 10.129.40.21:/data/glusterfs/brick2/scratch
     Brick3: 10.129.40.22:/data/glusterfs/brick1/scratch
     Brick4: 10.129.40.22:/data/glusterfs/brick2/scratch
     Brick5: 10.129.40.23:/data/glusterfs/brick1/scratch
     Brick6: 10.129.40.23:/data/glusterfs/brick2/scratch
     Brick7: 10.129.40.24:/data/glusterfs/brick1/scratch
     Brick8: 10.129.40.24:/data/glusterfs/brick2/scratch
     Options Reconfigured:
     features.quota: on
     features.limit-usage: /:80TB

     Volume Name: home
     Type: Distribute
     Volume ID: 711465cf-db6c-4407-9b02-43e44ee4779b
     Status: Started
     Number of Bricks: 8
     Transport-type: tcp
     Bricks:
     Brick1: 10.129.40.21:/data/glusterfs/brick1/home
     Brick2: 10.129.40.21:/data/glusterfs/brick2/home
     Brick3: 10.129.40.22:/data/glusterfs/brick1/home
     Brick4: 10.129.40.22:/data/glusterfs/brick2/home
     Brick5: 10.129.40.23:/data/glusterfs/brick1/home
     Brick6: 10.129.40.23:/data/glusterfs/brick2/home
     Brick7: 10.129.40.24:/data/glusterfs/brick1/home
     Brick8: 10.129.40.24:/data/glusterfs/brick2/home
     Options Reconfigured:
     features.limit-usage: /:30TB
     features.quota: on

There don't appear to be any significant errors in the log files, but 
/var/log/glusterfs/scratch.log contains many messages like these:
     [2013-06-27 21:57:21.399355] W [quota.c:2167:quota_fstat_cbk] 0-scratch-quota: quota context not set in inode (gfid:0b855d43-2a51-42bc-8707-fbe010cfe5b9)
     [2013-06-27 21:59:29.188686] E [io-cache.c:557:ioc_open_cbk] 0-scratch-io-cache: inode context is NULL (5555d554-41ff-44be-be88-af3b0d570876)
     [2013-06-27 21:59:29.189095] W [quota.c:2301:quota_readv_cbk] 0-scratch-quota: quota context not set in inode (gfid:5555d554-41ff-44be-be88-af3b0d570876)
     [2013-06-27 21:59:34.296190] E [io-cache.c:557:ioc_open_cbk] 0-scratch-io-cache: inode context is NULL (5555d554-41ff-44be-be88-af3b0d570876)
     [2013-06-27 21:59:34.296686] W [quota.c:2301:quota_readv_cbk] 0-scratch-quota: quota context not set in inode (gfid:5555d554-41ff-44be-be88-af3b0d570876)
     [2013-06-27 22:01:41.415542] E [io-cache.c:557:ioc_open_cbk] 0-scratch-io-cache: inode context is NULL (bb9a4fba-3cc9-4d2a-a937-00752ec6c5d2)
     [2013-06-27 22:01:41.416062] W [quota.c:2301:quota_readv_cbk] 0-scratch-quota: quota context not set in inode (gfid:bb9a4fba-3cc9-4d2a-a937-00752ec6c5d2)
     [2013-06-27 22:01:43.570357] W [quota.c:1253:quota_unlink_cbk] 0-scratch-quota: quota context not set in inode (gfid:bb9a4fba-3cc9-4d2a-a937-00752ec6c5d2)
     [2013-06-27 22:01:43.571182] W [quota.c:1253:quota_unlink_cbk] 0-scratch-quota: quota context not set in inode (gfid:592ca6e8-31f9-4e97-9fe3-68ecaa806f22)
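Since the warnings all come from the quota and io-cache translators, I was planning to toggle those and retest a failing VASP job to see whether either one is involved. The subcommand and option names below are what I found in the docs, so please tell me if I have them wrong:

```shell
# Show the current quota limits configured on the scratch volume
gluster volume quota scratch list

# Temporarily disable the io-cache translator (the source of the
# "inode context is NULL" errors), then re-run a failing VASP job
gluster volume set scratch performance.io-cache off

# If that does not help, disable quota enforcement as well and retest
gluster volume quota scratch disable
```

If the error disappears with one of these off, that would at least narrow down which translator to look at.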

Please let me know if you need anything else.

Thanks much,

Neil Van Lysel
van-lyse at cs.wisc.edu
UNIX Systems Administrator
Center for High Throughput Computing
University of Wisconsin - Madison
