[Bugs] [Bug 1288238] New: failed to get inode size
bugzilla at redhat.com
Thu Dec 3 22:40:25 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1288238
Bug ID: 1288238
Summary: failed to get inode size
Product: GlusterFS
Version: 3.5.5
Component: glusterd
Severity: low
Assignee: bugs at gluster.org
Reporter: nvanlysel at morgridge.org
CC: bugs at gluster.org, gluster-bugs at redhat.com
Description of problem:
The following errors appear repeatedly in etc-glusterfs-glusterd.vol.log:
[2015-12-03 22:19:14.147379] E [glusterd-utils.c:5166:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
[2015-12-03 22:22:14.100826] E [glusterd-utils.c:5140:glusterd_add_inode_size_to_dict] 0-management: xfs_info exited with non-zero exit status
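For context, both messages come from glusterd_add_inode_size_to_dict, which
shells out to xfs_info to read the brick's inode size (the second message
shows the child exiting non-zero). A quick way to localize the failure by
hand is to run xfs_info against both the mount point and the raw device node;
which path glusterd actually passes is an assumption here, not something the
log shows:

# run on the affected server; /brick1 and /dev/sdb1 are from this report
xfs_info /brick1   ; echo "mount point -> exit $?"
xfs_info /dev/sdb1 ; echo "device node -> exit $?"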
Version-Release number of selected component (if applicable):
glusterfs-3.5.5-2.el6 (full package list under Additional info)
How reproducible:
always
Steps to Reproduce:
1. Format the bricks with XFS
2. Create an 8x2 distributed-replicate volume
3. Start the volume (a rough command sketch follows below)
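A minimal reproduction sketch, assuming the hostnames, device, and mount
options shown later in this report (the exact mkfs flags are an assumption):

# on every server: format and mount the brick
mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /brick1
mount -o rw,noatime,nodiratime,barrier,largeio,inode64 /dev/sdb1 /brick1
mkdir -p /brick1/home
# on one server: create and start the 8x2 distributed-replicate volume
gluster volume create home replica 2 \
    storage-{7,8,9,10,1,2,3,4,5,6,11,12,13,14,15,16}:/brick1/home
gluster volume start home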
Actual results:
glusterd logs "failed to get inode size" repeatedly, and "gluster volume
status home detail" reports Inode Size : N/A (see Additional info).
Expected results:
The inode size is read successfully and reported as 512, matching the
isize=512 shown by xfs_info.
Additional info:
[root@storage-1 ~]# df -hT /brick1
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sdb1      xfs    28T  373G   27T   2% /brick1
The inode size is clearly defined on the brick filesystem:
[root@storage-1 ~]# xfs_info /brick1
meta-data=/dev/sdb1              isize=512    agcount=28, agsize=268435455 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=7324302848, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@storage-1 ~]# xfs_info /brick1/home
meta-data=/dev/sdb1              isize=512    agcount=28, agsize=268435455 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=7324302848, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
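Both xfs_info invocations above succeed against the mount point. As a
cross-check, the inode size can also be read straight from the on-disk
superblock with xfs_db (a read-only sketch; not necessarily how glusterd
obtains the value):

xfs_db -r -c 'sb 0' -c 'print inodesize' /dev/sdb1
# expected, given isize=512 above: inodesize = 512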
Note that gluster nevertheless reports the inode size as N/A below.
[root@storage-1 ~]# gluster volume status home detail
Status of volume: home
------------------------------------------------------------------------------
Brick                : Brick storage-1:/brick1/home
Port                 : 49152
Online               : Y
Pid                  : 8281
File System          : xfs
Device               : /dev/sdb1
Mount Options        : rw,noatime,nodiratime,barrier,largeio,inode64
Inode Size           : N/A
Disk Space Free      : 33.1TB
Total Disk Space     : 36.4TB
Inode Count          : 3906469632
Free Inodes          : 3902138232
[root@storage-1 ~]# gluster volume info
Volume Name: home
Type: Distributed-Replicate
Volume ID: 2694f438-08f6-48fc-a072-324d4701f112
Status: Started
Number of Bricks: 8 x 2 = 16
Transport-type: tcp
Bricks:
Brick1: storage-7:/brick1/home
Brick2: storage-8:/brick1/home
Brick3: storage-9:/brick1/home
Brick4: storage-10:/brick1/home
Brick5: storage-1:/brick1/home
Brick6: storage-2:/brick1/home
Brick7: storage-3:/brick1/home
Brick8: storage-4:/brick1/home
Brick9: storage-5:/brick1/home
Brick10: storage-6:/brick1/home
Brick11: storage-11:/brick1/home
Brick12: storage-12:/brick1/home
Brick13: storage-13:/brick1/home
Brick14: storage-14:/brick1/home
Brick15: storage-15:/brick1/home
Brick16: storage-16:/brick1/home
Options Reconfigured:
performance.cache-size: 100MB
performance.write-behind-window-size: 100MB
nfs.disable: on
features.quota: on
features.default-soft-limit: 90%
GLUSTER SERVER PACKAGES:
[root@storage-1 ~]# rpm -qa | grep gluster
glusterfs-cli-3.5.5-2.el6.x86_64
glusterfs-server-3.5.5-2.el6.x86_64
glusterfs-libs-3.5.5-2.el6.x86_64
glusterfs-fuse-3.5.5-2.el6.x86_64
glusterfs-3.5.5-2.el6.x86_64
glusterfs-api-3.5.5-2.el6.x86_64
XFSPROGS PACKAGE:
[root@storage-1 ~]# rpm -qa | grep xfsprogs
xfsprogs-3.1.1-16.el6.x86_64
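Possibly relevant: in xfsprogs 3.1.x, xfs_info is a small shell wrapper,
roughly equivalent to "xfs_growfs -p xfs_info -n", so it only reports on
mounted XFS filesystems. If glusterd passes it the device node rather than
the mount point, the non-zero exit logged above would be expected. This is
an assumption, not a confirmed root cause; the wrapper can be inspected with:

file "$(command -v xfs_info)"   # in 3.1.x this is a shell script
cat  "$(command -v xfs_info)"   # shows the xfs_growfs wrapper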