[Gluster-users] mkfs.xfs inode size question
Brian Foster
bfoster at redhat.com
Wed Oct 3 11:50:25 UTC 2012
On 10/03/2012 07:22 AM, Kaleb S. KEITHLEY wrote:
> On 10/03/2012 02:36 AM, Bryan Whitehead wrote:
>> Look at this guide:
>> https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Storage/2.0/html-single/Administration_Guide/index.html#chap-User_Guide-Setting_Volumes
>>
>> I noticed this: "you must increase the inode size to 512 bytes from
>> the default 256 bytes"
>>
>> With an example mkfs.xfs like: mkfs.xfs -i size=512 DEVICE
>>
>> I didn't do this when first setting up my bricks. Will I suddenly be
>> boned some random time in the future?
>
> You may see poorer performance than you would on an xfs with size=512;
> that's about it.
>
> When the xattrs won't all fit inside the inode, they spill over into
> separate attribute blocks on disk. The extra I/O needed to read those
> blocks is where the performance impact occurs.
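Since the XFS inode size is fixed at mkfs time, an existing brick can only be inspected, not converted in place. A minimal sketch of reading the inode size from xfs_info output (the brick path and the captured line below are hypothetical; run xfs_info on a real brick yourself):

```shell
# Hypothetical first line of `xfs_info /bricks/brick1` output; the
# isize= field shows the inode size the brick was formatted with.
line='meta-data=/dev/sdb1              isize=256    agcount=4'

# Extract the inode size; 256 means xattrs are more likely to spill
# out of the inode than with 512.
isize=$(printf '%s\n' "$line" | grep -o 'isize=[0-9]*' | cut -d= -f2)
echo "inode size: $isize"

# Moving to 512-byte inodes means re-creating the filesystem, e.g.
# (destructive -- back up and migrate the brick data first):
#   mkfs.xfs -i size=512 /dev/sdb1
```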
>
You can verify whether you are affected by running 'xfs_bmap -a'
against files on your backend servers. If it reports no extents, the
attributes fit within the inode; otherwise it prints the extent map
for the blocks the filesystem allocated to store the attributes.
Brian