[Gluster-devel] GlusterFS vs xfs w/ inode64 (was: Missing files?)

Liam Slusser lslusser at gmail.com
Tue Jul 28 21:49:25 UTC 2009


Matthias,

Just to run a few more tests....

Centos 5.3 x86_64 on all systems in my test

I just created 1000 directories with a file in each directory and everything
worked fine.  All the directories show up on each node and on the client.

[root@store01 /]# xfs_db -r -c sb -c p /dev/sdb1 | egrep 'ifree|icount'
icount = 37786752
ifree = 1133409
[root@store02 /]# xfs_db -r -c sb -c p /dev/sdb1 | egrep 'ifree|icount'
icount = 36526592
ifree = 269

[root@client testliam]# for i in `seq 1 1000`; do mkdir $i; done
[root@client testliam]# for i in `seq 1 1000`; do echo "woohoo" >
$i/$i.myfileyay; done
[root@client testliam]# ls | wc -l
1000
[root@client testliam]# ls -R | wc -l
4001
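As a brick-side sanity check, here's a rough sketch of how one could scan a
directory tree for inode numbers beyond the 32-bit boundary. The scratch
directory is just for illustration; on a real brick you would point find at
the test directory instead:

```shell
# Sketch (not from the original mail): count entries whose inode number is
# above the 32-bit boundary, 2^32 - 1 = 4294967295. Uses a throwaway
# scratch directory for illustration only.
tmp=$(mktemp -d)
for i in 1 2 3; do mkdir "$tmp/$i"; echo "woohoo" > "$tmp/$i/$i.myfileyay"; done
find "$tmp" -printf '%i %p\n' \
  | awk -v max=4294967295 '$1 > max { n++ } END { printf "%d entries above 32 bits\n", n }'
rm -rf "$tmp"
```

On a filesystem mounted with inode64 that has grown past the 32-bit region,
a nonzero count here would flag exactly the entries Matthias describes.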

(after the test)

[root@store01 /]# xfs_db -r -c sb -c p /dev/sdb1 | egrep 'ifree|icount'
icount = 37786752
ifree = 1133409
[root@store02 /]# xfs_db -r -c sb -c p /dev/sdb1 | egrep 'ifree|icount'
icount = 36526592
ifree = 269

[root@server01 testliam]# ls | wc -l
1000
[root@server01 testliam]# ls -R | wc -l
4001
[root@server02 testliam]# ls | wc -l
1000
[root@server02 testliam]# ls -R | wc -l
4001

The ifree number didn't change after I ran the tests, and all the files look
to be fine and intact, so as far as I can tell everything is working fine.
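For reference, the boundary check itself is simple: given an `ls -li`-style
listing with the inode number in the first column, an awk one-liner picks
out the 64-bit entries. Just a sketch; the sample input uses the cust*
inode numbers from Matthias's listing quoted below:

```shell
# Sketch: filter a listing for inode numbers beyond 2^32 - 1 = 4294967295.
# Sample inode numbers are taken from the server listing quoted below.
printf '3221490663 cust5\n72107183169 cust4\n98784439192 cust12\n' \
  | awk '$1 > 4294967295 { print $2 }'
# prints: cust4
#         cust12
```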

Did your tests show something else?

liam


On Tue, Jul 28, 2009 at 1:14 PM, Matthias Saou <
thias at spam.spam.spam.spam.spam.spam.spam.egg.and.spam.freshrpms.net> wrote:

> Liam Slusser <lslusser at gmail.com> wrote:
>
> > I worked on this issue this morning and could find nothing to
> > indicate it wouldn't work.  I was down to 45 free inodes (according
> > to xfs_db), so I brought down one of the nodes, applied the inode64
> > option in /etc/fstab, remounted the partition, and restarted
> > gluster.  Everything appears to be working normally, so I applied the
> > same option to my other server, and again, everything is working
> > normally.  I'll let you know after we run with this for a few days,
> > but so far everything is fine and working normally.  I'm on CentOS
> > 5.3 x86_64, btw.
> >
> > An interesting note: after applying the inode64 option, the "ifree"
> > output from xfs_db didn't actually change, but the filesystem
> > is working normally.  I found a bunch of posts on the interweb from
> > people who had that exact experience.
>
> It's not mounting with the inode64 option which is the issue; in our
> experience, it's once you get files or directories allocated with inode
> numbers beyond 32 bits. So what you need to do is test with files or
> directories created after the filesystem has been mounted with the
> inode64 option, which do have their inode numbers beyond the 32-bit
> limit.
>
> Here's an example of what we have on the server:
> [...]
>  3221490663 drwxrwsr-x  5 flumotion file   46 Mar 11 19:07 cust5
>  3221491044 drwxrwsr-x  6 flumotion file   56 May 12 12:27 cust1
>  3221495387 drwxrwsr-x  9 flumotion file  109 Mar 26  2008 cust2
>  3221500135 drwxrwsr-x  5 flumotion file   46 May 22 11:36 cust8
>  3221500510 drwxrwsr-x  3 flumotion file   21 Jan  2  2008 cust23
>  3221500956 drwxrwsr-x  5 flumotion file   46 Jun 25 16:23 cust7
> 72107183169 drwxrwsr-x  3 flumotion file   26 Jul 10 13:38 cust4
> 98784439192 drwxrwsr-x  3 flumotion file   29 Jul  2 16:55 cust12
>
> The last two directories are the ones which aren't accessible from the
> glusterfs clients.
>
> So in your case, if you had only 45 free inodes left, you should be
> able to create 46 directories and have the 46th reproduce the problem.
>
> Matthias
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>