[Gluster-devel] Re: Problems while upgrading from 1.2.x to 1.3.x

NovA av.nova at gmail.com
Wed Oct 24 21:28:19 UTC 2007

Hello, Krishna!

2007/10/24, Krishna Srinivas <krishna at zresearch.com>:

> > The problem
> > is that files and directories on bricks have inconsistent inode numbers
> > for the new 1.3.x GlusterFS. Different files aren't allowed to have the
> > same inode numbers, which is a typical situation in a unified GlusterFS
> > 1.2 volume. The question is, what is the client behavior in such a case?
> > It could happen on a working system if one creates a file on the backend
> > FS, for example...

> When you are using unify, which uses the NS to decide on the inode number,
> it cannot happen that two files have the same inode number, because
> the NS will be on the same filesystem.
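(For readers following the thread: a 1.3-style unify client spec with a dedicated namespace volume looks roughly like the sketch below. The host names, volume names, and scheduler choice are illustrative, not taken from this thread.)

```
volume brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1
  option remote-subvolume brick
end-volume

volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2
  option remote-subvolume brick
end-volume

# The namespace (NS) volume: one filesystem that holds the directory
# structure, so inode numbers handed to clients come from a single FS.
volume brick-ns
  type protocol/client
  option transport-type tcp/client
  option remote-host server1
  option remote-subvolume brick-ns
end-volume

volume unify0
  type cluster/unify
  subvolumes brick1 brick2
  option namespace brick-ns
  option scheduler rr
end-volume
```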
Yes, that would be so if one created files through GlusterFS only (as it
should be). BUT what if I create a file directly in the exported
directory on the backend filesystem? It could get any inode number the
FS chooses, regardless of glusterfs...
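To make that concrete, here is a minimal shell sketch (the paths are hypothetical): a file created directly on a backend export, behind glusterfs's back, simply gets whatever inode number the local filesystem assigns, and two independent backend disks can easily hand out the same number.

```shell
# Create a file directly in a (hypothetical) exported directory,
# bypassing the GlusterFS mount and its namespace volume.
mkdir -p /tmp/export-demo
touch /tmp/export-demo/direct-file

# The inode number comes from the local filesystem alone.
stat -c '%i' /tmp/export-demo/direct-file
```

Running the same commands on a second backend disk can print the very same number, which is exactly the collision scenario described above.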

> How did you conclude that two
> files got the same inode number?
The debug log had many records saying something about an "inode mismatch",
but I didn't save that log... I think that among thousands of files created
independently on different HDDs, some are bound to have the same inode
numbers :)

> Can you paste your spec files?
They are the same as in my previous post.

> Can you make sure that both glusterfs
> and glusterfsd are of the same version?
Yes. I use self-made RPMs for glusterfs and believe that "rpm -e
glusterfs", then "rpm -i glusterfs-1.3.6.tla527-4.x86_64.rpm", and then
rebooting handle everything correctly...

> > And the more practical question: Is there a way to convert the FS without
> > recreating the file structure? In my case, the data on GlusterFS 1.2 is
> > too large to be saved on one HDD and then moved onto the new gluster
> > storage...
> Yes, self-healing should take care of it.
Will it change coinciding inode numbers of already existing files to
fit unify?
