[Gluster-devel] Confusion about 1.3pre5*

Anand Babu Periasamy ab at gnu.org.in
Tue Jul 3 14:33:14 UTC 2007


Hi Steffen,
Thanks for raising your concerns. My answers are below.

Steffen Grunewald writes:

> I'm confused.
> 
> I've been watching glusterfs development for quite a while now, and
> have really enjoyed its rapid progress.
> 
> I even considered installing glusterfs in a production
> environment. The 1.3 pre-releases showed a trend towards
> stabilization, and there were only a few changes to mainline--2.4
> (which is still being announced as the main branch on the
> download.php page).
> 
> Then, all of a sudden, stat-prefetch was removed, io-threads still
> seems to be unstable, and there's even a new feature (self-heal)
> that IMHO would have fitted better into the 1.4 roadmap, and that
> introduced a major code and API change into a release candidate
> (I'd have expected a feature freeze at this point, with fixes being
> made to the 2.4 branch while the next devel release was prepared in
> 2.5 - or is there even a 2.6 yet?).

Actually, we would have released 1.3 a long time ago, were it not
for the following three reasons.

REASON 1:
Brent Nelson reported a bug that required us to use inodes as
references instead of names. Some tools, such as TLA, use inodes
directly to track files (the POSIX API allows this). Making this
change renders the previous name-based clients incompatible. We
definitely did not want to make 1.3 popular and then release a 1.4
that is not backward compatible; I am sure that would have displeased
far more users. So we decided to bring this change in early, in 1.3
itself.
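
To illustrate why this matters, here is a minimal C sketch (not from
the GlusterFS code base; error checking omitted) of how a tool can
track a file across a rename by its inode number, the way TLA does:

  #include <stdio.h>
  #include <sys/stat.h>

  int main(void)
  {
      struct stat before, after;

      fclose(fopen("file.txt", "w"));     /* create a test file */
      stat("file.txt", &before);          /* record its inode number */
      rename("file.txt", "renamed.txt");  /* the name changes ...    */
      stat("renamed.txt", &after);        /* ... the inode must not  */

      printf("same file? %s\n",
             before.st_ino == after.st_ino ? "yes" : "no");
      return 0;
  }

With name-based references the server could not guarantee a stable
st_ino across such operations, which is exactly what broke tools
like this.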

REASON 2:
To implement inode-based references, a name-space cache was required
anyway. The name-space cache also provides other benefits, such as
avoiding the creation of duplicate files while some bricks are down,
reducing I/O operations considerably for small files, and avoiding a
global name-space lock for create operations.

REASON 3:
Many users gave feedback that they would consider 1.4 for production
deployment mainly because of self-heal. If 1.3 was not going to be
taken seriously anyway, we might as well pull self-heal into 1.3 and
make it a production-class release.


> The documentation in the Wiki refers to neither release versions
> nor branch versions, so I don't know whether I can still live
> without self-healing and namespace volumes, and I don't know what
> the requirements would be for such a namespace volume (some info
> has started to trickle through this list: yes, I'd need lots of
> inodes... still, I don't know where to place it - would it
> interfere with the data volume it would have to share a disk with?
> what about performance penalties then?)...

We are planning to provide PDF documentation with every stable
release of GlusterFS. Moving forward, we will also tag
version-specific options in the Wiki appropriately. As of now the
problem is a time crunch.


NAME-SPACE REQUIREMENTS:
Typically you don't have to do anything special. Just point your
name-space volume to a fresh directory that is not already exported
through GlusterFS. The default inode count is usually good enough.
The name-space cache is nothing but the same directory tree with
dummy files as place holders. They are mostly cached in RAM, and only
lookup operations happen on them, so there is not much to worry about
performance-wise. If you are building a specialized or giant storage
system, you have the option of using a mount that has a sufficient
number of inodes and fewer bytes per inode. I would just go with
regular Ext3 for my needs. If the name-space cache is lost for some
reason, GlusterFS will rebuild it automatically; it is just a cache.
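
As a concrete sketch, a client-side spec for a unify volume with a
name-space volume looks roughly like this (host, brick and device
names below are hypothetical; adapt them to your own spec files):

  volume brick1
    type protocol/client
    option transport-type tcp/client
    option remote-host server1          # hypothetical host
    option remote-subvolume brick
  end-volume

  volume brick2
    type protocol/client
    option transport-type tcp/client
    option remote-host server2          # hypothetical host
    option remote-subvolume brick
  end-volume

  # the name-space volume: a fresh directory, not otherwise exported
  volume brick-ns
    type protocol/client
    option transport-type tcp/client
    option remote-host server1
    option remote-subvolume brick-ns
  end-volume

  volume unify0
    type cluster/unify
    option namespace brick-ns
    option scheduler rr
    subvolumes brick1 brick2
  end-volume

If you do decide to format a dedicated partition with more inodes
for the name space, the standard mke2fs bytes-per-inode flag is
enough, for example:

  mkfs.ext3 -i 4096 /dev/sdb1    # one inode per 4 KB of space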

GlusterFS is already running on some production deployments. Those
are based on previous, older releases. They are not as fast or
feature-rich as the current development version, but stable enough.
We decided to go ahead and stabilize the current version rather than
try to support an older version that we have no interest in
maintaining. Because GlusterFS is not popular yet, we had the liberty
to make such drastic decisions. We appreciate your feedback a lot,
and we assure you this won't happen in the future. My recommendation
for you is to wait a couple of weeks and watch the progress.

Luckily, because of GlusterFS's modular, stackable, user-space
design, it is easy to fix bugs. We have already frozen the code base.
In just a matter of weeks we will see a stable release. You can
already see the progress we have made in a short period of time. We
are working round the clock.
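
To give a feel for what "stackable" means here: translators are just
layered in the spec file, each seeing the same API above and below
it. A rough sketch (names are hypothetical, and available options
vary by release):

  volume client
    type protocol/client
    option transport-type tcp/client
    option remote-host server1     # hypothetical host
    option remote-subvolume brick
  end-volume

  volume iot
    type performance/io-threads    # run requests in worker threads
    subvolumes client
  end-volume

  volume wb
    type performance/write-behind  # aggregate small writes
    subvolumes iot
  end-volume

Because each layer is independent, a problematic translator such as
stat-prefetch can simply be dropped from the stack without touching
the rest.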

-- 
Anand Babu Periasamy
GPG Key ID: 0x62E15A31
Blog [http://ab.freeshell.org]
The GNU Operating System [http://www.gnu.org]
