[Gluster-users] Update on internal development

Stephan von Krawczynski skraw at ithnet.com
Thu Aug 20 12:32:47 UTC 2009


On Wed, 19 Aug 2009 18:27:29 -0700
Anand Babu Periasamy <ab at gluster.com> wrote:

> John Leach wrote:
> > On Tue, 2009-08-18 at 18:22 +0200, Stephan von Krawczynski wrote:
> >> On Tue, 18 Aug 2009 15:01:46 +0200
> > 
> >> And, I forgot to mention: it took me around 1 hour of bonnie to crash a server
> >> in a classical distribute setup...
> >> Please stop featurism and start reliability.
> > 
> > Hi Stephan,
> > 
> > I just reviewed the GlusterFS commit logs for the release-2.0 branch and
> > *every single commit since Jul 17th has been a bug fix*, with a
> > reference to the bugzilla entry.
> > 
> > Practically all the commits before then, going back to May (where I gave up
> > looking), are also bugfixes, just without bugzilla references - I couldn't
> > find one serious new feature mentioned.
> > 
> > I'm not saying your problems aren't real, but the Z Research folk do
> > seem to have been taking reliability seriously for a long time now.
> > 
> > John.
> 
> We are happy to see our community advocating both sides. Negative feedback is
> important and helps us improve.
> 
> 2.0 has been frozen since the 2.0.0 release. New features are scheduled for the 2.1 release.
> 
> Recently we recruited experienced storage professionals and created a RAS team
> (reliability, availability, serviceability). Their sole goal is to improve
> reliability and make GlusterFS resilient. Here are some of the initiatives:
> 
> * Automated regression / stress / functionality tests
> * Unit test framework
> * Patch-by-patch code audit
> * A significantly expanded hardware lab
> * Gtrace framework (similar to Solaris DTrace)
> 
> We are particularly excited about Gtrace, because it will help us narrow down
> faults fairly quickly. Users will be able to report bugs by posting gtrace
> dumps, which contain more complete information about a bug than log files do.
> It is easier than launching gdb or strace :).
> 
> You will also see a volume-generator tool that automatically generates volume
> specification files. This cuts down on learning time and on human error while
> crafting your volume design.
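> 
> For illustration, a minimal server-side specification in the current volfile
> syntax might look like the sketch below (the export directory is a
> placeholder); the generator will emit files of this general shape:
> 
>   volume posix
>     type storage/posix
>     option directory /data/export       # local directory to export (placeholder)
>   end-volume
> 
>   volume server
>     type protocol/server
>     option transport-type tcp           # TCP/IP transport
>     option auth.addr.posix.allow *      # allow all clients; restrict in production
>     subvolumes posix
>   end-volume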
> 
> Because GlusterFS runs on many hardware platforms and operating system
> distributions, it is very hard for us to test and certify all combinations.
> What makes it worse is that GlusterFS itself is programmable. One solution to
> this problem is to reduce the number of variables, which will then allow us to
> certify those models. The volume specification generator will offer only
> well-tested options.
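> 
> As a sketch of what one such certified model could be, here is a minimal
> client-side distribute specification over two servers (the hostnames server1
> and server2 are placeholders, and the remote subvolume name must match the
> server-side export):
> 
>   volume client1
>     type protocol/client
>     option transport-type tcp
>     option remote-host server1          # placeholder hostname
>     option remote-subvolume posix       # volume exported by the server
>   end-volume
> 
>   volume client2
>     type protocol/client
>     option transport-type tcp
>     option remote-host server2          # placeholder hostname
>     option remote-subvolume posix
>   end-volume
> 
>   volume distribute
>     type cluster/distribute             # hashes files across the subvolumes
>     subvolumes client1 client2
>   end-volume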
> 
> The Gluster Storage Platform, with an embedded kernel and web-based management
> and monitoring, will remove most of today's variables. We will then be able to
> provide a certified list of hardware and client operating systems. We are also
> adding NFS, CIFS and WebDAV support.
> 
> -- 
> Anand Babu Periasamy
> GPG Key ID: 0x62E15A31
> Blog [http://unlocksmith.org]
> GlusterFS [http://www.gluster.org]
> GNU/Linux [http://www.gnu.org]

If you want to hear my opinion on that:
stay with a successful strategy: keep it simple.
If you want to learn from a bad strategy, look at btrfs. Take my word for it:
it will take at least 2-3 years until it is really reliably usable, and by then
people will realise it has become the brontosaurus of filesystems: big, slow,
eating lots of resources, and doing everything besides the things really needed.
And some day someone will understand the simple fact that a (local) fs
needs only a few things:
- unlimited max size
- journals
- versioning / undelete by versioning
- _online_ userspace filesystem check
To make sure, I cc'ed the "someone" ;-)

-- 
Regards,
Stephan



