[Gluster-users] GlusterFS on mailservers

Craig Carl craig at gluster.com
Mon Nov 15 20:17:48 UTC 2010


On 11/15/2010 08:04 AM, Stephan von Krawczynski wrote:
> On Mon, 15 Nov 2010 10:18:28 -0500
> Joe Landman <landman at scalableinformatics.com> wrote:
>
>> On 11/15/2010 09:47 AM, Stephan von Krawczynski wrote:
>>
>>>> Stephan -
>>>>       Dovecot has been a challenge in the past. We don't specifically test
>>>> with it here, if you are interested in using it with Gluster I would
>>>> suggest testing with 3.1.1, and always keep the index files local, that
>>>> makes a big difference.
>>>>
>>>> Thanks,
>>>>
>>>> Craig
>>> Well, Craig, I cannot follow your advice, as these are 32-bit clients and
>>> AFAIK you said 3.1.1 is not expected to be used in such an environment.
>>> Quite a lot of interesting setups for GlusterFS revolve around mail
>>> servers; I judge it to be a major deficiency if the fs cannot be used for such
>> Quick interjection here:  We have some customers using Dovecot on our
>> storage units with GlusterFS 3.0.x.  There are some issues, usually
>> interactions between Dovecot and FUSE/GlusterFS.  Nothing that can't be
>> worked around.
> Well, a work-around is not the same as "just working". Do you really think that
> it is no sign of a problem if you need a work-around for a pretty standard
> usage request?
>
>>   We are seeing strong/growing interest from our customer
>> base in this use case.
> Well, that means I am right, does it not?
>
>> Craig's advice is spot on.
>>
>>> purposes. You cannot expect people to vote for GlusterFS if there are other
>>> options that have no problems with such a standard setup. I mean, is there
>>> anything more obvious than mail servers for such a fs?
>> Hmmm ... apart from NFS (which isn't a cluster file system and which has
>> a number of its own issues), which other cluster file systems are you
>> referring to that don't have these sorts of issues?  Small-file and
>> small-record performance on any sort of cluster file system is very
>> hard.  You have to get it right first, and then work on the performance
>> side later.
> I am not talking about performance currently (though that is arguable), I
> am talking about sheer basic usage. Probably a lot of potential users come
> from NFS setups and want to make them redundant. And no one has ever heard
> of a fs problem with 32-bit clients (just as an example) ...
> So this is an obvious problem.
> "Dovecot has been a challenge in the past": well, and how does the fs
> currently cope with this challenge?
> I am no supporter of the idea that fs tuning should be necessary just to
> make something work at all. For faster performance let there be tuning
> options, but for basic support of a common environment? I mean, did you
> ever tune FAT, NTFS, extX or the like just to make email work? And don't
> argue that they are not network-related: the simple truth is that this
> product is only a big hit if it is as easy to deploy as a local fs. That
> should be the primary goal.
>
>>> Honestly, I got the impression that you're heading away from mainstream
>>> fs usage toward very special environments and usage patterns.
>>> I am very sorry about that, because 2.X looked very promising. But I did
>>> not find a single setup where 3.X could be used at all.
>> While I respect your opinion, I do disagree with it. In our opinion
>> 3.1.x has gotten better than 3.0.x, which was a huge step up from 2.0.x.
> 2.0.x was something like a filesystem; 3.X is obviously heading towards
> being a storage platform. That makes a big difference. And I'd say it did
> not really get better in general, comparing apples to apples. GlusterFS
> 2.0.x is a lot closer to a usable filesystem (let's say on Linux boxes)
> than GlusterFS 3.X is to NetApp or EMC storage platforms. There is nothing
> comparable to GlusterFS 2.0.X on such boxes, whereas one cannot really
> choose GlusterFS storage over NetApp. I mean, you're trying to enter the
> wrong league, because the big players will just crash you.
>
>> Regards,
>>
>> Joe
>>
>> -- 
>> Joseph Landman, Ph.D
>> Founder and CEO
>> Scalable Informatics, Inc.
>> email: landman at scalableinformatics.com
>> web  : http://scalableinformatics.com
>>          http://scalableinformatics.com/jackrabbit
>> phone: +1 734 786 8423 x121
>> fax  : +1 866 888 3112
>> cell : +1 734 612 4615
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
Stephan -
    You made some very good points; thank you for your feedback. I'd
like to address your Dovecot question directly, then some of your
broader points.

    Around email servers, our challenge is very specific to Dovecot.
Organizations regularly run Zimbra, Sendmail, Postfix, James,
GroupWise, etc. using Gluster with no problems and excellent
performance. We also seem to have more community users running Gluster
with Dovecot than I suspected. We have at least one paid support
subscription for a group using Gluster with Dovecot; we are actively
working with them to improve performance and stability. If you take a
look at bugs.gluster.com you will see we have a P5 critical bug (#956)
open for Dovecot support; engineers have been assigned and we are
actively working on a solution. Because Gluster is free as in beer, as
we patch Gluster to improve Dovecot support for our paid subscribers
(THANK YOU ALL!), you and the rest of the community will benefit.
Please don't think we are not working hard to meet your expectations.
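
    As a concrete illustration of the "keep the index files local"
advice, here is a minimal dovecot.conf sketch. The paths and the %u
layout are hypothetical, so adjust them to your own mail store; the
point is that the INDEX= parameter redirects Dovecot's index files to
local disk while the mailboxes themselves stay on the Gluster mount:

    # Mailboxes on the Gluster mount (hypothetical path), indexes on
    # fast local storage via the INDEX= parameter.
    mail_location = maildir:/mnt/glusterfs/mail/%u:INDEX=/var/lib/dovecot/indexes/%u

    # mmap over FUSE mounts has been a common source of trouble, so
    # disabling it is a reasonable precaution on Gluster:
    mmap_disable = yes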

    At a higher level, Gluster is changing, and I think improving,
based on feedback from the community, our paid subscribers and the
storage industry as a whole. Designing and writing a file system that
is used on thousands of servers, all in less than 3 years, was and is
incredibly challenging, and expensive. Contrast Gluster with another
excellent file system project, btrfs, which also has paid engineering
resources and is still very experimental [1].
   Our community asked for a number of things from Gluster 3.1:

1. The ability to dynamically grow and shrink a cluster. We have
delivered that (see the sketch after this list).
2. Better NFS support, so we implemented NFS as a translator, bypassing
both FUSE and VFS. We recommend NFS with UCARP and RRDNS for a
load-balanced and highly available storage solution; that works well
today.
3. WAN replication (async), coming soon.
4. Quotas, coming soon.
5. Full AD integration, coming soon.
6. Easier installation and configuration; we delivered that with a
single command to manage an entire distributed file system.
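
    To make items 1, 2 and 6 concrete, here is a rough sketch using
the 3.1 command line. The volume name, hosts, brick paths and
addresses are made up for illustration:

    # Item 6: a replicated volume is created and started in two commands.
    gluster volume create mailvol replica 2 transport tcp \
        server1:/export/brick1 server2:/export/brick1
    gluster volume start mailvol

    # Item 1: grow the cluster by adding a brick pair, then rebalance;
    # shrink it again with remove-brick.
    gluster volume add-brick mailvol server3:/export/brick1 server4:/export/brick1
    gluster volume rebalance mailvol start
    gluster volume remove-brick mailvol server3:/export/brick1 server4:/export/brick1

    # Item 2: each server runs UCARP to float a virtual IP (10.0.0.100
    # here), and clients mount the built-in NFS translator (NFSv3)
    # through that address so they fail over with it.
    ucarp --interface=eth0 --srcip=10.0.0.1 --vhid=1 --pass=secret \
        --addr=10.0.0.100 --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh &
    mount -t nfs -o vers=3 10.0.0.100:/mailvol /mnt/mail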

    Now, the one point I really disagree with you on: "...wrong league
because the big players will just crash (sic) you." Absolutely not, I
guarantee it! We are competitive against any solution from NetApp, EMC,
Isilon, and anybody else in the market:

1. We provide excellent 24x7, 365-days-a-year, 4-hour-response-time
support to our paid subscribers, as good as or better than any of the
"big players".
2. We've separated the file system from the hardware; all the "big
players" require incredibly expensive, proprietary hardware to run
their solutions.
3. We are open source, so there is no danger of something going EOL or
losing support from a vendor.
4. At any time, with any volume configuration, Gluster can be
completely uninstalled and the file system consolidated onto a single
host with fewer than 5 commands (scp, cat, ls, cp; see the sketch
after this list).
5. Using Gluster 3.1, an organization can completely replace every
piece of hardware Gluster is using without any impact on applications.
We have completely eliminated the concept of data migrations due to
hardware changes, a huge advantage over any of the "big players".
I could go on and on....
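
    To illustrate points 4 and 5, two rough sketches. The volume name,
hosts and brick paths are again hypothetical, and the consolidation
example assumes a layout where every file lives whole on some brick
(distribute and/or replicate), so a plain copy of the brick
directories recovers the entire file system:

    # Point 4: copy each brick's contents back onto one host; bricks
    # hold ordinary files, so scp/cp is all that is needed.
    scp -r server1:/export/brick1/. /srv/consolidated/
    scp -r server2:/export/brick1/. /srv/consolidated/  # replicas simply overwrite

    # Point 5: migrate a brick to new hardware while the volume stays
    # online; applications keep using the mount untouched.
    gluster volume replace-brick mailvol \
        server1:/export/brick1 newserver:/export/brick1 start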

    Yes, Gluster 3.1 is less customizable than the 3.0.x versions, but
this also means it is easier to use. We are working on adding some of
that customizability back (bug 2014
<http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2014>). If
there are things you would like to see Gluster do, or improve, please,
please post your ideas here. Most of the product ideas posted to this
mailing list in the last 6 months are being actively developed or
investigated.

    Thanks again for your feedback.

[1] Chris Mason is doing incredible work along with the entire btrfs
community; nothing but respect from us. We look forward to combining
the capabilities of Gluster and btrfs ASAP.


Thanks,

Craig

--
Craig Carl
Senior Systems Engineer
Gluster
