[Gluster-devel] Mail cluster storage benchmark

Basavanagowda Kanur gowda at zresearch.com
Tue Nov 11 07:38:57 UTC 2008


Daniel,
  Did you mean migrating an existing storage setup from unify-AFR-posix to
DHT-AFR-BDB, or is this a fresh setup?

--
gowda

On Tue, Nov 11, 2008 at 1:05 PM, Basavanagowda Kanur <gowda at zresearch.com> wrote:

> Daniel,
>   Replies inline.
>
> On Tue, Nov 11, 2008 at 12:10 AM, Daniel van Ham Colchete <daniel.colchete at gmail.com> wrote:
>
>> Hi y'all!
>>
>> For the last few days I have been developing software to benchmark a
>> mail server cluster. The reasons are: first, I couldn't find such a tool
>> online; second, sequential tests on any filesystem won't tell you what's
>> best for you, because with e-mail everything happens in parallel and you
>> won't see whether one option wins on parallelism (tar, rsync, cp, find,
>> ls -la, etc. are all sequential tests); and third, there is no better way
>> to reproduce a similar read/write I/O pattern. This software will be
>> released under the GPL next month.
>>
>> Thursday I'll be arriving at my data center (on another continent, by the
>> way). I'll be there for 13 days, and my main objective is to get GlusterFS
>> up and running there. The current setup is not good enough and it doesn't
>> scale anymore. Other tasks are secondary, although there are a few.
>> Throughout those 13 days I will be using my software to study many
>> options for the storage servers, for example:
>>
>> Filesystem options: Ext3, XFS, ZFS (OpenSolaris)
>> RAID options: RAID 10, RAID 5, RAID 6, RAID 0 + (ZFS:RAID-1, ZFS:RAID-Z,
>> ZFS:RAID-Z2)
>> GlusterFS: 1.3/1.4, Unify/DHT, BDB/Posix, IO-Cache, Read-Ahead,
>> Write-Behind (a sample translator stack is sketched below)
>> Network file systems: NFS/GlusterFS (splitting directories with NFS is an
>> escape plan).
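>>
>> For illustration, a client-side performance translator stack of the kind
>> I plan to test might look like this (a sketch only: "dht" stands for
>> whatever cluster volume sits underneath, and translator and option names
>> should be double-checked against the GlusterFS version being tested):
>>
>> volume readahead
>>   type performance/read-ahead
>>   subvolumes dht
>> end-volume
>>
>> volume writebehind
>>   type performance/write-behind
>>   subvolumes readahead
>> end-volume
>>
>> volume iocache
>>   type performance/io-cache
>>   subvolumes writebehind
>>   # cache size is a tunable worth benchmarking, e.g. option cache-size 64MB
>> end-volume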
>>
>> Besides other options involving e-mail hosting, like which RDBMS I should
>> use for user authentication, etc.
>>
>> I have 4 nice mail servers (storage clients) and two nice storage servers
>> (4 TB each) available for the tests. My final setup is exactly double that
>> (8 mail servers, 4 storage servers). Right now I have about 9k users
>> checking their e-mail every 5 minutes, running with HA-NFS, but this setup
>> is my current bottleneck and it doesn't scale (I can't just add more
>> storage servers). I expect to put 15k users on the new setup, but I hope
>> it can grow bigger.
>>
>> Of course I'll send all the results to the list and put them on the wiki
>> too (NFS vs. best GlusterFS for mail storage: how is this going to end?).
>> I would like to ask you devs and users a few questions:
>>
>> (Devs and users) How do I get started with OpenSolaris and ZFS? What
>> should I be looking for with my benchmarks? What do I have to study on ZFS
>> performance optimization? I have no experience at all here.
>>
>> (Devs) Can you send me an example of the DHT configuration? DHT is what I
>> have always dreamt about; I'm really betting on it.
>
>
> volume dht
>     type cluster/dht
>     subvolumes <sub-1> <sub-2> <sub-3>
> end-volume
>
> Please note that you need support for extended attributes (xattrs) on your
> storage backend.
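>
> For example, a server-side posix brick could look like this (a sketch; the
> export path is hypothetical, and the backend filesystem must be mounted
> with extended attributes enabled, e.g. user_xattr on ext3):
>
> volume brick
>   type storage/posix
>   # hypothetical export path; must support extended attributes
>   option directory /data/export
> end-volume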
>
>
>>
>>
>> (Devs) I've seen from the changelog that 1.4 tree development is going
>> really fast. Bugs are being fixed every day, and this is great! I hope
>> I'll help find and fix a few too in this process. So, say 1.4 is much,
>> much better for me (I'm betting on that): when do you guys expect a stable
>> version? How easy is it to migrate from 1.3-Unify+AFR+Posix to
>> 1.4-DHT+AFR+BDB?
>
>
> It is pretty straightforward. :)
>
> DHT serves the same purpose as unify: distributing files over subvolumes.
>
> You need to have the Berkeley DB library installed for BDB to work.
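>
> With BDB, the brick definition would use the bdb storage translator
> instead of posix, roughly like this (a sketch; option names should be
> verified against the 1.4 sources, and the directory is hypothetical):
>
> volume brick
>   type storage/bdb
>   # small files are stored inside Berkeley DB files under this directory
>   option directory /data/export
> end-volume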
>
>
>>
>>
>> Thank you all for all the help!
>>
>> Best regards,
>> Daniel van Ham Colchete
>> (IRC: vanham)
>>
>
>
> --
> hard work often pays off after time, but laziness always pays off now
>



-- 
hard work often pays off after time, but laziness always pays off now