[Gluster-users] Need a quick answer on "Distributed Replicated Storage" questions

Liam Slusser lslusser at gmail.com
Thu Jun 18 23:35:59 UTC 2009


Thanks for the update Anand.

Funny you mention unfs3.  Just today one of our engineers at work set up
unfs3 against our large production gluster cluster and, so far, it has been
very good.  I was just reading up on your modified booster version and will
give that a try as well.
I'm looking forward to testing out the modified unfs3 and the native nfs
protocol translator!
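
For anyone curious, the setup was roughly the following. A minimal sketch,
assuming the GlusterFS client is mounted at /mnt/gluster and a stock unfs3
build; paths, flags, and export options may differ on your distro:

    # /etc/exports -- export the FUSE mount to the app network
    /mnt/gluster 192.168.1.0/24(rw,no_root_squash)

    # run unfs3 against that exports file
    unfsd -e /etc/exports

Clients then mount it like any other NFS share.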

Thanks!

ls

On Thu, Jun 18, 2009 at 1:36 AM, Anand Babu <ab at gluster.com> wrote:

> We have made good progress with unfs3 integration using the booster model.
> GlusterFS and unfs3 (a modified version) will run in a single address space
> using the booster library. This feature is scheduled for 2.1. We will try
> to have a pre-release available soon (within weeks). GlusterFS v2.2 will
> have a native NFS protocol translator.
>
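> Roughly, the model looks like this (illustrative only; exact library path,
> fstab format, and variable names may differ in the release). Booster is an
> LD_PRELOAD library that routes the NFS server's file calls through
> libglusterfsclient instead of the FUSE mount:
>
>     # /etc/glusterfs/booster.fstab -- map a client volfile to a mount point
>     /etc/glusterfs/client.vol /mnt/gluster glusterfs
>
>     # preload booster into unfsd so both run in one address space
>     GLUSTERFS_BOOSTER_FSTAB=/etc/glusterfs/booster.fstab \
>       LD_PRELOAD=/usr/lib/glusterfs/glusterfs-booster.so \
>       unfsd -e /etc/exports
>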
> iSCSI exporting requires mmap support. You can create image files, losetup
> them as loop devices, and then export those as iSCSI volumes. We just fixed
> a bug that caused poor mmap write performance. Work is underway. We will
> keep you updated.
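>
> For example (a sketch using iSCSI Enterprise Target; your target software,
> device names, and IQN will differ):
>
>     # create a 10 GB image on the gluster mount and attach a loop device
>     dd if=/dev/zero of=/mnt/gluster/vol0.img bs=1M count=10240
>     losetup /dev/loop0 /mnt/gluster/vol0.img
>
>     # /etc/ietd.conf -- export the loop device as LUN 0
>     Target iqn.2009-06.org.example:gluster.vol0
>         Lun 0 Path=/dev/loop0,Type=fileio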
>
> --
> Anand Babu Periasamy
> GPG Key ID: 0x62E15A31
> Blog [http://unlocksmith.org]
> GlusterFS [http://www.gluster.org]
> GNU/Linux [http://www.gnu.org]
>
>
> Liam Slusser wrote:
>
>>  Jonathan,
>>  You can export a Gluster mount from a client with an NFS server, but the
>> performance is pretty poor.  As far as I know there is no way to export it
>> over iSCSI.
>>  Your best option is to use a single or dual Linux/Solaris iSCSI server to
>> bootstrap all your systems in XenServer and then use Gluster and FUSE to
>> mount your /data drive once the system is up and running.
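>>
>> For the /data mount that is just the normal FUSE client, e.g. (2.x volfile
>> syntax as a sketch; your volfile path will differ):
>>
>>     glusterfs -f /etc/glusterfs/client.vol /data
>>
>>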
>>  liam
>>
>> On Mon, Jun 15, 2009 at 5:15 PM, Jonathan Bayles <jbayles at readytechs.com> wrote:
>>
>>    Hi all,
>>
>>    I am trying to keep my company from having to buy a SAN to back
>>    our virtualization platform (XenServer). Right now we have a light
>>    workload and four Dell 2950s (6 disks, 1 controller each) to
>>    leverage on the storage side. I like what I see in regard to
>>    "Distributed Replicated Storage", where you essentially create a
>>    RAID 10 of bricks. This would work very well for me. The question
>>    is: how do I serve this storage paradigm to a front end that is
>>    expecting an NFS share or an iSCSI target? Does Gluster let me
>>    access the entire cluster from a single IP? Or is it something I
>>    could run on a CentOS cluster (luci and ricci), using Cluster
>>    Suite to present the glustered file system as an NFS share?
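>>
>>    As I understand it, that layout is a cluster/distribute translator
>>    stacked over cluster/replicate pairs, something like this (volume
>>    and brick names here are made up by me):
>>
>>        volume repl1
>>          type cluster/replicate
>>          subvolumes node1-brick node2-brick
>>        end-volume
>>
>>        volume repl2
>>          type cluster/replicate
>>          subvolumes node3-brick node4-brick
>>        end-volume
>>
>>        volume dist
>>          type cluster/distribute
>>          subvolumes repl1 repl2
>>        end-volume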
>>
>>    Let me back up and state my needs/assumptions:
>>
>>    * A storage cluster with capacity equal to at least one
>>    node (assuming all nodes are the same).
>>
>>    * I need to be able to lose/take down any one brick in the cluster
>>    at any time without a loss of data.
>>
>>    * I need more than the throughput of a single server, if not in
>>    overall speed, then in width.
>>
>>    * I need to be able to add more bricks in and have the expectation
>>    of increased storage capacity and throughput.
>>
>>    * I need to present the storage as a single entity, as an NFS share
>>    or an iSCSI target.
>>
>>    If there are any existing models out there, please point me to
>>    them; I don't mind doing the work, I just don't want to reinvent
>>    the wheel. Thanks in advance for your time and effort, I know what
>>    it's like to have to answer newbie questions!
>>    _______________________________________________
>>    Gluster-users mailing list
>>    Gluster-users at gluster.org
>>    http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>>
>