[Gluster-users] Need a quick answer on "Distributed Replicated Storage" questions

Liam Slusser lslusser at gmail.com
Thu Jun 18 06:00:18 UTC 2009


Jonathan,

You can export a Gluster mount from a client with an NFS server, but the
performance is pretty poor.  As far as I know there is no way to export it
as an iSCSI target.
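For reference, the "RAID 10 of bricks" layout Jonathan describes is just the
replicate translator stacked on top of distribute in the volfile.  A rough
sketch (all brick and volume names here are made up, and the exact translator
options vary by GlusterFS release, so treat this as an outline only):

```
# Client-side volfile sketch: two replica pairs, distributed across them.
# "serverN-brick" stand in for protocol/client volumes defined elsewhere.
volume rep1
  type cluster/replicate
  subvolumes server1-brick server2-brick
end-volume

volume rep2
  type cluster/replicate
  subvolumes server3-brick server4-brick
end-volume

volume dist
  type cluster/distribute
  subvolumes rep1 rep2
end-volume
```

Losing any one server leaves its replica partner serving that half of the
namespace, which matches the "lose any one brick" requirement below.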

Your best option is to use a single (or dual) Linux/Solaris iSCSI server to
bootstrap all your systems in XenServer, and then use Gluster and FUSE to
mount your /data drive once the system is up and running.
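On the NFS re-export point, the usual shape is: mount the Gluster volume via
FUSE on one client, then re-export that mount with the kernel NFS server.  A
minimal sketch (the volfile path and export options are assumptions; note that
exporting a FUSE mount over kernel NFS needs an explicit fsid):

```shell
# Mount the Gluster volume via FUSE (volfile path is hypothetical)
glusterfs --volfile=/etc/glusterfs/client.vol /data

# Re-export it over kernel NFS; fsid= is required for FUSE-backed exports
echo '/data *(rw,sync,fsid=10,no_subtree_check)' >> /etc/exports
exportfs -ra
```

This gives the front end a single NFS IP, but as noted above the extra hop
costs performance, and that one re-export host is a single point of failure.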

liam

On Mon, Jun 15, 2009 at 5:15 PM, Jonathan Bayles <jbayles at readytechs.com> wrote:

> Hi all,
>
> I am attempting to prevent my company from having to buy a SAN to back
> our virtualization platform (XenServer). Right now we have a light workload
> and four Dell 2950s (6 disks, 1 controller each) to leverage on the
> storage side. I like what I see in regard to "Distributed Replicated
> Storage", where you essentially create a RAID 10 of bricks. This would work
> very well for me. The question is: how do I serve this storage paradigm to a
> front end that's expecting an NFS share or an iSCSI target? Does Gluster
> enable me to access the entire cluster from a single IP? Or is it something
> I could run on a CentOS cluster (luci and ricci), using the cluster suite
> to present the glustered file system in the form of an NFS share?
>
> Let me back up and state my needs/assumptions:
>
> * A storage cluster with capacity equal to at least one node (assuming all
> nodes are the same).
>
> * I need to be able to lose/take down any one brick in the cluster at any
> time without a loss of data.
>
> * I need more than the throughput of a single server, if not in overall
> speed, then in width.
>
> * I need to be able to add more bricks in and have the expectation of
> increased storage capacity and throughput.
>
> * I need to present the storage as a single entity, as an NFS share or an
> iSCSI target.
>
> If there are any existing models out there, please point me to them. I
> don't mind doing the work; I just don't want to re-invent the wheel. Thanks
> in advance for your time and effort, I know what it's like to have to answer
> newbie questions!
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>
