[Gluster-users] how well will this work
Gerald Brandt
gbr at majentis.com
Thu Dec 27 12:23:19 UTC 2012
On 12-12-26 10:24 PM, Miles Fidelman wrote:
> Hi Folks,
>
> I find myself trying to expand a 2-node high-availability cluster
> to a 4-node cluster. I'm running Xen virtualization, and currently
> using DRBD to mirror data and pacemaker to fail over cleanly.
>
> The thing is, I'm trying to add 2 nodes to the cluster, and DRBD
> doesn't scale. Also, as a function of rackspace limits and the
> hardware at hand, I can't separate storage nodes from compute nodes -
> instead, I have to live with 4 nodes, each with 4 large drives (but
> also with 4 GigE ports per server).
>
> The obvious thought is to use Gluster to assemble all the drives into
> one large storage pool, with replication. But the last time I looked
> at this (6 months or so back), it looked like some of the critical
> features were brand new, and performance seemed to be a problem in
> the configuration I'm thinking of.
>
> Which leads me to my question: Has the situation improved to the
> point that I can use Gluster this way?
>
> Thanks very much,
>
> Miles Fidelman
>
>
Hi,
I have a XenServer pool (3 servers) talking to a GlusterFS replicate
setup over NFS, with uCARP for IP failover.
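In case it helps anyone picture it, the pieces looked roughly like
this (the hostnames, IPs, volume name, and brick paths below are
placeholders, not my real config):

  # Two-way replicated volume across a pair of Gluster servers
  gluster volume create vmstore replica 2 transport tcp \
      gluster1:/export/brick1 gluster2:/export/brick1
  gluster volume start vmstore

  # uCARP floats a virtual IP between the two servers, so XenServer
  # only ever talks to the VIP and a failed server just moves the IP
  ucarp --interface=eth0 --srcip=192.168.10.11 --vhid=10 --pass=secret \
      --addr=192.168.10.100 \
      --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh &

  # The XenServer NFS SR points at the VIP; done by hand, the
  # equivalent mount is NFSv3 over TCP against Gluster's built-in NFS
  mount -t nfs -o vers=3,proto=tcp,mountproto=tcp \
      192.168.10.100:/vmstore /mnt/vmstore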
The system was put in place in May 2012, using GlusterFS 3.3. It ran
very well, with speeds comparable to my existing iSCSI solution
(http://majentis.com/2011/09/21/xenserver-iscsi-and-glusterfsnfs/).
I was quite pleased with the system; it worked flawlessly until
November. At that point, the Gluster NFS server started stalling under
load. It would become unresponsive for long enough that the VMs under
XenServer would lose their drives. Linux guests would remount their
drives read-only and then eventually lock up, while Windows guests
would just lock up. In this case, Windows was more resilient to the
transient disk loss.
I have been unable to solve the problem, and am now switching back to a
DRBD/iSCSI setup. I'm not happy about it, but we were losing NFS
connectivity nightly during backups. Life was hell for a long time
while I was trying to fix things.
Gerald