[Gluster-users] Architecture advice..
Wipe_Out
wipe_out at users.sourceforge.net
Thu Apr 19 23:02:27 UTC 2012
Hi all,
After looking at just about all the distributed file systems out there, I
have decided that I like the simple setup of GlusterFS best, and not having
to run a metadata server seems like a good idea. The issue I have is working
out the best way to set up the cluster we are trying to create. We have
existing hardware that will be used, which may throw a few spanners in the
works.
The sole purpose of the system is virtualisation.
The Hardware
2 x 1U dual quad-core Xeon servers with 2x2TB SATA drives, to be set up in a
RAID1 configuration.
2 x 2U dual quad-core Xeon servers with 6x2TB SATA drives, to be set up in a
RAID10 configuration.
Features we are trying to achieve:
- Live migration of VMs between any of the nodes.
- High performance (reading from and writing to multiple servers) and high
availability (loss of a drive or a whole server won't stop any VMs for any
length of time; VMs that were running on a failed server can be booted
immediately on another server).
- Scalability: we want to be able to add nodes to the cluster as and when
needed, to expand computing power and/or storage.
Firstly, can GlusterFS support bricks of different sizes in a volume? I have
not been able to find details on this.
Now, it seems to me that what's needed is block/chunk-level distribution of
data rather than file-level, because VMs run from single large files; if
data is distributed at the file level, an entire VM image will be stored on
one brick, which won't help performance. Am I right in thinking that this is
what the "stripe" translator does, as opposed to the "distribute"
translator?
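From my reading of the docs, a plain striped volume would be created
something like this (the hostnames and brick paths here are placeholders
I've made up, not our real setup):

```shell
# Striped volume: each file is split into fixed-size chunks spread
# across the bricks, so a single large VM image spans both servers.
gluster volume create vmstore stripe 2 transport tcp \
    gfs1:/export/brick1 gfs2:/export/brick1
gluster volume start vmstore
```

But on its own that appears to give no redundancy at all, which leads to my
next question.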
If so, how do you achieve high availability with a "stripe", since it would
need to be "replicated" as well? Is this possible? And with different-sized
bricks?
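As far as I can tell from the documentation there is a combined
striped-replicated volume type, created something like this (hostnames are
again placeholders, and I may well have the brick ordering wrong):

```shell
# stripe 2 replica 2 seems to need bricks in multiples of
# stripe x replica = 4; how the four bricks pair up into stripe and
# replica sets apparently depends on the order they are listed.
gluster volume create vmstore stripe 2 replica 2 transport tcp \
    gfs1:/export/brick1 gfs2:/export/brick1 \
    gfs3:/export/brick1 gfs4:/export/brick1
```

Whether a layout like this copes with bricks of different sizes is exactly
what I can't find documented.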
Then there is scalability: how do you expand a "striped" and "replicated"
volume? Starting with 4 servers, would I have to add another 4 to expand the
cluster, or could I add one at a time?
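If I understand the add-brick command correctly, expansion would look
something like the following, and would have to be done in multiples of the
stripe x replica count, so 4 bricks at a time for a stripe 2 replica 2
volume (server names invented again):

```shell
# Grow the striped-replicated volume by one more stripe x replica set,
# then spread the existing data onto the new bricks.
gluster volume add-brick vmstore \
    gfs5:/export/brick1 gfs6:/export/brick1 \
    gfs7:/export/brick1 gfs8:/export/brick1
gluster volume rebalance vmstore start
```

If that's right, adding one server at a time wouldn't be possible with this
layout, which is my worry.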
These are the things I can't figure out with GlusterFS, whereas systems like
Ceph allow incremental expansion: the data is then redistributed so that
there are n copies of each block somewhere in the cluster (not strict mirror
redundancy, but redundancy distributed throughout the cluster). At least,
that's what their documentation says.
If anyone can give any pointers on these issues, or advise on how to put
this together, I would be most grateful.
Thanks in advance.