[Gluster-users] Gluster infrastructure question

bernhard glomm bernhard.glomm at ecologic.eu
Mon Dec 9 18:52:57 UTC 2013

Hi Heiko,

Some years ago I had to deliver a reliable storage system that could easily grow in size over time.
For that I was in close contact with
PrestoPRIME, who have published a lot of interesting research results accessible to the public.
What struck me was the general concern about how, when, and in which patterns hard drives fail,
and about the rebuild time when a "big" (i.e. 2TB+) drive fails. (One of the PrestoPRIME papers deals with exactly that in detail.)
Against that background my approach was to build relatively small RAID6 bricks (9 * 2 TB + 1 hot spare)
and connect them together in a distributed GlusterFS volume.
I never experienced any problems with that and felt quite comfortable with it.
That volume served a large amount of big-file data exported via Samba.
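For illustration, such a setup can be sketched roughly as follows. Device names, hostnames, brick paths, and the volume name are all made up for the example; this is not the exact configuration I ran:

```shell
# Assemble one RAID6 set per node: 9 data/parity disks plus 1 hot spare
# (device names /dev/sdb..sdk are hypothetical)
mdadm --create /dev/md0 --level=6 --raid-devices=9 /dev/sd[b-j] \
      --spare-devices=1 /dev/sdk
mkfs.xfs /dev/md0
mkdir -p /srv/brick1 && mount /dev/md0 /srv/brick1

# Then join one such brick per node into a plain distributed volume
gluster volume create bigdata \
    node1:/srv/brick1 node2:/srv/brick1 node3:/srv/brick1
gluster volume start bigdata
```

Each node's RAID6 array absorbs single- and double-disk failures locally, so GlusterFS only has to distribute files across a handful of large, already-redundant bricks.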
At the same time I used another, mirrored, GlusterFS volume as the storage backend for
my VM images; same there, no problems, and much less hassle and headache than DRBD and OCFS2,
which I run on another system.
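The mirrored backend is just a stock GlusterFS replica volume. A minimal sketch, with hostnames, paths, and the mount point assumed for the example:

```shell
# Two-way mirror across two nodes (names are hypothetical)
gluster volume create vmimages replica 2 \
    nodeA:/srv/vmbrick nodeB:/srv/vmbrick
gluster volume start vmimages

# Mount it via the native FUSE client where the hypervisor keeps its images
mount -t glusterfs nodeA:/vmimages /var/lib/libvirt/images
```

Unlike a DRBD+OCFS2 stack, there is no separate block-replication layer and cluster filesystem to keep in sync; the replica volume handles both.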


	 Bernhard Glomm
IT Administration

Phone:	 +49 (30) 86880 134
Fax:	 +49 (30) 86880 100
Skype:	 bernhard.glomm.ecologic
Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH

On Dec 9, 2013, at 2:18 PM, Heiko Krämer <hkraemer at anynines.de> wrote:

> Heyho guys,
> I've been running GlusterFS in a small environment for years without
> big problems.
> Now I'm going to use GlusterFS for a bigger cluster, but I have some
> questions :)
> Environment:
> * 4 Servers
> * 20 x 2TB HDD, each
> * Raidcontroller
> * Raid 10
> * 4x bricks => Replicated, Distributed volume
> * Gluster 3.4
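With that environment, the 4-brick replicated-distributed volume would be created along these lines (hostnames, brick paths, and the volume name are assumed for the sketch):

```shell
# replica 2 pairs bricks in the order given:
# (srv1,srv2) mirror each other, as do (srv3,srv4);
# files are then distributed across the two mirror pairs
gluster volume create bigvol replica 2 \
    srv1:/export/brick1 srv2:/export/brick1 \
    srv3:/export/brick1 srv4:/export/brick1
gluster volume start bigvol
```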
> 1)
> I'm wondering whether I should drop the RAID 10 on each server and
> instead create a separate brick per HDD. In that case the volume
> would have 80 bricks (4 servers x 20 HDDs). Is there any experience
> with write throughput in a production system with that many bricks?
> In addition I'd get double the usable HDD capacity.
> 2)
> I've heard a talk about GlusterFS and scaling out. The main point was
> that with more bricks in use, the scale-out (rebalance) process takes
> a long time; the problem was/is the hash algorithm. So I'm wondering:
> is one very big brick (RAID 10, 20 TB on each server) faster than
> many smaller bricks, or the other way around, and are there any
> issues?
> 3)
> An HDD failure is no big deal for a RAID controller with a hot-spare
> HDD. GlusterFS will rebuild a brick automatically if it fails and its
> data are missing; this will cause a lot of network traffic between
> the mirror bricks, but it will handle the failure much like the RAID
> controller would, right?
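For reference, the self-heal that rebuilds a replaced brick in Gluster 3.x can be inspected and triggered from the CLI (volume name assumed):

```shell
# List files still pending self-heal on each replica brick
gluster volume heal bigvol info

# Kick off a full self-heal instead of waiting for the daemon's next pass
gluster volume heal bigvol full
```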
> Thanks and cheers
> Heiko
> --
> Anynines.com
> Avarteq GmbH
> B.Sc. Informatik
> Heiko Krämer
> Twitter: @anynines
> ----
> Geschäftsführer: Alexander Faißt, Dipl.-Inf.(FH) Julian Fischer
> Handelsregister: AG Saarbrücken HRB 17413, Ust-IdNr.: DE262633168
> Sitz: Saarbrücken
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

