[Gluster-users] Gluster infrastructure question
Andrew Lau
andrew at andrewklau.com
Tue Dec 10 10:03:36 UTC 2013
Hi Ben,
For glusterfs, would you recommend the enterprise-storage
or the throughput-performance tuned profile?
Thanks,
Andrew
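
(For reference, tuned profiles can be listed and switched with
tuned-adm; which profiles are available depends on the distro and
tuned version:)

    # Show the profiles available on this box
    tuned-adm list

    # Switch to a candidate profile and confirm it took effect
    tuned-adm profile throughput-performance
    tuned-adm active
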
On Tue, Dec 10, 2013 at 6:31 AM, Ben Turner <bturner at redhat.com> wrote:
> ----- Original Message -----
> > From: "Ben Turner" <bturner at redhat.com>
> > To: "Heiko Krämer" <hkraemer at anynines.de>
> > Cc: "gluster-users at gluster.org List" <gluster-users at gluster.org>
> > Sent: Monday, December 9, 2013 2:26:45 PM
> > Subject: Re: [Gluster-users] Gluster infrastructure question
> >
> > ----- Original Message -----
> > > From: "Heiko Krämer" <hkraemer at anynines.de>
> > > To: "gluster-users at gluster.org List" <gluster-users at gluster.org>
> > > Sent: Monday, December 9, 2013 8:18:28 AM
> > > Subject: [Gluster-users] Gluster infrastructure question
> > >
> > > Heyho guys,
> > >
> > > I've been running glusterfs in a small environment for years without
> > > any big problems.
> > >
> > > Now I'm going to use GlusterFS for a bigger cluster, but I have some
> > > questions :)
> > >
> > > Environment:
> > > * 4 Servers
> > > * 20 x 2TB HDDs each
> > > * Raidcontroller
> > > * Raid 10
> > > * 4x bricks => Replicated, Distributed volume
> > > * Gluster 3.4
> > >
> > > 1)
> > > I'm wondering whether I could drop the RAID 10 on each server and
> > > create a separate brick for each HDD.
> > > In that case the volume would have 80 bricks (4 servers x 20 HDDs).
> > > Does anyone have experience with the write throughput of a production
> > > system with this many bricks? As a bonus, I'd get double the usable
> > > HDD capacity.
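
(A one-brick-per-disk volume of the shape described above would be
created roughly as below; hostnames and paths are placeholders, and
with replica 2 each consecutive pair of bricks forms one mirror:)

    # First two replica pairs; the full 80-brick volume extends the
    # same pattern across all 20 disks on each server
    gluster volume create bigvol replica 2 \
        server1:/bricks/d01/brick server2:/bricks/d01/brick \
        server3:/bricks/d01/brick server4:/bricks/d01/brick
    gluster volume start bigvol
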
> >
> > Have a look at:
> >
> > http://rhsummit.files.wordpress.com/2012/03/england-rhs-performance.pdf
>
> That one was from 2012; here is the latest:
>
>
> http://rhsummit.files.wordpress.com/2013/07/england_th_0450_rhs_perf_practices-4_neependra.pdf
>
> -b
>
> > Specifically:
> >
> > ● RAID arrays
> > ● More RAID LUNs for better concurrency
> > ● For RAID6, 256-KB stripe size
> >
> > I use a single RAID 6 that is divided into several LUNs for my bricks.
> > For example, on my Dell servers (with PERC6 RAID controllers) each
> > server has 12 disks that I put into RAID 6. Then I break the RAID 6
> > into 6 LUNs and create a new PV/VG/LV for each brick. From there I
> > follow the recommendations listed in the presentation.
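
(A rough sketch of that per-LUN PV/VG/LV layout, with XFS aligned to
the 256KB stripe recommended above; the device name, inode size, and
stripe width here are assumptions, not Ben's exact settings:)

    # One RAID LUN becomes one brick (device name is a placeholder)
    pvcreate /dev/sdb
    vgcreate vg_brick1 /dev/sdb
    lvcreate -n lv_brick1 -l 100%FREE vg_brick1

    # su = RAID stripe unit, sw = number of data disks in the array
    # (10 data disks assumed for a 12-disk RAID 6)
    mkfs.xfs -i size=512 -d su=256k,sw=10 /dev/vg_brick1/lv_brick1
    mkdir -p /bricks/brick1
    mount /dev/vg_brick1/lv_brick1 /bricks/brick1
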
> >
> > HTH!
> >
> > -b
> >
> > > 2)
> > > I've heard a talk about GlusterFS and scaling out. The main point was
> > > that with more bricks in use, the scale-out process takes a long
> > > time; the problem was/is the hash algorithm. So I'm wondering: which
> > > is faster, one very big brick (RAID 10, 20TB on each server) or many
> > > smaller bricks, and are there any issues either way?
> > > Does anyone have experience with this?
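
(The scale-out cost in question is the rebalance that redistributes
the hashed layout and data after bricks are added; a minimal sketch
with hypothetical host and volume names:)

    # Add one new replica pair, then rebalance and watch progress
    gluster volume add-brick bigvol \
        server5:/bricks/d01/brick server6:/bricks/d01/brick
    gluster volume rebalance bigvol start
    gluster volume rebalance bigvol status
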
> > >
> > > 3)
> > > Failover of an HDD is no big deal for a RAID controller with a
> > > hot-spare HDD. GlusterFS will rebuild a brick automatically if it
> > > fails and comes back with no data present; this will generate a lot
> > > of network traffic between the mirror bricks, but it will handle the
> > > failure much like the RAID controller does, right?
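
(In GlusterFS 3.4 that rebuild is handled by the self-heal daemon; it
can be triggered and watched like this, volume name again hypothetical:)

    # Queue a heal of files that need syncing, then check progress
    gluster volume heal bigvol
    gluster volume heal bigvol info
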
> > >
> > >
> > >
> > > Thanks and cheers
> > > Heiko
> > >
> > >
> > >
> > > --
> > > Anynines.com
> > >
> > > Avarteq GmbH
> > > B.Sc. Computer Science
> > > Heiko Krämer
> > > CIO
> > > Twitter: @anynines
> > >
> > > ----
> > > Managing directors: Alexander Faißt, Dipl.-Inf. (FH) Julian Fischer
> > > Commercial register: AG Saarbrücken HRB 17413, VAT ID: DE262633168
> > > Registered office: Saarbrücken
> > >
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>