[Gluster-users] Project pre planning

Paul Robert Marino prmarino1 at gmail.com
Tue Dec 17 14:20:52 UTC 2013


By the way, use the Gluster native FUSE client instead of NFS if you
can; there is a performance penalty when you use NFS with Gluster.
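For example, assuming a volume named gv0 served by a node named node1
(both names are placeholders), the two mounts would look roughly like
this; note that Gluster's built-in NFS server only speaks NFSv3:

    # native FUSE client: talks to all bricks directly, so the mount
    # does not depend on any single server staying up
    mount -t glusterfs node1:/gv0 /mnt/gv0

    # NFS: all traffic funnels through the one server you mounted from
    mount -t nfs -o vers=3,mountproto=tcp node1:/gv0 /mnt/gv0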

Also, I use keepalived to manage the VIP and to load balance the
client connections; it works fairly well. The only downside is that
most of the how-to documents are ancient and no longer correct, and
this includes the examples that ship with it. The only reliable
documentation is here:
https://github.com/acassen/keepalived/blob/master/doc/keepalived.conf.SYNOPSIS

The big difference between the old method and the more stable modern
method of configuring it is that you no longer set an explicit master
and backup node in the VRRP config. Instead you set both nodes to
BACKUP and give them different priority numbers so that they hold a
proper election; this makes failover and recovery a lot smoother. If
you use the old method from the examples and how-to docs on the web,
you will run into issues, and many of the features, like preempt
delay, won't work correctly.
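For instance, a minimal sketch of the modern style for one node might
look like the following (interface, router id, password, and addresses
are placeholders; the second node gets the same block with a lower
priority):

    vrrp_instance GLUSTER_VIP {
        state BACKUP            # both nodes start as BACKUP ...
        interface eth0
        virtual_router_id 51
        priority 150            # ... and the election is decided by priority
        advert_int 1
        preempt_delay 300       # wait 5 minutes before reclaiming the VIP
        authentication {
            auth_type PASS
            auth_pass s3cret
        }
        virtual_ipaddress {
            192.168.1.100/24
        }
    }

Note that preempt_delay only takes effect when the instance starts in
the BACKUP state, which is exactly why the old explicit-master examples
break it.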



On Tue, Dec 17, 2013 at 4:37 AM, Frank Kirschner <frank at celebrate.de> wrote:
> Thanks Nux,
>
> now I'm building up my first cluster for testing.
>
> best regards
> Frank
>
> -----Original Message-----
> From: gluster-users-bounces at gluster.org
> [mailto:gluster-users-bounces at gluster.org] On Behalf Of Nux!
> Sent: Tuesday, December 17, 2013 9:42 AM
> To: gluster-users at gluster.org
> Subject: Re: [Gluster-users] Project pre planning
>
> On 17.12.2013 08:06, Frank Kirschner wrote:
>> Hello GlusterFS users,
>
> Hello,
>
>>
>> can anybody please give me their opinion on the following setup and
>> questions?
>> 4 storage servers with 16 SATA bays each, connected by GigE:
>
> My thoughts, inline:
>
>>
>> Q1:
>> Volume will be set up as distributed-replicated.
>> Maildir, FTP dir, htdocs, file store directory => as subdirs in one
>> big GlusterVolume, or each dir in its own GlusterVolume?
>
> I'd go for individual volumes; it might give you extra flexibility in
> the future, and it's also better security to have stuff in different
> places.
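>
> For example (just a sketch, with made-up volume and brick paths), a
> set of separate replica-2 volumes spread over your four nodes could
> be created like this:
>
>    gluster volume create mail replica 2 \
>        node1:/export/brick1/mail node2:/export/brick1/mail \
>        node3:/export/brick1/mail node4:/export/brick1/mail
>    gluster volume start mail
>
> and then the same again for ftp, htdocs and the file store.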
>
>>
>> Q2: Set up the bricks as a collection of JBODs, or underlay them
>> with a RAID-5 array?
>
> We use RAID6 in our setup; JBOD might give you better total throughput
> at the cost of more hassle (replacing bricks). There was a similar
> thread recently, check it out:
> https://www.mail-archive.com/gluster-users@gluster.org/msg13707.html
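>
> With plain JBOD bricks a failed disk means replacing that brick by
> hand, roughly like this (made-up names; double-check the syntax for
> your Gluster version) and then letting self-heal repopulate it:
>
>    gluster volume replace-brick mail \
>        node2:/export/brick1/mail node2:/export/brick2/mail commit force
>    gluster volume heal mail full
>
> whereas with RAID the array rebuilds underneath Gluster without any
> brick operations.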
>
>>
>> Q3: A client mounts the GlusterFS from an NFS export of node 1. What
>> if that server goes down - would a setup with a virtual IP managed by
>> heartbeat be a solution for providing one fixed, always-available IP
>> to the clients?
>
> You can use a VIP to export your NFS; we use keepalived, but in
> hindsight I regret not knowing about CTDB at the time, as it plays
> well with Samba.
> http://download.gluster.org/pub/gluster/glusterfs/doc/Gluster_CTDB_setup.v1.pdf
> http://ctdb.samba.org/
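>
> The point of the VIP is that clients mount the floating address
> rather than a specific node, e.g. (address and volume name are
> placeholders):
>
>    mount -t nfs -o vers=3 192.168.1.100:/gv0 /mnt/gv0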
>
>>
>> In each case I have two HTTP servers (likewise FTP servers, mail
>> servers, SMB servers) which mount the GlusterFS via NFS. These
>> servers are in turn managed by heartbeat with a virtual IP, which the
>> clients on the network will use:
>>
>> Gluster Node1 --+
>>                 |
>> Gluster Node2 --+      +-- http1 --+
>>                 +------+           |--> Clients
>> Gluster Node3 --+      +-- http2 --+
>>                 |
>> Gluster Node4 --+
>>
>> Will this work?
>
> Looks good to me.
>
> Good luck!
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users


