[Gluster-users] Automatic arbiter volumes on distributed/replicated volumes with replica 2?

Frank Rothenstein f.rothenstein at bodden-kliniken.de
Thu Mar 31 14:13:06 UTC 2016


Hi Andre,

the key is not having some particular number of nodes but having
replica 3. To save some hardware you can use an arbiter brick: it only
holds metadata, but helps to keep the quorum.
Distribution just spreads your data across the nodes; it gives the same
level of replication as two nodes, only with the data distributed over
more of them.
As there is no upgrade path to "replica 3 arbiter 1", you have to
create a new gluster volume. For replica 3 you need at least 3 bricks;
you could also use 6/9/12 bricks to make use of all 4 of your nodes.
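
As a sketch, a 6-brick "replica 3 arbiter 1" volume spread over 4 nodes
could be created like this (hostnames, volume name and brick paths are
hypothetical; this needs a running Gluster 3.7+ trusted pool, so it is
not runnable standalone):

```shell
# Every group of 3 bricks forms one replica set; the 3rd brick of each
# set is the arbiter and stores only metadata, not file data.
gluster volume create gv0 replica 3 arbiter 1 \
  node1:/bricks/b1 node2:/bricks/b2 node3:/bricks/arb1 \
  node3:/bricks/b3 node4:/bricks/b4 node1:/bricks/arb2
gluster volume start gv0
```

This way the arbiter bricks are co-located on nodes that already hold
data bricks of the *other* replica set, so no extra hardware is needed.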

Maybe this helps a bit...

Regards, Frank

On Thursday, 31.03.2016 at 15:04 +0200, André Bauer wrote:
> OK, thanks.
> 
> As I understand it, quorum does not work on a 2-node replica 2
> cluster. That's the reason VM images go read-only when one node goes
> down.
> 
> To make it work you need replica 3 and therefore a full third node,
> or at least an arbiter.
> 
> Why is this also the case with a 4-node replica 2 cluster?
> 
> I use the other 2 nodes for distributed/replicated volumes.
> Imho this should be enough to get proper quorum?
> If not, why?
> 
> Imho the distribution nodes could also do the work the arbiter does
> in a 4-node replicated/distributed setup?
> Is this something that makes sense from a technical view?
> 
> If it's technically possible but just not a feature at the moment, I
> would really like to see this in the future.
> 
> Regards
> André
> 
> 
> On 30.03.2016 at 03:15, Ravishankar N wrote:
> > 
> > On 03/30/2016 01:33 AM, André Bauer wrote:
> > > 
> > > On 24.03.2016 at 13:56, Ravishankar N wrote:
> > > > 
> > > > On 03/24/2016 04:30 PM, André Bauer wrote:
> > > > > 
> > > > > So if you have a 4-node cluster, is it really needed to have
> > > > > a third replica? Imho 2 of the nodes could also be used as
> > > > > arbiters?
> > > > I'm not sure I understand. The 'arbiter' volume is a special
> > > > type of replica volume where the 3rd brick of each replica set
> > > > only holds metadata. So if you're asking whether this brick
> > > > itself can be co-located on a node which holds the other 'data'
> > > > bricks of the volume, then yes, that is possible.
> > > My question is:
> > > 
> > > If I have 4 nodes and use replica 2, why should I need to add 2
> > > more arbiter nodes, when I also have 2 (distributed) nodes which
> > > could do the arbiter job automatically?
> > Again, the term 'arbiter' refers to a type of replica-3 volume in
> > gluster parlance (at least until another feature comes along that
> > uses the same terminology ;-) ). A replica-2 configuration does not
> > have an 'arbiter'.
> > 
> > > 
> > > 
> > > Imho 4 nodes should be enough to get proper quorum even if only
> > > replica 2 is used.
> > There are both client and server quorums in gluster.
> > http://gluster.readthedocs.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
> > has more information.
> > Thanks,
> > Ravi
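
[The client- and server-quorum options Ravi mentions are set per
volume. A minimal sketch, assuming a hypothetical volume name "gv0";
this needs a running Gluster cluster, so it is not runnable standalone:

```shell
# Client-side quorum: allow writes only while a majority of each
# replica set's bricks is reachable from the client.
gluster volume set gv0 cluster.quorum-type auto

# Server-side quorum: glusterd stops the bricks on a node that loses
# contact with the majority of the trusted storage pool.
gluster volume set gv0 cluster.server-quorum-type server
```

Both option names are documented in the admin guide linked above.]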
> > > 
> > > 
> > > Regards
> > > André
> > > 
> > > > 
> > > > -Ravi
> > > > > 
> > > > > Does it make sense to open a feature request in the
> > > > > bugtracker?
> > > > > 
> > > > > Regards
> > > > > André
> > > > > 
> > > > > On 24.03.2016 at 11:02, Ravishankar N wrote:
> > > > > > 
> > > > > > On 03/24/2016 02:39 PM, André Bauer wrote:
> > > > > > > 
> > > > > > > Hi List,
> > > > > > > 
> > > > > > > we just upgraded our 4-node cluster from 3.5.8 to 3.7.8.
> > > > > > > 
> > > > > > > Because of replica 2 on all volumes, I ran into problems
> > > > > > > with read-only file systems on VM images when running
> > > > > > > 3.5.x. As I know now, the solution would be to have
> > > > > > > replica 3 or at least use arbiter volumes.
> > > > > > > 
> > > > > > > Yesterday I stumbled over this post on the list, which I
> > > > > > > missed before (damn spam filter):
> > > > > > > 
> > > > > > > https://www.gluster.org/pipermail/gluster-users/2015-November/024191.html
> > > > > > > 
> > > > > > > 
> > > > > > > 
> > > > > > > Steve Dainard points out that 3.7.x uses an automatic
> > > > > > > arbiter when you have 4 nodes configured as
> > > > > > > distributed/replicated.
> > > > > > > 
> > > > > > > Is this true? I could not find anything about it in the
> > > > > > > documentation :-/
> > > > > > There is no 'automatic' arbiter for replica 2. I think he
> > > > > > was referring to the dummy node peer-probed for maintaining
> > > > > > server quorum.
> > > > > > -Ravi
> > > > > > > 
> > > > > > > It would be nice if I could save on having 2 more nodes
> > > > > > > this way.
> > > > > > > 
> > > > > > > If not, is there a chance to see such a feature in the
> > > > > > > future?
> > > > > > > 
> > > > > > > 
> > > > 
> > 
> > 
> 

______________________________________________________________________________
BODDEN-KLINIKEN Ribnitz-Damgarten GmbH
Sandhufe 2
18311 Ribnitz-Damgarten

Telefon: 03821-700-0
Fax:       03821-700-240

E-Mail: info at bodden-kliniken.de   Internet: http://www.bodden-kliniken.de

Sitz: Ribnitz-Damgarten, Amtsgericht: Stralsund, HRB 2919, Steuer-Nr.: 079/133/40188
Aufsichtsratsvorsitzende: Carmen Schröter, Geschäftsführer: Dr. Falko Milski



