[Gluster-users] Gluster node without a brick

Gandalf Corvotempesta gandalf.corvotempesta at gmail.com
Thu Jan 12 20:58:27 UTC 2017


No, you can't.
If you don't put any bricks on the new nodes and bring the replica count up
to 5, the data is not replicated to them, and thus there is nothing to access
from "localhost".

You have to configure the new nodes as clients and access Gluster as remote
storage.
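
As a rough sketch (assuming the volume is called "VMs" and that s1, s2 and s3
are the brick-hosting nodes, as in your example; the mount point is just
illustrative), on the brickless nodes you would mount from one of the existing
servers instead of localhost:

    # On s4 / s5: mount the existing replica 3 volume as a plain client,
    # using s1 for the volfile and the other brick nodes as fallbacks
    mount -t glusterfs s1:/VMs /mnt/VMs -o backup-volfile-servers=s2:s3

    # or the equivalent /etc/fstab line:
    # s1:/VMs /mnt/VMs glusterfs defaults,_netdev,backup-volfile-servers=s2:s3 0 0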

The proxmox solution is limited to 3 nodes (or at least limited to the
replica count value)

On Jan 12, 2017 at 4:07 PM, "Kevin Lemonnier" <lemonnierk at ulrar.net> wrote:

Hi,

We have a few 3-node Proxmox clusters with a GlusterFS brick on each.
The Proxmox nodes are configured to use "localhost" as the gluster server,
which works very well.

We are thinking of bumping the Proxmox clusters to 5 nodes, but without
putting bricks on the 2 new ones (because of a bug I already mailed about
a few months ago); basically we just need more RAM. My question is the
following: can I install glusterfs and peer probe the new servers
without putting a brick on them, and still use localhost as the server?

For example :

s1, s2 and s3 are Proxmox nodes and each have a brick of a replica 3 volume.
We add s4 and s5 to the gluster pool by running peer probe, but we don't
add a brick on them. Will localhost:/VMs work on s4 and s5?
I believe it will, since they'll have access to the volume on the other
members of the pool, but I'd like to be sure before I go through the trouble
of setting all of this up.
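
Roughly what I have in mind, as a sketch (run from s1, which already has a
brick; these are just the standard gluster CLI commands):

    # From an existing member of the trusted pool:
    gluster peer probe s4
    gluster peer probe s5
    gluster pool list          # s4 and s5 now listed as peers
    gluster volume info VMs    # bricks still only on s1, s2 and s3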

(I also have one of the original servers set as the backup volfile server,
but who knows if that'll always be up; I like using localhost since if
localhost is down I have other problems anyway :D)

Thanks !
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users