[Gluster-users] Recovery when 2 of 3 servers goes down!

Gilberto Nunes gilberto.nunes32 at gmail.com
Fri Jul 17 13:49:27 UTC 2020


Hi there

I have 3 servers with Gluster 7 installed, set up as replica 3 with
arbiter 1.
Here are the commands I used:
- First create a simple volume with one server:
gluster volume create VMS proxmox01:/DATA/vms
- Then add the second one:
gluster peer probe proxmox02
gluster volume add-brick VMS replica 2 proxmox02:/DATA/vms
- And finally add the third:
gluster peer probe proxmox03
gluster volume add-brick VMS replica 3 arbiter 1 proxmox03:/DATA/vms
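
For completeness, the volume was then started, and the layout can be checked
with the info command (full output further down):

gluster volume start VMS
gluster volume info VMS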

But then I decided to test the environment by bringing proxmox02 and proxmox03
down, and after a few seconds I get "Transport endpoint is not connected".
Is there a way to keep the volume available on the remaining server if 2 of
the 3 go down?
gluster vol info

Volume Name: VMS
Type: Replicate
Volume ID: 64735da4-8671-4c5e-b832-d15f5c03e9f0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: proxmox01:/DATA/vms
Brick2: proxmox02:/DATA/vms
Brick3: proxmox03:/DATA/vms (arbiter)
Options Reconfigured:
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
performance.client-io-threads: off
cluster.self-heal-daemon: enable
cluster.quorum-reads: false
cluster.quorum-count: 1
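
For reference, the quorum-related options above were set with commands like
these. I am not sure they are the right knobs to keep the volume writable from
a single brick; as far as I understand, cluster.quorum-count only takes effect
when cluster.quorum-type is set to fixed, which is not set here:

gluster volume set VMS cluster.quorum-count 1
gluster volume set VMS cluster.quorum-reads false
gluster volume set VMS cluster.quorum-type fixed   # not applied yet, just an idea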

gluster vol status
Status of volume: VMS
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick proxmox01:/DATA/vms                   49152     0          Y       1526
Self-heal Daemon on localhost               N/A       N/A        Y       1537

Task Status of Volume VMS
------------------------------------------------------------------------------

There are no active volume tasks
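
For context, a plain GlusterFS mount of the volume looks roughly like this
(the mount point is only illustrative, not the actual path used here):

mount -t glusterfs proxmox01:/VMS /mnt/vms

The "Transport endpoint is not connected" error appears on the mounted volume
a few seconds after the other two servers go down.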


Thanks a lot


---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36