[Gluster-users] Recovery when 2 of 3 servers go down!

Artem Russakovskii archon810 at gmail.com
Fri Jul 17 23:16:39 UTC 2020


No problem.

Oh I also had to set

> gluster v set VMS network.ping-timeout 5

because when a server went down and started timing out (full shutdown),
the default value was so high (42s, I believe) that it made all nodes straight
up freeze for that long before serving the files.
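
If you want to double-check the active value before and after changing it,
a quick query should work (assuming the same volume name VMS as above):

> gluster volume get VMS network.ping-timeout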

Sincerely,
Artem

--
Founder, Android Police <http://www.androidpolice.com>, APK Mirror
<http://www.apkmirror.com/>, Illogical Robot LLC
beerpla.net | @ArtemR <http://twitter.com/ArtemR>


On Fri, Jul 17, 2020 at 2:39 PM Gilberto Nunes <gilberto.nunes32 at gmail.com>
wrote:

> Yes Artem! That's it!
> I used the following commands and everything works as expected with 3
> nodes:
>
> gluster volume create VMS proxmox01:/DATA/vms
>
> gluster vol start VMS
> gluster vol status VMS
>
> gluster peer probe proxmox02
> gluster volume add-brick VMS replica 2 proxmox02:/DATA/vms
>
> gluster vol status VMS
> gluster vol info VMS
>
> gluster peer probe proxmox03
> gluster volume add-brick VMS replica 3 proxmox03:/DATA/vms
>
> gluster vol set VMS cluster.heal-timeout 60
> gluster volume heal VMS enable
> gluster vol set VMS cluster.quorum-reads false
> gluster vol set VMS cluster.quorum-count 1
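>
> After bringing a node back, something like this should show whether any
> files still need healing (just a sanity check, same volume name as above):
>
> gluster volume heal VMS info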
>
>
> Thanks for your reply
>
> Cheers
>
>
> ---
> Gilberto Nunes Ferreira
>
>
>
> On Fri, Jul 17, 2020 at 4:56 PM, Artem Russakovskii <
> archon810 at gmail.com> wrote:
>
>> I had the same requirements (except with 4 servers and no arbiter), and
>> this was the solution:
>>
>> gluster v set VMS cluster.quorum-count 1
>>
>> gluster v set VMS cluster.quorum-type fixed
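>>
>> A quick way to confirm both settings took effect (assuming the same
>> volume name VMS) would be something like:
>>
>> gluster volume get VMS cluster.quorum-type
>> gluster volume get VMS cluster.quorum-count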
>>
>> Sincerely,
>> Artem
>>
>> --
>> Founder, Android Police <http://www.androidpolice.com>, APK Mirror
>> <http://www.apkmirror.com/>, Illogical Robot LLC
>> beerpla.net | @ArtemR <http://twitter.com/ArtemR>
>>
>>
>> On Fri, Jul 17, 2020 at 6:50 AM Gilberto Nunes <
>> gilberto.nunes32 at gmail.com> wrote:
>>
>>> Hi there,
>>>
>>> I have 3 servers with Gluster 7 installed, set up with replica 3
>>> and arbiter 1.
>>> Here are the commands I used:
>>> - First create a simple volume with one server:
>>> gluster volume create VMS proxmox01:/DATA/vms
>>> - Then add the second one
>>> gluster peer probe proxmox02
>>> gluster volume add-brick VMS replica 2 proxmox02:/DATA/vms
>>> - And finally add the third:
>>> gluster peer probe proxmox03
>>> gluster volume add-brick VMS replica 3 arbiter 1 proxmox03:/DATA/vms
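>>>
>>> For reference, the same layout can also be created in a single command
>>> (after probing both peers first; same hostnames and brick paths as above),
>>> which may be simpler than adding bricks one by one:
>>>
>>> gluster volume create VMS replica 3 arbiter 1 proxmox01:/DATA/vms proxmox02:/DATA/vms proxmox03:/DATA/vms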
>>>
>>> But then I decided to test the environment and brought proxmox02 and
>>> proxmox03 down, and got "Transport endpoint is not connected" after a few
>>> seconds.
>>> Is there a way to keep the remaining server up if the other 2 go down?
>>> gluster vol info
>>>
>>> Volume Name: VMS
>>> Type: Replicate
>>> Volume ID: 64735da4-8671-4c5e-b832-d15f5c03e9f0
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x (2 + 1) = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: proxmox01:/DATA/vms
>>> Brick2: proxmox02:/DATA/vms
>>> Brick3: proxmox03:/DATA/vms (arbiter)
>>> Options Reconfigured:
>>> nfs.disable: on
>>> storage.fips-mode-rchecksum: on
>>> transport.address-family: inet
>>> performance.client-io-threads: off
>>> cluster.self-heal-daemon: enable
>>> cluster.quorum-reads: false
>>> cluster.quorum-count: 1
>>>
>>> gluster vol status
>>> Status of volume: VMS
>>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>>> ------------------------------------------------------------------------------
>>> Brick proxmox01:/DATA/vms                   49152     0          Y       1526
>>> Self-heal Daemon on localhost               N/A       N/A        Y       1537
>>>
>>> Task Status of Volume VMS
>>> ------------------------------------------------------------------------------
>>>
>>> There are no active volume tasks
>>>
>>>
>>> Thanks a lot
>>>
>>>
>>> ---
>>> Gilberto Nunes Ferreira
>>>
>>> (47) 3025-5907
>>> (47) 99676-7530 - Whatsapp / Telegram
>>>
>>> Skype: gilberto.nunes36
>>>
>>>
>>>
>>