[Gluster-users] Access to servers hangs after stopping one server...

Gilberto Nunes gilberto.nunes32 at gmail.com
Thu Jan 24 13:43:47 UTC 2019


>I think your mount statement in /etc/fstab is only referencing ONE of the
>gluster servers.
>
>Please take a look at the "More redundant mount" section:
>
>https://www.jamescoyle.net/how-to/439-mount-a-glusterfs-volume
>
>Then try taking down one of the gluster servers and report back results.

Guys! I have followed the very same instructions found on James's website.
One of the methods he mentions there is to create a file in the
/etc/glusterfs directory, named datastore.vol for instance, with this
content:

# one protocol/client translator per gluster server
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1
  option remote-subvolume /data/storage
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host server2
  option remote-subvolume /data/storage
end-volume

volume remote3
  type protocol/client
  option transport-type tcp
  option remote-host server3
  option remote-subvolume /data/storage
end-volume

# replicate across the three clients defined above
volume replicate
  type cluster/replicate
  subvolumes remote1 remote2 remote3
end-volume

# client-side performance translators stacked on top of the replica
volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume


and then include this line in /etc/fstab:

/etc/glusterfs/datastore.vol [MOUNT] glusterfs rw,allow_other,default_permissions,max_read=131072 0 0
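
Another line I could try instead (a sketch only, untested here: it uses the
standard backup-volfile-servers mount option and the Vol01 volume name from
my first mail, so the client fetches the volfile from server1 but can fall
back to the other servers):

server1:/Vol01 [MOUNT] glusterfs defaults,_netdev,backup-volfile-servers=server2:server3 0 0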

What am I doing wrong?

Thanks






---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





On Thu, Jan 24, 2019 at 11:27 AM, Scott Worthington <
scott.c.worthington at gmail.com> wrote:

> I think your mount statement in /etc/fstab is only referencing ONE of the
> gluster servers.
>
> Please take a look at the "More redundant mount" section:
>
> https://www.jamescoyle.net/how-to/439-mount-a-glusterfs-volume
>
> Then try taking down one of the gluster servers and report back results.
>
> On Thu, Jan 24, 2019 at 8:24 AM Gilberto Nunes <gilberto.nunes32 at gmail.com>
> wrote:
>
>> Yep!
>> But as I mentioned in a previous e-mail, this issue occurs even with 3 or
>> 4 servers.
>> I don't know what is happening.
>>
>> ---
>> Gilberto Nunes Ferreira
>>
>> (47) 3025-5907
>> (47) 99676-7530 - Whatsapp / Telegram
>>
>> Skype: gilberto.nunes36
>>
>>
>>
>>
>>
>> On Thu, Jan 24, 2019 at 10:43 AM, Diego Remolina <dijuremo at gmail.com>
>> wrote:
>>
>>> Glusterfs needs quorum: if you have two servers and one goes down, there
>>> is no quorum and all writes stop until the server comes back up. You can
>>> add a third server as an arbiter, which does not store file data in its
>>> brick but still uses some minimal space (to keep metadata for the files).
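>>>
>>> For example, a sketch only (not from the original mail; volume name,
>>> hostnames and brick paths reused from this thread, so double-check the
>>> exact syntax for your gluster version) when creating the volume from
>>> scratch:
>>>
>>>   gluster volume create Vol01 replica 3 arbiter 1 \
>>>     server1:/data/storage server2:/data/storage server3:/data/storage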
>>>
>>> HTH,
>>>
>>> Diego
>>>
>>> On Wed, Jan 23, 2019 at 3:06 PM Gilberto Nunes <
>>> gilberto.nunes32 at gmail.com> wrote:
>>>
>>>> Hi there...
>>>>
>>>> I have set up two servers as a replica, like this:
>>>>
>>>> gluster vol create Vol01 server1:/data/storage server2:/data/storage
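>>>>
>>>> (Without an explicit replica count that line actually creates a
>>>> distributed volume; for replication it would need to be something like:
>>>> gluster vol create Vol01 replica 2 server1:/data/storage server2:/data/storage )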
>>>>
>>>> Then I create a config file in client, like this:
>>>> volume remote1
>>>>  type protocol/client
>>>>  option transport-type tcp
>>>>  option remote-host server1
>>>>  option remote-subvolume /data/storage
>>>>  end-volume
>>>>
>>>>  volume remote2
>>>>  type protocol/client
>>>>  option transport-type tcp
>>>>  option remote-host server2
>>>>  option remote-subvolume /data/storage
>>>>  end-volume
>>>>
>>>>  volume replicate
>>>>  type cluster/replicate
>>>>  subvolumes remote1 remote2
>>>>  end-volume
>>>>
>>>>  volume writebehind
>>>>  type performance/write-behind
>>>>  option window-size 1MB
>>>>  subvolumes replicate
>>>>  end-volume
>>>>
>>>>  volume cache
>>>>  type performance/io-cache
>>>>  option cache-size 512MB
>>>>  subvolumes writebehind
>>>>  end-volume
>>>>
>>>> And add this line in /etc/fstab
>>>>
>>>> /etc/glusterfs/datastore.vol /mnt glusterfs defaults,_netdev 0 0
>>>>
>>>> After mounting /mnt, I can access the servers. So far so good!
>>>> But when I crash server1, I am unable to access /mnt or even run
>>>> gluster vol status
>>>> on server2.
>>>>
>>>> Everything hangs!
>>>>
>>>> I have tried with replicated, distributed, and distributed-replicated
>>>> volumes too.
>>>> I am using Debian Stretch, with the gluster packages installed via apt
>>>> from the standard Debian repo (glusterfs-server 3.8.8-1).
>>>>
>>>> I am sorry if this is a newbie question, but isn't a glusterfs share
>>>> supposed to stay online if one server goes down?
>>>>
>>>> Any advice will be welcome.
>>>>
>>>> Best
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> ---
>>>> Gilberto Nunes Ferreira
>>>>
>>>> (47) 3025-5907
>>>> (47) 99676-7530 - Whatsapp / Telegram
>>>>
>>>> Skype: gilberto.nunes36
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> Gluster-users mailing list
>>>> Gluster-users at gluster.org
>>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>