[Gluster-users] How-to start gluster when only one node is up ?

Diego Remolina dijuremo at gmail.com
Fri Oct 30 12:07:57 UTC 2015


It is also worth noting that if you are using replica 2 you will
usually want a third node with no bricks to provide quorum (unless
you can afford a third node with bricks and use replica 3). You
should then set your server quorum ratio to 51%. This is to avoid
split-brain situations.
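Assuming a volume named gv0 (as in the thread below), the quorum settings
described above can be applied with something like the following; this is a
sketch, not an exact transcript of Diego's setup:

```shell
# Enable server-side quorum on the volume: brick processes are
# stopped when quorum is lost, preventing divergent writes.
gluster volume set gv0 cluster.server-quorum-type server

# Set the cluster-wide quorum ratio to 51%, so a single node out
# of three cannot keep serving writes on its own.
gluster volume set all cluster.server-quorum-ratio 51%
```

Note that cluster.server-quorum-ratio is a cluster-wide option, which is why
it is set on "all" rather than on an individual volume.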

A Red Hat documentation reference can be found at:

https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Quorum.html

If you ever run into a split brain situation, check this blog:

https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
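In addition to the blog post above, recent Gluster releases can list the
files in split-brain directly from the CLI; assuming a volume named gv0:

```shell
# List files currently in split-brain, per brick, so you can decide
# which copy to keep before healing.
gluster volume heal gv0 info split-brain
```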

HTH,

Diego

On Fri, Oct 30, 2015 at 7:53 AM, Remi Serrano <rserrano at pros.com> wrote:
> Thanks Atin
>
>
>
> Regards,
>
>
>
> Rémi
>
>
>
> From: Atin Mukherjee [mailto:atin.mukherjee83 at gmail.com]
> Sent: Friday, 30 October 2015 12:48
> To: Remi Serrano <rserrano at pros.com>
> Cc: gluster-users at gluster.org
> Subject: Re: [Gluster-users] How-to start gluster when only one node is up ?
>
>
>
> -Atin
> Sent from one plus one
> On Oct 30, 2015 4:35 PM, "Remi Serrano" <rserrano at pros.com> wrote:
>>
>> Hello,
>>
>>
>>
>> I setup a gluster file cluster with 2 nodes. It works fine.
>>
>> But, when I shut down the 2 nodes, and startup only one node, I cannot
>> mount the share :
>>
>>
>>
>> [root at xxx ~]#  mount -t glusterfs 10.32.0.11:/gv0 /glusterLocalShare
>>
>> Mount failed. Please check the log file for more details.
>>
>>
>>
>> Log says :
>>
>> [2015-10-30 10:33:26.147003] I [MSGID: 100030] [glusterfsd.c:2318:main]
>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.5
>> (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1 --volfile-id=/gv0 /glusterLocalShare)
>>
>> [2015-10-30 10:33:26.171964] I [MSGID: 101190]
>> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with
>> index 1
>>
>> [2015-10-30 10:33:26.185685] I [MSGID: 101190]
>> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with
>> index 2
>>
>> [2015-10-30 10:33:26.186972] I [MSGID: 114020] [client.c:2118:notify]
>> 0-gv0-client-0: parent translators are ready, attempting connect on
>> transport
>>
>> [2015-10-30 10:33:26.191823] I [MSGID: 114020] [client.c:2118:notify]
>> 0-gv0-client-1: parent translators are ready, attempting connect on
>> transport
>>
>> [2015-10-30 10:33:26.192209] E [MSGID: 114058]
>> [client-handshake.c:1524:client_query_portmap_cbk] 0-gv0-client-0: failed to
>> get the port number for remote subvolume. Please run 'gluster volume status' on
>> server to see if brick process is running.
>>
>> [2015-10-30 10:33:26.192339] I [MSGID: 114018]
>> [client.c:2042:client_rpc_notify] 0-gv0-client-0: disconnected from
>> gv0-client-0. Client process will keep trying to connect to glusterd until
>> brick's port is available
>>
>>
>>
>> And when I check the volumes I get:
>>
>> [root at xxx ~]# gluster volume status
>>
>> Status of volume: gv0
>>
>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>>
>> ------------------------------------------------------------------------------
>>
>> Brick 10.32.0.11:/glusterBrick1/gv0         N/A       N/A        N       N/A
>>
>> NFS Server on localhost                     N/A       N/A        N       N/A
>>
>> NFS Server on localhost                     N/A       N/A        N       N/A
>>
>>
>>
>> Task Status of Volume gv0
>>
>>
>> ------------------------------------------------------------------------------
>>
>> There are no active volume tasks
>>
>>
>>
>> If I start the second node, all is OK.
>>
>>
>>
>> Is this normal ?
> This behaviour is by design. In a multi-node cluster, when GlusterD comes up
> it doesn't start the bricks until it receives the configuration from one of
> its peers, to ensure that stale information is not referred to. In your
> case, since the other node is down, the bricks are not started and hence the
> mount fails.
> As a workaround, we recommend adding a dummy node to the cluster to avoid
> this issue.
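[Assuming the dummy node's hostname is node3 (a hypothetical name), the
workaround Atin describes amounts to a single peer probe from one of the
existing nodes:]

```shell
# Run from an existing cluster member. node3 only needs glusterd
# running; it hosts no bricks and exists purely to provide quorum.
gluster peer probe node3

# Verify that all peers are connected.
gluster pool list
```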
>>
>>
>>
>> Regards,
>>
>>
>>
>> Rémi
>>
>>
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users


More information about the Gluster-users mailing list