[Gluster-users] How-to start gluster when only one node is up ?

Atin Mukherjee atin.mukherjee83 at gmail.com
Fri Oct 30 11:48:03 UTC 2015


-Atin
Sent from one plus one
On Oct 30, 2015 4:35 PM, "Remi Serrano" <rserrano at pros.com> wrote:
>
> Hello,
>
>
>
> I set up a Gluster file cluster with 2 nodes. It works fine.
>
> But when I shut down both nodes and start up only one node, I cannot mount the share:
>
>
>
> [root@xxx ~]# mount -t glusterfs 10.32.0.11:/gv0 /glusterLocalShare
>
> Mount failed. Please check the log file for more details.
>
>
>
> The log says:
>
> [2015-10-30 10:33:26.147003] I [MSGID: 100030] [glusterfsd.c:2318:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.5 (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1 --volfile-id=/gv0 /glusterLocalShare)
>
> [2015-10-30 10:33:26.171964] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
>
> [2015-10-30 10:33:26.185685] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
>
> [2015-10-30 10:33:26.186972] I [MSGID: 114020] [client.c:2118:notify] 0-gv0-client-0: parent translators are ready, attempting connect on transport
>
> [2015-10-30 10:33:26.191823] I [MSGID: 114020] [client.c:2118:notify] 0-gv0-client-1: parent translators are ready, attempting connect on transport
>
> [2015-10-30 10:33:26.192209] E [MSGID: 114058] [client-handshake.c:1524:client_query_portmap_cbk] 0-gv0-client-0: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
>
> [2015-10-30 10:33:26.192339] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-gv0-client-0: disconnected from gv0-client-0. Client process will keep trying to connect to glusterd until brick's port is available
>
>
>
> And when I check the volumes I get:
>
> [root@xxx ~]# gluster volume status
>
> Status of volume: gv0
>
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 10.32.0.11:/glusterBrick1/gv0         N/A       N/A        N       N/A
> NFS Server on localhost                     N/A       N/A        N       N/A
> NFS Server on localhost                     N/A       N/A        N       N/A
>
> Task Status of Volume gv0
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
>
> If I start the second node, all is OK.
>
>
>
> Is this normal?
This behaviour is by design. In a multi-node cluster, when GlusterD comes up it does not start the bricks until it has received the volume configuration from one of its peers, to ensure that it does not act on stale information. In your case, since the other node is down, the bricks are not started and hence the mount fails.
As a workaround, we recommend adding a dummy node to the cluster to avoid this issue.
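
For illustration, a minimal sketch of that workaround (the hostname below is a placeholder, not from this thread): keep a small third node in the trusted pool that carries no bricks, so that a surviving node always has a peer to fetch the configuration from. If you are already stuck with only one node up, force-starting the volume on that node usually brings the brick processes up so the mount can succeed:

  # On an existing node, add a brick-less dummy node to the pool (hostname is hypothetical)
  gluster peer probe dummy-node.example.com

  # Or, on the lone surviving node, force the brick processes to start and retry the mount
  gluster volume start gv0 force
  mount -t glusterfs 10.32.0.11:/gv0 /glusterLocalShare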
>
>
>
> Regards,
>
>
>
> Rémi
>
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users