[Gluster-users] How-to start gluster when only one node is up ?
Mauro M.
gluster at ezplanet.net
Fri Oct 30 12:01:55 UTC 2015
Hi,
Atin keeps giving the same answer: "it is by design."
I keep saying: "the design is wrong and it should be changed to cater for
standby servers."
In the meantime this is the workaround I am using:
When the single node starts, I stop and start the volume, and then it
becomes mountable. On CentOS 6 and CentOS 7 this works with releases up to
3.7.4. Release 3.7.5 is broken, so I reverted to 3.7.4.
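In short, these are the commands; the full script below wraps them with
mount checks:

  yes | gluster volume stop gv_home
  gluster volume start gv_home
  mount -t glusterfs sirius-ib:/gv_home /home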
In my experience glusterfs releases are a bit hit and miss. Often
something stops working in a newer release, then after a few more
releases it works again or a workaround turns up ... Not quite the
stability one would want for commercial use, so at the moment I can
risk using it only on my home servers, hence the cluster with one node
always ON and the second on STANDBY. Here is the full boot script:
#!/bin/sh
# Workaround: after a single-node boot, (re)start the volume so it
# becomes mountable, then mount it.

MOUNT=/home
LABEL="GlusterFS:"

if grep -qs "$MOUNT" /proc/mounts; then
    echo "$LABEL $MOUNT is mounted"
    # ensure the volume is started (no-op if it already is)
    gluster volume start gv_home 2>/dev/null
else
    echo "$LABEL $MOUNT is NOT mounted"
    echo "$LABEL Restarting gluster volume ..."
    yes | gluster volume stop gv_home > /dev/null
    gluster volume start gv_home
    mount -t glusterfs sirius-ib:/gv_home "$MOUNT"
    if grep -qs "$MOUNT" /proc/mounts; then
        echo "$LABEL $MOUNT is mounted"
    else
        echo "$LABEL failure to mount $MOUNT"
    fi
fi
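To run it at boot, hook it in after glusterd starts. A minimal sketch,
assuming the script above is saved as /usr/local/sbin/gluster-mount.sh
(the path and unit name here are just examples):

On CentOS 6, append to /etc/rc.local:

  /usr/local/sbin/gluster-mount.sh

On CentOS 7, a one-shot systemd unit ordered after glusterd, e.g.
/etc/systemd/system/gluster-mount.service:

  [Unit]
  Description=Restart and mount the gluster volume after glusterd
  After=glusterd.service

  [Service]
  Type=oneshot
  ExecStart=/usr/local/sbin/gluster-mount.sh

  [Install]
  WantedBy=multi-user.target

then enable it with: systemctl enable gluster-mount.service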
I hope this helps.
Mauro
On Fri, October 30, 2015 11:48, Atin Mukherjee wrote:
> -Atin
> Sent from one plus one
> On Oct 30, 2015 4:35 PM, "Remi Serrano" <rserrano at pros.com> wrote:
>>
>> Hello,
>>
>>
>>
>> I set up a gluster file cluster with 2 nodes. It works fine.
>>
>> But when I shut down the 2 nodes and start up only one node, I cannot mount the share:
>>
>>
>>
>> [root@xxx ~]# mount -t glusterfs 10.32.0.11:/gv0 /glusterLocalShare
>>
>> Mount failed. Please check the log file for more details.
>>
>>
>>
>> The log says:
>>
>> [2015-10-30 10:33:26.147003] I [MSGID: 100030] [glusterfsd.c:2318:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.5 (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1 --volfile-id=/gv0 /glusterLocalShare)
>>
>> [2015-10-30 10:33:26.171964] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
>>
>> [2015-10-30 10:33:26.185685] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
>>
>> [2015-10-30 10:33:26.186972] I [MSGID: 114020] [client.c:2118:notify] 0-gv0-client-0: parent translators are ready, attempting connect on transport
>>
>> [2015-10-30 10:33:26.191823] I [MSGID: 114020] [client.c:2118:notify] 0-gv0-client-1: parent translators are ready, attempting connect on transport
>>
>> [2015-10-30 10:33:26.192209] E [MSGID: 114058] [client-handshake.c:1524:client_query_portmap_cbk] 0-gv0-client-0: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
>>
>> [2015-10-30 10:33:26.192339] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-gv0-client-0: disconnected from gv0-client-0. Client process will keep trying to connect to glusterd until brick's port is available
>>
>>
>>
>> And when I check the volume status I get:
>>
>> [root@xxx ~]# gluster volume status
>> Status of volume: gv0
>> Gluster process                        TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick 10.32.0.11:/glusterBrick1/gv0    N/A       N/A        N       N/A
>> NFS Server on localhost                N/A       N/A        N       N/A
>> NFS Server on localhost                N/A       N/A        N       N/A
>>
>> Task Status of Volume gv0
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>>
>>
>> If I start the second node, all is OK.
>>
>>
>>
>> Is this normal?
> This behaviour is by design. In a multi-node cluster, when GlusterD comes
> up it doesn't start the bricks until it receives the configuration from
> one of its peers, to ensure that stale information is not referred to. In
> your case, since the other node is down, the bricks are not started and
> hence the mount fails.
> As a workaround, we recommend adding a dummy node to the cluster to
> avoid this issue.
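> A minimal sketch of that workaround ("dummy-node" is a placeholder
> hostname for a small third box that runs glusterd but hosts no bricks):
>
>   # run on one of the existing cluster nodes
>   gluster peer probe dummy-node
>   # verify the peer joined
>   gluster peer status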
>>
>>
>>
>> Regards,
>>
>>
>>
>> Rémi
>>
>>
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
--
Mauro Mozzarelli
Phone: +44 7941 727378
eMail: mauro at ezplanet.net