[Gluster-users] gluster processes won't start when a single node is booted

Mauro M. gluster at ezplanet.net
Sun Sep 20 11:16:14 UTC 2015


Hi all,

I hope you might help.

I just upgraded from 3.5.6 to 3.7.4.

My configuration is a single volume with 2 bricks, replicated (replica 2).

Normally I have brick1 running and brick2 powered off. When I want to do
maintenance on brick1, I power on brick2, wait for synchronization to
complete, and then power off brick1.
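For reference, this is roughly how I check that the replicas are in sync
before taking brick1 down (from memory; the volume is the gv_home volume
shown below):

  # run on the node that has just been powered back on:
  gluster volume heal gv_home info
  # when both bricks report "Number of entries: 0",
  # the replicas are in sync and the other node can be shut down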

Often I just reboot brick1 with brick2 still turned off.

With glusterfs version 3.5 I could do all of the above.

After the upgrade to 3.7.4, if I boot brick1 (or brick2) without the other
node, glusterd itself starts, but the brick processes (glusterfsd) won't start.
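A quick check after booting brick1 alone shows the daemon up but no brick
process (illustrative commands; service names may differ per distribution):

  systemctl status glusterd   # reports active/running
  ps ax | grep glusterfsd     # no brick process listed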

Here is the output of gluster volume info:
Volume Name: gv_home
Type: Replicate
Volume ID: ef806153-2a02-4db9-a54e-c2f89f79b52e
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: brick1:/brick0/gv_home
Brick2: brick2:/brick0/gv_home
Options Reconfigured:
nfs.disable: on
config.transport: tcp

... and gluster volume status:

Status of volume: gv_home
Gluster process                         TCP Port  RDMA Port  Online  Pid
---------------------------------------------------------------------------
Brick brick1:/brick0/gv_home            N/A       N/A        N       N/A
NFS Server on localhost                 N/A       N/A        N       N/A

Task Status of Volume gv_home
---------------------------------------------------------------------------
There are no active volume tasks

Under this condition gv_home cannot be mounted.
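For example, a client mount attempt along these lines fails while the brick
process is down (the mount point is illustrative):

  mount -t glusterfs brick1:/gv_home /mnt/gv_home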

Only when I also start brick2 do the brick processes come up: once glusterd
starts on brick2, the processes start on brick1 as well and gv_home can be
mounted:

Status of volume: gv_home
Gluster process                         TCP Port  RDMA Port  Online  Pid
--------------------------------------------------------------------------
Brick brick1:/brick0/gv_home            49158     0          Y       30049
Brick brick2:/brick0/gv_home            49158     0          Y       14797
Self-heal Daemon on localhost           N/A       N/A        Y       30044
Self-heal Daemon on brick2              N/A       N/A        Y       14792

Task Status of Volume gv_home
--------------------------------------------------------------------------
There are no active volume tasks

Once I turn brick2 back off, the volume remains available and mounted
without issues (for as long as the corresponding gluster processes stay
active; if I kill them, I am back to having no volume).

The issue is that I would like to safely boot one of the bricks without
having to boot both nodes to get the volume back up and mountable, which is
what I was able to do with glusterfs 3.5.

Please could you help?
Is there any parameter to set that would enable the same behaviour as in 3.5?
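The closest thing I can think of is forcing the bricks up by hand, e.g. the
command below, but that is only a guess on my part and I would prefer the
old automatic behaviour at boot:

  gluster volume start gv_home force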

Thank you in advance,
Mauro


