<div dir="ltr"><div>Hi list,</div><div><br></div><div>I am using a replica volume (3 nodes) gluster in an ovirt environment and after setting one node in maintenance mode and rebooting it, the "Online" flag in gluster volume status does not go to "Y" again.<br></div><div style="margin-left:40px"><br></div><div style="margin-left:40px"><span style="font-family:monospace">[root@node1 glusterfs]# gluster volume status<br>Status of volume: my_volume<br>Gluster process                             TCP Port  RDMA Port  Online  Pid<br>------------------------------------------------------------------------------<br>Brick 10.22.1.95:/gluster_bricks/my_glust<br>er/my_gluster                              N/A       N/A        N       N/A<br>Brick 10.22.1.97:/gluster_bricks/my_glust<br>er/my_gluster                              49152     0          Y       4954<br>Brick 10.22.1.94:/gluster_bricks/my_glust<br>er/my_gluster                              49152     0          Y       3574<br>Self-heal Daemon on localhost               N/A       N/A        Y       3585<br>Self-heal Daemon on node2                   N/A       N/A        Y       3557<br>Self-heal Daemon on node3                   N/A       N/A        Y       4973<br><br>Task Status of Volume my_volume<br>------------------------------------------------------------------------------<br>There are no active volume tasks</span></div><div><br></div><div><br></div><div>Shouldn´t it go back to Online Y automatically?</div><div><br></div><div><br></div><div>This is the output from gluster volume info from the same node:</div><div><br></div><div style="margin-left:40px"><span style="font-family:monospace">[root@node1 glusterfs]# gluster volume info<br><br>Volume Name: my_volume<br>Type: Replicate<br>Volume ID: 78b9299c-1df5-4780-b108-4d3a6dee225d<br>Status: Started<br>Snapshot Count: 0<br>Number of Bricks: 1 x 3 = 3<br>Transport-type: tcp<br>Bricks:<br>Brick1: 10.22.1.95:/gluster_bricks/my_gluster/my_gluster<br>Brick2: 10.22.1.97:/gluster_bricks/my_gluster/my_gluster<br>Brick3: 10.22.1.94:/gluster_bricks/my_gluster/my_gluster<br>Options Reconfigured:<br>cluster.granular-entry-heal: enable<br>storage.owner-gid: 36<br>storage.owner-uid: 36<br>cluster.lookup-optimize: off<br>server.keepalive-count: 5<br>server.keepalive-interval: 2<br>server.keepalive-time: 10<br>server.tcp-user-timeout: 20<br>network.ping-timeout: 30<br>server.event-threads: 4<br>client.event-threads: 4<br>cluster.choose-local: off<br>features.shard: on<br>cluster.shd-wait-qlength: 10000<br>cluster.shd-max-threads: 8<br>cluster.locking-scheme: granular<br>cluster.data-self-heal-algorithm: full<br>cluster.server-quorum-type: server<br>cluster.quorum-type: auto<br>cluster.eager-lock: enable<br>performance.strict-o-direct: on<br>network.remote-dio: off<br>performance.low-prio-threads: 32<br>performance.io-cache: off<br>performance.read-ahead: off<br>performance.quick-read: off<br>auth.allow: *<br>user.cifs: off<br>storage.fips-mode-rchecksum: on<br>transport.address-family: inet<br>nfs.disable: on<br>performance.client-io-threads: on</span></div><div><br></div><div>Regards,<br></div><div> Martin</div><div><br></div></div>