<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body>
<p>Hi,</p>
<p>We have a cluster whose common storage is a gluster volume
consisting of 4 bricks residing on 2 servers (more details at the
bottom). Yesterday we experienced a power outage. To start the
gluster volume after the power came back, I had to (the rough
commands are sketched after this list):</p>
<ul>
<li>manually start a gluster daemon on one of the servers
(mseas-data3)</li>
<li>start the gluster volume on the other server (mseas-data2)</li>
<ul>
<li>I had first tried starting the gluster volume without
manually starting the other daemon, but that was unsuccessful.<br>
</li>
</ul>
</ul>
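<p>For reference, the commands I used were roughly the following
(reconstructed from memory, so take them as a sketch; the volume
name is data-volume and our servers use the SysV service scripts):</p>
<p><tt>-----------------------</tt></p>
<p><tt># on mseas-data3: manually start the management daemon</tt><tt><br>
</tt><tt>[root@mseas-data3 ~]# service glusterd start</tt><tt><br>
</tt><tt><br>
</tt><tt># on mseas-data2: start the volume</tt><tt><br>
</tt><tt>[root@mseas-data2 ~]# gluster volume start data-volume</tt></p>
<p><tt>-----------------------</tt></p>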
<p>After this, my recollection is that the peers were talking to
each other.</p>
<p>Today, while looking around, I noticed that the mseas-data3
server is in a disconnected state (even though the compute nodes
of our cluster are still seeing the full gluster volume):<br>
</p>
<p><tt>-----------------------</tt></p>
<p><tt>[root@mseas-data2 ~]# gluster peer status</tt><tt><br>
</tt><tt>Number of Peers: 1</tt><tt><br>
</tt><tt><br>
</tt><tt>Hostname: mseas-data3</tt><tt><br>
</tt><tt>Uuid: b39d4deb-c291-437e-8013-09050c1fa9e3</tt><tt><br>
</tt><tt>State: Peer in Cluster (Disconnected)</tt><tt><br>
</tt></p>
<p><tt>-----------------------</tt></p>
<p>Following the advice at
<a class="moz-txt-link-freetext" href="https://lists.gluster.org/pipermail/gluster-users/2015-April/021597.html">https://lists.gluster.org/pipermail/gluster-users/2015-April/021597.html</a>,
I confirmed that the two servers can ping each other. The gluster
daemon on mseas-data2 is active, but the daemon on mseas-data3
shows:<br>
</p>
<p><tt>--------------------------------</tt></p>
<p><tt>[root@mseas-data3 ~]# service glusterd status</tt><tt><br>
</tt><tt>glusterd dead but pid file exists</tt></p>
<p><tt>--------------------------------</tt></p>
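<p>I have not touched anything yet. My assumption is that the pid
file is simply stale from the outage and that the brick processes
may still be running on mseas-data3 (which would explain why the
compute nodes still see the full volume). Roughly what I was
planning to check first (assuming the usual pid file location
/var/run/glusterd.pid):</p>
<p><tt>--------------------------------</tt></p>
<p><tt># does anything still hold the pid recorded in the pid file?</tt><tt><br>
</tt><tt>[root@mseas-data3 ~]# cat /var/run/glusterd.pid</tt><tt><br>
</tt><tt><br>
</tt><tt># are the brick processes (glusterfsd) still serving the two bricks?</tt><tt><br>
</tt><tt>[root@mseas-data3 ~]# ps aux | grep gluster</tt></p>
<p><tt>--------------------------------</tt></p>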
<p>Is it safe to just restart that daemon on mseas-data3? Is there
some other procedure I should follow instead? I ask because we
have a number of jobs running that appear to be successfully
writing to the gluster volume, and I'd prefer that they continue
if possible.</p>
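<p>Concretely, what I am considering running (if it is indeed safe
to do while the jobs keep writing) is just the following, then
checking from mseas-data2 that the peer reconnects:</p>
<p><tt>--------------------------------</tt></p>
<p><tt>[root@mseas-data3 ~]# service glusterd start</tt><tt><br>
</tt><tt><br>
</tt><tt># then, from mseas-data2, confirm the peer and bricks look healthy</tt><tt><br>
</tt><tt>[root@mseas-data2 ~]# gluster peer status</tt><tt><br>
</tt><tt>[root@mseas-data2 ~]# gluster volume status data-volume</tt></p>
<p><tt>--------------------------------</tt></p>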
<p>Any advice would be appreciated. Thanks<br>
</p>
<p><tt>---------------------------------------------------</tt><tt><br>
</tt></p>
<p><tt>[root@mseas-data2 ~]# gluster volume info</tt><tt><br>
</tt><tt> </tt><tt><br>
</tt><tt>Volume Name: data-volume</tt><tt><br>
</tt><tt>Type: Distribute</tt><tt><br>
</tt><tt>Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18</tt><tt><br>
</tt><tt>Status: Started</tt><tt><br>
</tt><tt>Number of Bricks: 4</tt><tt><br>
</tt><tt>Transport-type: tcp</tt><tt><br>
</tt><tt>Bricks:</tt><tt><br>
</tt><tt>Brick1: mseas-data2:/mnt/brick1</tt><tt><br>
</tt><tt>Brick2: mseas-data2:/mnt/brick2</tt><tt><br>
</tt><tt>Brick3: mseas-data3:/export/sda/brick3</tt><tt><br>
</tt><tt>Brick4: mseas-data3:/export/sdc/brick4</tt><tt><br>
</tt><tt>Options Reconfigured:</tt><tt><br>
</tt><tt>diagnostics.client-log-level: ERROR</tt><tt><br>
</tt><tt>network.inode-lru-limit: 50000</tt><tt><br>
</tt><tt>performance.md-cache-timeout: 60</tt><tt><br>
</tt><tt>performance.open-behind: off</tt><tt><br>
</tt><tt>disperse.eager-lock: off</tt><tt><br>
</tt><tt>auth.allow: *</tt><tt><br>
</tt><tt>server.allow-insecure: on</tt><tt><br>
</tt><tt>nfs.exports-auth-enable: on</tt><tt><br>
</tt><tt>diagnostics.brick-sys-log-level: WARNING</tt><tt><br>
</tt><tt>performance.readdir-ahead: on</tt><tt><br>
</tt><tt>nfs.disable: on</tt><tt><br>
</tt><tt>nfs.export-volumes: off</tt><tt><br>
</tt><tt>cluster.min-free-disk: 1%</tt><tt><br>
</tt><tt><br>
</tt></p>
<pre class="moz-signature" cols="72">--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: <a class="moz-txt-link-abbreviated" href="mailto:phaley@mit.edu">phaley@mit.edu</a>
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-213 <a class="moz-txt-link-freetext" href="http://web.mit.edu/phaley/www/">http://web.mit.edu/phaley/www/</a>
77 Massachusetts Avenue
Cambridge, MA 02139-4301
</pre>
</body>
</html>