The mount log file of the volume would help in debugging the actual cause (see the command sketches after the quoted message for where that log typically lives and how to capture a fresh one).

On Tue, Jun 27, 2017 at 6:33 PM, Joel Diaz <mrjoeldiaz@gmail.com> wrote:
> Good morning Gluster users,
>
> I'm very new to the Gluster file system. My apologies if this is not the
> correct way to seek assistance, but I would appreciate some insight into
> the issue I'm having.
>
> I have three nodes running two volumes, engine and data. The third node
> is the arbiter on both volumes. Both volumes were operating fine, but one
> of them, data, no longer mounts.
>
> Please see below:
>
> gluster volume info all
>
> Volume Name: data
> Type: Replicate
> Volume ID: 1d6bb110-9be4-4630-ae91-36ec1cf6cc02
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.170.141:/gluster_bricks/data/data
> Brick2: 192.168.170.143:/gluster_bricks/data/data
> Brick3: 192.168.170.147:/gluster_bricks/data/data (arbiter)
> Options Reconfigured:
> nfs.disable: on
> performance.readdir-ahead: on
> transport.address-family: inet
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> performance.low-prio-threads: 32
> network.remote-dio: off
> cluster.eager-lock: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 10000
> features.shard: on
> user.cifs: off
> storage.owner-uid: 36
> storage.owner-gid: 36
> network.ping-timeout: 30
> performance.strict-o-direct: on
> cluster.granular-entry-heal: enable
>
> Volume Name: engine
> Type: Replicate
> Volume ID: b160f0b2-8bd3-4ff2-a07c-134cab1519dd
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.170.141:/gluster_bricks/engine/engine
> Brick2: 192.168.170.143:/gluster_bricks/engine/engine
> Brick3: 192.168.170.147:/gluster_bricks/engine/engine (arbiter)
> Options Reconfigured:
> nfs.disable: on
> performance.readdir-ahead: on
> transport.address-family: inet
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> performance.low-prio-threads: 32
> network.remote-dio: off
> cluster.eager-lock: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 10000
> features.shard: on
> user.cifs: off
> storage.owner-uid: 36
> storage.owner-gid: 36
> network.ping-timeout: 30
> performance.strict-o-direct: on
> cluster.granular-entry-heal: enable
>
> df -h
> Filesystem                                    Size  Used  Avail  Use%  Mounted on
> /dev/mapper/centos_ovirt--hyp--01-root         50G  3.9G    47G    8%  /
> devtmpfs                                      7.7G     0   7.7G    0%  /dev
> tmpfs                                         7.8G     0   7.8G    0%  /dev/shm
> tmpfs                                         7.8G  8.7M   7.7G    1%  /run
> tmpfs                                         7.8G     0   7.8G    0%  /sys/fs/cgroup
> /dev/mapper/centos_ovirt--hyp--01-home         61G   33M    61G    1%  /home
> /dev/mapper/gluster_vg_sdb-gluster_lv_engine   50G  8.1G    42G   17%  /gluster_bricks/engine
> /dev/sda1                                     497M  173M   325M   35%  /boot
> /dev/mapper/gluster_vg_sdb-gluster_lv_data    730G  157G   574G   22%  /gluster_bricks/data
> tmpfs                                         1.6G     0   1.6G    0%  /run/user/0
> ovirt-hyp-01.reis.com:engine                   50G  8.1G    42G   17%  /rhev/data-center/mnt/glusterSD/ovirt-hyp-01.reis.com:engine
>
> gluster volume status data
> Status of volume: data
> Gluster process                                   TCP Port  RDMA Port  Online  Pid
> ----------------------------------------------------------------------------------
> Brick 192.168.170.141:/gluster_bricks/data/data   49157     0          Y       11967
> Brick 192.168.170.143:/gluster_bricks/data/data   49157     0          Y       2901
> Brick 192.168.170.147:/gluster_bricks/data/data   49158     0          Y       2626
> Self-heal Daemon on localhost                     N/A       N/A        Y       16211
> Self-heal Daemon on 192.168.170.147               N/A       N/A        Y       3402
> Self-heal Daemon on 192.168.170.143               N/A       N/A        Y       20254
>
> Task Status of Volume data
> ----------------------------------------------------------------------------------
> There are no active volume tasks
>
> gluster peer status
> Number of Peers: 2
>
> Hostname: 192.168.170.143
> Uuid: b2b30d05-cf91-4567-92fd-022575e082f5
> State: Peer in Cluster (Connected)
> Other names:
> 10.0.0.2
>
> Hostname: 192.168.170.147
> Uuid: 4e50acc4-f3cb-422d-b499-fb5796a53529
> State: Peer in Cluster (Connected)
> Other names:
> 10.0.0.3
>
> Any assistance in understanding why the volume no longer mounts, and a
> possible resolution, would be greatly appreciated.
>
> Thank you,
>
> Joel
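
For reference, here is a minimal sketch of where the client-side mount log usually lives, assuming a stock oVirt/GlusterFS layout like the one visible in the df output above; the exact file name mirrors the mount point path, so it may differ slightly on your hosts:

# FUSE client mount logs live under /var/log/glusterfs/; the file name is the
# mount point path with '/' replaced by '-', so the oVirt glusterSD mounts
# show up roughly as rhev-data-center-mnt-glusterSD-<host>:<volume>.log.
ls -lt /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*

# Check the most recent data-volume log around the time the mount failed; the
# servers' glusterd and brick logs can also be worth a look:
less /var/log/glusterfs/glusterd.log
ls /var/log/glusterfs/bricks/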
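
If nothing useful has been logged yet, retrying the mount by hand with a dedicated log file usually captures the actual error. This is only a sketch: the test mount point and log path below are made up for illustration, and the server and volume names are taken from the output above on the assumption that the data domain is mounted the same way as engine.

# Create a throwaway mount point and mount the data volume with debug logging;
# log-level and log-file are standard mount.glusterfs options.
mkdir -p /mnt/data-test
mount -t glusterfs -o log-level=DEBUG,log-file=/var/log/glusterfs/data-test-mount.log \
    ovirt-hyp-01.reis.com:/data /mnt/data-test

# If the mount fails, the reason (quorum, DNS, a rejected option, ...) should
# show up as E-level messages in the test log:
grep ' E ' /var/log/glusterfs/data-test-mount.log | tail -n 20

Since all three bricks show as online in the gluster volume status output, the client log from a failed mount attempt is the most likely place to see why the mount cannot come up.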