[Gluster-users] Gluster volume not mounted
Joel Diaz
mrjoeldiaz at gmail.com
Wed Jun 28 13:16:19 UTC 2017
Good morning Atin,
Thanks for the reply.
I believe that log file is
"rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:_data.log"; please
correct me if I'm wrong. However, it happens to be empty. See below:
ls -lah /var/log/glusterfs/|grep data
-rw-------. 1 root root    0 Jun 13 17:09 glfsheal-data.log
-rw-------. 1 root root  34K Jun  4 03:06 glfsheal-data.log-20170604.gz
-rw-------. 1 root root 563K Jun  7 16:01 glfsheal-data.log-20170613
-rw-------. 1 root root    0 Jun 13 17:09 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:_data.log
-rw-------. 1 root root  61K Jun  4 03:08 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:_data.log-20170604.gz
-rw-------. 1 root root 164K Jun  8 08:58 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:_data.log-20170613
-rw-------. 1 root root    0 Jun  4 03:08 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:_engine.log
-rw-------. 1 root root  371 Jun 28 03:30 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:engine.log
-rw-------. 1 root root  16K May 31 14:12 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:_engine.log-20170604
-rw-------. 1 root root 4.8K Jun  4 03:08 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:engine.log-20170604.gz
-rw-------. 1 root root  34K Jun 13 17:09 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:engine.log-20170613.gz
-rw-------. 1 root root  21K Jun 18 03:10 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:engine.log-20170618.gz
-rw-------. 1 root root  32K Jun 25 03:26 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:engine.log-20170625

[root at ovirt-hyp-01 ~]# cat /var/log/glusterfs/rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:_data.log
[root at ovirt-hyp-01 ~]#
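
If it would help, I can try mounting the data volume by hand with a higher
log level so that a mount log actually gets written. Below is a rough sketch
of what I have in mind; the /mnt/data-test directory and the log file name
are just placeholders I picked for a throwaway test mount, not anything
oVirt itself uses:

mkdir -p /mnt/data-test
mount -t glusterfs -o log-level=DEBUG,log-file=/var/log/glusterfs/data-manual-mount.log \
    ovirt-hyp-01.reis.com:/data /mnt/data-test
# whichever way the mount attempt goes, the reason should end up in the log file passed above
tail -n 50 /var/log/glusterfs/data-manual-mount.log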
Please let me know what other information I can provide.
Thank you,
Joel
On Wed, Jun 28, 2017 at 12:08 AM, Atin Mukherjee <amukherj at redhat.com>
wrote:
> The mount log file of the volume would help in debugging the actual cause.
>
> On Tue, Jun 27, 2017 at 6:33 PM, Joel Diaz <mrjoeldiaz at gmail.com> wrote:
>
>> Good morning Gluster users,
>>
>> I'm very new to the Gluster file system. My apologies if this is not the
>> correct way to seek assistance, but I would appreciate some insight into
>> the issue I'm having.
>>
>> I have three nodes running two volumes, engine and data. The third node
>> is the arbiter for both volumes. Both volumes were operating fine, but one
>> of them, data, no longer mounts.
>>
>> Please see below:
>>
>> gluster volume info all
>>
>> Volume Name: data
>> Type: Replicate
>> Volume ID: 1d6bb110-9be4-4630-ae91-36ec1cf6cc02
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: 192.168.170.141:/gluster_bricks/data/data
>> Brick2: 192.168.170.143:/gluster_bricks/data/data
>> Brick3: 192.168.170.147:/gluster_bricks/data/data (arbiter)
>> Options Reconfigured:
>> nfs.disable: on
>> performance.readdir-ahead: on
>> transport.address-family: inet
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: off
>> performance.low-prio-threads: 32
>> network.remote-dio: off
>> cluster.eager-lock: enable
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> cluster.data-self-heal-algorithm: full
>> cluster.locking-scheme: granular
>> cluster.shd-max-threads: 8
>> cluster.shd-wait-qlength: 10000
>> features.shard: on
>> user.cifs: off
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> network.ping-timeout: 30
>> performance.strict-o-direct: on
>> cluster.granular-entry-heal: enable
>>
>> Volume Name: engine
>> Type: Replicate
>> Volume ID: b160f0b2-8bd3-4ff2-a07c-134cab1519dd
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: 192.168.170.141:/gluster_bricks/engine/engine
>> Brick2: 192.168.170.143:/gluster_bricks/engine/engine
>> Brick3: 192.168.170.147:/gluster_bricks/engine/engine (arbiter)
>> Options Reconfigured:
>> nfs.disable: on
>> performance.readdir-ahead: on
>> transport.address-family: inet
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: off
>> performance.low-prio-threads: 32
>> network.remote-dio: off
>> cluster.eager-lock: enable
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> cluster.data-self-heal-algorithm: full
>> cluster.locking-scheme: granular
>> cluster.shd-max-threads: 8
>> cluster.shd-wait-qlength: 10000
>> features.shard: on
>> user.cifs: off
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> network.ping-timeout: 30
>> performance.strict-o-direct: on
>> cluster.granular-entry-heal: enable
>>
>> df -h
>> Filesystem                                     Size  Used  Avail  Use%  Mounted on
>> /dev/mapper/centos_ovirt--hyp--01-root          50G  3.9G    47G    8%  /
>> devtmpfs                                       7.7G     0   7.7G    0%  /dev
>> tmpfs                                          7.8G     0   7.8G    0%  /dev/shm
>> tmpfs                                          7.8G  8.7M   7.7G    1%  /run
>> tmpfs                                          7.8G     0   7.8G    0%  /sys/fs/cgroup
>> /dev/mapper/centos_ovirt--hyp--01-home          61G   33M    61G    1%  /home
>> /dev/mapper/gluster_vg_sdb-gluster_lv_engine    50G  8.1G    42G   17%  /gluster_bricks/engine
>> /dev/sda1                                      497M  173M   325M   35%  /boot
>> /dev/mapper/gluster_vg_sdb-gluster_lv_data     730G  157G   574G   22%  /gluster_bricks/data
>> tmpfs                                          1.6G     0   1.6G    0%  /run/user/0
>> ovirt-hyp-01.reis.com:engine                    50G  8.1G    42G   17%  /rhev/data-center/mnt/glusterSD/ovirt-hyp-01.reis.com:engine
>>
>> gluster volume status data
>> Status of volume: data
>> Gluster process                                   TCP Port  RDMA Port  Online  Pid
>> ---------------------------------------------------------------------------------
>> Brick 192.168.170.141:/gluster_bricks/data/data   49157     0          Y       11967
>> Brick 192.168.170.143:/gluster_bricks/data/data   49157     0          Y       2901
>> Brick 192.168.170.147:/gluster_bricks/data/data   49158     0          Y       2626
>> Self-heal Daemon on localhost                     N/A       N/A        Y       16211
>> Self-heal Daemon on 192.168.170.147               N/A       N/A        Y       3402
>> Self-heal Daemon on 192.168.170.143               N/A       N/A        Y       20254
>>
>> Task Status of Volume data
>> ---------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>> gluster peer status
>> Number of Peers: 2
>>
>> Hostname: 192.168.170.143
>> Uuid: b2b30d05-cf91-4567-92fd-022575e082f5
>> State: Peer in Cluster (Connected)
>> Other names:
>> 10.0.0.2
>>
>> Hostname: 192.168.170.147
>> Uuid: 4e50acc4-f3cb-422d-b499-fb5796a53529
>> State: Peer in Cluster (Connected)
>> Other names:
>> 10.0.0.3
>>
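>> In case it is useful: as the df output above shows, the engine volume is
>> mounted under /rhev/data-center/mnt/glusterSD on this host, while there is
>> no corresponding mount for data. I assume a manual mount attempt for the
>> data volume would look roughly like the sketch below (the /mnt/data-check
>> directory is just a scratch mount point made up for testing, not something
>> oVirt uses):
>>
>> mkdir -p /mnt/data-check
>> mount -t glusterfs ovirt-hyp-01.reis.com:/data /mnt/data-check
>>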
>> Any assistance in understanding why the data volume no longer mounts,
>> and any possible resolution, would be greatly appreciated.
>>
>> Thank you,
>>
>> Joel
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>