[Gluster-users] Gluster volume not mounted

Joel Diaz mrjoeldiaz at gmail.com
Tue Jun 27 13:03:48 UTC 2017


Good morning Gluster users,

I'm very new to the Gluster file system, so my apologies if this is not the
correct way to seek assistance. I would appreciate some insight into the
issue I'm having.

I have three nodes running two volumes, engine and data. The third node is
the arbiter on both volumes. Both volumes were operating fine, but one of
them, data, no longer mounts.

Please see below:

gluster volume info all

Volume Name: data
Type: Replicate
Volume ID: 1d6bb110-9be4-4630-ae91-36ec1cf6cc02
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.170.141:/gluster_bricks/data/data
Brick2: 192.168.170.143:/gluster_bricks/data/data
Brick3: 192.168.170.147:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable

Volume Name: engine
Type: Replicate
Volume ID: b160f0b2-8bd3-4ff2-a07c-134cab1519dd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.170.141:/gluster_bricks/engine/engine
Brick2: 192.168.170.143:/gluster_bricks/engine/engine
Brick3: 192.168.170.147:/gluster_bricks/engine/engine (arbiter)
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable

df -h
Filesystem                                     Size  Used Avail Use% Mounted on
/dev/mapper/centos_ovirt--hyp--01-root          50G  3.9G   47G   8% /
devtmpfs                                       7.7G     0  7.7G   0% /dev
tmpfs                                          7.8G     0  7.8G   0% /dev/shm
tmpfs                                          7.8G  8.7M  7.7G   1% /run
tmpfs                                          7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/centos_ovirt--hyp--01-home          61G   33M   61G   1% /home
/dev/mapper/gluster_vg_sdb-gluster_lv_engine    50G  8.1G   42G  17% /gluster_bricks/engine
/dev/sda1                                      497M  173M  325M  35% /boot
/dev/mapper/gluster_vg_sdb-gluster_lv_data     730G  157G  574G  22% /gluster_bricks/data
tmpfs                                          1.6G     0  1.6G   0% /run/user/0
ovirt-hyp-01.reis.com:engine                    50G  8.1G   42G  17% /rhev/data-center/mnt/glusterSD/ovirt-hyp-01.reis.com:engine
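
As shown above, the engine volume is still mounted under
/rhev/data-center/mnt/glusterSD, but data is not. For reference, this is
roughly the manual mount test I can run from one of the hypervisors (the
mount point below is just an example I made up, not our actual storage
domain path):

  # test mount of the data volume on a throwaway mount point
  mkdir -p /mnt/data-test
  mount -t glusterfs ovirt-hyp-01.reis.com:/data /mnt/data-test
  # if the mount fails, the FUSE client log (named after the mount path,
  # with "/" replaced by "-") should contain the reason
  less /var/log/glusterfs/mnt-data-test.log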

gluster volume status data
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.170.141:/gluster_bricks/data/data    49157     0          Y       11967
Brick 192.168.170.143:/gluster_bricks/data/data    49157     0          Y       2901
Brick 192.168.170.147:/gluster_bricks/data/data    49158     0          Y       2626
Self-heal Daemon on localhost                      N/A       N/A        Y       16211
Self-heal Daemon on 192.168.170.147                N/A       N/A        Y       3402
Self-heal Daemon on 192.168.170.143                N/A       N/A        Y       20254

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
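
Since all brick processes and self-heal daemons show as online, the next
thing I can check is whether there are pending heals or split-brain entries
on the data volume (a sketch of the commands I understand are used for
this):

  # list files/gfids still needing heal on the data volume
  gluster volume heal data info
  # list entries reported as split-brain, if any
  gluster volume heal data info split-brain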

gluster peer status
Number of Peers: 2

Hostname: 192.168.170.143
Uuid: b2b30d05-cf91-4567-92fd-022575e082f5
State: Peer in Cluster (Connected)
Other names:
10.0.0.2

Hostname: 192.168.170.147
Uuid: 4e50acc4-f3cb-422d-b499-fb5796a53529
State: Peer in Cluster (Connected)
Other names:
10.0.0.3
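
Both peers are connected, so I assume the remaining place to look is the
client-side mount log on the node that fails to mount. If I understand the
log naming correctly (the mount path with "/" replaced by "-"), something
like the following should show the failure for the oVirt storage domain
mount; the exact file name is my guess:

  # list the client logs, then pull recent error lines from the
  # glusterSD mount log (file name guessed from the mount path)
  ls /var/log/glusterfs/
  grep -iE "error|failed" /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log | tail -n 20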

Any assistance in understanding why the volume no longer mounts, and a
possible resolution, would be greatly appreciated.

Thank you,

Joel