[Bugs] [Bug 1484156] Can't attach volume tier to create hot tier
bugzilla at redhat.com
Mon Aug 28 12:39:25 UTC 2017
https://bugzilla.redhat.com/show_bug.cgi?id=1484156
--- Comment #1 from Fidel Rodriguez <fidelito17 at hotmail.com> ---
gluster volume status:
Status of volume: vmVolume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.16.0.11:/vmVolume/.bricksvm       49154     0          Y       3230
Brick 172.16.0.11:/vmVolume2/.bricksvm      49155     0          Y       3213
Brick 172.16.0.12:/vmVolume/.bricksvm       49154     0          Y       2348
Brick 172.16.0.12:/vmVolume2/.bricksvm      49155     0          Y       2375
Brick 172.16.0.13:/vmVolume/.bricksvm       49154     0          Y       3216
Brick 172.16.0.13:/vmVolume2/.bricksvm      49155     0          Y       3225
Brick 172.16.0.14:/vmVolume/.bricksvm       49154     0          Y       3203
Brick 172.16.0.14:/vmVolume2/.bricksvm      49155     0          Y       3209
Self-heal Daemon on localhost               N/A       N/A        Y       3312
Self-heal Daemon on 172.16.0.14             N/A       N/A        Y       3366
Self-heal Daemon on 172.16.0.12             N/A       N/A        Y       3234
Self-heal Daemon on 172.16.0.13             N/A       N/A        Y       3302

Task Status of Volume vmVolume
------------------------------------------------------------------------------
There are no active volume tasks
Gluster volume info vmVolume:
Volume Name: vmVolume
Type: Distributed-Replicate
Volume ID: 10a58c68-b042-4354-b3a5-cc20076bf0fd
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 172.16.0.11:/vmVolume/.bricksvm
Brick2: 172.16.0.11:/vmVolume2/.bricksvm
Brick3: 172.16.0.12:/vmVolume/.bricksvm
Brick4: 172.16.0.12:/vmVolume2/.bricksvm
Brick5: 172.16.0.13:/vmVolume/.bricksvm
Brick6: 172.16.0.13:/vmVolume2/.bricksvm
Brick7: 172.16.0.14:/vmVolume/.bricksvm
Brick8: 172.16.0.14:/vmVolume2/.bricksvm
Options Reconfigured:
nfs.disable: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-gid: 36
storage.owner-uid: 36
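
For context, the step that fails is presumably the tier attach itself. Assuming the standard CLI syntax, the invocation would look roughly like the following; the hot-tier brick host:path values below are placeholders for illustration, not taken from this setup:

    # newer syntax
    gluster volume tier vmVolume attach replica 2 \
        172.16.0.11:/ssdbrick/.hotbrick 172.16.0.12:/ssdbrick/.hotbrick

    # older form, still accepted on some releases
    gluster volume attach-tier vmVolume replica 2 \
        172.16.0.11:/ssdbrick/.hotbrick 172.16.0.12:/ssdbrick/.hotbrick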
Please let me know if there are any other logs I can provide.