[Bugs] [Bug 1744883] GlusterFS problem dataloss
bugzilla at redhat.com
Tue Aug 27 07:27:26 UTC 2019
https://bugzilla.redhat.com/show_bug.cgi?id=1744883
Nicola battista <nicola.battista89 at gmail.com> changed:
           What           |Removed                                 |Added
----------------------------------------------------------------------------
           Flags          |needinfo?(nicola.battista89@gmail.com)  |
--- Comment #2 from Nicola battista <nicola.battista89 at gmail.com> ---
Hi,
Sure, this is the output:
[root@cstore-pm01 ~]# glusterfs --version
glusterfs 6.5
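If it helps, the same check can be run against the other peers from this
node; a minimal sketch, assuming passwordless root ssh to the peer IPs
shown in the status output below:

[root@cstore-pm01 ~]# for h in 172.16.31.5 172.16.31.6 172.16.31.7; do ssh root@"$h" 'glusterfs --version | head -1'; done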
gluster> volume status
Status of volume: dbroot1
Gluster process                                                  TCP Port  RDMA Port  Online  Pid
--------------------------------------------------------------------------------------------------
Brick 172.16.31.5:/usr/local/mariadb/columnstore/gluster/brick1  49152     0          Y       12001
Brick 172.16.31.6:/usr/local/mariadb/columnstore/gluster/brick1  49152     0          Y       11632
Brick 172.16.31.7:/usr/local/mariadb/columnstore/gluster/brick1  49152     0          Y       11640
Self-heal Daemon on localhost                                    N/A       N/A        Y       12021
Self-heal Daemon on 172.16.31.6                                  N/A       N/A        Y       11663
Self-heal Daemon on 172.16.31.7                                  N/A       N/A        Y       11673
Task Status of Volume dbroot1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: dbroot2
Gluster process                                                  TCP Port  RDMA Port  Online  Pid
--------------------------------------------------------------------------------------------------
Brick 172.16.31.5:/usr/local/mariadb/columnstore/gluster/brick2  49153     0          Y       12000
Brick 172.16.31.6:/usr/local/mariadb/columnstore/gluster/brick2  49153     0          Y       11633
Brick 172.16.31.7:/usr/local/mariadb/columnstore/gluster/brick2  49153     0          Y       11651
Self-heal Daemon on localhost                                    N/A       N/A        Y       12021
Self-heal Daemon on 172.16.31.6                                  N/A       N/A        Y       11663
Self-heal Daemon on 172.16.31.7                                  N/A       N/A        Y       11673
Task Status of Volume dbroot2
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: dbroot3
Gluster process                                                  TCP Port  RDMA Port  Online  Pid
--------------------------------------------------------------------------------------------------
Brick 172.16.31.5:/usr/local/mariadb/columnstore/gluster/brick3  49154     0          Y       12002
Brick 172.16.31.6:/usr/local/mariadb/columnstore/gluster/brick3  49154     0          Y       11648
Brick 172.16.31.7:/usr/local/mariadb/columnstore/gluster/brick3  49154     0          Y       11662
Self-heal Daemon on localhost                                    N/A       N/A        Y       12021
Self-heal Daemon on 172.16.31.6                                  N/A       N/A        Y       11663
Self-heal Daemon on 172.16.31.7                                  N/A       N/A        Y       11673
Task Status of Volume dbroot3
------------------------------------------------------------------------------
There are no active volume tasks
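Since the report is about data loss on replicated volumes, the pending
self-heal state may also be relevant; a minimal sketch using the volume
names from the status output above:

[root@cstore-pm01 ~]# for vol in dbroot1 dbroot2 dbroot3; do gluster volume heal "$vol" info; done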
gluster> volume info all
Volume Name: dbroot1
Type: Replicate
Volume ID: ecf4fd04-2e96-47d9-8a40-4f84a48657fb
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.16.31.5:/usr/local/mariadb/columnstore/gluster/brick1
Brick2: 172.16.31.6:/usr/local/mariadb/columnstore/gluster/brick1
Brick3: 172.16.31.7:/usr/local/mariadb/columnstore/gluster/brick1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Volume Name: dbroot2
Type: Replicate
Volume ID: f2b49f9f-3a91-4ac4-8eb3-4a327d0dbc61
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.16.31.5:/usr/local/mariadb/columnstore/gluster/brick2
Brick2: 172.16.31.6:/usr/local/mariadb/columnstore/gluster/brick2
Brick3: 172.16.31.7:/usr/local/mariadb/columnstore/gluster/brick2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Volume Name: dbroot3
Type: Replicate
Volume ID: 73b96917-c842-4fc2-8bca-099735c4aa6a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.16.31.5:/usr/local/mariadb/columnstore/gluster/brick3
Brick2: 172.16.31.6:/usr/local/mariadb/columnstore/gluster/brick3
Brick3: 172.16.31.7:/usr/local/mariadb/columnstore/gluster/brick3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
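And in case any files ended up in split-brain, they should be listed by
(same volume names as above):

[root@cstore-pm01 ~]# for vol in dbroot1 dbroot2 dbroot3; do gluster volume heal "$vol" info split-brain; done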
Thanks,
Regards,
Nicola Battista
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.