[Gluster-users] gluster volume status show second node is offline
Strahil Nikolov
hunter86_bg at yahoo.com
Tue Sep 7 05:28:09 UTC 2021
No, it's not normal. On virt2, go to the /var/log/glusterfs directory; there you will find a 'bricks' subdirectory. Check the logs in bricks for more information.
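To make that concrete, a minimal sketch of a first-pass check on virt2 (the log path assumes a default Gluster install; the `force` command in the comment is the usual way to restart only the dead brick process, shown here as a suggestion, not something run by the script):

```shell
# Sketch: inspect the brick logs on virt2 for the reason the brick went offline.
# Assumes the default Gluster log location.
LOGDIR=/var/log/glusterfs/bricks

if [ -d "$LOGDIR" ]; then
    # Brick logs are named after the brick path (e.g. gfsvol1-brick1.log);
    # the last lines usually say why the brick process exited.
    tail -n 50 "$LOGDIR"/*.log
else
    echo "no gluster brick logs found at $LOGDIR"
fi

# If the log shows the brick process simply died, restarting it is typically:
#   gluster volume start gfsvol1 force   # starts only the offline brick processes
```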
Best Regards,
Strahil Nikolov
On Tue, Sep 7, 2021 at 1:13, Dario Lesca <d.lesca at solinos.it> wrote:

Hello everybody!
I'm a novice with Gluster. I have set up my first cluster with two nodes.
This is the current volume info:
[root@s-virt1 ~]# gluster volume info gfsvol1
Volume Name: gfsvol1
Type: Replicate
Volume ID: 5bad4a23-58cc-44d7-8195-88409720b941
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: virt1.local:/gfsvol1/brick1
Brick2: virt2.local:/gfsvol1/brick1
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on
storage.owner-uid: 107
storage.owner-gid: 107
server.allow-insecure: on
So far everything seems to work fine.
I have mounted the gfs volume on both nodes and run the VMs from it.
But today I noticed that the second node (virt2) is offline:
[root@s-virt1 ~]# gluster volume status
Status of volume: gfsvol1
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick virt1.local:/gfsvol1/brick1 49152 0 Y 3090
Brick virt2.local:/gfsvol1/brick1 N/A N/A N N/A
Self-heal Daemon on localhost N/A N/A Y 3105
Self-heal Daemon on virt2.local N/A N/A Y 3140
Task Status of Volume gfsvol1
------------------------------------------------------------------------------
There are no active volume tasks
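As a side note on reading this output: the fifth column is the Online flag, so an offline brick can be spotted mechanically. A minimal sketch, using the brick lines pasted above as sample input (the awk field numbers assume the default column layout of `gluster volume status`; on a live node you would pipe the command itself into awk):

```shell
# Sketch: flag offline bricks in `gluster volume status` output.
# Sample input is taken from the status output above.
status_output='Brick virt1.local:/gfsvol1/brick1           49152     0          Y       3090
Brick virt2.local:/gfsvol1/brick1           N/A       N/A        N       N/A'

# On brick lines, field 5 is the Online flag (Y/N); field 2 is the brick path.
printf '%s\n' "$status_output" | awk '/^Brick/ && $5 == "N" { print "offline:", $2 }'
# prints: offline: virt2.local:/gfsvol1/brick1
```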
[root@s-virt1 ~]# gluster volume status gfsvol1 detail
Status of volume: gfsvol1
------------------------------------------------------------------------------
Brick : Brick virt1.local:/gfsvol1/brick1
TCP Port : 49152
RDMA Port : 0
Online : Y
Pid : 3090
File System : xfs
Device : /dev/mapper/rl-gfsvol1
Mount Options : rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=128,swidth=128,noquota
Inode Size : 512
Disk Space Free : 146.4GB
Total Disk Space : 999.9GB
Inode Count : 307030856
Free Inodes : 307026149
------------------------------------------------------------------------------
Brick : Brick virt2.local:/gfsvol1/brick1
TCP Port : N/A
RDMA Port : N/A
Online : N
Pid : N/A
File System : xfs
Device : /dev/mapper/rl-gfsvol1
Mount Options : rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=128,swidth=128,noquota
Inode Size : 512
Disk Space Free : 146.4GB
Total Disk Space : 999.9GB
Inode Count : 307052016
Free Inodes : 307047307
What does it mean?
What's wrong?
Is this normal, or am I missing some setting?
If you need more information, let me know.
Many thanks for your help
--
Dario Lesca
(sent from my Linux Fedora 34 Workstation)
________
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users