These are the last lines of /var/log/glusterfs/bricks/gfsvol1-brick1.log:

[2021-09-06 21:29:02.165238 +0000] I [addr.c:54:compare_addr_and_update] 0-/gfsvol1/brick1: allowed = "*", received addr = "172.16.3.1"
[2021-09-06 21:29:02.165365 +0000] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 12261a60-60a5-4791-a3f1-6da397046ee5
[2021-09-06 21:29:02.165402 +0000] I [MSGID: 115029] [server-handshake.c:561:server_setvolume] 0-gfsvol1-server: accepted client from CTX_ID:444e0582-ac68-4f20-9552-c4dbc7724967-GRAPH_ID:0-PID:227500-HOST:s-virt1.realdomain.it-PC_NAME:gfsvol1-client-1-RECON_NO:-0 (version: 9.3) with subvol /gfsvol1/brick1
[2021-09-06 21:29:02.179387 +0000] W [socket.c:767:__socket_rwv] 0-tcp.gfsvol1-server: readv on 172.16.3.1:49144 failed (No data available)
[2021-09-06 21:29:02.179451 +0000] I [MSGID: 115036] [server.c:500:server_rpc_notify] 0-gfsvol1-server: disconnecting connection [{client-uid=CTX_ID:444e0582-ac68-4f20-9552-c4dbc7724967-GRAPH_ID:0-PID:227500-HOST:s-virt1.realdomain.it-PC_NAME:gfsvol1-client-1-RECON_NO:-0}]
[2021-09-06 21:29:02.179877 +0000] I [MSGID: 101055] [client_t.c:397:gf_client_unref] 0-gfsvol1-server: Shutting down connection CTX_ID:444e0582-ac68-4f20-9552-c4dbc7724967-GRAPH_ID:0-PID:227500-HOST:s-virt1.realdomain.it-PC_NAME:gfsvol1-client-1-RECON_NO:-0
[2021-09-06 21:29:10.254230 +0000] I [addr.c:54:compare_addr_and_update] 0-/gfsvol1/brick1: allowed = "*", received addr = "172.16.3.1"
[2021-09-06 21:29:10.254283 +0000] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 12261a60-60a5-4791-a3f1-6da397046ee5
[2021-09-06 21:29:10.254300 +0000] I [MSGID: 115029] [server-handshake.c:561:server_setvolume] 0-gfsvol1-server: accepted client from CTX_ID:fef710c3-11bf-4a91-b749-f52a536d6dad-GRAPH_ID:0-PID:227541-HOST:s-virt1.realdomain.it-PC_NAME:gfsvol1-client-1-RECON_NO:-0 (version: 9.3) with subvol /gfsvol1/brick1
[2021-09-06 21:29:10.272069 +0000] W [socket.c:767:__socket_rwv] 0-tcp.gfsvol1-server: readv on 172.16.3.1:49140 failed (No data available)
[2021-09-06 21:29:10.272133 +0000] I [MSGID: 115036] [server.c:500:server_rpc_notify] 0-gfsvol1-server: disconnecting connection [{client-uid=CTX_ID:fef710c3-11bf-4a91-b749-f52a536d6dad-GRAPH_ID:0-PID:227541-HOST:s-virt1.realdomain.it-PC_NAME:gfsvol1-client-1-RECON_NO:-0}]
[2021-09-06 21:29:10.272430 +0000] I [MSGID: 101055] [client_t.c:397:gf_client_unref] 0-gfsvol1-server: Shutting down connection CTX_ID:fef710c3-11bf-4a91-b749-f52a536d6dad-GRAPH_ID:0-PID:227541-HOST:s-virt1.realdomain.it-PC_NAME:gfsvol1-client-1-RECON_NO:-0

The two servers are directly connected through a reserved network adapter with the dedicated IPs 172.16.3.1/30 and 172.16.3.2/30, named virt1.local and virt2.local via /etc/hosts.

In these logs I also see the real server name (... HOST:s-virt1.realdomain.it-PC_NAME: ...), which has another IP on another network.
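To double-check which names and addresses are actually in use, a few read-only commands help; this is only a sketch, and the interface name in the tcpdump line is a placeholder, not one taken from this setup:

 # Confirm the brick host names resolve to the dedicated /30 link
 getent hosts virt1.local virt2.local

 # Show the names the trusted pool knows its peers by
 gluster pool list
 gluster peer status

 # Watch the dedicated interface (placeholder name) for traffic to the
 # brick port reported by "gluster volume status" (49152 here)
 tcpdump -ni eth1 port 49152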
This cluster is now in production and hosts some VMs.

What is the best way to resolve this dangerous situation without risk?

Many thanks
Dario

On Tue, 07/09/2021 at 05.28 +0000, Strahil Nikolov wrote:

No, it's not normal.
Go to virt2; in the /var/log/glusterfs directory you will find 'bricks'. Check the logs in bricks for more information.

Best Regards,
Strahil Nikolov

On Tue, Sep 7, 2021 at 1:13, Dario Lesca <d.lesca@solinos.it> wrote:

Hello everybody!
I'm a novice with Gluster. I have set up my first cluster with two nodes.

This is the current volume info:

 [root@s-virt1 ~]# gluster volume info gfsvol1
 Volume Name: gfsvol1
 Type: Replicate
 Volume ID: 5bad4a23-58cc-44d7-8195-88409720b941
 Status: Started
 Snapshot Count: 0
 Number of Bricks: 1 x 2 = 2
 Transport-type: tcp
 Bricks:
 Brick1: virt1.local:/gfsvol1/brick1
 Brick2: virt2.local:/gfsvol1/brick1
 Options Reconfigured:
 performance.client-io-threads: off
 nfs.disable: on
 transport.address-family: inet
 storage.fips-mode-rchecksum: on
 cluster.granular-entry-heal: on
 storage.owner-uid: 107
 storage.owner-gid: 107
 server.allow-insecure: on

For now, everything seems to work fine.

I have mounted the gluster volume on both nodes and run the VMs from it.
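For reference, the volume is mounted on each node roughly like this (a sketch: the mount point and the backup-volfile-servers option are assumptions, not values from this setup):

 mount -t glusterfs virt1.local:/gfsvol1 /mnt/gfsvol1 \
       -o backup-volfile-servers=virt2.local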
But today I noticed that the second node (virt2) is offline:

 [root@s-virt1 ~]# gluster volume status
 Status of volume: gfsvol1
 Gluster process                             TCP Port  RDMA Port  Online  Pid
 ------------------------------------------------------------------------------
 Brick virt1.local:/gfsvol1/brick1           49152     0          Y       3090
 Brick virt2.local:/gfsvol1/brick1           N/A       N/A        N       N/A
 Self-heal Daemon on localhost               N/A       N/A        Y       3105
 Self-heal Daemon on virt2.local             N/A       N/A        Y       3140

 Task Status of Volume gfsvol1
 ------------------------------------------------------------------------------
 There are no active volume tasks

 [root@s-virt1 ~]# gluster volume status gfsvol1 detail
 Status of volume: gfsvol1
 ------------------------------------------------------------------------------
 Brick                : Brick virt1.local:/gfsvol1/brick1
 TCP Port             : 49152
 RDMA Port            : 0
 Online               : Y
 Pid                  : 3090
 File System          : xfs
 Device               : /dev/mapper/rl-gfsvol1
 Mount Options        : rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=128,swidth=128,noquota
 Inode Size           : 512
 Disk Space Free      : 146.4GB
 Total Disk Space     : 999.9GB
 Inode Count          : 307030856
 Free Inodes          : 307026149
 ------------------------------------------------------------------------------
 Brick                : Brick virt2.local:/gfsvol1/brick1
 TCP Port             : N/A
 RDMA Port            : N/A
 Online               : N
 Pid                  : N/A
 File System          : xfs
 Device               : /dev/mapper/rl-gfsvol1
 Mount Options        : rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=128,swidth=128,noquota
 Inode Size           : 512
 Disk Space Free      : 146.4GB
 Total Disk Space     : 999.9GB
 Inode Count          : 307052016
 Free Inodes          : 307047307

What does it mean?
What's wrong?
Is this normal, or am I missing some setting?

If you need more information, let me know.

Many thanks for your help
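A low-risk first step here, sketched under the assumption that glusterd itself is still running on virt2, is to restart only the dead brick process and then watch healing:

 # On virt2: confirm the management daemon is up
 systemctl status glusterd

 # From either node: "start force" only (re)starts bricks that are
 # down; it does not touch the healthy brick on virt1
 gluster volume start gfsvol1 force

 # Watch the self-heal queue drain before making any further changes
 gluster volume heal gfsvol1 info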