Hi Gluster Team,

I am trying to run GlusterFS in Podman containers, which works except for the problems described below.

My observations:
- A brick goes offline when its Podman container is restarted or the host server is rebooted.
- Although the brick is reported offline, replication still seems to work: data continues to be replicated.
- The replicated data also shows up on the arbiter node, where I expected to see only metadata.

My configuration:
I created a replicated GlusterFS volume across three CentOS 7 nodes, with glusterd running in a Podman container on each node. The containers on the first and second nodes are the normal replicas, and the third node is the arbiter. After creating the volume and enabling the heal processes, I can see that the third node is marked as the arbiter. According to the description of the arbiter, the arbiter node should store only metadata, but in my configuration the replicated data is stored on all bricks, including the arbiter.

Questions:
- When one of the servers is rebooted, or one of the GlusterFS containers is restarted, the restarted brick does not come online until the volume is stopped and started again. Is there an interim workaround to resolve this problem?
- Why does the arbiter node store all the data, although it should only hold the metadata needed to restore the replicated data on the other nodes? I would not mind replication across all three nodes; I just need to know whether this is expected.
- Has anyone experienced the same or similar problems with GlusterFS implemented in Podman containers?

Here are my configurations:
All containers run CentOS Linux release 7.7.1908 with glusterfs 7.3, and the glusterd service is enabled in systemctl.

My gluster volume creation:

gluster volume create cgvol1 replica 2 arbiter 1 transport tcp avm1:/cbricks/brick1/data avm2:/cbricks/brick1/data dvm1:/cbricks/brick1/data force

gluster peer status executed on avm2:

Number of Peers: 2

Hostname: avm1
Uuid: 5d1dc6a7-8f34-45a3-a7c9-c69c442b66dc
State: Peer in Cluster (Connected)

Hostname: dvm1
Uuid: 310ffd58-28ab-43f1-88d3-1e381bd46ab3
State: Peer in Cluster (Connected)

gluster volume info:

Volume Name: cgvol1
Type: Replicate
Volume ID: da975178-b68f-410c-884c-a7f635e4381a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: avm1:/cbricks/brick1/data
Brick2: avm2:/cbricks/brick1/data
Brick3: dvm1:/cbricks/brick1/data (arbiter)
Options Reconfigured:
cluster.self-heal-daemon: on
cluster.entry-self-heal: on
cluster.metadata-self-heal: on
cluster.data-self-heal: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
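As far as I understand the arbiter design, the arbiter brick should hold the full directory tree, but with zero-byte files whose metadata lives in extended attributes. One way to check whether the arbiter really holds file data (a sketch using my brick paths from above; "somefile" is a placeholder for any file on the volume):

    # Compare a data node (avm1/avm2) with the arbiter (dvm1).
    # On the arbiter the same file names should appear, but with size 0.
    du -sh /cbricks/brick1/data
    ls -lR /cbricks/brick1/data | head

    # The replication metadata is kept in extended attributes on each brick:
    getfattr -d -m . -e hex /cbricks/brick1/data/somefile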
gluster volume status:

Status of volume: cgvol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick avm1:/cbricks/brick1/data             49152     0          Y       516
Brick avm2:/cbricks/brick1/data             49152     0          Y       353
Brick dvm1:/cbricks/brick1/data             49152     0          Y       572
Self-heal Daemon on localhost               N/A       N/A        Y       537
Self-heal Daemon on dvm1                    N/A       N/A        Y       593
Self-heal Daemon on avm2                    N/A       N/A        Y       374

Task Status of Volume cgvol1
------------------------------------------------------------------------------
There are no active volume tasks

gluster volume heal cgvol1 info:

Brick avm1:/cbricks/brick1/data
Status: Connected
Number of entries: 0

Brick avm2:/cbricks/brick1/data
Status: Connected
Number of entries: 0

Brick dvm1:/cbricks/brick1/data
Status: Connected
Number of entries: 0

Best Regards,
Rifat Ucal


> Jorick Astrego <jorick@netbulae.eu> wrote on 14 February 2020 at 10:10:
>
> Hi,
>
> It looks like you have a two-node setup?
>
> Then it's expected, as with two nodes you don't have quorum, and this
> can lead to split brains.
>
> To have HA, add another node or an arbiter node.
>
> https://docs.gluster.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
>
> You can also modify the quorum, but then you shouldn't be too attached
> to the data you have on it.
>
> Regards, Jorick
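For reference, the quorum options Jorick mentions are regular volume settings; a sketch for Mark's volume gv01 below, with the caveat from Jorick's mail that relaxing quorum makes split-brain more likely:

    # Client-side quorum: "fixed" with a count of 1 allows writes as long as
    # at least one brick is reachable (this is what risks split-brain).
    gluster volume set gv01 cluster.quorum-type fixed
    gluster volume set gv01 cluster.quorum-count 1

    # Server-side quorum can also be switched off entirely:
    gluster volume set gv01 cluster.server-quorum-type none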
> On 2/14/20 9:27 AM, Cloud Udupi wrote:
>
> > Hi,
> >
> > I am new to glusterfs. I have used this guide on CentOS-7.6:
> > https://microdevsys.com/wp/glusterfs-configuration-and-setup-w-nfs-ganesha-for-an-ha-nfs-cluster/
> >
> > glusterfs --version
> > glusterfs 7.2
> >
> > Firewall is disabled. Self-heal is enabled. Everything works fine until
> > I reboot one of the servers. When the server reboots, the brick doesn't
> > come online.
> >
> > gluster volume status
> >
> > Status of volume: gv01
> > Gluster process                             TCP Port  RDMA Port  Online  Pid
> > ------------------------------------------------------------------------------
> > Brick server1:/bricks/0/gv0                 N/A       N/A        N       N/A
> > Brick server2:/bricks/0/gv0                 49152     0          Y       99870
> > Self-heal Daemon on localhost               N/A       N/A        Y       109802
> > Self-heal Daemon on server1                 N/A       N/A        Y       2142
> >
> > Task Status of Volume gv01
> > ------------------------------------------------------------------------------
> > There are no active volume tasks
> >
> > gluster volume heal gv01
> >
> > Launching heal operation to perform index self heal on volume gv01 has
> > been unsuccessful:
> > Glusterd Syncop Mgmt brick op 'Heal' failed. Please check glustershd
> > log file for details.
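Regarding the "check glustershd log file" hint in the error above, these are the default log locations on CentOS 7 (the brick log name is derived from the brick path, so the exact file name below is an assumption for /bricks/0/gv0):

    # Why did the brick process not start after reboot?
    less /var/log/glusterfs/glusterd.log
    less /var/log/glusterfs/bricks/bricks-0-gv0.log

    # The self-heal daemon log referenced by the error message:
    less /var/log/glusterfs/glustershd.log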
> > gluster volume heal gv01 info
> >
> > Brick server1:/bricks/0/gv0
> > Status: Transport endpoint is not connected
> > Number of entries: -
> >
> > When I do "gluster volume start gv01 force" the brick starts.
> >
> > I want the brick to come online automatically after the reboot. I have
> > attached the log file. Please help.
> >
> > Regards,
> > Mark.
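Until the root cause is found, the "start force" workaround could be automated with a oneshot systemd unit that runs after glusterd is up. A sketch, untested; the unit name is made up and gv01 is the volume name from this thread:

    # /etc/systemd/system/gluster-force-start.service
    [Unit]
    Description=Force-start gluster bricks that stayed offline after boot
    After=glusterd.service
    Requires=glusterd.service

    [Service]
    Type=oneshot
    # Give glusterd time to settle, then restart any offline brick processes.
    ExecStartPre=/usr/bin/sleep 30
    ExecStart=/usr/sbin/gluster volume start gv01 force

    [Install]
    WantedBy=multi-user.target

Enable it with "systemctl daemon-reload && systemctl enable gluster-force-start.service".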
> Met vriendelijke groet, With kind regards,
>
> Jorick Astrego
> Netbulae Virtualization Experts
> Tel: 053 20 30 270 / Fax: 053 20 30 271 - info@netbulae.eu - www.netbulae.eu
> Staalsteden 4-3A, 7547 TA Enschede - KvK 08198180 - BTW NL821234584B01

________

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users