<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, May 4, 2017 at 3:19 AM, Jiffin Tony Thottan <span dir="ltr"><<a href="mailto:jthottan@redhat.com" target="_blank">jthottan@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<p><br>
</p>
<br>
<div class="m_-160504379898228829m_-6946369452125059204moz-cite-prefix">On 04/05/17 02:03, Praveen George
wrote:<br>
</div>
<blockquote type="cite">
<div style="color:#000;background-color:#fff;font-family:Helvetica Neue,Helvetica,Arial,Lucida Grande,sans-serif;font-size:10px">
<div id="m_-160504379898228829m_-6946369452125059204yiv6311081859">
<div id="m_-160504379898228829m_-6946369452125059204yui_3_16_0_ym19_1_1493663807923_85187">
<div style="color:#000;background-color:#fff;font-family:Helvetica Neue,Helvetica,Arial,Lucida Grande,sans-serif;font-size:10px" id="m_-160504379898228829m_-6946369452125059204yui_3_16_0_ym19_1_1493663807923_85186">
<div id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82727">Hi
Team,</div>
<div id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82728"><br id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82729">
</div>
<div id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82730">We’ve
been intermittently seeing issues where postgresql is
unable to create a table, or some info is missing.</div>
<div id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82731"><br id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82732">
</div>
<div id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82733">Postgresql
logs the following error:</div>
<div id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82734"><br id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82735">
</div>
<div id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82736">ERROR:
unexpected data beyond EOF in block 53 of relation
base/16384/12009</div>
<div id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82737">HINT:
This has been seen to occur with buggy kernels;
consider updating your system.</div>
<div id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82738"><br id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82739">
</div>
<div id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82740">We
are using the k8s PV/PVC to bind the volumes to the
containers and using the gluster plugin to mount the
volumes on the worker nodes and take it into the
containers.</div>
<div id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82741"><br id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82742">
</div>
<div id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82743">The
issue occurs regardless of whether the k8s spec
specifies mounting of the pv using the pv provider or
mount the gluster volume directly.</div>
<div id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82744"><br id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82745">
</div>
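
For illustration, the two variants look roughly like this (the names pgvol, glusterfs-cluster, pg-claim and the postgres image are placeholders, not our actual manifests):

# Variant 1: the pod references the gluster volume directly via the in-tree glusterfs plugin.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pg-direct
spec:
  containers:
  - name: postgres
    image: postgres:9.6
    volumeMounts:
    - name: pgdata
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: pgdata
    glusterfs:
      endpoints: glusterfs-cluster   # Endpoints object listing the gluster servers
      path: pgvol                    # gluster volume name
      readOnly: false
EOF

# Variant 2: the same pod, but going through a PersistentVolumeClaim bound to a
# glusterfs-backed PersistentVolume.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pg-pvc
spec:
  containers:
  - name: postgres
    image: postgres:9.6
    volumeMounts:
    - name: pgdata
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: pgdata
    persistentVolumeClaim:
      claimName: pg-claim
EOF

In both cases the database sees the same gluster-backed mount; only the way the k8s spec references it differs, and the error shows up either way.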
<div id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82746">Just
to check if the issue is with the glusterfs client, we
mount the volume using NFS (NFS on the client talking to
gluster on the master), the issue doesn’t occur.
However, with the NFS client talking directly to _one_
of the gluster masters; this means that if that master
fails, it will not failover to the other gluster master
- we thus lose gluster HA if we go this route. </div>
<div id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82747"><br id="m_-160504379898228829m_-6946369452125059204yiv6311081859yui_3_16_0_ym19_1_1493663807923_82748">
</div>
</div>
</div>
</div>
</div>
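
Roughly, the two client-side mounts being compared look like this (hostnames and the volume name are placeholders):

# GlusterFS fuse mount (the setup where the "unexpected data beyond EOF" error appears):
mount -t glusterfs gluster-master-1:/pgvol /mnt/pgvol

# NFS mount pinned to a single gluster node (no error seen, but no failover if that node dies):
mount -t nfs -o vers=3 gluster-master-1:/pgvol /mnt/pgvol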

If you are interested, there are HA solutions available for NFS. It depends on which NFS solution you are trying: if it is gluster NFS (the NFS server integrated with gluster), use CTDB; for NFS-Ganesha we already have an integrated solution with Pacemaker/Corosync.
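
Very roughly, that NFS-Ganesha HA setup follows the outline below (exact steps and option names depend on your gluster and ganesha versions; node names and addresses are placeholders):

# Enable the shared storage volume the HA cluster uses for its state:
gluster volume set all cluster.enable-shared-storage enable

# Describe the cluster in /etc/ganesha/ganesha-ha.conf on each node, e.g.:
#   HA_NAME="ganesha-ha"
#   HA_CLUSTER_NODES="node1,node2"
#   VIP_node1="192.0.2.10"
#   VIP_node2="192.0.2.11"

# Bring up the pacemaker/corosync managed ganesha cluster:
gluster nfs-ganesha enable

# Clients then mount through a virtual IP, so the mount survives a node failure:
mount -t nfs -o vers=4 192.0.2.10:/pgvol /mnt/pgvol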

Please also update your gluster version, since it has reached EOL; you will not receive any more updates for that version.
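
(For example, on a CentOS 7 node using the Storage SIG repositories - purely illustrative, adjust the repository and packages to your distribution and target release:)

# Pull in the repo for the target release, then update the gluster packages:
yum install -y centos-release-gluster310
yum update -y glusterfs-server glusterfs-fuse
systemctl restart glusterd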

Do you notice any errors in the fuse client logs when PostgreSQL complains about the error?

It might be useful to turn off all the performance translators in gluster and check if the problem persists.
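
For reference, a rough sketch of that, assuming the volume is named pgvol (adjust to your setup):

# Disable the client-side performance translators on the volume and retest:
gluster volume set pgvol performance.quick-read off
gluster volume set pgvol performance.read-ahead off
gluster volume set pgvol performance.io-cache off
gluster volume set pgvol performance.stat-prefetch off
gluster volume set pgvol performance.open-behind off
gluster volume set pgvol performance.write-behind off

# The fuse client log for a mount at /mnt/pgvol is usually:
#   /var/log/glusterfs/mnt-pgvol.log

Regards,
Vijay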