Hi Team,

We've been intermittently seeing issues where PostgreSQL is unable to create a table, or some data goes missing.

PostgreSQL logs the following error:

ERROR: unexpected data beyond EOF in block 53 of relation base/16384/12009
HINT: This has been seen to occur with buggy kernels; consider updating your system.

We use Kubernetes PVs/PVCs to bind the volumes to the containers, with the GlusterFS plugin mounting the volumes on the worker nodes and exposing them inside the containers. The issue occurs regardless of whether the k8s spec mounts the volume through the PV provider or mounts the Gluster volume directly.

To check whether the issue lies with the GlusterFS client, we mounted the volume over NFS instead (NFS on the client talking to Gluster on the master); in that setup the issue does not occur. However, the NFS client then talks directly to _one_ of the Gluster masters, so if that master fails there is no failover to the other Gluster master - we lose Gluster HA if we go this route.

Has anyone faced this issue, and is there a fix already available for it? The Gluster version is 3.7.20 and Kubernetes is 1.5.2.

Thanks,
Praveen