<div dir="ltr"><div><div><div><div>Hi Niels,<br><br></div>No problem we wil try to backport that patch on 3.7.6. <br><br></div>Could you please let me know in which release Gluster community is going to provide this patch and date of that release?<br><br></div>Regards,<br></div>Abhishek<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, May 31, 2017 at 10:05 PM, Niels de Vos <span dir="ltr"><<a href="mailto:ndevos@redhat.com" target="_blank">ndevos@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Wed, May 31, 2017 at 04:08:06PM +0530, ABHISHEK PALIWAL wrote:<br>
> We are using 3.7.6, and on <a href="https://review.gluster.org/#/c/16279" rel="noreferrer" target="_blank">https://review.gluster.org/#/<wbr>c/16279</a> the status<br>
> is "can't merge".<br>
<br>
</span>Note that 3.7.x will not get any updates anymore. We currently maintain<br>
versions 3.8.x, 3.10.x and 3.11.x. See the release schedule for more<br>
details:<br>
<a href="https://www.gluster.org/community/release-schedule/" rel="noreferrer" target="_blank">https://www.gluster.org/<wbr>community/release-schedule/</a><br>
<span class="HOEnZb"><font color="#888888"><br>
Niels<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
><br>
> On Wed, May 31, 2017 at 4:05 PM, Amar Tumballi <<a href="mailto:atumball@redhat.com">atumball@redhat.com</a>> wrote:<br>
><br>
> > Isn't this already part of the 3.11.0 release?<br>
> ><br>
> > On Wed, May 31, 2017 at 3:47 PM, ABHISHEK PALIWAL <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a><br>
> > > wrote:<br>
> ><br>
> >> Hi Atin,<br>
> >><br>
> >> Could you please let us know the time plan for delivery of this patch?<br>
> >><br>
> >> Regards,<br>
> >> Abhishek<br>
> >><br>
> >> On Tue, May 9, 2017 at 6:37 PM, ABHISHEK PALIWAL <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a><br>
> >> > wrote:<br>
> >><br>
> >>> Actually, it would be very risky if this reproduced in production; that is<br>
> >>> why I said it is high priority, as we want to resolve it before production.<br>
> >>><br>
> >>> On Tue, May 9, 2017 at 6:20 PM, Atin Mukherjee <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>><br>
> >>> wrote:<br>
> >>><br>
> >>>><br>
> >>>><br>
> >>>> On Tue, May 9, 2017 at 6:10 PM, ABHISHEK PALIWAL <<br>
> >>>> <a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>> wrote:<br>
> >>>><br>
> >>>>> Hi Atin,<br>
> >>>>><br>
> >>>>> Thanks for your reply.<br>
> >>>>><br>
> >>>>><br>
> >>>>> It's urgent because this error is very rarely reproducible; we have<br>
> >>>>> seen it 2-3 times in our system so far.<br>
> >>>>><br>
> >>>>> We have a delivery in the near future, so we want this fixed as soon as<br>
> >>>>> possible. Please try to review it internally.<br>
> >>>>><br>
> >>>><br>
> >>>> I don't think your statements justify the urgency, as (a) you have<br>
> >>>> mentioned it to be *rarely* reproducible, and (b) I am still waiting<br>
> >>>> for a real use case where glusterd would go through multiple restarts<br>
> >>>> in a loop.<br>
> >>>><br>
> >>>><br>
> >>>>> Regards,<br>
> >>>>> Abhishek<br>
> >>>>><br>
> >>>>> On Tue, May 9, 2017 at 5:58 PM, Atin Mukherjee <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>><br>
> >>>>> wrote:<br>
> >>>>><br>
> >>>>>><br>
> >>>>>><br>
> >>>>>> On Tue, May 9, 2017 at 3:37 PM, ABHISHEK PALIWAL <<br>
> >>>>>> <a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>> wrote:<br>
> >>>>>><br>
> >>>>>>> + Muthu-vingeshwaran<br>
> >>>>>>><br>
> >>>>>>> On Tue, May 9, 2017 at 11:30 AM, ABHISHEK PALIWAL <<br>
> >>>>>>> <a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>> wrote:<br>
> >>>>>>><br>
> >>>>>>>> Hi Atin/Team,<br>
> >>>>>>>><br>
> >>>>>>>> We are using gluster-3.7.6 with a two-brick setup, and during a<br>
> >>>>>>>> system restart I have seen that the glusterd daemon fails to<br>
> >>>>>>>> start.<br>
> >>>>>>>><br>
> >>>>>>>><br>
> >>>>>>>> While analyzing the logs from the etc-glusterfs.......log file,<br>
> >>>>>>>> I found the entries below:<br>
> >>>>>>>><br>
> >>>>>>>><br>
> >>>>>>>> [2017-05-06 03:33:39.798087] I [MSGID: 100030]<br>
> >>>>>>>> [glusterfsd.c:2348:main] 0-/usr/sbin/glusterd: Started running<br>
> >>>>>>>> /usr/sbin/glusterd version 3.7.6 (args: /usr/sbin/glusterd -p<br>
> >>>>>>>> /var/run/glusterd.pid --log-level INFO)<br>
> >>>>>>>> [2017-05-06 03:33:39.807859] I [MSGID: 106478]<br>
> >>>>>>>> [glusterd.c:1350:init] 0-management: Maximum allowed open file descriptors<br>
> >>>>>>>> set to 65536<br>
> >>>>>>>> [2017-05-06 03:33:39.807974] I [MSGID: 106479]<br>
> >>>>>>>> [glusterd.c:1399:init] 0-management: Using /system/glusterd as working<br>
> >>>>>>>> directory<br>
> >>>>>>>> [2017-05-06 03:33:39.826833] I [MSGID: 106513]<br>
> >>>>>>>> [glusterd-store.c:2047:<wbr>glusterd_restore_op_version] 0-glusterd:<br>
> >>>>>>>> retrieved op-version: 30706<br>
> >>>>>>>> [2017-05-06 03:33:39.827515] E [MSGID: 106206]<br>
> >>>>>>>> [glusterd-store.c:2562:<wbr>glusterd_store_update_volinfo]<br>
> >>>>>>>> 0-management: Failed to get next store iter<br>
> >>>>>>>> [2017-05-06 03:33:39.827563] E [MSGID: 106207]<br>
> >>>>>>>> [glusterd-store.c:2844:<wbr>glusterd_store_retrieve_<wbr>volume]<br>
> >>>>>>>> 0-management: Failed to update volinfo for c_glusterfs volume<br>
> >>>>>>>> [2017-05-06 03:33:39.827625] E [MSGID: 106201]<br>
> >>>>>>>> [glusterd-store.c:3042:<wbr>glusterd_store_retrieve_<wbr>volumes]<br>
> >>>>>>>> 0-management: Unable to restore volume: c_glusterfs<br>
> >>>>>>>> [2017-05-06 03:33:39.827722] E [MSGID: 101019]<br>
> >>>>>>>> [xlator.c:428:xlator_init] 0-management: Initialization of volume<br>
> >>>>>>>> 'management' failed, review your volfile again<br>
> >>>>>>>> [2017-05-06 03:33:39.827762] E [graph.c:322:glusterfs_graph_<wbr>init]<br>
> >>>>>>>> 0-management: initializing translator failed<br>
> >>>>>>>> [2017-05-06 03:33:39.827784] E [graph.c:661:glusterfs_graph_<wbr>activate]<br>
> >>>>>>>> 0-graph: init failed<br>
> >>>>>>>> [2017-05-06 03:33:39.828396] W [glusterfsd.c:1238:cleanup_<wbr>and_exit]<br>
> >>>>>>>> (-->/usr/sbin/glusterd(<wbr>glusterfs_volumes_init-<wbr>0x1b0b8)<br>
> >>>>>>>> [0x1000a648] -->/usr/sbin/glusterd(<wbr>glusterfs_process_volfp-<wbr>0x1b210)<br>
> >>>>>>>> [0x1000a4d8] -->/usr/sbin/glusterd(cleanup_<wbr>and_exit-0x1beac)<br>
> >>>>>>>> [0x100097ac] ) 0-: received signum (0), shutting down<br>
> >>>>>>>><br>
> >>>>>>><br>
> >>>>>> Abhishek,<br>
> >>>>>><br>
> >>>>>> This patch needs to be thoroughly reviewed to ensure that it doesn't<br>
> >>>>>> cause any regression, given that it touches the core store management<br>
> >>>>>> functionality of glusterd. AFAICT, we only get into an empty info file when<br>
> >>>>>> a volume set operation is executed while, in parallel, one of the glusterd<br>
> >>>>>> instances on the other nodes is brought down, and the whole sequence of<br>
> >>>>>> operations happens in a loop. The test case through which you can get into<br>
> >>>>>> this situation is not something you'd hit in production. Please help me to<br>
> >>>>>> understand the urgency here.<br>
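> >>>>>><br>
> >>>>>> As a rough illustration only (the volume name is taken from your logs, the<br>
> >>>>>> volume option is just an arbitrary example, and the restart commands depend<br>
> >>>>>> on your platform), the sequence I am describing looks roughly like this:<br>
> >>>>>><br>
> >>>>>> # on node-1: keep issuing volume set operations in a loop<br>
> >>>>>> while true; do gluster volume set c_glusterfs nfs.disable on; done<br>
> >>>>>><br>
> >>>>>> # on node-2, in parallel: keep taking glusterd down and bringing it back up<br>
> >>>>>> while true; do pkill glusterd; sleep 2; /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO; sleep 2; done<br>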
> >>>>>><br>
> >>>>>> Also, in one of the earlier threads, I mentioned the workaround for<br>
> >>>>>> this issue to Xin in <a href="http://lists.gluster.org/pipermail/gluster-users/2017-January/029600.html" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>pipermail/gluster-users/2017-<wbr>January/029600.html</a><br>
> >>>>>><br>
> >>>>>> "If you end up in having a 0 byte info file you'd need to copy the same info file from other node and put it there and restart glusterd"<br>
> >>>>>><br>
> >>>>>><br>
> >>>>>>>><br>
> >>>>>>>> I have found that an existing case for this is already there and a<br>
> >>>>>>>> patch with the solution is available, but the status of that patch is "cannot<br>
> >>>>>>>> merge". Also, the "info" file is empty and an "info.tmp" file is present in<br>
> >>>>>>>> the "lib/glusterd/vol" directory.<br>
> >>>>>>>><br>
> >>>>>>>> Below is the link of the existing case.<br>
> >>>>>>>><br>
> >>>>>>>> <a href="https://review.gluster.org/#/c/16279/5" rel="noreferrer" target="_blank">https://review.gluster.org/#/<wbr>c/16279/5</a><br>
> >>>>>>>><br>
> >>>>>>>> Please let me know what the community's plan is to provide a<br>
> >>>>>>>> solution for this problem, and in which version.<br>
> >>>>>>>><br>
> >>>>>>>> Regards<br>
> >>>>>>>> Abhishek Paliwal<br>
> >>>>>>>><br>
> >>>>>>><br>
> >>>>>>><br>
> >>>>>>><br>
> >>>>>>> --<br>
> >>>>>>><br>
> >>>>>>><br>
> >>>>>>><br>
> >>>>>>><br>
> >>>>>>> Regards<br>
> >>>>>>> Abhishek Paliwal<br>
> >>>>>>><br>
> >>>>>><br>
> >>>>>><br>
> >>>>><br>
> >>>>><br>
> >>>>> --<br>
> >>>>><br>
> >>>>><br>
> >>>>><br>
> >>>>><br>
> >>>>> Regards<br>
> >>>>> Abhishek Paliwal<br>
> >>>>><br>
> >>>><br>
> >>>><br>
> >>><br>
> >>><br>
> >>> --<br>
> >>><br>
> >>><br>
> >>><br>
> >>><br>
> >>> Regards<br>
> >>> Abhishek Paliwal<br>
> >>><br>
> >><br>
> >><br>
> >><br>
> >> --<br>
> >><br>
> >><br>
> >><br>
> >><br>
> >> Regards<br>
> >> Abhishek Paliwal<br>
> >><br>
> >> ______________________________<wbr>_________________<br>
> >> Gluster-devel mailing list<br>
> >> <a href="mailto:Gluster-devel@gluster.org">Gluster-devel@gluster.org</a><br>
> >> <a href="http://lists.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-devel</a><br>
> >><br>
> ><br>
> ><br>
> ><br>
> > --<br>
> > Amar Tumballi (amarts)<br>
> ><br>
><br>
><br>
><br>
> --<br>
><br>
><br>
><br>
><br>
> Regards<br>
> Abhishek Paliwal<br>
<br>
> ______________________________<wbr>_________________<br>
> Gluster-devel mailing list<br>
> <a href="mailto:Gluster-devel@gluster.org">Gluster-devel@gluster.org</a><br>
> <a href="http://lists.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-devel</a><br>
<br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><br><br><br><br>Regards<br>
Abhishek Paliwal<br>
</div></div>
</div>