<div dir="ltr"><div><div><div><div>Hi Niels,<br><br></div>I have backported that patch on Gluster 3.7.6 and we haven't seen any other issue due to that patch.<br><br></div>Everything is fine till now in our testing and its going on extensively.<br><br></div>Regards,<br></div>Abhishek <br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Jun 1, 2017 at 1:46 PM, Niels de Vos <span dir="ltr"><<a href="mailto:ndevos@redhat.com" target="_blank">ndevos@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Thu, Jun 01, 2017 at 01:03:25PM +0530, ABHISHEK PALIWAL wrote:<br>
> Hi Niels,<br>
><br>
> No problem, we will try to backport that patch to 3.7.6.<br>
><br>
> Could you please let me know in which release the Gluster community is<br>
> going to provide this patch, and the date of that release?<br>
<br>
</span>It really depends on when someone has time to work on it. Our releases<br>
are time based, and will happen even when a bugfix/feature is not merged<br>
or implemented. We can't give any guarantees about availability for<br>
the final patch (or backports).<br>
<br>
The best you can do is help test a potential fix, and work with the<br>
developer(s) of that patch to improve it and get it accepted in the master<br>
branch. If the developers do not have time to work on it, or progress is<br>
slow, you can ask them whether you can take it over from them, if you are<br>
comfortable with writing the code.<br>
<span class="HOEnZb"><font color="#888888"><br>
Niels<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
><br>
> Regards,<br>
> Abhishek<br>
><br>
> On Wed, May 31, 2017 at 10:05 PM, Niels de Vos <<a href="mailto:ndevos@redhat.com">ndevos@redhat.com</a>> wrote:<br>
><br>
> > On Wed, May 31, 2017 at 04:08:06PM +0530, ABHISHEK PALIWAL wrote:<br>
> > > We are using 3.7.6, and on <a href="https://review.gluster.org/#/c/16279" rel="noreferrer" target="_blank">https://review.gluster.org/#/<wbr>c/16279</a> the<br>
> > > status is "can't merge".<br>
> ><br>
> > Note that 3.7.x will not get any updates anymore. We currently maintain<br>
> > versions 3.8.x, 3.10.x and 3.11.x. See the release schedule for more<br>
> > details:<br>
> > <a href="https://www.gluster.org/community/release-schedule/" rel="noreferrer" target="_blank">https://www.gluster.org/<wbr>community/release-schedule/</a><br>
> ><br>
> > Niels<br>
> ><br>
> ><br>
> > ><br>
> > > On Wed, May 31, 2017 at 4:05 PM, Amar Tumballi <<a href="mailto:atumball@redhat.com">atumball@redhat.com</a>><br>
> > wrote:<br>
> > ><br>
> > > > This is already part of the 3.11.0 release?<br>
> > > ><br>
> > > > On Wed, May 31, 2017 at 3:47 PM, ABHISHEK PALIWAL <<br>
> > <a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a><br>
> > > > > wrote:<br>
> > > ><br>
> > > >> Hi Atin,<br>
> > > >><br>
> > > >> Could you please let us know the time plan for delivery of this patch?<br>
> > > >><br>
> > > >> Regards,<br>
> > > >> Abhishek<br>
> > > >><br>
> > > >> On Tue, May 9, 2017 at 6:37 PM, ABHISHEK PALIWAL <<br>
> > <a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a><br>
> > > >> > wrote:<br>
> > > >><br>
> > > >>> Actually, it is very risky if it reproduces in production; that is why<br>
> > > >>> I said it is high priority, as we want to resolve it before production.<br>
> > > >>><br>
> > > >>> On Tue, May 9, 2017 at 6:20 PM, Atin Mukherjee <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>><br>
> > > >>> wrote:<br>
> > > >>><br>
> > > >>>><br>
> > > >>>><br>
> > > >>>> On Tue, May 9, 2017 at 6:10 PM, ABHISHEK PALIWAL <<br>
> > > >>>> <a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>> wrote:<br>
> > > >>>><br>
> > > >>>>> Hi Atin,<br>
> > > >>>>><br>
> > > >>>>> Thanks for your reply.<br>
> > > >>>>><br>
> > > >>>>><br>
> > > >>>>> It's urgent because this error is very rarely reproducible; we have<br>
> > > >>>>> seen it two or three times in our system so far.<br>
> > > >>>>><br>
> > > >>>>> We have a delivery in the near future, so we want it as soon as<br>
> > > >>>>> possible. Please try to review it internally.<br>
> > > >>>>><br>
> > > >>>><br>
> > > >>>> I don't think your statements justify the urgency, as (a) you have<br>
> > > >>>> mentioned it to be *rarely* reproducible and (b) I am still waiting<br>
> > > >>>> for a real use case where glusterd would go through multiple restarts<br>
> > > >>>> in a loop.<br>
> > > >>>><br>
> > > >>>><br>
> > > >>>>> Regards,<br>
> > > >>>>> Abhishek<br>
> > > >>>>><br>
> > > >>>>> On Tue, May 9, 2017 at 5:58 PM, Atin Mukherjee <<br>
> > <a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>><br>
> > > >>>>> wrote:<br>
> > > >>>>><br>
> > > >>>>>><br>
> > > >>>>>><br>
> > > >>>>>> On Tue, May 9, 2017 at 3:37 PM, ABHISHEK PALIWAL <<br>
> > > >>>>>> <a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>> wrote:<br>
> > > >>>>>><br>
> > > >>>>>>> + Muthu-vingeshwaran<br>
> > > >>>>>>><br>
> > > >>>>>>> On Tue, May 9, 2017 at 11:30 AM, ABHISHEK PALIWAL <<br>
> > > >>>>>>> <a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>> wrote:<br>
> > > >>>>>>><br>
> > > >>>>>>>> Hi Atin/Team,<br>
> > > >>>>>>>><br>
> > > >>>>>>>> We are using gluster-3.7.6 with a two-brick setup, and during a<br>
> > > >>>>>>>> system restart I have seen that the glusterd daemon fails to<br>
> > > >>>>>>>> start.<br>
> > > >>>>>>>><br>
> > > >>>>>>>><br>
> > > >>>>>>>> While analyzing the logs from the etc-glusterfs.......log file,<br>
> > > >>>>>>>> I found the entries below:<br>
> > > >>>>>>>><br>
> > > >>>>>>>><br>
> > > >>>>>>>> [2017-05-06 03:33:39.798087] I [MSGID: 100030]<br>
> > > >>>>>>>> [glusterfsd.c:2348:main] 0-/usr/sbin/glusterd: Started running<br>
> > > >>>>>>>> /usr/sbin/glusterd version 3.7.6 (args: /usr/sbin/glusterd -p<br>
> > > >>>>>>>> /var/run/glusterd.pid --log-level INFO)<br>
> > > >>>>>>>> [2017-05-06 03:33:39.807859] I [MSGID: 106478]<br>
> > > >>>>>>>> [glusterd.c:1350:init] 0-management: Maximum allowed open file<br>
> > descriptors<br>
> > > >>>>>>>> set to 65536<br>
> > > >>>>>>>> [2017-05-06 03:33:39.807974] I [MSGID: 106479]<br>
> > > >>>>>>>> [glusterd.c:1399:init] 0-management: Using /system/glusterd as<br>
> > working<br>
> > > >>>>>>>> directory<br>
> > > >>>>>>>> [2017-05-06 03:33:39.826833] I [MSGID: 106513]<br>
> > > >>>>>>>> [glusterd-store.c:2047:<wbr>glusterd_restore_op_version] 0-glusterd:<br>
> > > >>>>>>>> retrieved op-version: 30706<br>
> > > >>>>>>>> [2017-05-06 03:33:39.827515] E [MSGID: 106206]<br>
> > > >>>>>>>> [glusterd-store.c:2562:<wbr>glusterd_store_update_volinfo]<br>
> > > >>>>>>>> 0-management: Failed to get next store iter<br>
> > > >>>>>>>> [2017-05-06 03:33:39.827563] E [MSGID: 106207]<br>
> > > >>>>>>>> [glusterd-store.c:2844:<wbr>glusterd_store_retrieve_<wbr>volume]<br>
> > > >>>>>>>> 0-management: Failed to update volinfo for c_glusterfs volume<br>
> > > >>>>>>>> [2017-05-06 03:33:39.827625] E [MSGID: 106201]<br>
> > > >>>>>>>> [glusterd-store.c:3042:<wbr>glusterd_store_retrieve_<wbr>volumes]<br>
> > > >>>>>>>> 0-management: Unable to restore volume: c_glusterfs<br>
> > > >>>>>>>> [2017-05-06 03:33:39.827722] E [MSGID: 101019]<br>
> > > >>>>>>>> [xlator.c:428:xlator_init] 0-management: Initialization of<br>
> > volume<br>
> > > >>>>>>>> 'management' failed, review your volfile again<br>
> > > >>>>>>>> [2017-05-06 03:33:39.827762] E [graph.c:322:glusterfs_graph_<br>
> > init]<br>
> > > >>>>>>>> 0-management: initializing translator failed<br>
> > > >>>>>>>> [2017-05-06 03:33:39.827784] E [graph.c:661:glusterfs_graph_<br>
> > activate]<br>
> > > >>>>>>>> 0-graph: init failed<br>
> > > >>>>>>>> [2017-05-06 03:33:39.828396] W [glusterfsd.c:1238:cleanup_<br>
> > and_exit]<br>
> > > >>>>>>>> (-->/usr/sbin/glusterd(<wbr>glusterfs_volumes_init-<wbr>0x1b0b8)<br>
> > > >>>>>>>> [0x1000a648] -->/usr/sbin/glusterd(<wbr>glusterfs_process_volfp-<br>
> > 0x1b210)<br>
> > > >>>>>>>> [0x1000a4d8] -->/usr/sbin/glusterd(cleanup_<wbr>and_exit-0x1beac)<br>
> > > >>>>>>>> [0x100097ac] ) 0-: received signum (0), shutting down<br>
> > > >>>>>>>><br>
> > > >>>>>>><br>
> > > >>>>>> Abhishek,<br>
> > > >>>>>><br>
> > > >>>>>> This patch needs to be thoroughly reviewed to ensure that it doesn't<br>
> > > >>>>>> cause any regression, given that it touches the core store management<br>
> > > >>>>>> functionality of glusterd. AFAICT, we only get an empty info file<br>
> > > >>>>>> when a volume set operation is executed while, in parallel, the<br>
> > > >>>>>> glusterd instance on one of the other nodes has been brought down,<br>
> > > >>>>>> and that whole sequence of operations happens in a loop. The test<br>
> > > >>>>>> case through which you can get into this situation is not something<br>
> > > >>>>>> you'd hit in production. Please help me to understand the urgency<br>
> > > >>>>>> here.<br>
> > > >>>>>><br>
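> > > >>>>>> For illustration, a rough sketch of that kind of sequence (this is<br>
> > > >>>>>> not the actual test case from the patch; the volume name comes from<br>
> > > >>>>>> the logs above and the option toggled is arbitrary):<br>
> > > >>>>>><br>
> > > >>>>>>     # node 1: run volume set operations in a loop<br>
> > > >>>>>>     while true; do<br>
> > > >>>>>>         gluster volume set c_glusterfs nfs.disable on<br>
> > > >>>>>>         gluster volume set c_glusterfs nfs.disable off<br>
> > > >>>>>>     done<br>
> > > >>>>>><br>
> > > >>>>>>     # node 2, in parallel: repeatedly stop and restart glusterd<br>
> > > >>>>>>     while true; do<br>
> > > >>>>>>         pkill glusterd<br>
> > > >>>>>>         /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO<br>
> > > >>>>>>     done<br>
> > > >>>>>><br>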
> > > >>>>>> Also, in one of the earlier threads, I did mention the workaround for<br>
> > > >>>>>> this issue to Xin, in <a href="http://lists.gluster.org/pipermail/gluster-users/2017-January/029600.html" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>pipermail/gluster-users/2017-<wbr>January/029600.html</a>:<br>
> > > >>>>>><br>
> > > >>>>>> "If you end up in having a 0 byte info file you'd need to copy<br>
> > the same info file from other node and put it there and restart glusterd"<br>
> > > >>>>>><br>
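> > > >>>>>> A minimal sketch of that workaround, assuming the /system/glusterd<br>
> > > >>>>>> working directory and c_glusterfs volume name from the logs above,<br>
> > > >>>>>> and a hypothetical healthy peer reachable as node2:<br>
> > > >>>>>><br>
> > > >>>>>>     # on the affected node: confirm the truncated (0-byte) info file<br>
> > > >>>>>>     ls -l /system/glusterd/vols/c_glusterfs/info<br>
> > > >>>>>><br>
> > > >>>>>>     # copy the intact info file from a healthy peer<br>
> > > >>>>>>     scp node2:/system/glusterd/vols/c_glusterfs/info \<br>
> > > >>>>>>         /system/glusterd/vols/c_glusterfs/info<br>
> > > >>>>>><br>
> > > >>>>>>     # restart glusterd the way it is normally started on the node, e.g.<br>
> > > >>>>>>     /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO<br>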
> > > >>>>>><br>
> > > >>>>>>>><br>
> > > >>>>>>>> I have found that an existing case for this already exists and a<br>
> > > >>>>>>>> solution patch is available, but the status of that patch is<br>
> > > >>>>>>>> "cannot merge". Also, the "info" file is empty and an "info.tmp"<br>
> > > >>>>>>>> file is present in the "lib/glusterd/vol" directory.<br>
> > > >>>>>>>><br>
> > > >>>>>>>> Below is the link to the existing case:<br>
> > > >>>>>>>><br>
> > > >>>>>>>> <a href="https://review.gluster.org/#/c/16279/5" rel="noreferrer" target="_blank">https://review.gluster.org/#/<wbr>c/16279/5</a><br>
> > > >>>>>>>><br>
> > > >>>>>>>> Please let me know what the community's plan is for providing a<br>
> > > >>>>>>>> solution to this problem, and in which version.<br>
> > > >>>>>>>><br>
> > > >>>>>>>> Regards<br>
> > > >>>>>>>> Abhishek Paliwal<br>
> > > >>>>>>>><br>
> > > >>>>>>><br>
> > > >>>>>>><br>
> > > >>>>>>><br>
> > > >>>>>>> --<br>
> > > >>>>>>><br>
> > > >>>>>>><br>
> > > >>>>>>><br>
> > > >>>>>>><br>
> > > >>>>>>> Regards<br>
> > > >>>>>>> Abhishek Paliwal<br>
> > > >>>>>>><br>
> > > >>>>>><br>
> > > >>>>>><br>
> > > >>>>><br>
> > > >>>>><br>
> > > >>>>> --<br>
> > > >>>>><br>
> > > >>>>><br>
> > > >>>>><br>
> > > >>>>><br>
> > > >>>>> Regards<br>
> > > >>>>> Abhishek Paliwal<br>
> > > >>>>><br>
> > > >>>><br>
> > > >>>><br>
> > > >>><br>
> > > >>><br>
> > > >>> --<br>
> > > >>><br>
> > > >>><br>
> > > >>><br>
> > > >>><br>
> > > >>> Regards<br>
> > > >>> Abhishek Paliwal<br>
> > > >>><br>
> > > >><br>
> > > >><br>
> > > >><br>
> > > >> --<br>
> > > >><br>
> > > >><br>
> > > >><br>
> > > >><br>
> > > >> Regards<br>
> > > >> Abhishek Paliwal<br>
> > > >><br>
> > > >> ______________________________<wbr>_________________<br>
> > > >> Gluster-devel mailing list<br>
> > > >> <a href="mailto:Gluster-devel@gluster.org">Gluster-devel@gluster.org</a><br>
> > > >> <a href="http://lists.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-devel</a><br>
> > > >><br>
> > > ><br>
> > > ><br>
> > > ><br>
> > > > --<br>
> > > > Amar Tumballi (amarts)<br>
> > > ><br>
> > ><br>
> > ><br>
> > ><br>
> > > --<br>
> > ><br>
> > ><br>
> > ><br>
> > ><br>
> > > Regards<br>
> > > Abhishek Paliwal<br>
> ><br>
> > > ______________________________<wbr>_________________<br>
> > > Gluster-devel mailing list<br>
> > > <a href="mailto:Gluster-devel@gluster.org">Gluster-devel@gluster.org</a><br>
> > > <a href="http://lists.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-devel</a><br>
> ><br>
> ><br>
><br>
><br>
> --<br>
><br>
><br>
><br>
><br>
> Regards<br>
> Abhishek Paliwal<br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><br><br><br><br>Regards<br>
Abhishek Paliwal<br>
</div></div>
</div>