[Gluster-users] timestamps getting updated during self-heal after primary brick rebuild

Joe Julian joe at julianfamily.org
Wed Mar 6 19:26:46 UTC 2013


If that was said (not saying it wasn't; I just didn't notice it), then
my guess would be just the opposite. The team's large enough that the
PM assigns bugs to the right person, who manages their portion of the
program. If someone hasn't seen a bug, it's probably because they're
not part of that assignment.

<IMHO>
Someone complaining to a *user* mailing list about a problem is not a
bug report, nor should it be. If follow-up information is needed, it's
important that the bug's assignee, the person reporting the bug, and
anyone else tracking that bug be kept informed. Bug trackers are the
critical tool that keeps development organized.

If, in a community project (don't forget, gluster.org is the upstream
community organization that's ultimately in charge of this
development), someone feels it's important to file bugs based on
emails to a user list, then a community member should take that role
on.
</IMHO>

JMW, by the way, is the community's advocate to Red Hat.

On 03/06/2013 06:28 AM, Whit Blauvelt wrote:
> A small question: we know that one or two members of the dev team
> read these emails. One said just yesterday that he's more likely to
> see emails than bug reports. Now, sometimes the response to an email
> showing an obvious bug is "File a bug report, please." But for
> something like this - yes, timestamps are data, so this is a serious
> bug - it would be a healthy thing if someone on the dev team made a
> point of both acknowledging that it's a bug and taking responsibility
> for making sure the bug report is filed and assigned to the right
> people, whether or not the email writer has taken that step.
>
> If the team's too small to follow through like this, is someone
> advocating with Red Hat for more staff? They've made a large
> investment in Gluster, which they might want to protect by fully
> staffing it. If staffing is too thin, that's the fault of the firm,
> not of the current project staff.
>
> Apologies if these reflections are out of place in a community discussion.
> But it's in the community's interest that Red Hat succeeds and profits from
> its Gluster purchase.
>
> Best,
> Whit
>
> On Wed, Mar 06, 2013 at 03:28:39AM -0500, John Mark Walker wrote:
>> Hi Todd,
>>
>> A general note here: when someone posts a question and no one
>> responds, it's generally because either no one has seen that
>> particular behavior and they don't know how to respond, or they
>> didn't understand what you were saying. In this case, I'd say it's
>> the former.
>>
>> ----- Original Message -----
>>> something entirely different.  We see the same behavior.  After
>>> rebuilding the first brick in the 2-brick replicate cluster, all
>>> file timestamps get updated to the time self-heal copies the data
>>> back to that brick.
>>>
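
A quick way to pin down drift like this is to snapshot each brick's
mtimes and diff the results across the replicas. Below is a minimal
sketch; the brick root /bricks/brick1 and the script name
mtime_manifest.py are just placeholders, not anything from the
original report. It skips Gluster's internal .glusterfs directory and
prints one line per file so the output from the two servers can be
compared with diff:

    import os
    import sys
    import time

    # Usage: python mtime_manifest.py /bricks/brick1 > manifest.txt
    # Run the same command against each replica's brick root, then
    # diff the two manifests. The brick path is a placeholder; use
    # your own.
    root = sys.argv[1]
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune Gluster's metadata directory and keep a stable order
        # so both manifests line up.
        dirnames[:] = sorted(d for d in dirnames if d != '.glusterfs')
        for name in sorted(filenames):
            full = os.path.join(dirpath, name)
            st = os.lstat(full)  # lstat: don't follow symlinks
            stamp = time.strftime('%Y-%m-%d %H:%M:%S',
                                  time.gmtime(st.st_mtime))
            print('%s  %s' % (stamp, os.path.relpath(full, root)))

If the healed brick's manifest shows every file stamped at roughly the
time the heal ran, while the surviving brick shows the original dates,
that's the reported bug in action.
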
>>> This is obviously a bug in 3.3.1.  We basically did what's
>>> described here:
>>>
>>>    http://gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server
>>>
>>> and timestamps get updated on all files.  Can someone acknowledge
>>> that this sounds like a bug?  Does anyone care?
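
For what it's worth, the symptom is exactly what you'd see if file
data were copied without the source's timestamps being restored
afterwards. The sketch below is an illustration only, not GlusterFS
internals; the helper name copy_preserving_times is made up. It just
shows that preserving mtime across a copy takes an explicit extra
step:

    import os
    import shutil

    # Illustration only -- not GlusterFS code. Writing the destination
    # stamps it with the current time; the source's atime/mtime have
    # to be put back explicitly afterwards.
    def copy_preserving_times(src, dst):
        with open(src, 'rb') as fin, open(dst, 'wb') as fout:
            shutil.copyfileobj(fin, fout)  # dst mtime is now "now"
        st = os.stat(src)
        os.utime(dst, (st.st_atime, st.st_mtime))  # restore times

A heal that stops after the data copy, skipping that final step (or
its equivalent), would produce precisely the behavior described above.
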
>> Please file a bug and include the relevant information at
>>     https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
>>
>> - after searching for any similar bugs, of course.
>>
>>> Being relatively new to glusterfs, it's painful to watch the
>>> mailing list and even the IRC channel and see many folks ask
>>> questions with nothing but silence.  I honestly wasn't sure if
>>> glusterfs was actively being supported
>> ??? Our IRC channel is one of the most active in the open source
>> world. I'm honestly not sure what mailing lists or IRC channels
>> you've been watching.
>>
>>
>>> anymore.  Given the recent flurry of mail about the lack of
>>> documentation, I see that's not really true.  Unfortunately, given
>>> that what I'm seeing is a form of data corruption (yes, timestamps
>>> do matter), I'm surprised nobody's interested in helping figure out
>>> what's going wrong.  Hopefully it's something about the way I've
>>> built out the cluster (though that seems less and less likely given
>>> we are able to replicate the problem so easily).
>> I can understand your frustration; I would be frustrated, too.
>> However, given that I haven't heard of this problem before, I don't
>> know how you were able to reproduce it. The best I can offer is that
>> we'll investigate your bug report.
>>
>> Thanks,
>> JM
>>



