[Gluster-users] timestamps getting updated during self-heal after primary brick rebuild
John Mark Walker
johnmark at redhat.com
Wed Mar 6 08:28:39 UTC 2013
A general note here: when someone posts a question and no one responds, it's generally because either no one has seen that particular behavior and they don't know how to respond, or they didn't understand what you were saying. In this case, I'd say it is the former.
----- Original Message -----
> something entirely different. We see the same behavior. After
> rebuilding the first brick in the 2-brick replicate cluster, all
> file timestamps get updated to the time self-heal copies the data
> back to that brick. This is obviously a bug in 3.3.1. We basically
> did what's described and timestamps get updated on all files. Can
> someone acknowledge that this sounds like a bug? Does anyone care?
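As an aside, the symptom described above is easy to illustrate outside of GlusterFS: a copy that doesn't preserve timestamps resets mtime to the time of the copy, which is exactly what a heal should *not* do. A minimal sketch (the paths and filenames are illustrative, not part of any real brick layout):

```shell
# Simulate two replica bricks with plain directories (hypothetical paths).
mkdir -p /tmp/brick1 /tmp/brick2

# Give the source file a known, old modification time.
touch -d '2013-01-01 00:00:00' /tmp/brick1/file

# A plain copy resets mtime to "now" -- the symptom reported above.
cp /tmp/brick1/file /tmp/brick2/file

# A timestamp-preserving copy keeps the original mtime, as a correct
# self-heal should.
cp -p /tmp/brick1/file /tmp/brick2/file

# Compare modification times on both "bricks"; they should now match.
stat -c '%n %y' /tmp/brick1/file /tmp/brick2/file
```

Checking the files directly on each brick's backend directory this way is a quick means of confirming whether the heal itself, rather than something client-side, is changing the timestamps.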
Please file a bug - after searching for any similar bugs, of course - and include the relevant information.
> Being relatively new to glusterfs, it's painful to watch the mailing
> list and even the IRC channel and see many folks ask questions met
> with nothing but silence. I honestly wasn't sure if glusterfs was
> actively being developed
Our IRC channel is one of the most active in the open source world. I'm honestly not sure which mailing lists or IRC channels you've been watching.
> anymore. Given the recent flurry of mail about the lack of
> documentation, I see that's not really true. Unfortunately, given
> that what I'm seeing is a form of data corruption (yes, timestamps
> do matter), I'm surprised no one is interested in helping figure
> out what's going wrong. Hopefully it's something about the way I've
> built out the cluster (though that seems less and less likely given
> we are able to replicate the problem so easily).
I can understand your frustration. I would be frustrated, too. However, given that I haven't heard of this problem before, I don't know how you were able to reproduce it. The best I can offer is that we'll investigate your bug report.