[Gluster-devel] [Regression-failure] glusterd status

Krishnan Parthasarathi kparthas at redhat.com
Wed May 27 11:28:48 UTC 2015



----- Original Message -----
> I'm currently running the test in a loop on slave0. I've not had any
> failures yet.
> I'm running on commit d1ff9dead (glusterd: Fix conf->generation to
> stop new peers participating in a transaction, while the transaction
> is in progress.), Avra's fix, which was merged yesterday on master.
> 
> I made a small change to log the addresses of the peerinfo objects in
> __glusterd_peer_rpc_notify, as before. What I'm observing is that the
> change in memory address is due to glusterd being restarted during the
> test. So we can rule out duplication of peerinfos as the cause of the
> problems that were observed.

Great news! We can revisit this if we see duplicate peerinfos within a single
glusterd 'session'. For now, this is a non-issue.
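
(For anyone following along, here is a minimal, self-contained sketch of the
pointer-logging approach described above. The struct and function names are
hypothetical stand-ins for illustration only, not glusterd's actual types or
the real __glusterd_peer_rpc_notify.)

/* Sketch: log the address of each peer object whenever an RPC
 * notification arrives. Two log lines showing different addresses for
 * the same peer would indicate either a duplicated object or a freshly
 * restarted process. Names here are hypothetical, not glusterd's own. */
#include <stdio.h>
#include <string.h>

struct peerinfo {
        char hostname[64];
};

static void peer_rpc_notify(struct peerinfo *peer, const char *event)
{
        /* Logging the pointer value lets us compare addresses across
         * notifications for the same peer. */
        printf("peer_rpc_notify: peer=%p hostname=%s event=%s\n",
               (void *)peer, peer->hostname, event);
}

int main(void)
{
        struct peerinfo peer;

        memset(&peer, 0, sizeof(peer));
        strncpy(peer.hostname, "node-1", sizeof(peer.hostname) - 1);

        /* Same object notified twice: the logged addresses should match.
         * A differing address would point to a second peerinfo instance
         * (or a new process after a restart). */
        peer_rpc_notify(&peer, "CONNECT");
        peer_rpc_notify(&peer, "DISCONNECT");
        return 0;
}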

> 
> I'll keep running the test in a loop to see if I can hit any failures.
> If I get a failure, I'll debug it.

I'd leave this to you.

