[Gluster-users] add/replace brick corrupting data

WK wkmail at bneit.com
Tue May 17 00:02:17 UTC 2016

That should be an important clue.

That being said, when we lose a brick, we've traditionally just live-migrated
those VMs off onto other clusters, because we didn't want to take the heal
hit, which at best slowed down our VMs and on the pickier ones caused them to
RO (remount read-only) out.

We have not upgraded to 3.7.x yet (still on 3.4, because it ain't broke) and
are hoping that sharding solves that problem. But it seems that every time
things look 'safe' for 3.7.x, something comes up.
Fortunately, we like the fuse mount, so maybe we are still OK.
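For anyone following along, here is a rough sketch of what the sharding +
fuse-mount setup being discussed might look like. The volume name, server
name, and mount point are made up for illustration; features.shard only
exists from 3.7 onward, so none of this applies to a 3.4 install:

```shell
# Enable sharding on an existing volume (GlusterFS 3.7+ only).
# Note: only files created AFTER this is set get sharded;
# pre-existing VM images are not re-sharded in place.
gluster volume set myvol features.shard on
gluster volume set myvol features.shard-block-size 64MB

# Mount via FUSE rather than gfapi. The client log then lands in
# /var/log/glusterfs/ (named after the mount path, e.g.
# mnt-vm-images.log), and the self-heal daemon writes to
# /var/log/glusterfs/glustershd.log on each server -- the two logs
# requested later in this thread.
mount -t glusterfs server1:/myvol /mnt/vm-images
```

Smaller shard blocks mean heals touch only the changed shards of a VM image
instead of the whole file, which is the heal-hit problem described above.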


On 5/16/2016 4:42 AM, Lindsay Mathieson wrote:
> OK, this is probably an interesting data point. I was unable to 
> reproduce the problem when using the fuse mount.
> It's late here, so I might not have time to repeat with the gfapi, but I 
> will tomorrow.
> On 16/05/2016 4:55 PM, Krutika Dhananjay wrote:
>> Yes, that would probably be useful in terms of at least having access 
>> to the client logs.
>> -Krutika
>> On Mon, May 16, 2016 at 12:18 PM, Lindsay Mathieson 
>> <lindsay.mathieson at gmail.com> wrote:
>>     On 16 May 2016 at 16:46, Krutika Dhananjay <kdhananj at redhat.com>
>>     wrote:
>>     > Could you share the mount and glustershd logs for investigation?
>>     Can do, though it's via gfapi rather than the fuse mount.
>>     If I can replicate the problem off the fuse mount, would that be
>>     more useful?
>>     --
>>     Lindsay
> -- 
> Lindsay Mathieson
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
