[Gluster-users] add-brick and fix-layout takes some VMs offline
nmajeran at suntradingllc.com
Thu Feb 20 02:57:02 UTC 2014
Sorry for the delay -- here's what the volume looks like. It's pretty basic:
option rpc-auth-allow-insecure on
option auth.addr./mnt/bulk.allow *
option auth.login.39da7ae0-8cee-4152-a612-674c48da544e.password 62a8ff81-cd3d-4872-a9b9-bad5da242f10
option auth.login./mnt/bulk.allow 39da7ae0-8cee-4152-a612-674c48da544e
option transport-type tcp
I'll try to dig up some client logs, but the last event was over a month ago.
----- Original Message -----
From: "Vijay Bellur" <vbellur at redhat.com>
To: "Nicholas Majeran" <nmajeran at suntradingllc.com>, gluster-users at gluster.org, "Shyamsundar Ranganathan" <srangana at redhat.com>
Sent: Monday, February 17, 2014 10:16:56 AM
Subject: Re: [Gluster-users] add-brick and fix-layout takes some VMs offline
On 02/13/2014 08:52 PM, Nicholas Majeran wrote:
> Hi there,
> We have a distributed-replicated volume hosting KVM guests running
> Gluster 3.4.1.
> We've grown from 1 x 2 -> 2 x 2 -> 3 x 2, but each time we've added
> nodes or run a fix-layout,
> some of our guests go offline (or worse, with error=continue they
> error silently).
> After the last addition we didn't even run fix-layout as the guests are
> becoming increasingly important.
Would it be possible to share the client log files and your volume configuration?
> Those guests are currently using a combination of FUSE and libgfapi.
> Is there a setting or group of settings we should use to ameliorate this?
> Is FUSE or libgfapi more forgiving when add-brick or fix-layout is run?
The behavior of FUSE or libgfapi should mostly be the same with either
add-brick or fix-layout.
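For reference, the grow-and-rebalance sequence under discussion can be sketched as below. This is a sketch only; the volume name and brick paths are hypothetical stand-ins, not taken from the thread.

```shell
# Hypothetical volume name and brick paths -- substitute your own.
# Add a new replica pair to a distributed-replicated volume (e.g. 3x2 -> 4x2):
gluster volume add-brick vmstore server7:/mnt/bulk server8:/mnt/bulk

# Recompute directory layouts so new files can land on the added bricks,
# without migrating existing data (the step that coincided with the outages):
gluster volume rebalance vmstore fix-layout start

# Check progress of the fix-layout operation:
gluster volume rebalance vmstore status
```

A full `rebalance start` (without `fix-layout`) would additionally migrate existing files onto the new bricks; `fix-layout` alone only rewrites the directory layout ranges.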