[Gluster-users] Continue to work in "degraded mode" (missing brick)

Nithya Balachandran nbalacha at redhat.com
Thu Aug 8 08:44:04 UTC 2019


Hi,

This is the expected behaviour for a pure distribute volume. Each file
name is hashed to exactly one brick, and files that hash to a brick that
is down will not be created. This prevents consistency problems in case a
file with the same name already exists on the offline brick.
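
For example (the brick path below is just an illustration), you can see
the slice of the hash space each brick owns by reading the DHT layout
xattr on a directory inside the brick, on the server hosting it:

    getfattr -n trusted.glusterfs.dht -e hex /bricks/b1/somedir

A new file whose name hashes into the slice owned by the offline brick
has nowhere to go, so the create fails.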

To avoid this, please use a distributed-replicate volume instead.
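
As a sketch (host and brick names are hypothetical), a 2x2
distributed-replicate volume can be created with:

    gluster volume create gv0 replica 2 \
        server1:/bricks/b1 server2:/bricks/b2 \
        server3:/bricks/b3 server4:/bricks/b4

Each file still hashes to one distribute subvolume, but that subvolume is
now a replica pair, so a single brick going down no longer blocks
creates. Replica 3 (or replica 3 arbiter 1) is generally recommended over
replica 2 to avoid split-brain.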

Regards,
Nithya

On Thu, 8 Aug 2019 at 13:17, Nux! <nux at li.nux.ro> wrote:

> Sorry, I meant to say distributed, not replicated!
> I'm on 6.4 from the CentOS 7 SIG.
>
> I was hoping the volume might still be fully usable write-wise, with
> files going on the remaining bricks, but it doesn't seem to be the case.
>
> ---
> Sent from the Delta quadrant using Borg technology!
>
> On 2019-08-08 06:54, Ravishankar N wrote:
> > On 07/08/19 9:53 PM, Nux! wrote:
> >> Hello,
> >>
> >> I'm testing a replicated volume with 3 bricks. I've killed a brick,
> >> but the volume is still mounted; I can see the files from the bricks
> >> that are still online and can do operations on them.
> >> What I cannot do is create new files in the volume, e.g.:
> >>
> >> dd: failed to open ‘test1000’: Transport endpoint is not connected
> >>
> >>
> >> Is there a way to make this volume continue to work while one of the
> >> bricks is offline? There is still space available in the remaining
> >> bricks, shouldn't it try to use it?
> >
> > If 2 bricks are online and the clients are connected to them, writes
> > should work, unless the brick that was down was the only good copy,
> > i.e. the only one that successfully witnessed all previous writes.
> > What version of gluster are you using? Check the mount log for more
> > details.
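> >
> > For instance (assuming a FUSE mount at /mnt/gv0; adjust the volume
> > name and path to yours):
> >
> >     glusterfs --version
> >     gluster volume status <volname> clients
> >     less /var/log/glusterfs/mnt-gv0.log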
> >
> >>
> >> Regards
> >>