[Gluster-users] Wrong volume size for distributed dispersed volume on 4.1.5
Nithya Balachandran
nbalacha at redhat.com
Tue Oct 16 14:46:50 UTC 2018
On 16 October 2018 at 20:04, <jring at mail.de> wrote:
> Hi,
>
> > > So we did a quick grep shared-brick-count /var/lib/glusterd/vols/data_vol1/*
> > > on all boxes and found that on 5 out of 6 boxes this was
> > > shared-brick-count=0 for all bricks on remote boxes and 1 for local bricks.
> > >
> > > Is this the expected result or should we have all 1 everywhere (as the
> > > quick fix script from the case sets it)?
> >
> > No, this is fine. The shared-brick-count only needs to be 1 for the
> > local bricks. The value for the remote bricks can be 0.
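> >
> > For reference, a healthy node's grep (using the data_vol1 path from
> > above; the volfile names below are illustrative) looks something like:
> >
> >   # grep shared-brick-count /var/lib/glusterd/vols/data_vol1/*
> >   data_vol1.node1.data-brick1.vol:    option shared-brick-count 1
> >   data_vol1.node2.data-brick1.vol:    option shared-brick-count 0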
> >
> > > Also on one box (the one we created the volume from, btw) we
> > > have shared-brick-count=0 for all remote bricks and 10 for the local bricks.
> >
> > This is a problem. The shared-brick-count should be 1 for the local
> > bricks here as well.
> >
> > > Is it possible that the bug from 3.4 still exists in 4.1.5 and should
> > > we try the filter script which sets shared-brick-count=1 for all bricks?
> > >
> >
> > Can you try
> > 1. restarting glusterd on all the nodes one after another (not at the
> > same time)
> > 2. setting a volume option (say gluster volume set <volname>
> > cluster.min-free-disk 11%)
> >
> > and see if it fixes the issue?
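> >
> > A minimal sketch of both steps, assuming systemd and using the
> > data_vol1 volume name from your grep:
> >
> >   # on each node, one at a time:
> >   systemctl restart glusterd
> >
> >   # then, from any one node, set an option to regenerate the volfiles:
> >   gluster volume set data_vol1 cluster.min-free-disk 11%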
>
> Hi,
>
> OK, that was a quick fix: the volume size is correct again and the
> shared-brick-count is correct everywhere.
>
> We'll duly note this in our wiki.
>
> Thanks a lot!
>
If any directories were created on the volume while the sizes were
wrong, the layouts set on them are probably incorrect. You might want
to run a fix-layout on the volume.
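
A fix-layout is started via the rebalance command; for example, with
the volume name from your grep:

  gluster volume rebalance data_vol1 fix-layout start

  # check progress with:
  gluster volume rebalance data_vol1 status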
Regards,
Nithya
>
> Joachim