[Gluster-users] State of Gluster project
Mahdi Adnan
mahdi at sysmin.io
Mon Jun 22 13:13:29 UTC 2020
We had a distributed-replicated volume of 3 x 7 HDDs used for a
small-file workload with heavy IO. Because the disks were saturated
with IO, we decided to replace the bricks with SSDs, so we started
swapping the bricks one by one, and the fun started: some files lost
their extended attributes, and we had to fix them manually by removing
each affected file and its gfid hard link from the brick and copying
the file back to the volume.
This issue affected 5 of the 21 bricks.
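For anyone hitting the same thing, the manual repair looked roughly
like the sketch below. This is a minimal outline, not an exact
transcript: the brick path, mount point, file name, and gfid are all
hypothetical placeholders.

    # Hypothetical paths: brick backing dir and client mount point.
    BRICK=/bricks/brick1
    MNT=/mnt/glustervol
    F=data/file.txt

    # Inspect the brick copy's extended attributes; a healthy file
    # carries trusted.gfid plus the trusted.afr.* changelog attrs.
    getfattr -d -m . -e hex "$BRICK/$F"

    # Say trusted.gfid reads 0xd4e0...; as a UUID that is
    # d4e0xxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, and its hard link lives
    # under $BRICK/.glusterfs/d4/e0/<uuid>. Remove the file and that
    # hard link from the brick:
    rm "$BRICK/$F"
    rm "$BRICK/.glusterfs/d4/e0/d4e0xxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

    # Copy the file back in through the client mount so Gluster
    # recreates the gfid and replication metadata cleanly:
    cp /safe/backup/file.txt "$MNT/$F"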
On another volume, we had a disk failure, and during the replace-brick
process the mount point on one of the clients crashed.
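For reference, brick replacement in current Gluster uses the one-step
replace-brick flow; a minimal sketch, with hypothetical volume and
brick names:

    # Replace the failed HDD brick with an SSD brick in one step;
    # the self-heal daemon then rebuilds the new brick from replicas.
    gluster volume replace-brick myvol \
        server1:/bricks/hdd1/brick server1:/bricks/ssd1/brick \
        commit force

    # Wait for pending heals to drain before touching the next brick:
    gluster volume heal myvol info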
On Mon, Jun 22, 2020 at 10:55 AM Gionatan Danti <g.danti at assyoma.it> wrote:
> On 2020-06-21 20:41, Mahdi Adnan wrote:
> > Hello Gionatan,
> >
> > Using Gluster bricks in a RAID configuration might be safer and
> > require less work from Gluster admins, but it wastes disk space.
> > Gluster bricks are replicated (assuming you are creating a
> > distributed-replica volume), so when a brick goes down it should be
> > easy to recover and should not affect the clients' IO.
> > We use JBOD in all of our Gluster setups; overall, performance is
> > good, and replacing a brick works "most" of the time without
> > issues.
>
> Hi Mahdi,
> thank you for the report. I am interested in the "most of the time
> without issues" statement. Can you elaborate on what happened the few
> times when it did not work correctly?
>
> Thanks.
>
> --
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it [1]
> email: g.danti at assyoma.it - info at assyoma.it
> GPG public key ID: FF5F32A8
>
--
Respectfully
Mahdi