[Gluster-users] State of Gluster project

Gionatan Danti g.danti at assyoma.it
Sun Jun 21 07:53:10 UTC 2020


On 2020-06-21 01:26, Strahil Nikolov wrote:
> The effort is far less than reconstructing the disk of a VM from
> Ceph. In Gluster, just run a find on the brick searching for the
> name of the VM disk and you will find VM_IMAGE.xyz (where xyz is
> just a number), then concatenate the list into a single file.
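The reassembly the quote describes can be sketched as follows. This is a toy illustration only: real Gluster shards live under the brick's hidden `.shard/` directory and are named by GFID rather than by the image name, so the file names and the `$BRICK` directory below are stand-ins, not actual Gluster paths.

```shell
# Hypothetical sketch of shard reassembly; file names are stand-ins.
BRICK=$(mktemp -d)
cd "$BRICK"

# Simulate a base file plus numbered shard pieces of a VM image.
printf 'part0-' > vm.img
printf 'part1-' > vm.img.1
printf 'part2'  > vm.img.2

# Reassemble: the base file first, then the numbered shards in order.
cat vm.img > vm.img.recovered
ls vm.img.[0-9]* | sort -t. -k3 -n | xargs cat >> vm.img.recovered

cat vm.img.recovered   # -> part0-part1-part2
```

The numeric sort matters: a plain lexical sort would place `.10` before `.2` and silently corrupt the reassembled image.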

Sure, but it is somewhat impractical with a 6 TB fileserver image and 
500 users screaming for their files ;)

And I fully expect the reconstruction to be much easier than with Ceph,
but from what I read, Ceph is less likely to break in the first place.
That said, I admit I have never seriously run a Ceph cluster, so maybe
it is more fragile than I expect.

> That's true, but you could also use NFS Ganesha, which is
> more performant than FUSE and just as reliable.

From this very list I have read about many users with various problems
when using NFS Ganesha. Is that a wrong impression?

> It's not so hard to do it - just use either 'reset-brick' or
> 'replace-brick'.

Sure - the command itself is simple enough. The point is that each
reconstruction is riskier than a simple RAID rebuild. Do you run a full
Gluster SDS, skipping RAID? How have you found that setup?
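For anyone following along, the commands the quote refers to look roughly like the sketch below. The volume name (MYVOL), hostname (server1), and brick paths are placeholders, and exact syntax can vary between Gluster versions, so check `gluster volume help` before running anything:

```shell
# Replace a failed brick with a brick at a new path (placeholders throughout):
gluster volume replace-brick MYVOL \
    server1:/bricks/old_brick server1:/bricks/new_brick \
    commit force

# Or reuse the same brick path after swapping the disk:
gluster volume reset-brick MYVOL server1:/bricks/brick1 start
# ... replace the disk, recreate and mount the filesystem ...
gluster volume reset-brick MYVOL \
    server1:/bricks/brick1 server1:/bricks/brick1 \
    commit force

# Self-heal then repopulates the new brick from the surviving replicas:
gluster volume heal MYVOL full
```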

Thanks.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it [1]
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8
