[Gluster-users] State of the gluster project

Diego Zuccato diego.zuccato at unibo.it
Fri Oct 27 08:51:35 UTC 2023


Maybe a bit OT...

I'm no expert on either, but the concepts are quite similar.
Both require "extra" nodes (metadata servers for BeeGFS, monitors for Ceph),
but those can be virtual machines, or you can host the services on the OSD
machines.

We don't use snapshots, so I can't comment on that.

My experience with Ceph is limited to having it working on Proxmox. No 
experience yet with CephFS.

BeeGFS is more like a "freemium" FS: the base functionality is free, but 
if you need "enterprise" features (quotas, replication...) you have to 
pay (quite a lot... probably priced so as not to undercut lucrative GPFS 
licensing).

We also saw more than 30 minutes for an ls on a Gluster directory 
containing about 50 files when we had many millions of files on the fs 
(with one disk per brick, which also led to many memory issues). After 
the last rebuild I created 5-disk RAID5 bricks (about 44TB each) and memory 
pressure went down drastically, but desyncs still happen even though the 
nodes are connected via IPoIB links that are really rock-solid (and in 
the worst case they could fall back to 1Gbps Ethernet connectivity).
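
In case it helps anyone in the same boat, this is roughly how we keep an 
eye on the heal backlog after a desync, using the standard gluster CLI 
("bigvol" is just a placeholder volume name):

   # per-brick summary of entries still pending heal
   gluster volume heal bigvol info summary
   # full list of files/gfids waiting to be healed
   gluster volume heal bigvol info
   # entries in actual split-brain
   gluster volume heal bigvol info split-brain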

Diego

On 27/10/2023 10:30, Marcus Pedersén wrote:
> Hi Diego,
> I have had a look at BeeGFS and it seems more similar
> to Ceph than to Gluster. It requires extra management
> nodes, similar to Ceph, right?
> Secondly, there are no snapshots in BeeGFS, as
> I understand it.
> I know Ceph has snapshots, so for us this seems a
> better alternative. What is your experience with Ceph?
> 
> I am sorry to hear about your problems with Gluster.
> In my experience we had quite a few issues with Gluster
> when it was "young"; I think the first version we installed
> was 3.5 or so. It was also extremely slow, an ls took forever.
> But later versions have been "kind" to us and worked quite well,
> and file access has become really comfortable.
> 
> Best regards
> Marcus
> 
> On Fri, Oct 27, 2023 at 10:16:08AM +0200, Diego Zuccato wrote:
>>
>> Hi.
>>
>> I'm also migrating to BeeGFS and CephFS (depending on usage).
>>
>> What I liked most about Gluster was that files were easily recoverable
>> from the bricks even in case of disaster, and that it claimed to support
>> RDMA. But I soon found that RDMA was being phased out, and after a couple
>> of months of (not really heavy) use I always find entries that are not
>> healing, directories that can't be removed because not all files have been
>> deleted from all the bricks, and files or directories that become
>> inaccessible for no apparent reason.
>> Given that I currently have 3 nodes with 30 12TB disks each in a replica 3
>> arbiter 1 setup, it's become a major showstopper: I can't stop production,
>> back up everything and restart from scratch every 3-4 months. And there are
>> no tools to help, just log digging :( Even at version 9.6 it seems it's not
>> really "production ready"... more like v0.9.6 IMVHO. And it now being
>> EOLed makes it way worse.
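>>
>> For anyone doing the same log digging: the replication state of a single
>> file can also be inspected directly on a brick with getfattr (the path
>> below is just an example, not one from our setup):
>>
>>    # run on a brick server; dumps the Gluster xattrs of the file
>>    getfattr -d -m . -e hex /bricks/brick01/path/to/file
>>    # non-zero trusted.afr.* counters mean heals are still pending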
>>
>> Diego
>>
>> On 27/10/2023 09:40, Zakhar Kirpichenko wrote:
>>> Hi,
>>>
>>> Red Hat Gluster Storage is EOL, Red Hat moved Gluster devs to other
>>> projects, so Gluster doesn't get much attention. From my experience, it
>>> has deteriorated since about version 9.0, and we're migrating to
>>> alternatives.
>>>
>>> /Z
>>>
>>> On Fri, 27 Oct 2023 at 10:29, Marcus Pedersén <marcus.pedersen at slu.se> wrote:
>>>
>>>      Hi all,
>>>      I just have a general thought about the Gluster
>>>      project.
>>>      I have got the feeling that things have slowed down
>>>      in the Gluster project.
>>>      I have had a look at GitHub, and to me the project
>>>      seems to be slowing down: for Gluster version 11 there have
>>>      been no minor releases, we are still on 11.0 and I have
>>>      not found any references to 11.1.
>>>      There is a milestone called 12 but it seems to be
>>>      stale.
>>>      I have hit this issue:
>>>      https://github.com/gluster/glusterfs/issues/4085
>>>      and it seems to have no solution.
>>>      I noticed when version 11 was released that you
>>>      could not bump the op-version to 11 and reported this,
>>>      but it is still not available.
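>>>      (For reference, this is how the op-version is usually checked and
>>>      bumped with the gluster CLI; 110000 is presumably the value that
>>>      would correspond to version 11:
>>>        gluster volume get all cluster.max-op-version   # highest supported
>>>        gluster volume get all cluster.op-version       # currently active
>>>        gluster volume set all cluster.op-version 110000
>>>      )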
>>>
>>>      I am just wondering if I am missing something here?
>>>
>>>      We have been using Gluster for many years in production
>>>      and I think that Gluster is great!! It has served us well over
>>>      the years and we have seen some great improvements
>>>      in stability and speed.
>>>
>>>      So is there something going on or have I got
>>>      the wrong impression (and feeling)?
>>>
>>>      Best regards
>>>      Marcus
>>
>> --
>> Diego Zuccato
>> DIFA - Dip. di Fisica e Astronomia
>> Servizi Informatici
>> Alma Mater Studiorum - Università di Bologna
>> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
>> tel.: +39 051 20 95786
>> ________
>>
>>
>>
>> Community Meeting Calendar:
>>
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://meet.google.com/cpu-eiue-hvk
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users

-- 
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786

