<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p>Hi Aravinda,<br>
    </p>
    <p>Interesting, I had no idea you were trying to do this.</p>
    <p>We've used Gluster since the v3 days and have had few problems
      over the years (well, performance issues, but there are ways of
      dealing with those to a certain extent). We have no short-term
      plans to migrate away from Gluster but are obviously concerned
      about the lack of visible activity in the project.</p>
    <p>Hopefully the companies who have built products on Gluster can
      come together and share the load, and those using Gluster across
      their systems can help, either financially or technically, to
      support that.</p>
    <p>It would be a real shame to see the project abandoned.</p>
    <p>Ronny<br>
    </p>
    <p><br>
    </p>
    <div class="moz-cite-prefix">Aravinda wrote on 27/10/2023 10:22:<br>
    </div>
    <blockquote type="cite"
      cite="mid:18b7071a49e.490ac05b296486.5586940550425421035@kadalu.tech">
      <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
      <div style="font-family: Verdana, Arial, Helvetica, sans-serif;
        font-size: 10pt;">
        <div>It is very unfortunate that Gluster is not maintained. At
          Kadalu Technologies, we are trying to set up a small team
          dedicated to maintaining GlusterFS for the next three years.
          This will only be possible if we get funding from the
          community and from companies. The details of the proposal are
          here: <a target="_blank" data-zeanchor="true"
            href="https://kadalu.tech/gluster/" moz-do-not-send="true">https://kadalu.tech/gluster/</a><br>
        </div>
        <div><br>
        </div>
        <div><b>About Kadalu Technologies</b>: Kadalu Technologies was
          started in 2019 by a few Gluster maintainers to provide
          persistent storage for applications running in Kubernetes.
          The solution (<a target="_blank" data-zeanchor="true"
            href="https://github.com/kadalu/kadalu"
            moz-do-not-send="true">https://github.com/kadalu/kadalu</a>)
          is based on GlusterFS but doesn't use the Glusterd management
          layer (it integrates natively with the Kubernetes APIs).
          Kadalu Technologies also maintains many of the GlusterFS
          tools, such as
          gdash (<a target="_blank" data-zeanchor="true"
            href="https://github.com/kadalu/gdash"
            moz-do-not-send="true">https://github.com/kadalu/gdash</a>),
          gluster-metrics-exporter (<a target="_blank"
            data-zeanchor="true"
            href="https://github.com/kadalu/gluster-metrics-exporter"
            moz-do-not-send="true">https://github.com/kadalu/gluster-metrics-exporter</a>)
          etc.<br>
        </div>
        <div><br>
        </div>
        <div><br>
        </div>
        <div id="Zm-_Id_-Sgn" data-zbluepencil-ignore="true"
          data-sigid="3848334000000010003">
          <div>Aravinda<br>
          </div>
        </div>
        <div><a class="moz-txt-link-freetext" href="https://kadalu.tech">https://kadalu.tech</a></div>
        <div class="zmail_extra_hr" style="border-top: 1px solid
          rgb(204, 204, 204); height: 0px; margin-top: 10px;
          margin-bottom: 10px; line-height: 0px;"><br>
        </div>
        <div class="zmail_extra" data-zbluepencil-ignore="true">
          <div><br>
          </div>
          <div id="Zm-_Id_-Sgn1">---- On Fri, 27 Oct 2023 14:21:35 +0530
            <b>Diego Zuccato <a class="moz-txt-link-rfc2396E" href="mailto:diego.zuccato@unibo.it"><diego.zuccato@unibo.it></a></b> wrote
            ---<br>
          </div>
          <div><br>
          </div>
          <blockquote id="blockquote_zmail" style="margin: 0px;">
            <div>Maybe a bit OT...<br>
              <br>
              I'm no expert on either, but the concepts are quite
              similar.<br>
              Both require "extra" nodes (metadata and monitor), but
              those can be <br>
              virtual machines or you can host the services on OSD
              machines.<br>
              <br>
              We don't use snapshots, so I can't comment on that.<br>
              <br>
              My experience with Ceph is limited to having it working on
              Proxmox. No <br>
              experience yet with CephFS.<br>
              <br>
              BeeGFS is more like a "freemium" FS: the base
              functionality is free, but<br>
              if you need "enterprise" features (quota, replication...)
              you have to<br>
              pay (quite a lot... probably priced so as not to undercut
              lucrative GPFS licensing).<br>
              <br>
              We also saw more than 30 minutes for an ls on a Gluster
              directory<br>
              containing about 50 files when we had many millions of
              files on the fs<br>
              (with one disk per brick, which also led to many memory
              issues). After the<br>
              last rebuild I created 5-disk RAID5 bricks (about 44TB
              each) and memory<br>
              pressure went down drastically, but desyncs still happen
              even though the<br>
              nodes are connected via IPoIB links that are really
              rock-solid (and in<br>
              the worst case they could fall back to 1Gbps Ethernet
              connectivity).<br>
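              <br>
              For anyone chasing similarly slow listings, a minimal
              sketch of the readdir tunables that sometimes help
              ("myvol" is a placeholder volume name; the effect varies
              a lot by release and workload, so test outside production
              first):<br>
              <pre>
# Check the current values before changing anything
gluster volume get myvol performance.readdir-ahead
gluster volume get myvol performance.parallel-readdir

# Enabling these has helped some deployments with large directory
# listings; treat it as an experiment, not a guaranteed fix
gluster volume set myvol performance.readdir-ahead on
gluster volume set myvol performance.parallel-readdir on
              </pre>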
              <br>
              Diego<br>
              <br>
              On 27/10/2023 10:30, Marcus Pedersén wrote:<br>
              > Hi Diego,<br>
              > I have had a look at BeeGFS and it seems more similar<br>
              > to Ceph than to Gluster. It requires extra management<br>
              > nodes, similar to Ceph, right?<br>
              > Secondly, there are no snapshots in BeeGFS, as<br>
              > I understand it.<br>
              > I know Ceph has snapshots, so for us this seems a<br>
              > better alternative. What is your experience of Ceph?<br>
              > <br>
              > I am sorry to hear about your problems with Gluster;<br>
              > in my experience we had quite some issues with
              Gluster<br>
              > when it was "young". I think the first version we
              installed<br>
              > was 3.5 or so. It was also extremely slow; an ls took
              forever.<br>
              > But later versions have been "kind" to us and worked
              quite well,<br>
              > and file access has become really comfortable.<br>
              > <br>
              > Best regards<br>
              > Marcus<br>
              > <br>
              > On Fri, Oct 27, 2023 at 10:16:08AM +0200, Diego
              Zuccato wrote:<br>
              >><br>
              >><br>
              >> Hi.<br>
              >><br>
              >> I'm also migrating to BeeGFS and CephFS
              (depending on usage).<br>
              >><br>
              >> What I liked most about Gluster was that files
              were easily recoverable<br>
              >> from the bricks even in case of disaster, and
              that it said it supported RDMA.<br>
              >> But I soon found that RDMA was being phased out,
              and I always find<br>
              >> entries that are not healing after a couple of
              months of (not really heavy)<br>
              >> use, directories that can't be removed because
              not all files have been<br>
              >> deleted from all the bricks, and files or
              directories that become<br>
              >> inaccessible for no apparent reason.<br>
              >> Given that I currently have 3 nodes with 30 12TB
              disks each in replica 3<br>
              >> arbiter 1, it has become a major showstopper: I
              can't stop production, back up<br>
              >> everything and restart from scratch every 3-4
              months. And there are no<br>
              >> tools to help, just log digging :( Even at
              version 9.6 it seems it's not<br>
              >> really "production ready"... More like v0.9.6
              IMVHO. And now that it's EOLed,<br>
              >> things are way worse.<br>
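              <br>
              Since the complaint above is "no tools, just log
              digging", a minimal sketch of the usual first checks for
              entries stuck in heal ("myvol" and the brick path are
              placeholders):<br>
              <pre>
# List entries pending heal, per brick
gluster volume heal myvol info

# Condensed per-brick counts (available in recent releases)
gluster volume heal myvol info summary

# Inspect the replication (AFR) xattrs of a stuck file directly on a
# brick to see which copies are marked pending; run this on the server
# hosting that brick
getfattr -d -m . -e hex /bricks/brick1/brick/path/to/file
              </pre>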
              >><br>
              >> Diego<br>
              >><br>
              >> On 27/10/2023 09:40, Zakhar Kirpichenko
              wrote:<br>
              >>> Hi,<br>
              >>><br>
              >>> Red Hat Gluster Storage is EOL and Red Hat
              moved Gluster devs to other<br>
              >>> projects, so Gluster doesn't get much
              attention. In my experience it<br>
              >>> has deteriorated since about version 9.0, and
              we're migrating to<br>
              >>> alternatives.<br>
              >>><br>
              >>> /Z<br>
              >>><br>
              >>> On Fri, 27 Oct 2023 at 10:29, Marcus Pedersén
              <<a href="mailto:marcus.pedersen@slu.se"
                target="_blank" moz-do-not-send="true">marcus.pedersen@slu.se</a>>
              wrote:<br>
              >>><br>
              >>> Hi all,<br>
              >>> I just have a general thought about the
              Gluster<br>
              >>> project.<br>
              >>> I have got the feeling that things have
              slowed down<br>
              >>> in the Gluster project.<br>
              >>> I have had a look at GitHub and to me the
              project<br>
              >>> seems to be slowing down: for Gluster version
              11 there have<br>
              >>> been no minor releases, we are still on 11.0
              and I have<br>
              >>> not found any references to 11.1.<br>
              >>> There is a milestone called 12 but it seems
              to be<br>
              >>> stale.<br>
              >>> I have hit the issue:<br>
              >>> <a
                href="https://github.com/gluster/glusterfs/issues/4085"
                target="_blank" moz-do-not-send="true">https://github.com/gluster/glusterfs/issues/4085</a><br>
              >>> that seems to have no solution.<br>
              >>> I noticed when version 11 was released that
              you<br>
              >>> could not bump the op-version to 11 and
              reported this,<br>
              >>> but a fix is still not available.<br>
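              <br>
              For context, a minimal sketch of the op-version check and
              bump being referred to (110000 would be the value
              corresponding to 11.0; this assumes every node has
              already been upgraded):<br>
              <pre>
# Highest op-version the installed binaries can support
gluster volume get all cluster.max-op-version

# Op-version the cluster is currently running at
gluster volume get all cluster.op-version

# Bump after all nodes are upgraded; per issue 4085 this step
# reportedly cannot reach 110000 on v11 yet
gluster volume set all cluster.op-version 110000
              </pre>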
              >>><br>
              >>> I am just wondering if I am missing something
              here?<br>
              >>><br>
              >>> We have been using Gluster for many years in
              production<br>
              >>> and I think that Gluster is great!! It has
              served us well over<br>
              >>> the years and we have seen some great
              improvements<br>
              >>> in stability and speed.<br>
              >>><br>
              >>> So is there something going on or have I got<br>
              >>> the wrong impression (and feeling)?<br>
              >>><br>
              >>> Best regards<br>
              >>> Marcus<br>
              >><br>
              >> --<br>
              >> Diego Zuccato<br>
              >> DIFA - Dip. di Fisica e Astronomia<br>
              >> Servizi Informatici<br>
              >> Alma Mater Studiorum - Università di Bologna<br>
              >> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy<br>
              >> tel.: +39 051 20 95786<br>
              <br>
              -- <br>
              Diego Zuccato<br>
              DIFA - Dip. di Fisica e Astronomia<br>
              Servizi Informatici<br>
              Alma Mater Studiorum - Università di Bologna<br>
              V.le Berti-Pichat 6/2 - 40127 Bologna - Italy<br>
              tel.: +39 051 20 95786<br>
            </div>
          </blockquote>
        </div>
        <div><br>
        </div>
      </div>
      <br>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <pre class="moz-quote-pre" wrap="">________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: <a class="moz-txt-link-freetext" href="https://meet.google.com/cpu-eiue-hvk">https://meet.google.com/cpu-eiue-hvk</a>
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a>
</pre>
    </blockquote>
    <div class="moz-signature">-- <br>
      <pre>Ronny Adsetts
Technical Director
Amazing Internet Ltd, London
t: +44 20 8977 8943
w: <a class="moz-txt-link-abbreviated" href="http://www.amazinginternet.com">www.amazinginternet.com</a>

Registered office: 85 Waldegrave Park, Twickenham, TW1 4TJ
Registered in England. Company No. 4042957
</pre>
    </div>
  </body>
</html>