<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p>Hi,</p>
    <p>The problem is files generated by WordPress, uploads, and so on;
      copying them to the frontend hosts, whilst making perfect sense,
      assumes I have control over the code to stop it writing to the
      local front-end, else we could have relied on something like
      lsyncd.</p>
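    <p>To illustrate: if we did control the code, a one-way push from a
      single writable copy to each frontend could be as simple as the
      below (just a sketch; the hostnames and paths are made up, not from
      our setup):</p>
    <pre># mirror the web root to each frontend over ssh as changes happen
# (lsyncd's rsync-over-ssh mode; one instance per target host)
lsyncd -rsyncssh /var/www/site web01.example.com /var/www/site
lsyncd -rsyncssh /var/www/site web02.example.com /var/www/site</pre>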
    <p>As it stands, performance is acceptable with nl-cache enabled,
      but the fact that we get those ENOENT errors is highly
      problematic.<br>
    </p>
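    <p>(In case it helps anyone reproduce this: nl-cache is toggled per
      volume, e.g. as below for our volume gv_home.)</p>
    <pre># disable the negative-lookup cache to avoid the ENOENT behaviour
gluster volume set gv_home performance.nl-cache off
# re-enable it to get the performance back
gluster volume set gv_home performance.nl-cache on</pre>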
    <div class="moz-signature">
      <p>Kind Regards,<br>
        Jaco Kroon<br>
      </p>
      <p>On 2022/12/14 14:04, Péter Károly JUHÁSZ wrote:<br>
      </p>
    </div>
    <blockquote type="cite"
cite="mid:CAAA01ixmpHgZVBPiyG_DkC3GWMm54ZeOXkHvKaxb19LY6Kq_cg@mail.gmail.com">
      <meta http-equiv="content-type" content="text/html; charset=UTF-8">
      <div dir="auto">When we used glusterfs for websites, we copied the
        web dir from gluster to local on frontend boots, then served it
        from there.</div>
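      <div dir="auto">Roughly like this (a sketch only; the paths are
        examples, not our real layout):</div>
      <pre># on each frontend: mirror the gluster copy to local disk, e.g. from cron
rsync -a --delete /mnt/gluster/webroot/ /var/www/webroot/</pre>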
      <br>
      <div class="gmail_quote">
        <div dir="ltr" class="gmail_attr">Jaco Kroon <<a
            href="mailto:jaco@uls.co.za" moz-do-not-send="true"
            class="moz-txt-link-freetext">jaco@uls.co.za</a>> 于
          2022年12月14日周三 12:49写道:<br>
        </div>
        <blockquote class="gmail_quote" style="margin:0 0 0
          .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi All,<br>
          <br>
          We've got a glusterfs cluster that houses some PHP web sites.<br>
          <br>
          This is generally considered a bad idea and we can see why.<br>
          <br>
          With performance.nl-cache on it actually turns out to be very
          reasonable; however, with it turned off, performance is roughly
          5x worse, meaning a request that would take sub-500ms now takes
          2500ms.  In other cases we see far worse, e.g. with nl-cache a
          request takes ~1500ms, without it ~30s (20x worse).<br>
          <br>
          So why not use nl-cache?  Well, it results in readdir reporting
          files which then fail to open with ENOENT.  The cache also never
          clears, even though the configuration says nl-cache entries
          should only be cached for 60s.  Even with "ls -lah" in affected
          folders you'll notice ???? entries in place of file attributes.
          If this recovered in a reasonable time (say, a few seconds) it
          would be tolerable, but it doesn't.<br>
          <br>
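          For reference, the 60s expectation comes from the volume's
          nl-cache timeout; assuming I have the option name right
          (performance.nl-cache-timeout), it can be checked and set with:<br>
          <pre>gluster volume get gv_home performance.nl-cache-timeout
gluster volume set gv_home performance.nl-cache-timeout 60</pre>
          <br>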
          <pre># gluster volume info
Type: Replicate
Volume ID: cbe08331-8b83-41ac-b56d-88ef30c0f5c7
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Options Reconfigured:
performance.nl-cache: on
cluster.readdir-optimize: on
config.client-threads: 2
config.brick-threads: 4
config.global-threading: on
performance.iot-pass-through: on
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: enable
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
client.event-threads: 2
server.event-threads: 2
transport.address-family: inet
nfs.disable: on
cluster.metadata-self-heal: off
cluster.entry-self-heal: off
cluster.data-self-heal: off
cluster.self-heal-daemon: on
server.allow-insecure: on
features.ctime: off
performance.io-cache: on
performance.cache-invalidation: on
features.cache-invalidation: on
performance.qr-cache-timeout: 600
features.cache-invalidation-timeout: 600
performance.io-cache-size: 128MB
performance.cache-size: 128MB</pre>
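          To double-check what is actually in effect (the list above only
          shows reconfigured options), the resolved values can be pulled
          with:<br>
          <pre># show effective values for the caching-related options
gluster volume get gv_home all | grep -E 'nl-cache|cache-invalidation|cache-size'</pre>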
          <br>
          Are there any other recommendations, short of abandoning all
          hope of redundancy and reverting to a single-server setup (for
          the web code at least)?  Currently the cost of the redundancy
          seems to outweigh the benefit.<br>
          <br>
          Glusterfs version 10.2, with a patch for --inode-table-size.
          Mounts happen with:<br>
          <br>
          <pre>/usr/sbin/glusterfs --acl --reader-thread-count=2 --lru-limit=524288 \
    --inode-table-size=524288 --invalidate-limit=16 --background-qlen=32 \
    --fuse-mountopts=nodev,nosuid,noexec,noatime --process-name fuse \
    --volfile-server=127.0.0.1 --volfile-id=gv_home \
    --fuse-mountopts=nodev,nosuid,noexec,noatime /home</pre>
          <br>
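          For completeness, the equivalent fstab entry would look roughly
          like the below; this assumes mount.glusterfs exposes matching
          mount options for these flags, which I haven't verified for all
          of them:<br>
          <pre># sketch of an /etc/fstab line for the same mount (option names assumed)
127.0.0.1:/gv_home  /home  glusterfs  acl,lru-limit=524288,background-qlen=32,_netdev,nodev,nosuid,noexec,noatime  0 0</pre>
          <br>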
          Kind Regards,<br>
          Jaco<br>
          <br>
        </blockquote>
      </div>
    </blockquote>
  </body>
</html>