<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <p>My standard response to someone needing filesystem performance
      for www traffic is generally, "you're doing it wrong". <a
        class="moz-txt-link-freetext"
href="https://joejulian.name/blog/optimizing-web-performance-with-glusterfs/">https://joejulian.name/blog/optimizing-web-performance-with-glusterfs/</a><br>
      <br>
      That said, you might also look at these mount options:
      attribute-timeout, entry-timeout, and negative-timeout (set each
      to a large value), plus fopen-keep-cache.</p>
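    <p>As a minimal sketch of how those options might look in /etc/fstab
      (the 600-second timeouts and the server/volume paths here are
      illustrative assumptions, not recommendations — tune them for your
      workload):</p>

```
# Sketch: glusterfs FUSE mount with aggressive metadata caching.
# Timeout values (seconds) are illustrative assumptions; tune to taste.
192.168.140.41:/www  /var/www  glusterfs  defaults,_netdev,attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache  0  0
```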
    <br>
    <div class="moz-cite-prefix">On 07/11/2017 07:48 AM, Jo Goossens
      wrote:<br>
    </div>
    <blockquote type="cite"
      cite="mid:zarafa.5964e533.4b1c.70763d1a6208ff82@web.hosted-power.com">
      <meta name="Generator" content="Zarafa WebAccess v7.1.14-51822">
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
      <style type="text/css">
      body
      {
        font-family: Arial, Verdana, Sans-Serif ! important;
        font-size: 12px;
        padding: 5px 5px 5px 5px;
        margin: 0px;
        border-style: none;
        background-color: #ffffff;
      }

      p, ul, li
      {
        margin-top: 0px;
        margin-bottom: 0px;
      }
  </style>
      <p>Hello,</p>
      <p> </p>
      <p>Here is the volume info as requested by soumya:</p>
      <p> </p>
      <div>#gluster volume info www</div>
      <div> </div>
      <div>Volume Name: www</div>
      <div>Type: Replicate</div>
      <div>Volume ID: 5d64ee36-828a-41fa-adbf-75718b954aff</div>
      <div>Status: Started</div>
      <div>Snapshot Count: 0</div>
      <div>Number of Bricks: 1 x 3 = 3</div>
      <div>Transport-type: tcp</div>
      <div>Bricks:</div>
      <div>Brick1: 192.168.140.41:/gluster/www</div>
      <div>Brick2: 192.168.140.42:/gluster/www</div>
      <div>Brick3: 192.168.140.43:/gluster/www</div>
      <div>Options Reconfigured:</div>
      <div>cluster.read-hash-mode: 0</div>
      <div>performance.quick-read: on</div>
      <div>performance.write-behind-window-size: 4MB</div>
      <div>server.allow-insecure: on</div>
      <div>performance.read-ahead: disable</div>
      <div>performance.readdir-ahead: on</div>
      <div>performance.io-thread-count: 64</div>
      <div>performance.io-cache: on</div>
      <div>performance.client-io-threads: on</div>
      <div>server.outstanding-rpc-limit: 128</div>
      <div>server.event-threads: 3</div>
      <div>client.event-threads: 3</div>
      <div>performance.cache-size: 32MB</div>
      <div>transport.address-family: inet</div>
      <div>nfs.disable: on</div>
      <div>nfs.addr-namelookup: off</div>
      <div>nfs.export-volumes: on</div>
      <div>nfs.rpc-auth-allow: 192.168.140.*</div>
      <div>features.cache-invalidation: on</div>
      <div>features.cache-invalidation-timeout: 600</div>
      <div>performance.stat-prefetch: on</div>
      <div>performance.cache-samba-metadata: on</div>
      <div>performance.cache-invalidation: on</div>
      <div>performance.md-cache-timeout: 600</div>
      <div>network.inode-lru-limit: 100000</div>
      <div>performance.parallel-readdir: on</div>
      <div>performance.cache-refresh-timeout: 60</div>
      <div>performance.rda-cache-limit: 50MB</div>
      <div>cluster.nufa: on</div>
      <div>network.ping-timeout: 5</div>
      <div>cluster.lookup-optimize: on</div>
      <div>cluster.quorum-type: auto</div>
      <div> </div>
      <div>I started with none of them set and added/changed them while
        testing, but it was always slow. Tuning some kernel parameters
        improved things slightly (just a few percent, nothing
        substantial).</div>
      <div> </div>
      <div>I also tried Ceph just to compare; I got this with default
        settings and no tweaks:</div>
      <div> </div>
      <div>
        <div> ./smallfile_cli.py  --top /var/www/test --host-set
          192.168.140.41 --threads 8 --files 5000 --file-size 64
          --record-size 64</div>
        <div>smallfile version 3.0</div>
        <div>                           hosts in test :
          ['192.168.140.41']</div>
        <div>                   top test directory(s) :
          ['/var/www/test']</div>
        <div>                               operation : cleanup</div>
        <div>                            files/thread : 5000</div>
        <div>                                 threads : 8</div>
        <div>           record size (KB, 0 = maximum) : 64</div>
        <div>                          file size (KB) : 64</div>
        <div>                  file size distribution : fixed</div>
        <div>                           files per dir : 100</div>
        <div>                            dirs per dir : 10</div>
        <div>              threads share directories? : N</div>
        <div>                         filename prefix :</div>
        <div>                         filename suffix :</div>
        <div>             hash file number into dir.? : N</div>
        <div>                     fsync after modify? : N</div>
        <div>          pause between files (microsec) : 0</div>
        <div>                    finish all requests? : Y</div>
        <div>                              stonewall? : Y</div>
        <div>                 measure response times? : N</div>
        <div>                            verify read? : Y</div>
        <div>                                verbose? : False</div>
        <div>                          log to stderr? : False</div>
        <div>                           ext.attr.size : 0</div>
        <div>                          ext.attr.count : 0</div>
        <div>               permute host directories? : N</div>
        <div>                remote program directory :
          /root/smallfile-master</div>
        <div>               network thread sync. dir. :
          /var/www/test/network_shared</div>
        <div>starting all threads by creating starting gate file
          /var/www/test/network_shared/starting_gate.tmp</div>
        <div>host = 192.168.140.41,thr = 00,elapsed = 1.339621,files =
          5000,records = 0,status = ok</div>
        <div>host = 192.168.140.41,thr = 01,elapsed = 1.436776,files =
          5000,records = 0,status = ok</div>
        <div>host = 192.168.140.41,thr = 02,elapsed = 1.498681,files =
          5000,records = 0,status = ok</div>
        <div>host = 192.168.140.41,thr = 03,elapsed = 1.483886,files =
          5000,records = 0,status = ok</div>
        <div>host = 192.168.140.41,thr = 04,elapsed = 1.454833,files =
          5000,records = 0,status = ok</div>
        <div>host = 192.168.140.41,thr = 05,elapsed = 1.469340,files =
          5000,records = 0,status = ok</div>
        <div>host = 192.168.140.41,thr = 06,elapsed = 1.439060,files =
          5000,records = 0,status = ok</div>
        <div>host = 192.168.140.41,thr = 07,elapsed = 1.375074,files =
          5000,records = 0,status = ok</div>
        <div>total threads = 8</div>
        <div>total files = 40000</div>
        <div>100.00% of requested files processed, minimum is  70.00</div>
        <div>1.498681 sec elapsed time</div>
        <div>26690.134975 files/sec</div>
        <div> </div>
      </div>
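      <p>For context on that last figure: smallfile's files/sec appears
        to be the total file count divided by the slowest thread's
        elapsed time (the stonewall interval). A quick sanity check of
        the Ceph run above (this one-liner is an illustration, not part
        of the original test):</p>

```shell
# smallfile's files/sec ~= (total files) / (slowest thread's elapsed time).
# For the run above: 8 threads x 5000 files, slowest thread 1.498681 s.
awk 'BEGIN { printf "%.2f files/sec\n", (8 * 5000) / 1.498681 }'
```

      <p>This lands within rounding of the reported 26690.134975
        files/sec (the printed elapsed time is itself rounded).</p>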
      <p><br>
        Regards</p>
      <p>Jo</p>
      <p> </p>
      <blockquote style="border-left: 2px solid #325FBA; padding-left:
        5px;margin-left:5px;">-----Original message-----<br>
        <strong>From:</strong> Jo Goossens
        <a class="moz-txt-link-rfc2396E" href="mailto:jo.goossens@hosted-power.com">&lt;jo.goossens@hosted-power.com&gt;</a><br>
        <strong>Sent:</strong> Tue 11-07-2017 12:15<br>
        <strong>Subject:</strong> Re: [Gluster-users] Gluster native
        mount is really slow compared to nfs<br>
        <strong>To:</strong> Soumya Koduri <a class="moz-txt-link-rfc2396E" href="mailto:skoduri@redhat.com">&lt;skoduri@redhat.com&gt;</a>;
        <a class="moz-txt-link-abbreviated" href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a>; <br>
        <strong>CC:</strong> Ambarish Soman <a class="moz-txt-link-rfc2396E" href="mailto:asoman@redhat.com">&lt;asoman@redhat.com&gt;</a>; <br>
        <style type="text/css">       .bodyclass       {         font-family: Arial, Verdana, Sans-Serif ! important;         font-size: 12px;         padding: 5px 5px 5px 5px;         margin: 0px;         border-style: none;         background-color: #ffffff;       }        p, ul, li       {         margin-top: 0px;         margin-bottom: 0px;       }   </style>
        <div>
          <p>Hello,</p>
          <p> </p>
          <p>Here is a speed test with a new setup we just made with
            Gluster 3.10; there are no other differences except
            glusterfs versus nfs. NFS is about 80 times faster:</p>
          <p> </p>
          <div>root@app1:~/smallfile-master# mount -t glusterfs -o
            use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log
            192.168.140.41:/www /var/www</div>
          <div>root@app1:~/smallfile-master# ./smallfile_cli.py  --top
            /var/www/test --host-set 192.168.140.41 --threads 8 --files
            500 --file-size 64 --record-size 64</div>
          <div>smallfile version 3.0</div>
          <div>                           hosts in test :
            ['192.168.140.41']</div>
          <div>                   top test directory(s) :
            ['/var/www/test']</div>
          <div>                               operation : cleanup</div>
          <div>                            files/thread : 500</div>
          <div>                                 threads : 8</div>
          <div>           record size (KB, 0 = maximum) : 64</div>
          <div>                          file size (KB) : 64</div>
          <div>                  file size distribution : fixed</div>
          <div>                           files per dir : 100</div>
          <div>                            dirs per dir : 10</div>
          <div>              threads share directories? : N</div>
          <div>                         filename prefix :</div>
          <div>                         filename suffix :</div>
          <div>             hash file number into dir.? : N</div>
          <div>                     fsync after modify? : N</div>
          <div>          pause between files (microsec) : 0</div>
          <div>                    finish all requests? : Y</div>
          <div>                              stonewall? : Y</div>
          <div>                 measure response times? : N</div>
          <div>                            verify read? : Y</div>
          <div>                                verbose? : False</div>
          <div>                          log to stderr? : False</div>
          <div>                           ext.attr.size : 0</div>
          <div>                          ext.attr.count : 0</div>
          <div>               permute host directories? : N</div>
          <div>                remote program directory :
            /root/smallfile-master</div>
          <div>               network thread sync. dir. :
            /var/www/test/network_shared</div>
          <div>starting all threads by creating starting gate file
            /var/www/test/network_shared/starting_gate.tmp</div>
          <div>host = 192.168.140.41,thr = 00,elapsed = 68.845450,files
            = 500,records = 0,status = ok</div>
          <div>host = 192.168.140.41,thr = 01,elapsed = 67.601088,files
            = 500,records = 0,status = ok</div>
          <div>host = 192.168.140.41,thr = 02,elapsed = 58.677994,files
            = 500,records = 0,status = ok</div>
          <div>host = 192.168.140.41,thr = 03,elapsed = 65.901922,files
            = 500,records = 0,status = ok</div>
          <div>host = 192.168.140.41,thr = 04,elapsed = 66.971720,files
            = 500,records = 0,status = ok</div>
          <div>host = 192.168.140.41,thr = 05,elapsed = 71.245102,files
            = 500,records = 0,status = ok</div>
          <div>host = 192.168.140.41,thr = 06,elapsed = 67.574845,files
            = 500,records = 0,status = ok</div>
          <div>host = 192.168.140.41,thr = 07,elapsed = 54.263242,files
            = 500,records = 0,status = ok</div>
          <div>total threads = 8</div>
          <div>total files = 4000</div>
          <div>100.00% of requested files processed, minimum is  70.00</div>
          <div>71.245102 sec elapsed time</div>
          <div>56.144211 files/sec</div>
          <div> </div>
          <div>umount /var/www</div>
          <div> </div>
          <div>root@app1:~/smallfile-master# mount -t nfs -o tcp
            192.168.140.41:/www /var/www</div>
          <div>root@app1:~/smallfile-master# ./smallfile_cli.py  --top
            /var/www/test --host-set 192.168.140.41 --threads 8 --files
            500 --file-size 64 --record-size 64</div>
          <div>smallfile version 3.0</div>
          <div>                           hosts in test :
            ['192.168.140.41']</div>
          <div>                   top test directory(s) :
            ['/var/www/test']</div>
          <div>                               operation : cleanup</div>
          <div>                            files/thread : 500</div>
          <div>                                 threads : 8</div>
          <div>           record size (KB, 0 = maximum) : 64</div>
          <div>                          file size (KB) : 64</div>
          <div>                  file size distribution : fixed</div>
          <div>                           files per dir : 100</div>
          <div>                            dirs per dir : 10</div>
          <div>              threads share directories? : N</div>
          <div>                         filename prefix :</div>
          <div>                         filename suffix :</div>
          <div>             hash file number into dir.? : N</div>
          <div>                     fsync after modify? : N</div>
          <div>          pause between files (microsec) : 0</div>
          <div>                    finish all requests? : Y</div>
          <div>                              stonewall? : Y</div>
          <div>                 measure response times? : N</div>
          <div>                            verify read? : Y</div>
          <div>                                verbose? : False</div>
          <div>                          log to stderr? : False</div>
          <div>                           ext.attr.size : 0</div>
          <div>                          ext.attr.count : 0</div>
          <div>               permute host directories? : N</div>
          <div>                remote program directory :
            /root/smallfile-master</div>
          <div>               network thread sync. dir. :
            /var/www/test/network_shared</div>
          <div>starting all threads by creating starting gate file
            /var/www/test/network_shared/starting_gate.tmp</div>
          <div>host = 192.168.140.41,thr = 00,elapsed = 0.962424,files =
            500,records = 0,status = ok</div>
          <div>host = 192.168.140.41,thr = 01,elapsed = 0.942673,files =
            500,records = 0,status = ok</div>
          <div>host = 192.168.140.41,thr = 02,elapsed = 0.940622,files =
            500,records = 0,status = ok</div>
          <div>host = 192.168.140.41,thr = 03,elapsed = 0.915218,files =
            500,records = 0,status = ok</div>
          <div>host = 192.168.140.41,thr = 04,elapsed = 0.934349,files =
            500,records = 0,status = ok</div>
          <div>host = 192.168.140.41,thr = 05,elapsed = 0.922466,files =
            500,records = 0,status = ok</div>
          <div>host = 192.168.140.41,thr = 06,elapsed = 0.954381,files =
            500,records = 0,status = ok</div>
          <div>host = 192.168.140.41,thr = 07,elapsed = 0.946127,files =
            500,records = 0,status = ok</div>
          <div>total threads = 8</div>
          <div>total files = 4000</div>
          <div>100.00% of requested files processed, minimum is  70.00</div>
          <div>0.962424 sec elapsed time</div>
          <div>4156.173189 files/sec</div>
          <div> </div>
          <p> </p>
          <blockquote style="border-left: 2px solid #325FBA;
            padding-left: 5px;margin-left:5px;">-----Original
            message-----<br>
            <strong>From:</strong> Jo Goossens
            <a class="moz-txt-link-rfc2396E" href="mailto:jo.goossens@hosted-power.com">&lt;jo.goossens@hosted-power.com&gt;</a><br>
            <strong>Sent:</strong> Tue 11-07-2017 11:26<br>
            <strong>Subject:</strong> Re: [Gluster-users] Gluster native
            mount is really slow compared to nfs<br>
            <strong>To:</strong> <a class="moz-txt-link-abbreviated" href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a>; Soumya
            Koduri <a class="moz-txt-link-rfc2396E" href="mailto:skoduri@redhat.com">&lt;skoduri@redhat.com&gt;</a>; <br>
            <strong>CC:</strong> Ambarish Soman
            <a class="moz-txt-link-rfc2396E" href="mailto:asoman@redhat.com">&lt;asoman@redhat.com&gt;</a>; <br>
            <style type="text/css">.bodyclass { font-family: monospace; }</style>
            <style type="text/css">       .bodyclass       {         font-family: Arial, Verdana, Sans-Serif ! important;         font-size: 12px;         padding: 5px 5px 5px 5px;         margin: 0px;         border-style: none;         background-color: #ffffff;       }        p, ul, li       {         margin-top: 0px;         margin-bottom: 0px;       }   </style>
            <div>
              <p>Hi all,</p>
              <p> </p>
              <p>One more thing: we have 3 app servers with Gluster on
                them, replicated across 3 different gluster nodes. (So
                the gluster nodes are app servers at the same time.) We
                could actually almost work locally if we didn't need to
                have the same files on all 3 nodes plus redundancy :)</p>
              <p> </p>
              <p>Initial cluster was created like this:</p>
              <p> </p>
              <div>gluster volume create www replica 3 transport tcp
                192.168.140.41:/gluster/www 192.168.140.42:/gluster/www
                192.168.140.43:/gluster/www force</div>
              <div>gluster volume set www network.ping-timeout 5</div>
              <div>gluster volume set www performance.cache-size 1024MB</div>
              <div>gluster volume set www nfs.disable on # No need for
                NFS currently</div>
              <div>gluster volume start www</div>
              <div> </div>
              <div>To my understanding it still wouldn't explain why NFS
                performs so much better than the native mount ...</div>
              <div> </div>
              <p> </p>
              <p>Regards</p>
              <p>Jo</p>
              <p> </p>
              <p><br>
                 </p>
              <blockquote style="border-left: 2px solid #325FBA;
                padding-left: 5px;margin-left:5px;">-----Original
                message-----<br>
                <strong>From:</strong> Soumya Koduri
                <a class="moz-txt-link-rfc2396E" href="mailto:skoduri@redhat.com">&lt;skoduri@redhat.com&gt;</a><br>
                <strong>Sent:</strong> Tue 11-07-2017 11:16<br>
                <strong>Subject:</strong> Re: [Gluster-users] Gluster
                native mount is really slow compared to nfs<br>
                <strong>To:</strong> Jo Goossens
                <a class="moz-txt-link-rfc2396E" href="mailto:jo.goossens@hosted-power.com">&lt;jo.goossens@hosted-power.com&gt;</a>;
                <a class="moz-txt-link-abbreviated" href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a>; <br>
                <strong>CC:</strong> Ambarish Soman
                <a class="moz-txt-link-rfc2396E" href="mailto:asoman@redhat.com">&lt;asoman@redhat.com&gt;</a>; Karan Sandha
                <a class="moz-txt-link-rfc2396E" href="mailto:ksandha@redhat.com">&lt;ksandha@redhat.com&gt;</a>; <br>
                + Ambarish<br>
                <br>
                On 07/11/2017 02:31 PM, Jo Goossens wrote:<br>
                &gt; Hello,<br>
                &gt;<br>
                &gt; We tried tons of settings to get a php app running
                on a native gluster<br>
                &gt; mount:<br>
                &gt;<br>
                &gt;<br>
                &gt;<br>
                &gt; e.g.: 192.168.140.41:/www /var/www glusterfs<br>
                &gt;
defaults,_netdev,backup-volfile-servers=192.168.140.42:192.168.140.43,direct-io-mode=disable<br>
                &gt; 0 0<br>
                &gt;<br>
                &gt;<br>
                &gt;<br>
                &gt; I tried some mount variants to speed things up,
                without luck.<br>
                &gt;<br>
                &gt; After that I tried NFS (native gluster NFS 3 and
                Ganesha NFS 4); the<br>
                &gt; performance difference was staggering.<br>
                &gt;<br>
                &gt;<br>
                &gt;<br>
                &gt; e.g.: 192.168.140.41:/www /var/www nfs4
                defaults,_netdev 0 0<br>
                &gt;<br>
                &gt;<br>
                &gt;<br>
                &gt; I tried a test like this to confirm the slowness:<br>
                &gt;<br>
                &gt;<br>
                &gt;<br>
                &gt; ./smallfile_cli.py  --top /var/www/test --host-set
                192.168.140.41<br>
                &gt; --threads 8 --files 5000 --file-size 64
                --record-size 64<br>
                &gt;<br>
                &gt; This test finished in around 1.5 seconds with NFS
                and in more than 250<br>
                &gt; seconds with the native mount (I can't remember the
                exact numbers, but I<br>
                &gt; reproduced it several times for both).<br>
                &gt;<br>
                &gt; With the native gluster mount the PHP app had load
                times of over 10<br>
                &gt; seconds; with the NFS mount it loaded in around 1
                second or less<br>
                &gt; (reproduced several times).<br>
                &gt;<br>
                &gt;<br>
                &gt;<br>
                &gt; I tried all kinds of performance settings and
                variants of this, but none<br>
                &gt; helped; the difference stayed huge. Here are some
                of the settings I<br>
                &gt; played with, in random order:<br>
                &gt;<br>
                <br>
                Requesting Ambarish &amp; Karan (cc'ed, who have been
                evaluating the <br>
                performance of the various access protocols Gluster
                supports) to look at the <br>
                settings below and provide input.<br>
                <br>
                Thanks,<br>
                Soumya<br>
                <br>
                &gt;<br>
                &gt;<br>
                &gt; gluster volume set www features.cache-invalidation
                on<br>
                &gt; gluster volume set www
                features.cache-invalidation-timeout 600<br>
                &gt; gluster volume set www performance.stat-prefetch on<br>
                &gt; gluster volume set www
                performance.cache-samba-metadata on<br>
                &gt; gluster volume set www
                performance.cache-invalidation on<br>
                &gt; gluster volume set www performance.md-cache-timeout
                600<br>
                &gt; gluster volume set www network.inode-lru-limit
                250000<br>
                &gt;<br>
                &gt; gluster volume set www
                performance.cache-refresh-timeout 60<br>
                &gt; gluster volume set www performance.read-ahead
                disable<br>
                &gt; gluster volume set www performance.readdir-ahead on<br>
                &gt; gluster volume set www performance.parallel-readdir
                on<br>
                &gt; gluster volume set www
                performance.write-behind-window-size 4MB<br>
                &gt; gluster volume set www performance.io-thread-count
                64<br>
                &gt;<br>
                &gt; gluster volume set www
                performance.client-io-threads on<br>
                &gt;<br>
                &gt; gluster volume set www performance.cache-size 1GB<br>
                &gt; gluster volume set www performance.quick-read on<br>
                &gt; gluster volume set www performance.flush-behind on<br>
                &gt; gluster volume set www performance.write-behind on<br>
                &gt; gluster volume set www nfs.disable on<br>
                &gt;<br>
                &gt; gluster volume set www client.event-threads 3<br>
                &gt; gluster volume set www server.event-threads 3<br>
                &gt;<br>
                &gt;<br>
                &gt; NFS HA adds a lot of complexity that we wouldn't
                need at all in our<br>
                &gt; setup. Could you please explain what is going on
                here? Is NFS the only<br>
                &gt; way to get acceptable performance? Did I perhaps
                miss one crucial setting?<br>
                &gt;<br>
                &gt;<br>
                &gt;<br>
                &gt; We're really desperate, thanks a lot for your help!<br>
                &gt;<br>
                &gt; PS: We tried Gluster 3.11 and 3.8 on Debian; both
                had terrible<br>
                &gt; performance when not used with NFS.<br>
                &gt;<br>
                &gt; Kind regards<br>
                &gt;<br>
                &gt; Jo Goossens<br>
                &gt;<br>
                &gt; _______________________________________________<br>
                &gt; Gluster-users mailing list<br>
                &gt; <a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
                &gt;
                <a class="moz-txt-link-freetext" href="http://lists.gluster.org/mailman/listinfo/gluster-users">http://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
                &gt;<br>
              </blockquote>
            </div>
            <pre> _______________________________________________
 Gluster-users mailing list
 <a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
 <a class="moz-txt-link-freetext" href="http://lists.gluster.org/mailman/listinfo/gluster-users">http://lists.gluster.org/mailman/listinfo/gluster-users</a></pre>
          </blockquote>
        </div>
      </blockquote>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
    </blockquote>
    <br>
  </body>
</html>