Hi Hubert,

I think it will be better to open a separate thread for your case.
If you have HW RAID1 arrays presented as disks, then you can easily use striped LVM or md RAID (level 0) to stripe across them; a rough sketch of both is below.
One advantage is that you won't have to worry about gluster rebalance or an overloaded brick (multiple file access requests hitting the same brick), but of course it also has disadvantages.

Keep in mind that negative lookups (searches for non-existing/deleted objects) carry the highest penalty.

Best Regards,
Strahil Nikolov

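To illustrate the striping suggestion above, a rough sketch only: it assumes the two mirrored arrays show up as /dev/md3 and /dev/md4 (guessed from the brick paths in the quoted message); the VG/LV names, stripe size and filesystem are placeholders, not taken from the thread.

    # Option 1: md RAID0 on top of the two RAID1 arrays (RAID10 overall)
    mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md3 /dev/md4
    mkfs.xfs /dev/md10

    # Option 2: striped LVM across the same two arrays
    vgcreate vg_gluster /dev/md3 /dev/md4
    lvcreate -i 2 -I 256k -l 100%FREE -n lv_workdata vg_gluster
    mkfs.xfs /dev/vg_gluster/lv_workdata

Either way each server ends up with one larger brick instead of two, so data is spread at the block layer rather than by gluster's DHT.
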
On Sunday, 26 March 2023 at 08:52:18 GMT+3, Hu Bert <revirii@googlemail.com> wrote:

Hi,
sorry if I hijack this, but maybe it's helpful for other gluster users...

> A pure NVMe-based volume will be a waste of money. Gluster excels when you have more servers and clients to consume that data.
> I would choose LVM cache (NVMes) + HW RAID10 of SAS 15K disks to cope with the load. At least if you decide to go with more disks for the raids, use several (not the built-in ones) controllers.

Well, we have to take what our provider (Hetzner) offers - SATA HDDs or SATA/NVMe SSDs.

Volume Name: workdata
Type: Distributed-Replicate
Number of Bricks: 5 x 3 = 15
Bricks:
Brick1: gls1:/gluster/md3/workdata
Brick2: gls2:/gluster/md3/workdata
Brick3: gls3:/gluster/md3/workdata
Brick4: gls1:/gluster/md4/workdata
Brick5: gls2:/gluster/md4/workdata
Brick6: gls3:/gluster/md4/workdata
etc.
Below are the volume settings.

Each brick is a sw RAID1 (made of 10TB HDDs). File access to the backends is pretty slow, even at low system load (load reaches >100 on the servers on high-traffic days); even a simple 'ls' on a directory with ~1000 sub-directories takes a couple of seconds.

Some images:
https://abload.de/img/gls-diskutilfti5d.png
https://abload.de/img/gls-io6cfgp.png
https://abload.de/img/gls-throughput3oicf.png

As you mentioned: is a RAID10 better than x*RAID1?
Anything misconfigured?

Thx a lot & best regards,

Hubert

Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.read-ahead: off
performance.io-cache: off
performance.quick-read: on
cluster.self-heal-window-size: 16
cluster.heal-wait-queue-length: 10000
cluster.data-self-heal-algorithm: full
cluster.background-self-heal-count: 256
network.inode-lru-limit: 200000
cluster.shd-max-threads: 8
server.outstanding-rpc-limit: 128
transport.listen-backlog: 100
performance.least-prio-threads: 8
performance.cache-size: 6GB
cluster.min-free-disk: 1%
performance.io-thread-count: 32
performance.write-behind-window-size: 16MB
performance.cache-max-file-size: 128MB
client.event-threads: 8
server.event-threads: 8
performance.parallel-readdir: on
performance.cache-refresh-timeout: 4
cluster.readdir-optimize: off
performance.md-cache-timeout: 600
performance.nl-cache: off
cluster.lookup-unhashed: on
cluster.shd-wait-qlength: 10000
performance.readdir-ahead: on
storage.build-pgfid: off
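
The "negative lookups" penalty mentioned at the top is what gluster's negative-lookup cache is meant to absorb, and performance.nl-cache is currently off in this volume. A hedged example of enabling it for testing (workdata is the volume name from above; the timeout option name is an assumption, verify it against 'gluster volume set help' for your version):

    gluster volume set workdata performance.nl-cache on
    # optional: keep negative entries longer than the default (option name assumed, verify first)
    gluster volume set workdata performance.nl-cache-timeout 600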