<div dir="ltr"><br><div class="gmail_quote"><div dir="ltr"><div><div><div><div><div>Hello all,<br><br></div>I&#39;m trying to take advantage of the shard xlator, however I&#39;ve found it causes a lot of issues that I hope is easily resolvable<br><br></div>1) large file operations work well (copy file from folder a to folder b<br></div>2) seek operations and list operations frequently fail (ls directory, read bytes xyz at offset 235567)<br></div><div>3) Another issue is samba shares through samba-vfs show all files as 4MB, I&#39;ve also seen this when mounting with fuse, however nfs-ganesha reflects correct file sizes always-<br><br></div><div><br></div>Turning off the shard feature resolves this issue for new files created in the volume. mounted using the gluster fuse mount<br><br></div>here&#39;s my volume settings, please let me know if there&#39;s some changes I can make.<br><br></div></div>Volume Name: storage2<br>Type: Distributed-Replicate<br>Volume ID: adaabca5-25ed-4e7f-ae86-2f20fc0143a8<br>Status: Started<br>Snapshot Count: 0<br>Number of Bricks: 3 x (2 + 1) = 9<br>Transport-type: tcp<br>Bricks:<br>Brick1: fd00:0:0:3::6:/mnt/gluster/storage/brick0/glusterfs2<br>Brick2: fd00:0:0:3::8:/mnt/gluster/storage/brick0/glusterfs2<br>Brick3: fd00:0:0:3::10:/mnt/gluster/storage/brick0/glusterfs (arbiter)<br>Brick4: fd00:0:0:3::6:/mnt/gluster/storage/brick1/glusterfs2<br>Brick5: fd00:0:0:3::8:/mnt/gluster/storage/brick1/glusterfs2<br>Brick6: fd00:0:0:3::10:/mnt/gluster/storage/brick1/glusterfs (arbiter)<br>Brick7: fd00:0:0:3::6:/mnt/gluster/storage/brick2/glusterfs2<br>Brick8: fd00:0:0:3::8:/mnt/gluster/storage/brick2/glusterfs2<br>Brick9: fd00:0:0:3::10:/mnt/gluster/storage/brick2/glusterfs (arbiter)<br>Options Reconfigured:<br>features.ctr-enabled: on<br>features.shard-block-size: 4MB<br>network.inode-lru-limit: 90000<br>features.cache-invalidation: on<br>performance.readdir-ahead: on<br>client.event-threads: 3<br>performance.cache-ima-xattrs: on<br>cluster.data-self-heal-algorithm: diff<br>network.remote-dio: disable<br>cluster.use-compound-fops: on<br>cluster.read-freq-threshold: 2<br>cluster.write-freq-threshold: 2<br>features.record-counters: on<br>disperse.shd-max-threads: 4<br>performance.parallel-readdir: on<br>performance.client-io-threads: on<br>server.event-threads: 3<br>cluster.lookup-optimize: on<br>performance.open-behind: on<br>performance.stat-prefetch: on<br>performance.quick-read: off<br>performance.io-cache: on<br>performance.read-ahead: off<br>performance.write-behind: on<br>features.scrub: Active<br>features.bitrot: on<br>features.leases: on<br>features.shard: off<br>transport.address-family: inet6<br>nfs.disable: on<br>server.allow-insecure: on<br>cluster.shd-max-threads: 8<br>performance.low-prio-threads: 32<br>cluster.locking-scheme: granular<br>cluster.shd-wait-qlength: 10000<br>user.cifs: off<br>cluster.eager-lock: enable<br>cluster.quorum-type: auto<br>cluster.server-quorum-type: server<br>cluster.tier-compact: on<br>storage.linux-aio: on<br>transport.keepalive: on<br>performance.write-behind-window-size: 2GB<br>performance.flush-behind: on<br>performance.cache-size: 1GB<br>cluster.choose-local: on<br>performance.io-thread-count: 64<br>cluster.brick-multiplex: off<br>cluster.enable-shared-storage: enable<br>nfs-ganesha: enable<br></div>