[Gluster-users] Fwd: Reliability issues with Gluster 3.10 and shard

Benjamin Kingston ben at nexusnebula.net
Sat May 13 06:46:26 UTC 2017


Hello all,

I'm trying to take advantage of the shard xlator; however, I've found it
causes a number of issues that I hope are easily resolvable:

1) Large file operations work well (e.g., copying a file from folder A to
folder B).
2) Seek and list operations frequently fail (ls on a directory, reading
bytes at offset 235567); see the sketch after this list.
3) Samba shares exported through samba-vfs show all files as 4MB. I've
also seen this when mounting with FUSE; however, NFS-Ganesha always
reflects the correct file sizes.
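
For reference, the failing operations look something like this. This is a
minimal sketch, not my exact commands; /mnt/storage2 is a hypothetical
FUSE mount point for the volume, and the sizes/offsets are arbitrary:

    # copying a large file onto the volume works
    dd if=/dev/urandom of=/mnt/storage2/testfile bs=1M count=100

    # listing the directory frequently fails
    ls -l /mnt/storage2/

    # reading a few bytes at an arbitrary offset frequently fails
    dd if=/mnt/storage2/testfile of=/dev/null bs=4K skip=57 count=1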


Turning off the shard feature resolves these issues for new files created
in the volume (mounted using the GlusterFS FUSE mount); a sketch of the
commands follows.
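
Roughly what I did (a sketch; "server1" is a stand-in for one of my
nodes, and /mnt/storage2 a hypothetical mount point):

    gluster volume set storage2 features.shard off
    umount /mnt/storage2
    mount -t glusterfs server1:/storage2 /mnt/storage2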

Here are my volume settings; please let me know if there are any changes
I can make.

Volume Name: storage2
Type: Distributed-Replicate
Volume ID: adaabca5-25ed-4e7f-ae86-2f20fc0143a8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: fd00:0:0:3::6:/mnt/gluster/storage/brick0/glusterfs2
Brick2: fd00:0:0:3::8:/mnt/gluster/storage/brick0/glusterfs2
Brick3: fd00:0:0:3::10:/mnt/gluster/storage/brick0/glusterfs (arbiter)
Brick4: fd00:0:0:3::6:/mnt/gluster/storage/brick1/glusterfs2
Brick5: fd00:0:0:3::8:/mnt/gluster/storage/brick1/glusterfs2
Brick6: fd00:0:0:3::10:/mnt/gluster/storage/brick1/glusterfs (arbiter)
Brick7: fd00:0:0:3::6:/mnt/gluster/storage/brick2/glusterfs2
Brick8: fd00:0:0:3::8:/mnt/gluster/storage/brick2/glusterfs2
Brick9: fd00:0:0:3::10:/mnt/gluster/storage/brick2/glusterfs (arbiter)
Options Reconfigured:
features.ctr-enabled: on
features.shard-block-size: 4MB
network.inode-lru-limit: 90000
features.cache-invalidation: on
performance.readdir-ahead: on
client.event-threads: 3
performance.cache-ima-xattrs: on
cluster.data-self-heal-algorithm: diff
network.remote-dio: disable
cluster.use-compound-fops: on
cluster.read-freq-threshold: 2
cluster.write-freq-threshold: 2
features.record-counters: on
disperse.shd-max-threads: 4
performance.parallel-readdir: on
performance.client-io-threads: on
server.event-threads: 3
cluster.lookup-optimize: on
performance.open-behind: on
performance.stat-prefetch: on
performance.quick-read: off
performance.io-cache: on
performance.read-ahead: off
performance.write-behind: on
features.scrub: Active
features.bitrot: on
features.leases: on
features.shard: off
transport.address-family: inet6
nfs.disable: on
server.allow-insecure: on
cluster.shd-max-threads: 8
performance.low-prio-threads: 32
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
user.cifs: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.tier-compact: on
storage.linux-aio: on
transport.keepalive: on
performance.write-behind-window-size: 2GB
performance.flush-behind: on
performance.cache-size: 1GB
cluster.choose-local: on
performance.io-thread-count: 64
cluster.brick-multiplex: off
cluster.enable-shared-storage: enable
nfs-ganesha: enable
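
If it's useful for comparison, the effective shard settings can be
double-checked with volume get (a sketch; option names as in 3.10):

    gluster volume get storage2 features.shard
    gluster volume get storage2 features.shard-block-size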