[Gluster-users] gluster share as home

Richard Neuboeck hawk at tbi.univie.ac.at
Sun Nov 19 10:32:03 UTC 2017


Hi Gluster Group,

I've been using gluster as the storage back end for oVirt for some
years now without a single hitch.

Encouraged by this I wanted to switch our home share from NFS to a
replica 3 gluster volume as well. Since small-file performance was not
particularly good, I applied all the performance-enhancing settings I
could find in the gluster blog and on other sites. Those settings made
it OK to use: not great, but good enough considering we get an
always-up, self-healing system in place.
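
For reference, those options are applied per volume with the standard
CLI. A few of the md-cache/cache-invalidation settings commonly
recommended for small-file workloads look like this (the complete list
I ended up with is in the volume info at the end of this mail):

# gluster volume set home features.cache-invalidation on
# gluster volume set home features.cache-invalidation-timeout 600
# gluster volume set home performance.stat-prefetch on
# gluster volume set home performance.cache-invalidation on
# gluster volume set home performance.md-cache-timeout 600
# gluster volume set home network.inode-lru-limit 90000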

I've compelled a handful of users to test the new share in 'real life',
with mixed results. In my daily usage I feel the performance impact but
otherwise all is well. The same was true for another user. But two
others experienced problems.

One got random access problems with his files that I could not
correlate with any log message on the client machine or on a server (no
error, no warning). Stranger still, we could not reproduce those
problems reliably, which is why I never asked this group about them.

What I did find is that after enabling directory quotas the 'indexing
process' seems to lock files, which also resulted in permission denied
errors on clients when they tried to access the file in question. Here
I did get a permission denied log message (but no reason given).
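
For reference, directory quotas were enabled with the standard quota
commands. A minimal sketch (the path and limit below are just examples,
not our actual values):

# gluster volume quota home enable
# gluster volume quota home limit-usage /someuser 100GB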

Another user got the 'file changed as we read it' error I posted about
to this group some days ago. Enabling consistent metadata didn't
resolve the problem. A side effect was that access performance worsened
(~1 second for an ls in a directory with 252 files, ~50 seconds to
extract a 16 MB tar archive and ~60 seconds to remove the extracted
files again). Another side effect was hundreds of metadata self-heal
log messages per minute. I have since disabled the consistent metadata
setting again but the log messages are still present. I could not find
an error or warning explaining why self healing is running at all:

[2017-11-19 10:18:56.460792] I [MSGID: 108026]
[afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
0-home-replicate-0: performing metadata selfheal on
7ebd2d61-3521-437e-992b-639b094b7ae9
[2017-11-19 10:18:56.470377] I [MSGID: 108026]
[afr-self-heal-common.c:1328:afr_log_selfheal] 0-home-replicate-0:
Completed metadata selfheal on 7ebd2d61-3521-437e-992b-639b094b7ae9.
sources=[0]  sinks=1 2
[2017-11-19 10:18:56.516641] I [MSGID: 108026]
[afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
0-home-replicate-0: performing metadata selfheal on
c0a02d97-5cff-4d0a-a687-11d5e3e747bb
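
For reference, I toggled the setting with the usual volume set/reset
commands and check for pending heals like this. The GFID-to-path lookup
at the end is a sketch that assumes the GFID from the log belongs to a
regular file (for regular files the .glusterfs entry on a brick is a
hard link to the actual file):

# gluster volume set home cluster.consistent-metadata on
# gluster volume reset home cluster.consistent-metadata
# gluster volume heal home info
# gluster volume heal home statistics heal-count
(on one of the bricks, resolve a GFID from the log to a path)
# find /srv/gluster_home/brick -samefile \
    /srv/gluster_home/brick/.glusterfs/7e/bd/7ebd2d61-3521-437e-992b-639b094b7ae9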

For the time being I've moved all users away from gluster just to be on
the safe side.

Is gluster simply not a good choice for small file storage (homes)?

As I like the concept of gluster, its easy setup in comparison to the
alternatives and its integration in RH, I would really prefer to use
gluster over other solutions.

I would very much appreciate any feedback I can get about gluster
volumes as small-file storage, optimizations and potential problems.

Thanks a lot!
Cheers
Richard

PS: our current gluster home setup, all clients are using the fuse client
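
For reference, a typical fuse mount of this volume would look roughly
like this (the mount point and the backup-volfile-servers option here
are illustrative, not necessarily our exact fstab entry):

# mount -t glusterfs -o backup-volfile-servers=sphere-five:sphere-four \
    sphere-six:/home /home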

# gluster volume info home

Volume Name: home
Type: Replicate
Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a
Status: Started
Snapshot Count: 1
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: sphere-six:/srv/gluster_home/brick
Brick2: sphere-five:/srv/gluster_home/brick
Brick3: sphere-four:/srv/gluster_home/brick
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
cluster.readdir-optimize: on
cluster.lookup-optimize: on
performance.client-io-threads: on
performance.cache-size: 1GB
network.inode-lru-limit: 90000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.cache-samba-metadata: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
features.barrier: disable
cluster.consistent-metadata: on   (this has since been turned off again)
cluster.localtime-logging: enable
cluster.server-quorum-ratio: 51%
