[Gluster-users] modifying data via fuse causes heal problems
lejeczek
peljasz at yahoo.co.uk
Tue Aug 29 12:44:29 UTC 2017
hi there
I am running 3.10.5 and have 3 peers with volumes in replication.
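For context, the QEMU-VMs volume would have been created roughly along these
lines; this is only a sketch reconstructed from the brick names shown below,
not necessarily the exact command used:

gluster volume create QEMU-VMs replica 3 \
  10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs \
  10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs \
  10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs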
Each time I copy some data on a client (which is also a peer),
I see something like this:
# for QEMU-VMs:
Gathering count of entries to be healed on volume QEMU-VMs has been successful
Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
Number of entries: 0
Brick 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
Number of entries: 2
Brick 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
Number of entries: 1
# end of QEMU-VMs:
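For reference, the counts above come from a small wrapper that echoes the
"# for"/"# end" markers and then, if I recall the exact invocation correctly,
runs the standard heal-count query:

echo "# for QEMU-VMs:"
gluster volume heal QEMU-VMs statistics heal-count
echo "# end of QEMU-VMs:"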
These entries heal automatically a little later and all is fine, but why
would they appear in the first place? Is this expected?
Clients (all of them also peers) mount via fuse with the help of autofs, like
this (e.g. on the 10.5.6.49 peer):
QEMU-VMs -fstype=glusterfs,acl 127.0.0.1,10.5.6.100,10.5.6.32:/QEMU-VMs
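If I understand the autofs map correctly, that should behave like a manual
fuse mount along these lines (the mount point here is just an example):

mount -t glusterfs -o acl,backup-volfile-servers=10.5.6.100:10.5.6.32 \
  127.0.0.1:/QEMU-VMs /mnt/QEMU-VMs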
Is this some tuning/tweaking problem (latencies, etc.)?
Is it an autofs mount problem?
Or something else entirely?
many thanks, L.