[Gluster-devel] Gluster volume start <volname> force + EC nfs mount
Pranith Kumar Karampuri
pkarampu at redhat.com
Tue May 19 07:15:23 UTC 2015
Hi Xavi,
Any gluster command that can restart the NFS process can leave
inconsistent versions on a file/directory if the gluster-nfs process
dies just as it is updating the versions. I don't see any way to
prevent this, because the NFS process is killed with SIGKILL.
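
To make the window concrete, here is a minimal sketch in Python,
assuming a simplified model where the version is bumped brick by brick
after a write. Brick, finish_write and killed_after are hypothetical
names for illustration, not the actual xattrop path:

    # Hypothetical, simplified model of the post-write version update.
    # Real Gluster updates trusted.ec.version via xattrop on each brick;
    # the names here are illustrative, not the actual API.

    class Brick:
        def __init__(self, name, version=0):
            self.name = name
            self.version = version

    def finish_write(bricks, new_version, killed_after=None):
        """Bump the version on every brick after a successful write.

        If the NFS process receives SIGKILL partway through (modelled
        by killed_after), some bricks are left at the old version and
        some at the new one, even though the data on all of them is
        identical.
        """
        for i, brick in enumerate(bricks):
            if killed_after is not None and i >= killed_after:
                return  # SIGKILL lands here; the rest keep the old version
            brick.version = new_version

    bricks = [Brick("b0"), Brick("b1"), Brick("b2")]
    finish_write(bricks, new_version=1, killed_after=2)
    print([(b.name, b.version) for b in bricks])
    # [('b0', 1), ('b1', 1), ('b2', 0)] -> versions disagree, data does not
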
Directory and metadata self-heal can already recover when the versions
are not the same. I think we need to add logic to the data self-heal
code so that, even when the versions don't match, it goes ahead and
checks whether the data itself matches across the bricks: read from 'k'
bricks (where n = k + m), re-encode, and see if the result matches the
data on the remaining 'm' redundancy bricks. If everything matches, it
should just set the versions to the same value.
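
Here is a minimal sketch of that check, using a toy k=2, m=1 XOR parity
code in place of the real EC encoding; data_matches, heal_versions and
the dict-based brick records are hypothetical, not the actual ec xlator
structures:

    # Toy m=1 erasure code: the parity fragment is the XOR of the
    # k data fragments.
    def xor_parity(frag_a, frag_b):
        return bytes(x ^ y for x, y in zip(frag_a, frag_b))

    def data_matches(bricks):
        """Read fragments from k bricks, re-encode, and compare with
        the redundancy brick."""
        k_frags = [b["fragment"] for b in bricks[:2]]    # read from k bricks
        expected = xor_parity(*k_frags)                  # re-encode
        return bricks[2]["fragment"] == expected         # compare with m brick

    def heal_versions(bricks):
        """Proposed heal step: if the data matches even though the
        versions do not, make the versions agree instead of
        rewriting data."""
        versions = {b["version"] for b in bricks}
        if len(versions) > 1 and data_matches(bricks):
            good = max(versions)
            for b in bricks:
                b["version"] = good
            return True   # healed by syncing versions only
        return False      # data also differs: full data heal needed

    bricks = [
        {"fragment": b"\x01\x02", "version": 1},
        {"fragment": b"\x04\x08", "version": 1},
        {"fragment": b"\x05\x0a", "version": 0},  # stale version, good data
    ]
    print(heal_versions(bricks))  # True: data agrees, versions synced to 1

The point of the sketch is that when re-encoding the k fragments
reproduces the redundancy fragments exactly, the mismatch is confined
to the version metadata, so the heal can stop at syncing the versions.
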
Any other ideas?
I have also added gluster-devel to check whether any other component
has had to deal with similar problems.
Pranith