[Gluster-users] geo-replication command rsync returned with 3
Kotresh Hiremath Ravishankar
khiremat at redhat.com
Tue Feb 6 09:44:43 UTC 2018
As a quick workaround to get geo-replication working, please set the following config option:

gluster vol geo-replication <mastervol> <slavehost>::<slavevol> config
With the above option set, geo-replication will not do the lazy umount; as a result, all the master and slave volume mounts maintained by geo-replication remain accessible to other processes and are visible in df output.

There may be cases where the mount points do not get cleaned up when a worker goes faulty and comes back. These need manual cleaning.
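As a sketch of that manual cleanup, the snippet below filters /proc/mounts-style lines for gluster FUSE mounts under a gsyncd-style auxiliary mount path. The fstype string and the "gsyncd" path component are assumptions about how the auxiliary mounts appear on a given system; check your own mount table before unmounting anything.

```python
def find_georep_aux_mounts(mounts_text):
    """Return mount points that look like geo-replication auxiliary
    gluster mounts. Hypothetical filter: FUSE glusterfs fstype plus a
    'gsyncd' component in the mount path; adjust for your setup."""
    candidates = []
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) < 3:
            continue
        mountpoint, fstype = fields[1], fields[2]
        if fstype.startswith("fuse.glusterfs") and "gsyncd" in mountpoint:
            candidates.append(mountpoint)
    return candidates


if __name__ == "__main__":
    with open("/proc/mounts") as f:
        for mp in find_georep_aux_mounts(f.read()):
            # Review each entry, then unmount by hand, e.g.:
            #   umount <mountpoint>
            print(mp)
```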
On Tue, Feb 6, 2018 at 12:37 AM, Florian Weimer <fweimer at redhat.com> wrote:
> On 02/05/2018 01:33 PM, Florian Weimer wrote:
>> Do you have strace output going further back, at least to the preceding
>> getcwd call? It would be interesting to see which path the kernel reports,
>> and if it starts with "(unreachable)".
> I got the strace output now, but it is very difficult to read (chdir in a
> multi-threaded process …).
> My current inclination is to blame rsync, because it does an unconditional
> getcwd during startup, which now fails if the current directory is
> unreachable.
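The getcwd failure described above can be reproduced without gluster at all: any process whose working directory disappears underneath it (here simulated by removing a temporary directory, rather than by a lazy umount) sees getcwd fail. A minimal sketch:

```python
import os
import tempfile


def getcwd_after_removal():
    """Simulate rsync's unconditional getcwd() at startup after the
    current working directory has vanished. Uses rmdir as a stand-in
    for the lazy-umount case described in the thread."""
    old = os.getcwd()
    d = tempfile.mkdtemp()
    os.chdir(d)
    os.rmdir(d)  # the cwd no longer exists
    try:
        os.getcwd()  # this is the call rsync makes unconditionally
        result = "ok"
    except OSError:  # FileNotFoundError (ENOENT) on Linux
        result = "getcwd failed"
    finally:
        os.chdir(old)
    return result
```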
> Further references:
> Andreas Schwab agrees that rsync is buggy:
Thanks and Regards,
Kotresh H R