[Gluster-users] geo-replications invalid names when using rsyncd

Brian Ericson bericson at ptc.com
Fri Oct 16 16:16:44 UTC 2015


I don't see any errors.  But...

As a work-around, I tried the (equivalent of the) following:
	mkdir /mnt/.tmp
	chown nobody:nobody /mnt/.tmp # rsync uses "nobody"
	ln -s ../.tmp /mnt/master-volume
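
The resulting layout on the master (with /mnt/master-volume being the
mounted volume) is roughly:

	/mnt/.tmp                           # plain directory, outside the volume
	/mnt/master-volume                  # the mounted gluster volume
	/mnt/master-volume/.tmp -> ../.tmp  # symlink stored *inside* the volume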

I then changed /etc/rsyncd.conf's [master-volume] entry to:
	path = /mnt/./master-volume

My rsync command then became:
	rsync --temp-dir /.tmp file rsync://master/master-volume
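
For completeness, the full [master-volume] stanza now reads something
like this (a sketch -- "use chroot" defaults to yes in rsyncd, which is
what makes the /./ split take effect):

	[master-volume]
	    path = /mnt/./master-volume  # chroot to /mnt; clients land in /master-volume
	    read only = false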

Thus, the chroot for [master-volume] starts at /mnt, but clients
still enter at /mnt/master-volume.  The symbolic link lets the client
pass /.tmp as a temp dir: its "/", or entry point, is still
/mnt/master-volume, so its /.tmp is actually the symbolic link
/mnt/master-volume/.tmp.

Critically, while the volume sees the .tmp symbolic link, it does
*not* see /mnt/.tmp.  Consequently, the volume only sees the file
*after* rsync finishes copying it to the master and moves it into
/mnt/master-volume.
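
To convince yourself the temp file really is invisible to the volume,
you can watch both directories on the master while a transfer is in
flight (a sketch; ".file.6chars" stands for rsync's random temp name):

	ls -a /mnt/.tmp           # shows .file.6chars mid-transfer
	ls -a /mnt/master-volume  # shows only .tmp until the copy
	                          # completes, then file appears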

This works.  Sorta...

My geo-replication network is, really, as follows:
	master----slave----sub0
	              \----sub1

(master replicates to slave, slave replicates to sub0 & sub1)

This trick does replicate the .tmp symbolic link, which is broken
everywhere but on the master. The slave is fine with this, but
both sub0 and sub1 immediately balk with "Too many levels of
symbolic links", putting slave->sub0 and slave->sub1 replication
into a "Faulty" state.  I've temporarily deleted the .tmp symbolic
link on slave, sub0, and sub1 and that fixed the (immediate)
problem.
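
(The Faulty state shows up in geo-replication status -- a sketch,
assuming the intermediate volume is named slave-volume and the subs'
volumes sub0-volume and sub1-volume:

	gluster volume geo-replication slave-volume sub0::sub0-volume status
	gluster volume geo-replication slave-volume sub1::sub1-volume status

with the matching errors in
/var/log/glusterfs/geo-replication/<MASTERVOL>/*.log, as Aravinda
suggests below.)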

With all of this -- as it currently stands! -- rsync now works: the
file is properly geo-replicated everywhere.

I'm unsure of the legitimacy of this workaround -- in particular, will
GlusterFS eventually catch up with the deletion of the .tmp symbolic
link and put me right back into the Faulty state?

On 10/16/2015 01:35 AM, Aravinda wrote:
> Do you see any errors in Master logs?
> (/var/log/glusterfs/geo-replication/<MASTERVOL>/*.log)
>
> regards
> Aravinda
>
> On 10/15/2015 07:51 PM, Brian Ericson wrote:
>> Thanks!
>>
>> As near as I can tell, GlusterFS thinks it's done -- I finally
>> ended up renaming the files myself after waiting a couple of days.
>>
>> If I take an idle master/slave (no pending writes) and do an rsync to
>> copy a file to the master volume, I can see that the file is otherwise
>> correct (sha1sum of the file on master matches sha1sum of .file.6chars
>> on slave) and that the "last synced" time is bumped.  But, for as long
>> as I've been willing to wait, I've yet to see the .file.6chars moved
>> to file.
>>
>> I'm using
>> # rpm -qa gluster*
>> glusterfs-fuse-3.7.5-1.el7.x86_64
>> glusterfs-3.7.5-1.el7.x86_64
>> glusterfs-cli-3.7.5-1.el7.x86_64
>> glusterfs-libs-3.7.5-1.el7.x86_64
>> glusterfs-api-3.7.5-1.el7.x86_64
>> glusterfs-geo-replication-3.7.5-1.el7.x86_64
>> glusterfs-server-3.7.5-1.el7.x86_64
>> glusterfs-client-xlators-3.7.5-1.el7.x86_64
>>
>> On 10/15/2015 06:35 AM, Aravinda wrote:
>>> The slave will be eventually consistent.  If rsync created temp files
>>> in the Master Volume and renamed them, that gets recorded in the
>>> Changelogs (Journal).  The exact same steps will be replayed on the
>>> Slave Volume.  If there are no errors, Geo-rep should unlink the temp
>>> files on the Slave and retain the actual files.
>>>
>>> Let us know if the issue persists even after some time.  Also let us
>>> know the Gluster version you are using.
>>>
>>> regards
>>> Aravinda
>>> http://aravindavk.in
>>>
>>> On 10/15/2015 05:20 AM, Brian Ericson wrote:
>>>> Admittedly an odd case, but...
>>>>
>>>> o I have a simple geo-replication setup:  master -> slave.
>>>> o I've mounted the master's volume on the master host.
>>>> o I've also set up an rsyncd server on the master:
>>>>   [master-volume]
>>>>          path = /mnt/master-volume
>>>>          read only = false
>>>> o I now rsync from a client to the master using the rsync protocol:
>>>>   rsync file rsync://master/master-volume
>>>>
>>>> What I see is "file" when looking at the master volume, but that's not
>>>> what I see in the slave volume.  This is what is replicated to the slave:
>>>>
>>>>   .file.6chars
>>>>
>>>> where "6chars" is some random letters & numbers.
>>>>
>>>> I'm pretty sure the .file.6chars version comes from my client's rsync
>>>> and is the name rsync gives the file during transport, after which it
>>>> renames it to file.  Is this rename at such a low level that GlusterFS's
>>>> geo-replication doesn't catch it and doesn't see that it should be
>>>> doing a rename?

