[Gluster-users] Timestamp on replicated files and dirs

Matthew J. Salerno vagabond_king at yahoo.com
Thu May 28 15:37:56 UTC 2009


I'm still unable to find a resolution.  Has anyone else come across this?

----- Original Message ----
From: Matthew J. Salerno <vagabond_king at yahoo.com>
To: gluster-users at gluster.org
Sent: Friday, May 22, 2009 2:19:34 PM
Subject: Timestamp on replicated files and dirs

Version: glusterfs 2.0.1
type cluster/replicate
RHEL 5.2 x86_64 (2.6.18-92.el5)

My config is server-to-server file replication.  At this point I have everything working, and it works well.  The only problem I have is with the file modify date/time stamps.

Here's the scenario:

Local directory (backend store) on each server, holding the data to be replicated:
/usr/local/repl

Local mount point on each server:
/usr/local/client
-- mount -t glusterfs /usr/local/etc/glusterfs/glusterfs-client.vol /usr/local/client/
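
(The same mount can be made persistent across reboots with an fstab entry; a minimal sketch, assuming the standard mount.glusterfs helper is installed:)

/usr/local/etc/glusterfs/glusterfs-client.vol  /usr/local/client  glusterfs  defaults  0 0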


Server1 and server2 replicate just fine (each server mounts /usr/local/client from its own locally running glusterfsd).  To simulate a failure, I shut down the service on server2, umount /usr/local/client, and delete all files and dirs under /usr/local/repl.
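
In commands, the failure simulation looks roughly like this (the init script name is an assumption; substitute however glusterfsd is started here):

# on server2 -- service name assumed, paths as described above
/etc/init.d/glusterfsd stop
umount /usr/local/client
rm -rf /usr/local/repl/*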

Once I restart the service and remount the client mount point, all of the files start to trickle back in as expected.  The problem is that in the /usr/local/repl location, all of the files and dirs now carry the current date/time as their timestamp.  If I instead stop server1 and rm -rf /usr/local/repl/, then restart the service and re-mount the ./client dir, all of the files come back, but the same thing happens: all of the timestamps are overwritten.

So, the question is...  How can I set up file replication so that timestamps get replicated as well?  I tried "option metadata-self-heal on", but that didn't seem to make a difference.
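
For anyone reproducing this: a recursive crawl of the client mount is what forces the self-heal, and stat on the backend shows the clobbered mtimes (a sketch; GNU stat assumed):

# walk the client mount so self-heal touches every file
ls -lR /usr/local/client > /dev/null
# inspect the backend copies -- every mtime shows the heal time, not the original
find /usr/local/repl -exec stat -c '%y  %n' {} \;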

Any assistance would be greatly appreciated.

Thanks

Server Config:

volume posix
  type storage/posix
  option directory /usr/local/repl
end-volume

volume locks
  type features/posix-locks
  subvolumes posix
end-volume

volume brick1
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.225.63.103
  option remote-subvolume brick1
end-volume

volume replicate
  type cluster/replicate
  option metadata-self-heal on
  subvolumes brick2 brick1
end-volume

volume server
  type protocol/server
  option transport-type tcp
  # allow-all auth for the two exported subvolumes
  option auth.addr.brick1.allow *
  option auth.addr.replicate.allow *
  subvolumes brick1 replicate
end-volume
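
(Server2's volfile is the same apart from the peer address in its brick2 volume; a sketch, assuming this config is server1's and server1 is 10.225.63.99:)

volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.225.63.99   # server2 points back at server1 (assumed address)
  option remote-subvolume brick1
end-volume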


Client config:

volume brick
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.225.63.99
  option remote-subvolume replicate
end-volume
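
To see the timestamp problem in isolation, write a file with a known mtime through the client mount and compare what each backend records (the test file name is hypothetical; touch and stat are standard coreutils):

# give the file an old, known mtime (Jan 1 2009 12:00)
touch -t 200901011200 /usr/local/client/tstest
# run on both servers; correct replication would preserve the old mtime
stat -c '%y  %n' /usr/local/repl/tstest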


