[Gluster-users] Timestamp on replicated files and dirs

Matthew J. Salerno vagabond_king at yahoo.com
Fri May 22 18:19:34 UTC 2009

Version: glusterfs 2.0.1
type cluster/replicate
RHEL 5.2 x86_64 (2.6.18-92.el5)

My config is server-to-server file replication.  At this point, I have everything working, and it works well.  The only problem I have is with the file modify date/time stamps.

Here's the scenario:

Local storage (brick) directory on each server to be replicated: /usr/local/repl

Local mount point on each server:
-- mount -t glusterfs /usr/local/etc/glusterfs/glusterfs-client.vol /usr/local/client/

Server1 and Server2 are replicating just fine.  To simulate a failure, I shut down the service on server2, umount /usr/local/client and delete all files and dirs under /usr/local/repl.  Each server mounts /usr/local/client from the locally running server.

Once I restart the service and remount the client mount point, all of the files start to trickle in as expected.  The problem is that in the /usr/local/repl location, all of the files and dirs have the current date/time for the timestamp.  Now if I stop server1, rm -rf /usr/local/repl/, and then restart the service and re-mount the ./client dir, all of the files come back, but same thing: all timestamps are overwritten.
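For what it's worth, the effect looks as if the healed files are being recreated rather than copied along with their metadata, much the way a plain cp resets the mtime while cp -p preserves it.  A quick local illustration of that timestamp behaviour (nothing GlusterFS-specific, just GNU cp/stat on a scratch directory):

```shell
#!/bin/sh
# Show how a plain copy resets mtime while cp -p preserves it.
tmp=$(mktemp -d)

# Give the source file a well-known old mtime.
touch -d '2009-05-22 18:19:34' "$tmp/orig"

cp    "$tmp/orig" "$tmp/plain"      # mtime becomes "now"
cp -p "$tmp/orig" "$tmp/preserved"  # mtime copied from the source

# %Y = mtime as seconds since the epoch; plain differs, preserved matches.
stat -c '%Y %n' "$tmp/orig" "$tmp/plain" "$tmp/preserved"

rm -rf "$tmp"
```

If the self-heal behaved like the cp -p case, the restored files on the rebuilt server would keep their original mtimes instead of picking up the heal time.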

So, the question is...  How can I set up file replication so that timestamps get replicated as well?  I tried "option metadata-self-heal on", but that didn't seem to make a difference.

Any assistance would be greatly appreciated.


Server Config:

volume posix
  type storage/posix
  option directory /usr/local/repl
end-volume

volume locks
  type features/posix-locks
  subvolumes posix
end-volume

volume brick1
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host
  option remote-subvolume brick1
end-volume

volume replicate
  type cluster/replicate
  option metadata-self-heal on
  subvolumes brick2 brick1
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  option auth.addr.replicate.allow *
  option auth.ip.brick1.allow *
  option auth.ip.replicate.allow *
  subvolumes brick1 replicate
end-volume

Client config: 
volume brick
  type protocol/client
  option transport-type tcp/client
  option remote-host
  option remote-subvolume replicate
end-volume
