[Gluster-users] timestamps getting updated during self-heal after primary brick rebuild

Todd Stansell todd at stansell.org
Thu Feb 28 23:23:34 UTC 2013


We're looking at using glusterfs to provide a shared filesystem between two
nodes, using just local disk.  They are both gluster servers as well as
clients.  This is on CentOS 5.9 64-bit.  The bricks are simply ext3
filesystems on top of LVM:

    /dev/mapper/VolGroup00-LogVol0 on /gfs0 type ext3 (rw,user_xattr)
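
with an fstab entry along these lines (the exact mount options beyond
user_xattr may differ from what's shown here):

    /dev/VolGroup00/LogVol0  /gfs0  ext3  defaults,user_xattr  1 2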

We set up a test volume with:

    host14# gluster volume create gv0 replica 2 transport tcp host14:/gfs0 host13:/gfs0
    host14# gluster volume set gv0 nfs.disable on
    host14# gluster volume start gv0
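
Each node also mounts the volume locally as a client, roughly like this (the
/data mount point shows up in the paths later in this mail; mounting from the
local hostname is just my convention):

    host14# mount -t glusterfs host14:/gv0 /data
    host13# mount -t glusterfs host13:/gv0 /data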

This works just fine.  The issue comes when we simulate a hardware failure
where we need to rebuild an entire node.  In that case, we kickstart the
server, which creates all fresh new filesystems.  We have a kickstart
postinstall script that sets the glusterd UUID of the server so that it never
changes.  It then probes the other server, looks for existing volumes, sets up
fstab entries for them (so the node also acts as a client), and installs an
init script that forces a full heal every time the server boots, just to
ensure all data gets replicated to both nodes.  All of this works great when
I'm rebuilding the second brick (host13:/gfs0).
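
A rough sketch of what that postinstall script does, in case it matters (the
UUID, peer name, and mount point here are placeholders, not the exact
script):

    #!/bin/sh
    PERM_UUID=00000000-0000-0000-0000-000000000000  # this node's permanent UUID
    PEER=host13
    VOL=gv0

    # pin the glusterd UUID so the rebuilt node keeps its old identity
    mkdir -p /var/lib/glusterd
    echo "UUID=$PERM_UUID" > /var/lib/glusterd/glusterd.info
    service glusterd start

    # rejoin the surviving peer and pick up the existing volume definitions
    gluster peer probe $PEER

    # mount the volume locally so this node is also a client
    echo "$PEER:/$VOL /data glusterfs defaults,_netdev 0 0" >> /etc/fstab
    mount /data

    # (run from the boot-time init script) force a full self-heal
    gluster volume heal $VOL full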

The issue I have is when we rebuild the server that hosts the primary brick
(host14:/gfs0).  It comes online and starts copying data from host13:/gfs0,
but as it does so, it sets the timestamps of the files on host13:/gfs0 to the
time it healed the data onto host14:/gfs0.  As a result, every file in the
volume ends up with a timestamp of when the first brick was healed.

I enabled client debug logs and the following indicates that it *thinks* it is
doing the right thing:

    after rebuilding gv0-client-1:
    [2013-02-28 00:01:37.264018] D [afr-self-heal-metadata.c:329:afr_sh_metadata_sync] 0-gv0-replicate-0: self-healing metadata of /data/bin/sync-data from gv0-client-0 to gv0-client-1

    after rebuilding gv0-client-0:
    [2013-02-28 00:17:03.578377] D [afr-self-heal-metadata.c:329:afr_sh_metadata_sync] 0-gv0-replicate-0: self-healing metadata of /data/bin/sync-data from gv0-client-1 to gv0-client-0

Unfortunately, in the second case, the file's timestamp changed from:

    -r-xr-xr-x 1 root root 2717 Feb 27 23:32 /data/data/bin/sync-data*

to:

    -r-xr-xr-x 1 root root 2717 Feb 28 00:17 /data/data/bin/sync-data*
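
An easy way to watch this happen is to stat the file on each brick directly,
bypassing the client mount:

    host13# stat -c '%y %n' /gfs0/data/bin/sync-data
    host14# stat -c '%y %n' /gfs0/data/bin/sync-data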

And remember, nothing is accessing any data in this volume, so there's no
"client" access going on anywhere.  No changes are happening on the
filesystem, other than self-heal screwing things up.

The only thing I could find in any logs that would indicate a problem was this
in the brick log:

    [2013-02-28 00:17:03.583063] D [posix.c:323:posix_do_utimes] 0-gv0-posix: /gfs0/data/bin/sync-data (Function not implemented)

I've also now built a CentOS 6 host and verified that the same behavior
happens there, though I get a slightly different brick debug log message
(which makes me think these messages have nothing to do with what I'm
seeing):

    [2013-02-28 23:07:41.879440] D [posix.c:262:posix_do_chmod] 0-gv0-posix: /gfs0/data/bin/sync-data (Function not implemented)
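
In case it helps anyone debugging this, the afr changelog xattrs on the brick
copies can be dumped directly (as root on either brick):

    host13# getfattr -d -m . -e hex /gfs0/data/bin/sync-data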

Here's some basic info that might help folks figure out what's going on:

# rpm -qa | grep gluster
glusterfs-server-3.3.1-1.el5
glusterfs-3.3.1-1.el5
glusterfs-fuse-3.3.1-1.el5

# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: 7cec2ba3-f69c-409a-a259-0d055792b11a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: host14:/gfs0
Brick2: host13:/gfs0
Options Reconfigured:
diagnostics.brick-log-level: DEBUG
diagnostics.client-log-level: DEBUG
nfs.disable: on

Todd


