[Gluster-users] Gluster Volume Rebalance inode modify change times

Matthew Benstead matthewb at uvic.ca
Wed Apr 1 21:50:19 UTC 2020


Hello,

I have a question about volume rebalancing and modify/change timestamps. 
We're running Gluster 5.11 on CentOS 7.

We recently added an eighth node to our seven-node distribute cluster. We
ran the necessary fix-layout and rebalance commands after adding the new
brick, and the storage usage balanced out as expected.
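
For reference, the sequence was roughly the standard one after an
add-brick (sketched from memory here, not pasted from shell history, so
treat the exact invocations as approximate):

# Sketch of the usual add-brick/rebalance sequence
gluster volume add-brick storage 10.0.231.57:/mnt/raid6-storage/storage

# Fix the directory layout so new files can hash onto the new brick
gluster volume rebalance storage fix-layout start

# Then migrate existing data onto the new brick and watch progress
gluster volume rebalance storage start
gluster volume rebalance storage status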

However, we then saw some unexpected behavior from our backup clients. We
use Tivoli Storage Manager (TSM) to back up this volume to tape, and we
back up from the volume mountpoint.

We saw a large number of files and directories (roughly the number that
were moved in the rebalance) get backed up again even though the files
themselves had not changed. This pushed our backup footprint up by nearly
40 TB.

When investigating some of the files we saw that the Modify time hadn't
changed, but the Change time (ctime) had. For directories, both the Modify
and Change times were updated. This caused the backup client to think the
files had changed. See below:

[root@gluster01 ~]# stat /storage/data/projects/comp_support/rat/data/basemaps/bc_16_0_0_0.png
  File: ‘/storage/data/projects/comp_support/rat/data/basemaps/bc_16_0_0_0.png’
   Size: 2587          Blocks: 6          IO Block: 131072 regular file
Device: 29h/41d    Inode: 13389859243885309381  Links: 1
Access: (0664/-rw-rw-r--)  Uid: (69618/bveerman)   Gid: ( 50/     ftp)
Context: system_u:object_r:fusefs_t:s0
Access: 2020-03-30 10:32:32.326169725 -0700
Modify: 2014-11-24 17:16:57.000000000 -0800
Change: 2020-03-13 21:52:41.158610077 -0700
  Birth: -

[root@gluster01 ~]# stat /storage/data/projects/comp_support/rat/data/basemaps
  File: ‘/storage/data/projects/comp_support/rat/data/basemaps’
   Size: 4096          Blocks: 8          IO Block: 131072 directory
Device: 29h/41d    Inode: 13774747307766344103  Links: 2
Access: (2775/drwxrwsr-x)  Uid: (69618/bveerman)   Gid: ( 50/     ftp)
Context: system_u:object_r:fusefs_t:s0
Access: 2020-04-01 03:20:58.644695834 -0700
Modify: 2020-03-14 00:51:31.120718996 -0700
Change: 2020-03-14 00:51:31.384725500 -0700
  Birth: -
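
To get a rough idea of how many files the backup client will now treat as
changed, one quick check is to look for files whose ctime moved after the
rebalance started while their mtime stayed older (a sketch using GNU find;
2020-03-13 is only the approximate start date, and a full crawl of a
~200 TB volume takes a while):

# Sketch: regular files whose ctime is newer than ~2020-03-13 but whose
# mtime is older -- i.e. candidates for an unnecessary re-backup
find /storage -type f -newerct '2020-03-13' ! -newermt '2020-03-13' | wc -l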


If we look at the files in TSM we find that they were backed up again
because the "Inode changed" time moved, even though the inode number TSM
records is the same. Is this expected behavior for a rebalance, or is
there something else going on here?


            Size     Backup Date            Mgmt Class    A/I  File
            ----     -----------            ----------    ---  ----
          2,587  B   2020-03-16 12:14:11    DEFAULT        A   /storage/data/projects/comp_support/rat/data/basemaps/bc_16_0_0_0.png
            Modified: 2014-11-24 17:16:57  Accessed: 2020-03-13 21:52:41  Inode changed: 2020-03-13 21:52:41
            Compression Type: None  Encryption Type: None  Client-deduplicated: NO  Migrated: NO  Inode#: 809741765
            Media Class: Library  Volume ID: 0375  Restore Order: 00000000-00003684-00000000-0046F92D

          2,587  B   2019-10-18 17:01:22    DEFAULT        I   /storage/data/projects/comp_support/rat/data/basemaps/bc_16_0_0_0.png
            Modified: 2014-11-24 17:16:57  Accessed: 2019-08-08 00:22:50  Inode changed: 2019-08-07 10:55:21
            Compression Type: None  Encryption Type: None  Client-deduplicated: NO  Migrated: NO  Inode#: 809741765
            Media Class: Library  Volume ID: 33040  Restore Order: 00000000-0000D9EB-00000000-000890E3
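
If it helps with debugging, one way to sanity-check an individual file is
to look at it directly on the bricks rather than through the FUSE mount: a
file that the rebalance migrated shows a fresh ctime on whichever brick now
holds it, and the trusted.* xattrs (gfid, and trusted.glusterfs.dht.linkto
on zero-length DHT stub files) indicate whether a given copy is the real
one. A rough sketch, run as root on each server (brick path from the
volume info below; getfattr comes from the attr package):

# Brick root plus the file's path relative to the volume root
BRICK=/mnt/raid6-storage/storage
F=data/projects/comp_support/rat/data/basemaps/bc_16_0_0_0.png

stat "$BRICK/$F" 2>/dev/null                      # compare ctime with the FUSE view
getfattr -d -m . -e hex "$BRICK/$F" 2>/dev/null   # inspect trusted.* xattrs (gfid, dht linkto)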



Volume details:


[root@gluster01 ~]# df -h /storage/
Filesystem            Size  Used Avail Use% Mounted on
10.0.231.50:/storage  291T  210T   82T  72% /storage

[root@gluster01 ~]# gluster --version
glusterfs 5.11
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

[root@gluster01 ~]# cat /proc/mounts | egrep "/storage|raid6-storage"
/dev/sda1 /mnt/raid6-storage xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
10.0.231.50:/storage /storage fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0

[root@gluster01 ~]# cat /etc/fstab | egrep "/storage|raid6-storage"
UUID=104f089e-6171-4750-a592-d41759c67f0c    /mnt/raid6-storage    xfs    defaults    0 0
10.0.231.50:/storage    /storage    glusterfs    defaults,log-level=WARNING,backupvolfile-server=10.0.231.51    0 0

[root@gluster01 ~]# gluster volume info storage

Volume Name: storage
Type: Distribute
Volume ID: 6f95525a-94d7-4174-bac4-e1a18fe010a2
Status: Started
Snapshot Count: 0
Number of Bricks: 8
Transport-type: tcp
Bricks:
Brick1: 10.0.231.50:/mnt/raid6-storage/storage
Brick2: 10.0.231.51:/mnt/raid6-storage/storage
Brick3: 10.0.231.52:/mnt/raid6-storage/storage
Brick4: 10.0.231.53:/mnt/raid6-storage/storage
Brick5: 10.0.231.54:/mnt/raid6-storage/storage
Brick6: 10.0.231.55:/mnt/raid6-storage/storage
Brick7: 10.0.231.56:/mnt/raid6-storage/storage
Brick8: 10.0.231.57:/mnt/raid6-storage/storage
Options Reconfigured:
features.read-only: off
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
nfs.disable: on
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
transport.address-family: inet
features.quota-deem-statfs: on
changelog.changelog: on
diagnostics.client-log-level: INFO

Thanks,
  -Matthew

