[Gluster-users] XFS, WORM and the Year-2038 Problem
David Spisla
spisla80 at gmail.com
Mon Apr 29 08:49:08 UTC 2019
Hello Gluster Community,
here is a possible explanation of why the LastAccess date changes at brick
level, or rather how XFS can ever appear to store a date such as 2070 in an
INT32 field:
It is surprising that you can set atime timestamps well beyond 2038 and
that these are also displayed by the usual system tools. After a while,
however, the values change and are mapped into the range 1902-1969. I
suspect that the initially successful setting of an atime well beyond 2038
only affects the *in-memory* representation of the timestamp, which allows
values beyond 2038. The *on-disk* representation of XFS, on the other hand,
only reaches up to 2038; values above that wrap around into the range
1902-1969, which is the negative number range of a signed int32. This is
what I have taken from this thread: https://lkml.org/lkml/2014/6/1/240
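To check the arithmetic: the atime set via FUSE, 2071-01-19 03:14:07 UTC,
is epoch second 3188862847. Reinterpreted as a signed int32 this wraps to
3188862847 - 2^32 = -1106104449, which is exactly the 1934 date seen on the
brick (a quick sketch, assuming GNU date):

# date -u -d @3188862847
Mon Jan 19 03:14:07 UTC 2071
# date -u -d @-1106104449
Thu Dec 13 20:45:51 UTC 1934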
Finally, I observed that after a reboot or a remount of the XFS filesystem,
the in-memory representation changes to the on-disk representation.
Concerning the WORM functionality, it therefore seems necessary to enable
the ctime feature; otherwise the Retention information would be lost
whenever the Retention date lies beyond 2038 and the XFS filesystem is
rebooted or remounted.
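For reference, the ctime feature is enabled per volume, e.g. for volume1
from the stat output quoted below:

# gluster volume set volume1 storage.ctime on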
Regards
David Spisla
On Mon, Apr 15, 2019 at 11:51 AM David Spisla <spisla80 at gmail.com> wrote:
> Hello Amar,
>
> On Mon, Apr 15, 2019 at 11:27 AM Amar Tumballi Suryanarayan <atumball at redhat.com> wrote:
>
>>
>>
>> On Mon, Apr 15, 2019 at 2:40 PM David Spisla <spisla80 at gmail.com> wrote:
>>
>>> Hi folks,
>>> I tried out default retention periods, e.g. setting the Retention date to
>>> 2071. When I did the WORMing, everything seemed to be OK. From FUSE and also
>>> at brick level, the retention was set to 2071 on all nodes. Additionally, I
>>> enabled the storage.ctime option, so that the timestamps are stored in the
>>> mdata xattr, too. But after a while I observed that at brick level the
>>> atime (which stores the retention) had switched to 1934:
>>>
>>> # stat /gluster/brick1/glusterbrick/data/file3.txt
>>> File: /gluster/brick1/glusterbrick/data/file3.txt
>>> Size: 5 Blocks: 16 IO Block: 4096 regular file
>>> Device: 830h/2096d Inode: 115 Links: 2
>>> Access: (0544/-r-xr--r--) Uid: ( 2000/ gluster) Gid: ( 2000/ gluster)
>>> Access: 1934-12-13 20:45:51.000000000 +0000
>>> Modify: 2019-04-10 09:50:09.000000000 +0000
>>> Change: 2019-04-10 10:13:39.703623917 +0000
>>> Birth: -
>>>
>>> From FUSE I get the correct atime:
>>> # stat /gluster/volume1/data/file3.txt
>>> File: /gluster/volume1/data/file3.txt
>>> Size: 5 Blocks: 1 IO Block: 131072 regular file
>>> Device: 2eh/46d Inode: 10812026387234582248 Links: 1
>>> Access: (0544/-r-xr--r--) Uid: ( 2000/ gluster) Gid: ( 2000/ gluster)
>>> Access: 2071-01-19 03:14:07.000000000 +0000
>>> Modify: 2019-04-10 09:50:09.000000000 +0000
>>> Change: 2019-04-10 10:13:39.705341476 +0000
>>> Birth: -
>>>
>>>
>> From FUSE you get the time the clients set, as we now store the
>> timestamp as an extended attribute, not as 'stat->st_atime'.
>>
>> This is the 'ctime' feature, which we introduced in glusterfs-5.0. It
>> helps us to support the statx() fields.
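>>
>> On the brick, the stored times can then be inspected as an extended
>> attribute; something like the following should show it (a sketch,
>> assuming the xattr name trusted.glusterfs.mdata):
>>
>> # getfattr -d -m . -e hex /gluster/brick1/glusterbrick/data/file3.txt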
>>
> So am I right in assuming that the values in the default XFS timestamps
> are not important for WORM if I use storage.ctime?
> Does it work correctly with other clients like samba-vfs-glusterfs?
>
>>
>>
>>> I found out that XFS supports only 32-bit timestamp values, so I would
>>> expect that it is not possible to set the atime to 2071. But at first it
>>> was 2071, and later it switched to 1934 due to the Year-2038 problem. I am
>>> asking myself:
>>> 1. Why is it possible to set an atime greater than 2038 on XFS at all?
>>> 2. And why did this atime switch to a time before 1970 after a while?
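>>>
>>> Independent of the WORM feature, the effect can be reproduced directly on
>>> an XFS brick (a sketch, assuming GNU touch and root permissions; the path
>>> is the file from above):
>>>
>>> # touch -a -d '2071-01-19 03:14:07 UTC' /gluster/brick1/glusterbrick/data/file3.txt
>>> # stat /gluster/brick1/glusterbrick/data/file3.txt
>>>
>>> Immediately afterwards, stat still shows 2071; the 1934 value only
>>> appears after a while.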
>>>
>>> Regards
>>> David Spisla
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>> --
>> Amar Tumballi (amarts)
>>
>