[Gluster-users] GlusterFS behaviour on stat syscall with relatime activated

Simon Turcotte-Langevin simon.turcotte-langevin at ubisoft.com
Mon Feb 8 16:27:57 UTC 2016


Good day to you Pranith,

Once again, thank you for your time. Our use case does include a lot of small files, and read performance must not suffer from a RELATIME-based solution. Even though this option could fix the RELATIME behaviour on GlusterFS, the performance impact may be too great for us. We will therefore test the solution, but we will also consider alternative ways of detecting usage of the files we serve.

Simon

From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
Sent: 8 February 2016 00:26
To: Simon Turcotte-Langevin <simon.turcotte-langevin at ubisoft.com>; gluster-users at gluster.org
Cc: UPS_Development <UPS_Development at ubisoft.com>
Subject: Re: [Gluster-users] GlusterFS behaviour on stat syscall with relatime activated


On 02/06/2016 12:19 AM, Simon Turcotte-Langevin wrote:
Good day to you Pranith,

Thank you for your answer; that was exactly it. However, we still have an issue with RELATIME on GlusterFS.

With quick-read disabled, stat-ing the file no longer modifies atime; however, cat-ing the file does not replicate the atime update.
This is because of the open-behind feature. Disable open-behind with "gluster volume set <volname> open-behind off"; I believe you will then see the atime behavior you want. This will reduce the performance of small-file reads (< 64KB): instead of one lookup over the network, it will now do lookup + open (sent to both replica bricks, which updates atime) + read (served by only one of the bricks). Let me know if you want any more information.
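
Concretely, something like this (a sketch; "gv0" and "file1" are placeholders taken from your summary, not necessarily your volume name):

  # Disable open-behind so that a read sends a real open() to all replica bricks:
  gluster volume set gv0 open-behind off

  # Re-run the read that previously left the replicas with a stale atime:
  cat /mnt/gv0/file1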

Pranith



If I touch the file manually, the atime (set via utimes) is replicated correctly.

So to sum it up (the checks are sketched right after this list):


- [node1] touch -a file1
  --> Access time is right on [node1], [node2], and [node3]
- [node1] cat file1
  --> Access time is right on [node1]
  --> Access time is wrong on [node2] and [node3]
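
For reference, the checks above are done roughly like this (a sketch; /data/brick1/gv0 is an assumed brick path, adjust to the real one):

  # On node1, read the file through the Gluster mount:
  cat /mnt/gv0/file1

  # On each of node1, node2, and node3, compare the atime stored on the brick itself:
  stat -c '%n atime=%x' /data/brick1/gv0/file1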

Would you have any idea what is going on behind the curtain, and whether there is any way to fix that behavior?

Thank you,
Simon

From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
Sent: 5 February 2016 00:55
To: Simon Turcotte-Langevin <simon.turcotte-langevin at ubisoft.com>; gluster-users at gluster.org
Cc: UPS_Development <UPS_Development at ubisoft.com>
Subject: Re: [Gluster-users] GlusterFS behaviour on stat syscall with relatime activated


On 02/03/2016 10:12 PM, Simon Turcotte-Langevin wrote:
Hi, we have multiple GlusterFS clusters which are mostly alike. The typical setup is as follows:


- Cluster of 3 nodes
- Replication factor of 3
- Each node has 1 brick, mounted on XFS with RELATIME and NODIRATIME (an example mount entry follows this list)
- Each node has 8 disks in hardware RAID 0
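
For reference, the brick mount looks roughly like this in /etc/fstab (a sketch; the device and mount point are placeholders, not our actual paths):

  /dev/sdb1  /data/brick1  xfs  defaults,relatime,nodiratime  0 0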

The main problem we are facing is that merely observing the access time of a file on the volume (for example with stat) updates that access time.

The steps to reproduce the problem are (collected into a runnable form after the list):


- Create a file (echo 'some data' > /mnt/gv0/file)
- Touch its mtime and atime to some past date (touch -d 19700101 /mnt/gv0/file)
- Touch its mtime to the current timestamp (touch -m /mnt/gv0/file)
- Stat the file until atime is updated (stat /mnt/gv0/file)
  - Sometimes it is instant; sometimes it requires executing the above command a couple of times
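
Collected into a runnable form (same commands as above; /mnt/gv0 is the FUSE mount point of the volume):

  echo 'some data' > /mnt/gv0/file     # create the file
  touch -d 19700101 /mnt/gv0/file      # push atime and mtime into the past
  touch -m /mnt/gv0/file               # bump only mtime to "now"
  stat /mnt/gv0/file                   # repeat; the reported atime eventually jumps to "now" as well
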
atime changes on the open() call.

The quick-read xlator opens the file and reads its content on 'lookup', which gets triggered by stat. It does that to serve reads from memory and reduce the number of network round trips for small files. Could you disable that xlator and try the experiment? On my machine the atime didn't change after I disabled that feature using:

"gluster volume set <volname> quick-read off"

Pranith





On the IRC channel, I spoke to a developer (nickname ndevos) who said that it might be a getxattr() syscall being issued when stat() is called on a replicated volume.



Can anybody reproduce this issue? Is it a bug, or is it working as intended? Is there any workaround?



Thank you,

Simon






