[Gluster-devel] Update md-cache after triggering FOP via syncop framework?

David Spisla spisla80 at gmail.com
Tue Jun 5 09:04:00 UTC 2018

Hello Niels,

thank you. Now I understand this better.
I am triggering the FOPs via syncop directly from the WORM xlator, which
is unfortunately below the upcall xlator.
Since I don't have a separate xlator, I am looking for a solution that
works inside the WORM xlator itself.
For example, the autocommit function of the WORM xlator uses the syncop
framework to change the atime of a file. I don't know whether there is a
difference between FOPs triggered via syncop and FOPs sent by clients
from the outside. My guess is that there is no difference, but I am not
sure.


2018-06-05 9:51 GMT+02:00 Niels de Vos <ndevos at redhat.com>:

> On Mon, Jun 04, 2018 at 03:23:05PM +0200, David Spisla wrote:
> > Dear Gluster-Devels,
> >
> > I'm currently using the syncop framework to trigger certain file
> > operations within the server translator stack. These operations also
> > change file attributes such as permissions and timestamps (atime,
> > mtime). I noticed that md-cache does not pick up the changed
> > attributes, or only does so when the upcall xlator is activated,
> > e.g. by a READDIR (executing "$ stat *").
> > However, I would find it cleaner if md-cache were updated right
> > after a file operation is triggered by the syncop framework. Is
> > there a way to do this programmatically within the server translator
> > stack?
> Hi David,
>
> If you place your xlator above upcall, upcall should inform the clients
> about the changed attributes. In case it is below upcall, the internal
> FOPs cannot be tracked by upcall.
>
> Upcall tracks all clients that have shown interest in a particular
> inode. If that inode is modified, the callback on the brick stack will
> trigger a cache-invalidation on the client. I do not think there should
> be a difference between FOPs from other clients and locally created
> ones through the syncop framework.
>
> In case this does not help or work, please provide a few more details
> (.vol file?).
>
> HTH,
> Niels