[Gluster-users] [Gluster-devel] Need a way to display and flush gluster cache ?
Raghavendra G
raghavendra at gluster.com
Wed Jul 27 11:03:59 UTC 2016
On Wed, Jul 27, 2016 at 10:29 AM, Mohammed Rafi K C <rkavunga at redhat.com>
wrote:
> Thanks for your feedback.
>
> In fact the meta xlator is loaded only on the fuse mount. Is there any
> particular reason not to use the meta-autoload xlator for the NFS server and libgfapi?
>
I think it's because of a lack of resources. I am not aware of any technical
reason for not using it on the NFSv3 server and gfapi.
> Regards
>
> Rafi KC
> On 07/26/2016 04:05 PM, Niels de Vos wrote:
>
> On Tue, Jul 26, 2016 at 12:43:56PM +0530, Kaushal M wrote:
>
> On Tue, Jul 26, 2016 at 12:28 PM, Prashanth Pai <ppai at redhat.com> wrote:
>
> +1 to option (2), which is similar to echoing into /proc/sys/vm/drop_caches.
>
> -Prashanth Pai
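For reference, the kernel interface mentioned above works by writing a mode
value into a proc file. A minimal C equivalent of echoing into
/proc/sys/vm/drop_caches (run as root; "1" drops the page cache, "2" dentries
and inodes, "3" both):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        sync(); /* write out dirty pages first so they can be dropped */
        int fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
        if (fd < 0)
            return 1;
        write(fd, "3", 1); /* "3" = drop page cache + dentries/inodes */
        close(fd);
        return 0;
    }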
>
> ----- Original Message -----
>
> From: "Mohammed Rafi K C" <rkavunga at redhat.com> <rkavunga at redhat.com>
> To: "gluster-users" <gluster-users at gluster.org> <gluster-users at gluster.org>, "Gluster Devel" <gluster-devel at gluster.org> <gluster-devel at gluster.org>
> Sent: Tuesday, 26 July, 2016 10:44:15 AM
> Subject: [Gluster-devel] Need a way to display and flush gluster cache ?
>
> Hi,
>
> The Gluster stack has its own caching mechanism, mostly on the client side.
> But there is no concrete method to see how much memory gluster is consuming
> for caching, and no way to flush that cache memory if needed.
>
> So my first question is: do we need to implement these two features
> for the gluster cache?
>
>
> If so, I would like to discuss some of our thoughts on it.
>
> (If you are not interested in implementation discussion, you can skip
> this part :)
>
> 1) Implement a virtual xattr on the root: a setxattr flushes all the
> caches, and a getxattr returns the aggregated cache size.
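To illustrate option (1), a minimal sketch of what a client could do with
such an interface. The xattr names "glusterfs.cache.size" and
"glusterfs.cache.flush" are placeholders for this sketch; no such virtual
xattrs exist yet:

    #include <stdio.h>
    #include <sys/xattr.h>

    int main(int argc, char *argv[])
    {
        const char *root = argc > 1 ? argv[1] : "/mnt/glusterfs";
        char buf[64] = {0};

        /* getxattr on the root would return the aggregated cache size
         * (placeholder xattr name, proposed interface only) */
        if (getxattr(root, "glusterfs.cache.size", buf, sizeof(buf) - 1) >= 0)
            printf("cache size: %s\n", buf);

        /* setxattr (value unused) would flush all client-side caches */
        if (setxattr(root, "glusterfs.cache.flush", "1", 1, 0) == 0)
            printf("cache flushed\n");

        return 0;
    }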
>
> 2) The gluster native client currently supports a .meta virtual directory
> that exposes metadata, analogous to /proc. We can implement a virtual file
> inside the .meta directory to read the cache size, and flush the cache with
> a special write to that file (similar to echoing into a proc file). This
> approach may be difficult to implement in other clients.
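To illustrate option (2), a sketch of how such a file might be used from a
fuse mount. The ".meta/cache" path and the write-to-flush convention are
assumptions for this sketch; the .meta directory exists today, but this file
does not:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[256];
        ssize_t n;
        int fd;

        /* reading the virtual file would print the aggregated cache
         * size (".meta/cache" is a placeholder path) */
        fd = open("/mnt/glusterfs/.meta/cache", O_RDONLY);
        if (fd >= 0) {
            n = read(fd, buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                fputs(buf, stdout);
            }
            close(fd);
        }

        /* a special write would flush the cache, like echoing into
         * a proc file */
        fd = open("/mnt/glusterfs/.meta/cache", O_WRONLY);
        if (fd >= 0) {
            write(fd, "1\n", 2);
            close(fd);
        }
        return 0;
    }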
>
> +1 for making use of the meta-xlator. We should be making more use of it.
>
> Indeed, this would be nice. Maybe it could also expose memory
> allocations, like /proc/slabinfo.
>
> The io-stats xlator can dump some statistics to
> /var/log/glusterfs/samples/ and /var/lib/glusterd/stats/. That seems to be
> acceptable too, and allows getting statistics from server-side processes
> without involving any clients.
>
> HTH,
> Niels
>
>
>
> 3) A CLI command to display and flush the data, with IP and port as
> arguments. GlusterD would need to send the op to the client from its
> connected-client list. But this approach would be difficult to implement
> for libgfapi-based clients. For me, it doesn't seem to be a good option.
>
> Your suggestions and comments are most welcome.
>
> Thanks to Talur and Poornima for their suggestions.
>
> Regards
>
> Rafi KC
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
--
Raghavendra G