[Gluster-devel] [Gluster-users] Need a way to display and flush gluster cache ?
Niels de Vos
ndevos at redhat.com
Thu Jul 28 14:26:24 UTC 2016
On Thu, Jul 28, 2016 at 05:58:15PM +0530, Mohammed Rafi K C wrote:
>
>
> On 07/27/2016 04:33 PM, Raghavendra G wrote:
> >
> >
> > On Wed, Jul 27, 2016 at 10:29 AM, Mohammed Rafi K C
> > <rkavunga at redhat.com> wrote:
> >
> > Thanks for your feedback.
> >
> > In fact the meta xlator is loaded only on the fuse mount. Is there any
> > particular reason not to use the meta-autoload xlator for the NFS server
> > and libgfapi?
> >
> >
> > I think it's because of a lack of resources. I am not aware of any
> > technical reason for not using it on the NFSv3 server and gfapi.
>
> Cool. I will try to see how we can implement the meta-autoload feature for
> nfs-server and libgfapi. Once we have the feature in place, I will
> implement the cache memory display/flush feature using meta xlators.
In case you plan to have this ready in a month (before the end of
August), you should propose it as a 3.9 feature. Click the "Edit this
page on GitHub" link on the bottom of
https://www.gluster.org/community/roadmap/3.9/ :)
Thanks,
Niels
>
> Thanks for your valuable feedback.
> Rafi KC
>
> >
> >
> > Regards
> >
> > Rafi KC
> >
> > On 07/26/2016 04:05 PM, Niels de Vos wrote:
> >> On Tue, Jul 26, 2016 at 12:43:56PM +0530, Kaushal M wrote:
> >>> On Tue, Jul 26, 2016 at 12:28 PM, Prashanth Pai <ppai at redhat.com> wrote:
> >>>> +1 to option (2), which is similar to echoing into /proc/sys/vm/drop_caches.
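For reference, the kernel page-cache interface that this analogy points at works like this (a real, existing interface; requires root on a Linux host):

```shell
# Writing a mode number to the drop_caches virtual file discards clean
# caches: 1 = page cache, 2 = dentries and inodes, 3 = both.
sync                                   # write dirty pages back first
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches  # then drop the clean caches
fi
```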
> >>>>
> >>>> -Prashanth Pai
> >>>>
> >>>> ----- Original Message -----
> >>>>> From: "Mohammed Rafi K C" <rkavunga at redhat.com>
> >>>>> To: "gluster-users" <gluster-users at gluster.org>, "Gluster Devel" <gluster-devel at gluster.org>
> >>>>> Sent: Tuesday, 26 July, 2016 10:44:15 AM
> >>>>> Subject: [Gluster-devel] Need a way to display and flush gluster cache ?
> >>>>>
> >>>>> Hi,
> >>>>>
> >>>>> The Gluster stack has its own caching mechanism, mostly on the client
> >>>>> side. But there is no concrete method to see how much memory gluster is
> >>>>> consuming for caching, and no way to flush the cache memory if needed.
> >>>>>
> >>>>> So my first question is: do we need to implement these two features
> >>>>> for the gluster cache?
> >>>>>
> >>>>>
> >>>>> If so I would like to discuss some of our thoughts towards it.
> >>>>>
> >>>>> (If you are not interested in the implementation discussion, you can
> >>>>> skip this part. :)
> >>>>>
> >>>>> 1) Implement a virtual xattr on the root: on setxattr, flush all
> >>>>> the caches, and on getxattr, print the aggregated cache size.
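A rough sketch of what option (1) could look like from a client, assuming a hypothetical mount point and hypothetical xattr names `glusterfs.cache.size` and `glusterfs.cache.flush` (neither exists today; `getfattr`/`setfattr` come from the attr package):

```shell
MNT=/mnt/glustervol   # hypothetical gluster mount point

if [ -d "$MNT" ]; then
    # Read the aggregated cache size via a virtual getxattr on the root.
    getfattr --only-values -n glusterfs.cache.size "$MNT"

    # Flush all client-side caches via a virtual setxattr on the root.
    setfattr -n glusterfs.cache.flush -v 1 "$MNT"
fi
```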
> >>>>>
> >>>>> 2) Currently the gluster native client supports a .meta virtual
> >>>>> directory for metadata information, analogous to /proc. We can
> >>>>> implement a virtual file inside the .meta directory to read the cache
> >>>>> size, and flush the cache with a special write into that file (similar
> >>>>> to echoing into a proc file). This approach may be difficult to
> >>>>> implement in other clients.
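Option (2) might then feel like the drop_caches idiom on the mount itself; this is a sketch only, and the `cache-size` file name under the existing .meta directory is an assumption, not a current gluster feature:

```shell
MNT=/mnt/glustervol   # hypothetical gluster mount point

if [ -d "$MNT/.meta" ]; then
    # Read the current aggregated cache size from a proposed virtual file.
    cat "$MNT/.meta/cache-size"

    # Flush the caches with a special write, like /proc/sys/vm/drop_caches.
    echo 1 > "$MNT/.meta/cache-size"
fi
```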
> >>> +1 for making use of the meta-xlator. We should be making more use of it.
> >> Indeed, this would be nice. Maybe this can also expose the memory
> >> allocations like /proc/slabinfo.
> >>
> >> The io-stats xlator can dump some statistics to
> >> /var/log/glusterfs/samples/ and /var/lib/glusterd/stats/ . That seems to
> >> be acceptable too, and allows gathering statistics from server-side
> >> processes without involving any clients.
> >>
> >> HTH,
> >> Niels
> >>
> >>
> >>>>> 3) A CLI command to display and flush the data, with IP and port as
> >>>>> arguments. GlusterD would need to send the op to the client from the
> >>>>> connected client list. But this approach would be difficult to implement
> >>>>> for libgfapi-based clients, so it doesn't seem like a good option to me.
> >>>>>
> >>>>> Your suggestions and comments are most welcome.
> >>>>>
> >>>>> Thanks to Talur and Poornima for their suggestions.
> >>>>>
> >>>>> Regards
> >>>>>
> >>>>> Rafi KC
> >>>>>
> >>>>> _______________________________________________
> >>>>> Gluster-devel mailing list
> >>>>> Gluster-devel at gluster.org
> >>>>> http://www.gluster.org/mailman/listinfo/gluster-devel
> >>>>>
> >
> >
> >
> >
> >
> > --
> > Raghavendra G
>