[Gluster-users] [Gluster-devel] Need a way to display and flush gluster cache ?

Kaushal M kshlmster at gmail.com
Tue Jul 26 07:13:56 UTC 2016


On Tue, Jul 26, 2016 at 12:28 PM, Prashanth Pai <ppai at redhat.com> wrote:
> +1 to option (2), which is similar to echoing into /proc/sys/vm/drop_caches.
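>
> For reference, that kernel knob is driven by writing a value into the
> file (1 drops the page cache, 2 drops dentries and inodes, 3 drops
> both), typically as:
>
>     sync; echo 3 > /proc/sys/vm/drop_caches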
>
>  -Prashanth Pai
>
> ----- Original Message -----
>> From: "Mohammed Rafi K C" <rkavunga at redhat.com>
>> To: "gluster-users" <gluster-users at gluster.org>, "Gluster Devel" <gluster-devel at gluster.org>
>> Sent: Tuesday, 26 July, 2016 10:44:15 AM
>> Subject: [Gluster-devel] Need a way to display and flush gluster cache ?
>>
>> Hi,
>>
>> The Gluster stack has its own caching mechanisms, mostly on the client
>> side. But there is no concrete way to see how much memory gluster is
>> consuming for caching, and no way to flush that cache when needed.
>>
>> So my first question is: do we need to implement these two features
>> for the gluster cache?
>>
>>
>> If so, I would like to discuss some of our thoughts on it.
>>
>> (If you are not interested in implementation discussion, you can skip
>> this part :)
>>
>> 1) Implement a virtual xattr on the root: a setxattr would flush all
>> the caches, and a getxattr would return the aggregated cache size.
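>>
>> As a rough sketch of how a client could consume this interface (the
>> xattr names and mount path below are made-up placeholders, not a
>> decided interface):
>>
>>     /* Sketch only: "glusterfs.cache-size" and "glusterfs.cache-flush"
>>      * are hypothetical virtual xattr names. */
>>     #include <stdio.h>
>>     #include <sys/xattr.h>
>>
>>     int main(void)
>>     {
>>         const char *mnt = "/mnt/glusterfs";   /* assumed mount root */
>>         char buf[256];
>>         ssize_t len;
>>
>>         /* getxattr on the root reports the aggregated cache size */
>>         len = getxattr(mnt, "glusterfs.cache-size", buf, sizeof(buf) - 1);
>>         if (len >= 0) {
>>             buf[len] = '\0';
>>             printf("cache size: %s\n", buf);
>>         }
>>
>>         /* setxattr on the root triggers a flush of all client caches */
>>         if (setxattr(mnt, "glusterfs.cache-flush", "1", 1, 0) == -1)
>>             perror("setxattr");
>>
>>         return 0;
>>     }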
>>
>> 2) The gluster native client currently supports a .meta virtual
>> directory that exposes metadata, analogous to /proc. We could implement
>> a virtual file inside the .meta directory from which the cache size can
>> be read, and flush the cache with a special write into that file
>> (similar to echoing into a proc file). This approach may be difficult
>> to implement in other clients.
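>>
>> To make that concrete, a minimal sketch, assuming a hypothetical
>> ".meta/cache" entry (today .meta exposes entries such as graphs and
>> frames):
>>
>>     /* Sketch only: ".meta/cache" does not exist yet; the path and
>>      * write protocol are assumptions for illustration. */
>>     #include <stdio.h>
>>
>>     int main(void)
>>     {
>>         char line[256];
>>         FILE *f;
>>
>>         /* Reading the virtual file reports cache usage. */
>>         f = fopen("/mnt/glusterfs/.meta/cache", "r");
>>         if (f) {
>>             while (fgets(line, sizeof(line), f))
>>                 fputs(line, stdout);
>>             fclose(f);
>>         }
>>
>>         /* A special write flushes the caches, like a procfs knob. */
>>         f = fopen("/mnt/glusterfs/.meta/cache", "w");
>>         if (f) {
>>             fputs("1\n", f);
>>             fclose(f);
>>         }
>>         return 0;
>>     }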

+1 for making use of the meta-xlator. We should be making more use of it.

>>
>> 3) A CLI command to display and flush the cache, taking an IP and port
>> as arguments to identify the client. GlusterD would need to send the op
>> to that client from its connected-client list. But this approach would
>> be difficult to implement for libgfapi-based clients, so it doesn't
>> seem like a good option to me.
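>>
>> For concreteness only, such a command might look something like the
>> following; the syntax is purely hypothetical and nothing like it
>> exists today:
>>
>>     gluster volume cache-flush <volname> <client-ip>:<port>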
>>
>> Your suggestions and comments are most welcome.
>>
>> Thanks to Talur and Poornima for their suggestions.
>>
>> Regards
>>
>> Rafi KC
>>
>> _______________________________________________
>> Gluster-devel mailing list
>> Gluster-devel at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel

