[Gluster-devel] libgfapi usage issues: overwrites 'THIS' and use after free

Poornima Gurusiddaiah pgurusid at redhat.com
Fri Apr 17 10:23:19 UTC 2015


Hi, 

There are two concerns in the usage of libgfapi that have been present from day one, but now 
that libgfapi is gaining new users it is necessary to fix them: 

1. When libgfapi is used by GlusterFS internal xlators, 'THIS' gets overwritten: 
E.g., snapview-server creates a new fs instance for every snapshot that is created. 
Currently, any libgfapi call made inside the xlator overwrites THIS with glfs-master (the gfapi xlator). 
Hence, after the API call returns, any further code in the parent xlator that refers to THIS 
actually refers to glfs-master (the gfapi xlator). 

Proposed solutions: 
- Store and restore THIS in every API exposed by libgfapi; a patch for this can be found at 
http://review.gluster.org/#/c/9797/ (a sketch of the idea follows this list). 
- The other solution, suggested by Niels, is to not have internal xlators call libgfapi at all and 
to move the core functionality into libglusterfs instead. But even with this, the nested mount/ctx issue can still exist. 
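
For clarity, a minimal, self-contained sketch of the store/restore idea follows. The types and
names below are stand-ins so the example compiles on its own; they are illustrative only and
are not taken from the actual patch:

        /* Hedged, standalone sketch of the "save and restore THIS" idea.
         * The types below are minimal stand-ins for the real GlusterFS ones
         * so the example compiles by itself; the real fix lives in gfapi. */

        typedef struct xlator { const char *name; } xlator_t;

        /* In GlusterFS, THIS is a per-thread xlator pointer (a macro over
         * thread-local storage); a plain thread-local stands in for it here. */
        static __thread xlator_t *THIS;

        struct glfs_ctx { xlator_t *master; };
        struct glfs     { struct glfs_ctx *ctx; };

        static int
        glfs_do_actual_work (struct glfs *fs)
        {
                (void) fs;
                return 0;       /* stand-in for the real fop */
        }

        /* Every exposed API saves the caller's THIS, switches to the gfapi
         * master xlator for the duration of the call, and restores it on
         * exit, so the calling xlator (e.g. snapview-server) never sees its
         * THIS clobbered after the call returns. */
        int
        glfs_example_api (struct glfs *fs)
        {
                xlator_t *old_THIS = THIS;
                int       ret      = -1;

                THIS = fs->ctx->master;
                ret  = glfs_do_actual_work (fs);
                THIS = old_THIS;

                return ret;
        }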

2. When libgfapi APIs are called by the application on an fs object that is already closed (glfs_fini()'ed): 
Ideally it is the application's responsibility to not do such things, but it is also desirable 
that libgfapi does not crash when such operations are performed by the application. 
We have already seen these issues in snapview-server. 
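
For illustration, here is a hedged example of the kind of sequence that triggers this; the
volume name, server, and path are made up, and the "other thread" is only indicated in
comments:

        /* Illustration of the misuse pattern: a fop is issued on an fs
         * object that has already been torn down with glfs_fini(), so
         * libgfapi ends up dereferencing freed memory. */
        #include <glusterfs/api/glfs.h>
        #include <sys/stat.h>

        void
        worker (struct glfs *fs)
        {
                struct stat st;

                /* If the main thread has already called glfs_fini (fs),
                 * this is a use after free inside libgfapi. */
                glfs_stat (fs, "/some/file", &st);
        }

        int
        main (void)
        {
                struct glfs *fs = glfs_new ("examplevol");  /* hypothetical volume */

                glfs_set_volfile_server (fs, "tcp", "server.example.com", 24007);
                glfs_init (fs);

                /* ... worker (fs) may still be running in another thread ... */

                glfs_fini (fs); /* fs is gone; a late fop from worker() crashes */
                return 0;
        }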

Proposed solutions/workarounds: 
- Do not free the fs object (this leaks a few bytes); instead keep a state bit that marks it valid or 
invalid, and check the validity of the fs in every API before proceeding (sketched below, after this list). 
Patch for the same at http://review.gluster.org/#/c/9797/ 
- As suggested by Niels, have a global fs pool that tracks allocated/freed fs objects. 
- Have the applications fix it, so that they do not call fops on a closed fs object (an unmounted fs). 
This mandates that multithreaded/asynchronous applications have some synchronization mechanism. 
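
A minimal sketch of the first workaround, assuming a hypothetical validity flag and stand-in
struct/function names (this is not the actual patch):

        /* Hedged sketch: fini marks the object invalid instead of freeing it,
         * and every API checks the flag before touching the object. */
        #include <errno.h>
        #include <stdbool.h>

        struct glfs_sketch {
                bool valid;     /* cleared by fini instead of freeing the object */
                /* ... rest of the fs state ... */
        };

        static int
        glfs_sketch_fini (struct glfs_sketch *fs)
        {
                /* Tear down the graph, but keep the handle allocated (leaks a
                 * few bytes) so late callers still dereference valid memory. */
                fs->valid = false;
                return 0;
        }

        static int
        glfs_sketch_stat (struct glfs_sketch *fs, const char *path)
        {
                if (!fs || !fs->valid) {
                        errno = EBADF;  /* fail gracefully instead of crashing */
                        return -1;
                }
                (void) path;
                /* ... perform the actual fop ... */
                return 0;
        }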

Please let me know your comments on the same. 

Regards, 
Poornima 