[Gluster-users] how to shrink client translator

yang.bin18 at zte.com.cn yang.bin18 at zte.com.cn
Tue Feb 10 11:17:20 UTC 2015


Thank you, it works well.

Best Regards!
BinYang.




From:    Jeff Darcy <jdarcy at redhat.com>
To:      yang bin18 <yang.bin18 at zte.com.cn>
Cc:      Vijay Bellur <vbellur at redhat.com>, gluster-users at gluster.org
Date:    2015/02/03 03:59
Subject: Re: [Gluster-users] how to shrink client translator



> > gluster volume set <volname> open-behind off turns off this xlator in
> > the client stack. There is no way to turn off debug/io-stats. Any
> > reason why you would like to turn off io-stats translator?
>
> For improving efficiency.

It might not be a very fruitful kind of optimization.  Repeating an
experiment someone else had done a while ago, I compared a normal client
volfile vs. one with a *hundred* extra do-nothing translators added.
There was no statistically significant difference, even on a fairly
capable SSD-equipped system.  I/O latency variation and other general
measurement noise still far outweigh the cost of a few extra function
calls to invoke translators that aren't doing any I/O themselves.
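
For anyone curious what that stack looks like on disk: each translator is
just a stanza in the client volfile, chained to its child via "subvolumes",
so traffic through an idle one costs only an extra function call.  A
minimal sketch of two adjacent stanzas, with illustrative volume names:

    # two adjacent stanzas from a client volfile (names illustrative);
    # testvol-io-stats wraps testvol-open-behind, so every file
    # operation passes through it with one extra function call
    volume testvol-open-behind
        type performance/open-behind
        subvolumes testvol-quick-read
    end-volume

    volume testvol-io-stats
        type debug/io-stats
        subvolumes testvol-open-behind
    end-volume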

> Is there any command to show the current translator tree after
> dynamically adding or deleting any xlator?

The new graph should show up in the logs.  Also, you can always use
"gluster system getspec xxx" to get the current client volfile for any
volume xxx.
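
For example, assuming a volume named "testvol" served from a host
"server1" (both names illustrative; output abridged):

    $ gluster system getspec testvol
    volume testvol-client-0
        type protocol/client
        option remote-host server1
        option remote-subvolume /bricks/testvol/brick1
    end-volume

    [...]

    volume testvol
        type debug/io-stats
        subvolumes testvol-open-behind
    end-volume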

