[Gluster-devel] Hello, I have a question about the erasure code translator, hope someone give me some advice, thank you!

PSC 1173701037 at qq.com
Mon Apr 8 06:41:24 UTC 2019


Hi, I am a storage software developer interested in Gluster, and I am trying to improve its read/write performance.
 
I noticed that Gluster uses a Vandermonde matrix in the erasure-code encoding and decoding process. However, generating the inverse of a Vandermonde matrix, which is necessary for decoding, is quite expensive: the cost is O(n³).
 
Using a Cauchy matrix instead greatly reduces the cost of finding the inverse, from O(n³) to O(n²), because a Cauchy matrix has a closed-form inverse.
 
I used the Intel storage acceleration library (ISA-L) to replace Gluster's original EC encode/decode code, and it reduced encode and decode time to about 50% of the original.
 
However, when I tested the whole system, the read/write performance was almost the same as stock Gluster.
 
I tested on three machines acting as servers. Each one has two bricks, both on SSDs, so there are six bricks in total. Two of them are used as coding (redundancy) bricks, i.e. a 4+2 disperse volume configuration.
 
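For reference, the layout described above corresponds to a volume created roughly like this (host names and brick paths are placeholders, not the ones I actually used):

```shell
# 3 servers x 2 SSD bricks each: 4 data + 2 redundancy
gluster volume create testvol disperse-data 4 redundancy 2 \
    server1:/bricks/ssd1 server1:/bricks/ssd2 \
    server2:/bricks/ssd1 server2:/bricks/ssd2 \
    server3:/bricks/ssd1 server3:/bricks/ssd2
gluster volume start testvol
```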
The network cards are 10000 Mbps (10 GbE), so in theory the network can sustain reads and writes faster than 1000 MB/s (10 Gbps is about 1250 MB/s raw).
 
The actual read performance is about 492 MB/s.
The actual write performance is about 336 MB/s.
 
By comparison, stock Gluster reads at 461 MB/s and writes at 322 MB/s.
 
Can someone give me advice on how to improve its performance? If the EC translator is not the bottleneck, which part is the critical limit on performance?
 
I added timing counters to the translators. They show that the EC translator takes only about 7% of the whole read/write path. I know that some translators run asynchronously, so the real percentage may be somewhat larger than that.
 
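In case it helps anyone reproduce the measurement, Gluster's built-in profiler reports cumulative per-FOP latency and call counts on each brick (the volume name below is a placeholder). It measures whole file operations rather than individual translators, so it complements in-translator timing counters like mine rather than replacing them:

```shell
gluster volume profile testvol start
# ... run the read/write workload ...
gluster volume profile testvol info
gluster volume profile testvol stop
```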
Sincerely, thank you for your patience in reading my question!