[Gluster-users] striping & read-ahead & volfiles

ruben malchow ruben.malchow at googlemail.com
Tue Feb 19 09:10:16 UTC 2013


i have a somewhat more specific question about the way striping and read-ahead work. our test setup is as follows:

10x dumb nodes with 4x 1TByte bricks & 1GbE networking, running the server
1x head node with no bricks and 10GbE, running both server and client.

the idea was to have these machines in one cluster, with the head node participating in the cluster as well as mounting
the volume and re-exporting it.

for the workload we're planning, we have rather few clients (fewer than 20 in the end, something like 4 for testing) that need
rather high throughput - some clients need 400 MBit/s, some need as much as possible north of that.

i thought i could achieve this with striping. in this scenario, i would do stripe+replicate, with nodes 1-5 acting as
a stripe set and nodes 6-10 acting as its replica.
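for reference, a striped-replicated volume along these lines could be created roughly like this (hostnames node1..node10 and the brick path /export/brick1 are made up for the sketch, and only one brick per node is shown; with replica 2, adjacent bricks in the list are paired as replicas, so the ordering below is what pairs node N with node N+5 - check the docs for your gluster version before relying on it):

```shell
# sketch: 5-way stripe, each stripe member replicated to a second node.
# brick order determines the replica pairing: (node1,node6), (node2,node7), ...
gluster volume create testvol stripe 5 replica 2 transport tcp \
    node1:/export/brick1 node6:/export/brick1 \
    node2:/export/brick1 node7:/export/brick1 \
    node3:/export/brick1 node8:/export/brick1 \
    node4:/export/brick1 node9:/export/brick1 \
    node5:/export/brick1 node10:/export/brick1
gluster volume start testvol
```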

with this setup, i would expect something in the range of 300-500 MByte/s … but actually, i see something in the range
of 100-200 MByte/s.
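as a back-of-envelope sanity check on those expectations (assuming each of the 5 stripe nodes can saturate its 1 GbE link, which is the best case):

```python
# rough aggregate throughput ceiling of a 5-way stripe over 1 GbE nodes
GBE_BITS_PER_S = 1_000_000_000   # 1 GbE line rate per node
STRIPE_WIDTH = 5                 # nodes 1-5 form the stripe set

# raw aggregate line rate of the stripe set, in MByte/s
raw_mbytes_per_s = STRIPE_WIDTH * GBE_BITS_PER_S / 8 / 1_000_000
print(raw_mbytes_per_s)  # 625.0

# after protocol/TCP overhead, 300-500 MByte/s is a plausible target,
# so an observed 100-200 MByte/s suggests the stripe members are not
# actually being read in parallel.
```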

i assume that i am doing something wrong - either in the way i deal with block sizes and/or in the way i try to use read-ahead.
i was hoping read-ahead could be configured to start fetching the next stripe while the previous one is still being read,
as a remedy for latency issues - is this correct?
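for what it's worth, read-ahead can be toggled and tuned per volume from the CLI; a sketch, assuming a volume named testvol (defaults and valid ranges depend on the gluster version, so verify with `gluster volume set help`):

```shell
# enable the read-ahead translator and increase how many pages it
# prefetches per open file descriptor
gluster volume set testvol performance.read-ahead on
gluster volume set testvol performance.read-ahead-page-count 16

# confirm the options took effect
gluster volume info testvol
```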

if yes, what would be the steps to create this kind of stacking from the CLI?

client --> server --> unify --> read-ahead --> replicate 2 --> stripe 5 --> posix --> brick

and what is configurable there (in terms of read-ahead buffers)? or is this a completely wrong approach?
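in hand-written volfile terms, the read-ahead-over-stripe part of that stack would look something like the fragment below (translator names and option values are illustrative only, and the unify/replicate layers are omitted for brevity):

```
# sketch of a client-side volfile layering read-ahead above the stripe
volume stripe-0
  type cluster/stripe
  option block-size 128KB
  subvolumes node1 node2 node3 node4 node5
end-volume

volume readahead
  type performance/read-ahead
  option page-count 4
  subvolumes stripe-0
end-volume
```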


