[Gluster-devel] Full bore.

Chris Johnson johnson at nmr.mgh.harvard.edu
Fri Nov 16 16:24:05 UTC 2007


On Fri, 16 Nov 2007, Anand Avati wrote:

      Doesn't get any simpler than this.

volume client
   type protocol/client
   option transport-type tcp/client
   option remote-host xxx.xxx.xxx.xxx
   option remote-subvolume brick1
end-volume

      The x's are there to protect the innocent.
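
For comparison, the same two performance translators can also be stacked on the
client side, over the protocol/client volume.  A rough, untested sketch, reusing
the sizes from the config quoted further down; the xxx'd host and the brick name
are the same placeholders as above:

volume client
   type protocol/client
   option transport-type tcp/client
   option remote-host xxx.xxx.xxx.xxx
   option remote-subvolume brick1
end-volume

volume writebehind
   type performance/write-behind
   option aggregate-size 131072 # in bytes
   subvolumes client
end-volume

volume readahead
   type performance/read-ahead
   option page-size 65536 # in bytes
   option page-count 16 # memory cache size is page-count x page-size per file
   subvolumes writebehind
end-volume

Presumably the topmost volume (readahead here) is then what gets mounted.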

> Your configuration doesn't seem to be what you want.  Do something like this:
>
> volume brick1
>  ..
> end-volume
>
> volume write-behind
>  ...
>  subvolumes brick1
> end-volume
>
> volume read-ahead
>  ...
>  subvolumes write-behind
> end-volume
>
> volume server
>  ...
>  option auth.ip.read-ahead.allow *
>  subvolumes read-ahead
> end-volume
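
Filled out with the directory and sizes from the config quoted further down,
that stacking would look roughly like this (a sketch, not a tested spec; note
that the auth line and the server's subvolumes now name the top of the
performance stack instead of brick1):

volume brick1
   type storage/posix
   option directory /home/sdm1
end-volume

volume writebehind
   type performance/write-behind
   option aggregate-size 131072 # in bytes
   subvolumes brick1
end-volume

volume readahead
   type performance/read-ahead
   option page-size 65536 # in bytes
   option page-count 16 # memory cache size is page-count x page-size per file
   subvolumes writebehind
end-volume

volume server
   type protocol/server
   option transport-type tcp/server
   option auth.ip.readahead.allow *
   subvolumes readahead
end-volume

The client's option remote-subvolume would then presumably need to name the
exported volume (readahead here) rather than brick1.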
>
>
> Also, can you post your client config?
>
> avati
>
> 2007/11/16, Chris Johnson <johnson at nmr.mgh.harvard.edu>:
>>
>>       Ok, hi.
>>
>>       I think I'm committing a major blunder here, which may be why I'm
>> not seeing better throughput.
>>
>>       These xlators should be stacked, is that right?  I defined the
>> following;
>>
>> volume brick1
>>    type storage/posix
>>    option directory /home/sdm1
>> end-volume
>>
>> volume server
>>    type protocol/server
>>    subvolumes brick1
>>    option transport-type tcp/server     # For TCP/IP transport
>> #  option client-volume-filename /etc/glusterfs/glusterfs-client.vol
>>    option auth.ip.brick1.allow *
>> end-volume
>>
>> volume writebehind
>>    type performance/write-behind
>>    option aggregate-size 131072 # in bytes
>>    subvolumes brick1
>> end-volume
>>
>> volume readahead
>>    type performance/read-ahead
>>    option page-size 65536 ### in bytes
>>    option page-count 16 ### memory cache size is page-count x page-size per file
>>    subvolumes brick1
>> end-volume
>>
>> Should I have used the 'server' volume as the subvolume for read-ahead
>> and write-behind in the above?  Or should read-ahead and write-behind
>> be between the basic brick and the server volume?  Is there a
>> difference in performance?
>>
>>       I grabbed 5 volumes from the SATA Beast.  I think the best way to
>> test this is with the real files and jobs.  So it's go for broke and
>> full bore time.
>>
>>       If I have two front ends I'll need the posix locks deal, and
>> the io threader is a must or why bother.  If I unify, both front ends
>> need access to the same namespace brick, so it has to have locks on it
>> too, yes?
>>
>>       Looking at the GlusterFS Translators v1.3 server examples, why
>> is the io-threads xlator so high up in the stack?  Would it be better
>> farther down the stack, closer to the basic bricks?  If not, why not?
>>
>>
>> -------------------------------------------------------------------------------
>> Chris Johnson               |Internet: johnson at nmr.mgh.harvard.edu
>> Systems Administrator       |Web:      http://www.nmr.mgh.harvard.edu/~johnson
>> NMR Center                  |Voice:    617.726.0949
>> Mass. General Hospital      |FAX:      617.726.7422
>> 149 (2301) 13th Street      |Knowing what thou knowest not
>> Charlestown, MA., 02129 USA |is in a sense omniscience.  Piet Hein
>>
>> -------------------------------------------------------------------------------
>>
>>
>>
>
>
>
> -- 
> It always takes longer than you expect, even when you take into account
> Hofstadter's Law.
>
> -- Hofstadter's Law
>
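
On the posix locks and io-threads questions above: in the 1.3 volume specs both
are ordinary translators in the stack, so one possible, untested arrangement
loads them right over the brick, with the performance and server volumes stacked
on top as before.  The translator names (features/posix-locks,
performance/io-threads) and the thread-count value are illustrative assumptions:

volume brick1
   type storage/posix
   option directory /home/sdm1
end-volume

volume locks
   type features/posix-locks   # assumed 1.3 translator name
   subvolumes brick1
end-volume

volume iothreads
   type performance/io-threads
   option thread-count 4       # illustrative value
   subvolumes locks
end-volume

# write-behind, read-ahead and protocol/server would then take iothreads
# (and each other) as subvolumes, as in the stacked example above.

Whether io-threads really belongs this close to the brick or higher up is the
open question; this only shows where it slots in syntactically.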

------------------------------------------------------------------------------- 
Chris Johnson               |Internet: johnson at nmr.mgh.harvard.edu
Systems Administrator       |Web:      http://www.nmr.mgh.harvard.edu/~johnson
NMR Center                  |Voice:    617.726.0949
Mass. General Hospital      |FAX:      617.726.7422
149 (2301) 13th Street      |For all sad words of tongue or pen, the saddest
Charlestown, MA., 02129 USA |are these: "It might have been".  John G. Whittier 
-------------------------------------------------------------------------------




