[Gluster-devel] Fwd: glusterfs write problem
nicolas prochazka
prochazka.nicolas at gmail.com
Wed Apr 9 07:19:55 UTC 2008
Hello again,
Before trying the write-behind translator with a 10MB aggregate-size, I
tested without the write-behind translator at all, and the result is the
same. (I have tried different aggregate-sizes from 128KB to 10MB, without
success.)
Nicolas
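
For reference, a minimal sketch of one of the write-behind variants tested
(the same stack as in the client volfile quoted below; only the
aggregate-size value changed between runs):

volume writebehind
type performance/write-behind
option aggregate-size 128KB # smallest size tried; default is 0bytes
option flush-behind on # default is 'off'
subvolumes io-cache
end-volume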
On Wed, Apr 9, 2008 at 7:09 AM, Anand Avati <avati at zresearch.com> wrote:
> nicolas,
> an aggregate-size of 10MB (in write-behind) is just too high to be used on
> the client side. please unset it and try.
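>
> for example, a sketch of the write-behind volume from your client volfile
> with the aggregate-size line dropped entirely, so the 0bytes default
> applies:
>
> volume writebehind
> type performance/write-behind
> option flush-behind on # default is 'off'
> subvolumes io-cache
> end-volume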
>
> avati
>
> 2008/4/9, nicolas prochazka <prochazka.nicolas at gmail.com>:
> >
> > Hi,
> >
> > I'm working with files of 10 GB in size.
> > When I only read a file, everything is OK and glusterfs seems to work fine.
> > If I read and write the same file at the same time, everything goes wrong:
> > - the gluster client and server use a lot of resources (30%-60% of CPU)
> > - writing is very, very slow and does not complete, and reading also
> > seems to be cycling
> >
> > I'm testing with two configurations:
> >
> > Computer 1 : client
> > Computer 2 : server
> >
> > computer1 ---> read a big file and write it to local disk <---> computer2 :
> > works fine
> > computer1 ---> read and write a big file <---> computer2 : does not work
> > computer2 becomes a client as well, so I mount glusterfs locally: read and
> > write a big file : does not work (glusterfs and glusterfsd use a lot of
> > resources)
> >
> > I have tried different client/server configurations, without success.
> >
> > Any idea ?
> > Regards,
> > Nicolas Prochazka.
> >
> >
> >
> > Things to know:
> > - glusterfs 1.3.8pre5
> > - fuse : fuse-2.7.2glfs9
> >
> > ----------------------------------------------------
> > Computer 2 : Server configuration
> > ----------------------------------------------------
> > volume brick1
> > type storage/posix
> > option directory /mnt/disks/export
> > end-volume
> >
> >
> > volume brick
> > type performance/io-threads
> > option thread-count 8
> > option cache-size 1000MB
> > subvolumes brick1
> > end-volume
> >
> >
> > volume readahead-brick
> > type performance/read-ahead
> > option page-size 2MB
> > option page-count 128
> > subvolumes brick
> > end-volume
> >
> >
> >
> > volume server
> > type protocol/server
> > option transport-type tcp/server # For TCP/IP transport
> > option window-size 2097152
> > option client-volume-filename /etc/glusterfs/glusterfs-client.vol
> > option auth.ip.brick.allow *
> > subvolumes readahead-brick
> > end-volume
> >
> >
> >
> > ----------------------------------------------------
> > Computer 1 : Client configuration
> > ----------------------------------------------------
> > volume client1
> > type protocol/client
> > option transport-type tcp/client
> > option remote-host 10.98.98.1
> > option remote-subvolume brick
> > option window-size 2097152
> > end-volume
> >
> > volume readahead
> > type performance/read-ahead
> > option page-size 2MB
> > option page-count 64
> > subvolumes client1
> > end-volume
> >
> > volume iothreads
> > type performance/io-threads
> > option thread-count 32
> > subvolumes readahead
> > end-volume
> >
> > volume io-cache
> > type performance/io-cache
> > option cache-size 1000MB # default is 32MB
> > option page-size 1MB #128KB is default option
> > option force-revalidate-timeout 100 # default is 1
> > subvolumes iothreads
> > end-volume
> >
> > volume writebehind
> > type performance/write-behind
> > option aggregate-size 10MB # default is 0bytes
> > option flush-behind on # default is 'off'
> > subvolumes io-cache
> > end-volume
> >
> >
> > _______________________________________________
> > Gluster-devel mailing list
> > Gluster-devel at nongnu.org
> > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> >
>
>
>
> --
> If I traveled to the end of the rainbow
> As Dame Fortune did intend,
> Murphy would be there to tell me
> The pot's at the other end.