[Gluster-users] GlusterFS Performance

Hiren Joshi josh at moonfruit.com
Thu Jul 9 08:33:59 UTC 2009


 

> -----Original Message-----
> From: Stephan von Krawczynski [mailto:skraw at ithnet.com] 
> Sent: 09 July 2009 09:08
> To: Liam Slusser
> Cc: Hiren Joshi; gluster-users at gluster.org
> Subject: Re: [Gluster-users] GlusterFS Performance
> 
> On Wed, 8 Jul 2009 10:05:58 -0700
> Liam Slusser <lslusser at gmail.com> wrote:
> 
> > You have to remember that when you are writing with NFS you're
> > writing to one node, whereas your gluster setup below is copying
> > the same data to two nodes, so you're doubling the bandwidth.
> > Don't expect NFS-like performance on writes with multiple storage
> > bricks. However, read performance should be quite good.
> > liam
> 
> Do you think this problem can be solved by using 2 storage 
> bricks on two
> different network cards on the client?
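
Just to put rough numbers on the doubling Liam describes (the client
NIC speed below is an assumption, I don't know what the Xen slices
actually get):

1 GB written to a replica-2 volume  =>  ~2 GB sent from the client
at 100 Mbit/s (~12.5 MB/s)          =>  ~165 s minimum
at 1 Gbit/s   (~125 MB/s)           =>  ~17 s minimum

Either way that is nowhere near the 3572 s measured below, so raw
bandwidth alone doesn't explain the numbers.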

I'd be surprised if the bottleneck here was the network. I'm testing on
a Xen setup, but I've only been given one Ethernet interface per slice.
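
For what it's worth, here is one quick way to rule the network in or
out. This isn't from the thread, just a sketch: it assumes iperf is
installed on the client and on the brick hosts glust3/glust4 named in
the volfile, and the brick export path is only a guess.

# On each brick host (glust3, glust4), start an iperf server:
iperf -s

# On the client, measure raw TCP throughput to each brick:
iperf -c glust3 -t 30
iperf -c glust4 -t 30

# For comparison, see what a brick's local disk manages when writing
# straight into the exported directory (the path here is a guess):
ssh glust3 'dd if=/dev/zero of=/data/brick/ddtest bs=1M count=1024 conv=fdatasync'

If iperf reports well under wire speed inside Xen, or the local dd is
as slow as the gluster numbers below, replication overhead is not the
main problem.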

> 
> Regards,
> Stephan
> 
> > 
> > On Wed, Jul 8, 2009 at 5:22 AM, Hiren Joshi <josh at moonfruit.com> wrote:
> > 
> > > Hi,
> > >
> > > I'm currently evaluating gluster with the intention of replacing
> > > our current setup and have a few questions:
> > >
> > > At the moment, we have a large SAN which is split into 10
> > > partitions and served out via NFS. For gluster, I was thinking
> > > 12 nodes to make up about 6TB (mirrored so that's 1TB per node)
> > > and served out using gluster. What sort of filesystem should I
> > > be using for the nodes (currently on ext3) to give me the best
> > > performance and recoverability?
> > >
> > > Also, I set up a test with a simple mirrored pair, with a client
> > > config that looks like:
> > > volume glust3
> > >  type protocol/client
> > >  option transport-type tcp/client
> > >  option remote-host glust3
> > >  option remote-port 6996
> > >  option remote-subvolume brick
> > > end-volume
> > > volume glust4
> > >  type protocol/client
> > >  option transport-type tcp/client
> > >  option remote-host glust4
> > >  option remote-port 6996
> > >  option remote-subvolume brick
> > > end-volume
> > > volume mirror1
> > >  type cluster/replicate
> > >  subvolumes glust3 glust4
> > > end-volume
> > > volume writebehind
> > >  type performance/write-behind
> > >  option window-size 1MB
> > >  subvolumes mirror1
> > > end-volume
> > > volume cache
> > >  type performance/io-cache
> > >  option cache-size 512MB
> > >  subvolumes writebehind
> > > end-volume
> > >
> > >
> > > I ran a basic test by writing 1G to an NFS server and this
> > > gluster pair:
> > > [root at glust1 ~]# time dd if=/dev/zero of=/mnt/glust2_nfs/nfs_test
> > > bs=65536 count=15625
> > > 15625+0 records in
> > > 15625+0 records out
> > > 1024000000 bytes (1.0 GB) copied, 1718.16 seconds, 596 kB/s
> > >
> > > real    28m38.278s
> > > user    0m0.010s
> > > sys     0m0.650s
> > > [root at glust1 ~]# time dd if=/dev/zero of=/mnt/glust/glust_test
> > > bs=65536 count=15625
> > > 15625+0 records in
> > > 15625+0 records out
> > > 1024000000 bytes (1.0 GB) copied, 3572.31 seconds, 287 kB/s
> > >
> > > real    59m32.745s
> > > user    0m0.010s
> > > sys     0m0.010s
> > >
> > >
> > > With it taking almost twice as long, can I expect this sort of
> > > performance degradation on 'real' servers? Also, what sort of
> > > setup would you recommend for us?
> > >
> > > Can anyone help?
> > > Thanks,
> > > Josh.
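
One suggestion on the benchmark itself (my own addition, not from
Josh's mail; the mount points are the ones from his test): a larger
block size plus an fdatasync at the end takes the page cache out of
the picture, and a read back should show where replicate actually
helps.

# Same 1 GB write, but with larger blocks and a data sync before dd exits:
time dd if=/dev/zero of=/mnt/glust/glust_test2 bs=1M count=1024 conv=fdatasync
time dd if=/dev/zero of=/mnt/glust2_nfs/nfs_test2 bs=1M count=1024 conv=fdatasync

# Drop the client page cache, then read the file back (replicate serves
# reads from a single brick, so this should be close to the NFS figure):
echo 3 > /proc/sys/vm/drop_caches
time dd if=/mnt/glust/glust_test2 of=/dev/null bs=1M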
> > >
> > > _______________________________________________
> > > Gluster-users mailing list
> > > Gluster-users at gluster.org
> > > http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
> > >
> > 
> 
> 
> 



