[Gluster-devel] Re: Trying to setup afr 2 server 1 client following the example on the wiki

Brandon Lamb brandonlamb at gmail.com
Fri Jan 18 02:42:50 UTC 2008


On Jan 17, 2008 6:15 PM, Brandon Lamb <brandonlamb at gmail.com> wrote:
>
> On Jan 17, 2008 6:13 PM, Anand Avati <avati at zresearch.com> wrote:
> > Brandon,
> >  For the sake of diagnosing, can you try with these changes -
> >
> > 1. with a simple client and standalone server (no clustering anywhere)
> > 2. remove unify in your setup. unify is not needed in this configuration.
> > AFR on the client itself would be even better.
> > 3. try removing write-behind.
> >
> > We're interested in knowing your results from those changes.
> >
> > avati
> >
> > 2008/1/17, Brandon Lamb <brandonlamb at gmail.com>:
> > >
> > >
> > >
> > > On Jan 17, 2008 9:56 AM, Brandon Lamb <brandonlamb at gmail.com> wrote:
> > > > http://ezopg.com/gfs/
> > > >
> > > > I uploaded my client config and the server configs for the 2 servers.
> > > > 3 separate machines.
> > > >
> > > > I can get as far as mounting, then do cd /mnt/gfs (the mounted gfs dir). Then
> > > > I typed find . -type f -exec head -c 1 {} \; >/dev/null and got
> > > >
> > > > find: ./test: Transport endpoint is not connected
> > > > find: ./bak: Transport endpoint is not connected
> > > >
> > > > And then the crash on server1
> > > >
> > > > Am I doing something obviously wrong?
> > > >
> > >
> > > Ok, I emptied the gfs and gfsns dirs on both servers (they had existing
> > > files/dirs from a different gfs setup I was testing).
> > >
> > > Now I can create files and dirs.
> > >
> > > Now I am wondering about speed. I have an 82 megabyte tarball with 337
> > > files.
> > > [root at client gfstest]# time tar xf yoda.tar
> > > real    0m22.567s
> > > user    0m0.042s
> > > sys     0m0.357s
> > >
> > > Now I changed to a dir on server2 that I have mounted over NFS
> > > [root at client nfstest]# time tar xf yoda.tar
> > > real    0m4.956s
> > > user    0m0.030s
> > > sys     0m0.827s
> > >
> > > Do I have some performance translators configured wrong or in the
> > > wrong place, or is that really the speed I should be expecting?
> > >
> > > Server 1 is an 8-drive SATA2 RAID (8 Seagate 250GB ES drives)
> > > Server 2 is a 16-drive SCSI-160 RAID
> > >
> > > Going from roughly 5 seconds to 22 is a huge increase; I'm hoping I'm
> > > doing something horribly wrong. I'm using a gigabit switch on its own 192
> > > network for all of this.
> > >
> > >
> > > _______________________________________________
> > > Gluster-devel mailing list
> > > Gluster-devel at nongnu.org
> > > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> > >
> >
> >
> >
> > --
> > If I traveled to the end of the rainbow
> > As Dame Fortune did intend,
> > Murphy would be there to tell me
> > The pot's at the other end.
>
> Ok I will set this up in awhile and report back later tonight

http://ezopg.com/gfssimpletest/

I posted gluster log files for server and client, spec files and results.txt
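For anyone following along, a stripped-down client spec of the kind avati suggested (AFR done on the client itself, no unify, no write-behind) might look roughly like this. This is only a sketch in GlusterFS 1.3-era volfile syntax; the remote-host addresses and the brick volume name are assumptions, not copied from the posted configs:

```
# client.vol -- minimal AFR-on-client sketch (addresses/names are examples)

volume server1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.1       # assumed address of server1
  option remote-subvolume brick        # assumed name of the exported volume
end-volume

volume server2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.2       # assumed address of server2
  option remote-subvolume brick
end-volume

# Replicate between the two bricks on the client side
volume afr
  type cluster/afr
  subvolumes server1 server2
end-volume
```

With a spec like this, each server only needs a plain protocol/server export of its storage directory, and no clustering translators on the server side at all, which keeps the diagnosis surface small.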




