[Gluster-devel] Question about compile performance over GlusterFS
Amar S. Tumballi
amar at zresearch.com
Mon Mar 17 17:20:03 UTC 2008
Hi,
I am not sure you can get ib-verbs to work with the 1.3.8pre3 release. I am
seeing some issues with it and am working on a fix; I will mail the list once
it is fixed.
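Once it is fixed, switching from ib-sdp should only mean changing the
transport lines in the configs further down this thread, roughly like this
(an untested sketch, assuming the 1.3 transport names ib-verbs/server and
ib-verbs/client):

# server.cfg, inside the protocol/server volume
option transport-type ib-verbs/server

# client.cfg, inside each protocol/client volume
option transport-type ib-verbs/client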
Regards,
Amar
On Mon, Mar 17, 2008 at 8:11 AM, Craig Tierney <Craig.Tierney at noaa.gov>
wrote:
> Amar S. Tumballi wrote:
> > Hi Craig,
> > I will be looking into this issue. By the way, is there any reason you are
> > using ib-sdp instead of ib-verbs?
> > Let me get back to you on this; give me a few days.
> >
>
>
> For streaming bandwidth, ib-verbs and ib-sdp did not give noticeably
> different results, and I didn't really revisit the choice after getting it
> set up. I will re-test with ib-verbs today.
>
> Craig
>
>
> > Regards,
> > Amar
> >
> > On Thu, Mar 13, 2008 at 8:41 AM, Craig Tierney <Craig.Tierney at noaa.gov>
> > wrote:
> >
> >> Amar S. Tumballi wrote:
> >>> Hi Craig,
> >>> Thanks for the nice comparison between GlusterFS and other network file
> >>> systems. Before drawing conclusions about the performance, though, I
> >>> would suggest a few improvements to your GlusterFS setup.
> >>>
> >>> 1. Try protocol/client directly, below the io-threads volume, instead of
> >>> unify with only one subvolume. (Unify makes sense when you have more than
> >>> one subvolume, but with only a single subvolume it is just an extra layer
> >>> that adds overhead.) See the sketch below.
> >>>
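> >>> As a rough sketch (adapted from your client.cfg below, not tested here),
> >>> the bottom of the stack would become:
> >>>
> >>> volume client-w8
> >>> type protocol/client
> >>> option transport-type ib-sdp/client
> >>> option remote-host w8-ib0
> >>> option remote-subvolume brick
> >>> end-volume
> >>>
> >>> volume iot
> >>> type performance/io-threads
> >>> subvolumes client-w8
> >>> option thread-count 4
> >>> end-volume
> >>>
> >>> with wb and ioc stacked on top as before, and the unify and client-ns
> >>> volumes dropped entirely.
> >>>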
> >>> 2. In io-cache on the client side, give caching priority to *.h files,
> >>> since a lot of them are re-read during a kernel compile:
> >>>
> >>> volume ioc
> >>> type performance/io-cache
> >>> subvolumes wb
> >>> option priority *.h:100
> >>> end-volume
> >>>
> >> I changed the io-cache settings to those above and eliminated the Unify
> >> subvolume (my scripts generate the server/client configs automatically;
> >> multiple servers are used in most cases, but not in this one). The
> >> compile time went down, but not by much. The latest test finished in
> >> 1042 seconds.
> >>
> >> What I didn't test this time is the compile directly on the storage that
> >> is exported by Gluster. The runtime there is 399 seconds, so the
> >> underlying filesystem is fast.
> >>
> >> I am not drawing any conclusions about the performance from these
> >> numbers. Things are going great so far, and based on the other
> >> performance characteristics I have seen, this should be a solvable
> >> problem.
> >>
> >> Craig
> >>
> >>
> >>
> >>> Regards,
> >>> Amar
> >>>
> >>> On Wed, Mar 12, 2008 at 3:55 PM, Craig Tierney <Craig.Tierney at noaa.gov>
> >>> wrote:
> >>>
> >>>> I have been testing out my GlusterFS setup. I have been
> >>>> very happy with the streaming IO performance and scalability.
> >>>> We have some users on the system now and they are seeing
> >>>> very good performance (fast and consistent) as compared
> >>>> to our other filesystem.
> >>>>
> >>>> I created a test that tries to measure metadata
> >>>> performance by building the Linux kernel. What I have
> >>>> found is that GlusterFS is slower than local disk, NFS,
> >>>> and Panasas. The compile time on those three systems
> >>>> is roughly 500 seconds. For GlusterFS (1.3.7), the
> >>>> compile time is roughly 1200 seconds. My GlusterFS filesystem
> >>>> is using ramdisks on the servers and communicating using
> >>>> IB-Verbs. My server and client configs are below.
> >>>>
> >>>> Note that I implemented write-behind but not read-ahead, based on some
> >>>> benchmarks I saw on the list showing how read-ahead affects re-write
> >>>> performance.
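> >>>>
> >>>> (If I do revisit that, my assumption is the extra layer would look
> >>>> roughly like the following, slotted between wb and ioc, with ioc then
> >>>> pointing at ra instead of wb; I have not checked the read-ahead options
> >>>> for 1.3.7.)
> >>>>
> >>>> volume ra
> >>>> type performance/read-ahead
> >>>> subvolumes wb
> >>>> end-volume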
> >>>>
> >>>> So, is this just because mmap isn't (yet) supported in FUSE?
> >>>> Or is there something else I should be looking at?
> >>>>
> >>>> Thanks,
> >>>> Craig
> >>>>
> >>>>
> >>>> server.cfg
> >>>> ----------
> >>>>
> >>>> volume brick
> >>>> type storage/posix # POSIX FS translator
> >>>> option directory /tmp/scratch/export # Export this directory
> >>>> end-volume
> >>>>
> >>>> volume server
> >>>> type protocol/server
> >>>> subvolumes brick
> >>>> option transport-type ib-sdp/server # SDP transport over InfiniBand
> >>>> option auth.ip.brick.allow *
> >>>> end-volume
> >>>>
> >>>> client.cfg
> >>>> ----------
> >>>>
> >>>> volume client-ns
> >>>> type protocol/client
> >>>> option transport-type ib-sdp/client
> >>>> option remote-host w8-ib0
> >>>> option remote-subvolume brick-ns
> >>>> end-volume
> >>>>
> >>>>
> >>>>
> >>>> volume client-w8
> >>>> type protocol/client
> >>>> option transport-type ib-sdp/client
> >>>> option remote-host w8-ib0
> >>>> option remote-subvolume brick
> >>>> end-volume
> >>>>
> >>>> volume unify
> >>>> type cluster/unify
> >>>> subvolumes client-w8
> >>>> option namespace client-ns
> >>>> option scheduler rr
> >>>> end-volume
> >>>>
> >>>> volume iot
> >>>> type performance/io-threads
> >>>> subvolumes unify
> >>>> option thread-count 4
> >>>> end-volume
> >>>>
> >>>> volume wb
> >>>> type performance/write-behind
> >>>> subvolumes iot
> >>>> end-volume
> >>>>
> >>>> volume ioc
> >>>> type performance/io-cache
> >>>> subvolumes wb
> >>>> end-volume
> >>>>
> >>>> ----------
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>> Craig Tierney (craig.tierney at noaa.gov)
> >>>>
> >>>>
> >>>
> >>
> >> --
> >> Craig Tierney (craig.tierney at noaa.gov)
> >>
> >>
> >
> >
>
>
> --
> Craig Tierney (craig.tierney at noaa.gov)
>
>
--
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Supercomputing and Superstorage!