[Gluster-users] Gluster 3.3.0 on CentOS 6 - GigabitEthernet vs InfiniBand
corey.kovacs at gmail.com
Thu Oct 18 14:30:27 UTC 2012
Jeff, that came from one of your core GlusterFS engineers (sid?). According
to him, the term "stripe" isn't used correctly and does not imply parallel
operations; writes are handled one brick at a time. If a stripe were
parallel, then my 800 MB/s should have been closer to 2 GB/s across three
servers, each capable of 900 MB/s all day long. If you have docs that say
otherwise I'd love to see them.
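The expectation behind those numbers can be sketched as back-of-envelope arithmetic (a hypothetical check using only the figures quoted above, not GlusterFS code):

```python
# If striped writes fanned out in parallel (as on a RAID controller),
# aggregate bandwidth would approach the sum of the per-brick rates;
# serialized one-brick-at-a-time writes top out at a single brick's rate.
per_brick_mb_s = 900   # each server sustains ~900 MB/s
bricks = 3

parallel_estimate = per_brick_mb_s * bricks   # ideal fan-out: ~2700 MB/s (~2.7 GB/s)
serial_estimate = per_brick_mb_s              # one brick at a time: ~900 MB/s

print(f"parallel (ideal): ~{parallel_estimate} MB/s")
print(f"serial (one brick at a time): ~{serial_estimate} MB/s")
```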
On Oct 18, 2012 8:19 AM, "Jeff Darcy" <jdarcy at redhat.com> wrote:
> On 10/18/2012 09:39 AM, Corey Kovacs wrote:
> > My experiences so far were sort of disappointing until I found out a few
> > items about GlusterFS which I'd taken for granted.
> > 1. Stripes are not what you might think. The I/O for a stripe does _not_ fan
> > out as in a RAID card. It's an unfortunate use of the term, only describing
> > that it allows you to store files larger than the max size of a single brick.
> I'm not sure what you mean by "don't fan out" because stripe *will* issue
> multiple requests in parallel. It's just not that beneficial most of the time
> because the overhead of splitting and recombining writes tends to overwhelm
> the advantage of parallelism. Some people might have different results,
> particularly on faster networks, but we don't push it as a general-purpose
> performance enhancer because it doesn't work that way for most people.
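The splitting Jeff describes can be illustrated with a simplified model of round-robin block placement (this is a sketch, not the actual stripe translator; the 128 KiB block size and three-brick count are illustrative assumptions):

```python
# Simplified model: a stripe translator splits one large write into
# fixed-size blocks and places them round-robin across bricks. Each
# chunk becomes a separate request, which is where the splitting and
# recombining overhead comes from.
BLOCK = 128 * 1024   # assumed stripe block size (128 KiB)
BRICKS = 3           # assumed brick count

def stripe_layout(offset, length):
    """Yield (brick, file_offset, chunk_length) for one write."""
    end = offset + length
    while offset < end:
        block_index = offset // BLOCK
        brick = block_index % BRICKS
        within = offset % BLOCK
        chunk = min(BLOCK - within, end - offset)
        yield brick, offset, chunk
        offset += chunk

# A single 512 KiB write becomes four requests spread over three bricks:
for brick, off, ln in stripe_layout(0, 512 * 1024):
    print(f"brick {brick}: offset {off}, {ln} bytes")
```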
> > 2. I/O is done in sync mode so cache coherency isn't an issue and to ensure
> > the integrity of the data written.
> Generally true only for metadata - not for data. We do honor O_SYNC and
> related flags when we see them, of course, but otherwise we're quite happy to
> buffer writes in write-behind, cache reads in io-cache, etc.
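The O_SYNC distinction Jeff makes is visible at the POSIX level: only writes through a descriptor opened with that flag must reach stable storage before returning. A minimal sketch (using a local temp file rather than an actual Gluster mount):

```python
import os
import tempfile

# O_SYNC forces each write() to be persisted before it returns; without
# it, the filesystem (including GlusterFS's write-behind translator) is
# free to buffer the data. The path here is a local temp file stand-in.
path = os.path.join(tempfile.mkdtemp(), "sync-demo")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
try:
    written = os.write(fd, b"synchronously persisted\n")
finally:
    os.close(fd)

print(f"wrote {written} bytes with O_SYNC")
```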
> > 3. The performance of a distributed volume far exceeds that of a stripe for
> > my use. Again, depends on the size of the bricks.
> ...and the size of the I/O requests, and a bunch of other things.
> Gluster-users mailing list
> Gluster-users at gluster.org