[Gluster-users] 3.2.2 Performance Issue

Anand Avati anand.avati at gmail.com
Thu Aug 11 18:05:25 UTC 2011


On Thu, Aug 11, 2011 at 7:51 PM, Stephan von Krawczynski
<skraw at ithnet.com> wrote:

> On Thu, 11 Aug 2011 09:13:53 -0400
> Joe Landman <landman at scalableinformatics.com> wrote:
>
> > On 08/11/2011 09:11 AM, Burnash, James wrote:
> > > Cogently put and helpful, Joe. Thanks. I'm filing this under "good
> > > answers to frequently asked technical questions". You have a number
> > > of spots in that archive already :-)
> >
> > Thanks :)
>
> Unfortunately he failed to understand my point. Obviously I was not
> talking about simply _supplying_ more switches; I was talking about
> _spreading_ the network over several switches. This means you take a
> client that has at least two GBit ports and connect each of your two
> gluster servers (bricks) to one of them. You can obviously do the same
> with a larger number of bricks; it only depends on the number of
> interfaces your client has. This way contention is not possible when
> accessing several bricks "at the same time" in a replication setup.
>
> But as I said before, the problem of bad performance did not go away
> for us.
>

Write performance in replicate is not just a function of disk and network
throughput; it also depends on xattr performance, which in turn is a
function of the inode size in most disk filesystems. Can you give some more
details about the backend filesystem, specifically the inode size with
which it was formatted? If it is ext3 with the default 128-byte inode, it
is very likely you are running out of in-inode xattr space (perhaps because
marker-related features like geo-sync or quota are enabled?) and spilling
into data blocks. If so, please reformat with a 512-byte or 1 KB inode size.
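
As an illustration (not from the original thread), here is a minimal Python
sketch for checking whether a brick file's xattrs are likely to fit in the
in-inode xattr area. It assumes an ext3/ext4 brick; BRICK_DEV and
SAMPLE_FILE below are hypothetical placeholders to adjust for your setup,
and reading trusted.* xattrs requires root.

    #!/usr/bin/env python3
    # Minimal sketch: compare the formatted inode size of an ext3/ext4 brick
    # with the bytes consumed by extended attributes on one sample file.
    # BRICK_DEV and SAMPLE_FILE are hypothetical; run as root so that
    # trusted.* xattrs (used by replicate) are readable.
    import os
    import subprocess

    BRICK_DEV = "/dev/sdb1"                   # hypothetical brick block device
    SAMPLE_FILE = "/export/brick1/some_file"  # hypothetical file on that brick

    def inode_size(device):
        """Read the formatted inode size from tune2fs output (ext2/3/4 only)."""
        out = subprocess.check_output(["tune2fs", "-l", device]).decode()
        for line in out.splitlines():
            if line.startswith("Inode size:"):
                return int(line.split(":", 1)[1])
        raise RuntimeError("tune2fs did not report an inode size")

    def xattr_bytes(path):
        """Total bytes used by extended attribute names and values on path."""
        total = 0
        for name in os.listxattr(path):
            total += len(name) + len(os.getxattr(path, name))
        return total

    if __name__ == "__main__":
        isize = inode_size(BRICK_DEV)
        used = xattr_bytes(SAMPLE_FILE)
        print("inode size: %d bytes, xattrs on sample file: %d bytes"
              % (isize, used))
        if isize <= 128:
            # With 128-byte inodes there is essentially no in-inode xattr
            # room, so every xattr update touches an extra external block.
            print("128-byte inodes: xattrs spill to external blocks; "
                  "consider reformatting with a larger inode size")

If the inode size really is 128 bytes, reformatting with something like
"mkfs.ext3 -I 512 <device>" (or -I 1024) leaves room for the xattrs inside
the inode itself.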

Also, what about read performance in replicate?

Avati
