[Gluster-users] Performance for KVM images (qcow)
Bryan Whitehead
driver at megahappy.net
Tue Apr 9 04:00:20 UTC 2013
This looks like you are replicating every file to all bricks?
What is TCP running over? 1G NICs? 10G? IPoIB (40-80Gb)?
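If you're not sure, something like this on each node will show the
negotiated link speed (assuming eth0 is the interface carrying gluster
traffic; substitute your actual NIC):

    # Print the NIC's negotiated link speed;
    # "Speed: 1000Mb/s" = 1G, "Speed: 10000Mb/s" = 10G.
    ethtool eth0 | grep -i speed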
I think you want Distribute-Replicate: 4 bricks with replica = 2.
Unless you are running at least 10G NICs you are going to have serious
I/O issues in your KVM/qcow2 VMs.
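A sketch of what that could look like, using the hosts from your volume
info below (the volume name "images2" and the /exports/2 brick paths are
placeholders, since gluster won't let you reuse bricks that still belong
to the existing volume, and you'd have to migrate the images over, so
treat this as an outline rather than a recipe):

    # 4 bricks with replica 2 gives a 2x2 Distributed-Replicate volume:
    # bricks are paired in the order listed (vmhost2+vmhost3, vmhost5+vmhost6),
    # and each file lands on one pair instead of being copied to all four.
    gluster volume create images2 replica 2 transport tcp \
        vmhost2:/exports/2 vmhost3:/exports/2 \
        vmhost5:/exports/2 vmhost6:/exports/2
    gluster volume start images2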
On Mon, Apr 8, 2013 at 7:11 AM, Eyal Marantenboim
<eyal at theserverteam.com> wrote:
> Hi,
>
>
> We have a set of 4 gluster nodes, all in a replicated configuration.
>
> We use it to store our qcow2 images for KVM. These images have variable
> I/O patterns, though most of them are read-only.
>
>
> I tried to find documentation on performance optimization, but what I found
> was either unclear to me or too sparse, so I copied settings from the
> internet and tried to adjust the config to our needs. I'm sure it's not
> optimal.
>
>
> We're using 3.3.1 on top of XFS.
>
> The qcow images are about 30GB each (a couple are 100GB).
>
>
> Can someone please tell me which parameters would be best to look at for
> performance?
>
>
> here is the volume info:
>
> Volume Name: images
> Type: Replicate
> Volume ID:
> Status: Started
> Number of Bricks: 1 x 4 = 4
> Transport-type: tcp
> Bricks:
> Brick1: vmhost2:/exports/1
> Brick2: vmhost3:/exports/1
> Brick3: vmhost5:/exports/1
> Brick4: vmhost6:/exports/1
> Options Reconfigured:
> performance.cache-max-file-size: 1GB
> nfs.disable: on
> performance.cache-size: 4GB
> performance.cache-refresh-timeout: 1
> performance.write-behind-window-size: 2MB
> performance.read-ahead: on
> performance.write-behind: on
> performance.io-cache: on
> performance.stat-prefetch: on
> performance.quick-read: on
> performance.io-thread-count: 64
> performance.flush-behind: on
> features.quota-timeout: 1800
> features.quota: off
>
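> (These were all applied with "gluster volume set", e.g.:
>
>     gluster volume set images performance.cache-size 4GB
>
> and can be reverted with "gluster volume reset images".)
>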
> Thanks in advance.
>
>