[Gluster-users] big file over glusterfs_nfs

Bryan Whitehead driver at megahappy.net
Tue Feb 21 17:50:17 UTC 2012


Hello!

You might have a single brick filling up. Since xen_bk_vol is a Distribute
volume, each file is stored whole on one brick, so a single large xva export
can fill up whichever brick it lands on. Check the free disk space for all 6
xen_bk_vol brick directories across your 3 servers.
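For example, a quick check on each of the 3 servers (assuming the bricks are
still at /gluster01 and /gluster02, as in your volume info) would be
something like:

  df -h /gluster01 /gluster02
  du -sh /gluster01/xen_bk_vol /gluster02/xen_bk_vol

If one brick is at or near 100% while the others still have plenty of free
space, that is almost certainly what you're hitting.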

2012/2/19 查舒玉 <zhashuyu at 163.com>

> Hi,
>
> Recently, I have been using glusterfs (3.2.5) servers as the VM backend for
> xenserver (6.0); the details are below:
> The Linux system is CentOS-6.2-x86_64. I mount two 1 TB disks at /gluster01
> and /gluster02 on all 3 machines.
> I created a distribute volume named xen_bk_vol:
> gluster> volume info xen_bk_vol
> Volume Name: xen_bk_vol
> Type: Distribute
> Status: Stopped
> Number of Bricks: 6
> Transport-type: tcp
> Bricks:
> Brick1: 10.52.10.5:/gluster01/xen_bk_vol
> Brick2: 10.52.10.5:/gluster02/xen_bk_vol
> Brick3: 10.52.10.6:/gluster01/xen_bk_vol
> Brick4: 10.52.10.6:/gluster02/xen_bk_vol
> Brick5: 10.52.10.7:/gluster01/xen_bk_vol
> Brick6: 10.52.10.7:/gluster02/xen_bk_vol
> Options Reconfigured:
> network.ping-timeout: 5
> auth.allow: 10.*,172.27.*
> gluster>
>
> I mount the volume on the xenserver using:
> "mount *10.52.10.6*:xen_bk_vol /backup -t nfs -o proto=tcp,vers=3"
> (My xenservers are also in the same subnet.)
> When I use "xe vm-export" (export vm to one single xva file in xenserver)
> to export vms, I found something confused.  When the exported vm files
> exceed a single point, such as 100G or 200G(I don't known the exact
> number), the mount point could not write data anymore and the glusterfs
> server (the one I used to as mount point in xenserver, as *10.52.10.6*)
> reboots very often. I cannot get information in the glusterfs logs.
>
> Is my setting wrong, or is it a bug?
>
> Thanks!
>