[Gluster-users] Very confused about stripe: how does it hold space?

肖力 exiaoli at 163.com
Wed Nov 7 01:51:21 UTC 2012



Thanks a lot, I tested it, and it works!


Xiao Li






On 2012-11-06 20:34:55, "Brian Foster" <bfoster at redhat.com> wrote:
>On 11/05/2012 08:38 PM, 肖力 wrote:
>> I have 4 Dell 2970 servers; three of them have 146G x 6 hard disks, and
>> one has 72G x 6.
>> 
>> Each server's mount info is:
>> /dev/sda4 on /exp1 type xfs (rw)
>> /dev/sdb1 on /exp2 type xfs (rw)
>> /dev/sdc1 on /exp3 type xfs (rw)
>> /dev/sdd1 on /exp4 type xfs (rw)
>> /dev/sde1 on /exp5 type xfs (rw)
>> /dev/sdf1 on /exp6 type xfs (rw)
>> 
>> I created a gluster volume with stripe 4:
>> gluster volume create test-volume3 stripe 4 transport tcp \
>> 172.16.20.231:/exp4 \
>> 172.16.20.232:/exp4 \
>> 172.16.20.233:/exp4 \
>> 172.16.20.235:/exp4
>> 
>> Then I mounted the volume on client 172.16.20.230:
>> mount -t glusterfs 192.168.106.231:/test-volume3 /gfs3
>> and wrote a 10G file into /gfs3 with dd:
>> dd if=/dev/zero of=/gfs3/3 bs=1M count=10240
>> 10240+0 records in
>> 10240+0 records out
>> 10737418240 bytes (11 GB) copied, 119.515 s, 89.8 MB/s
>> I am very confused by this:
>> [root at node231 ~]# du -hs /exp4
>> 10G     /exp4
>> [root at node232 ~]# du -hs /exp4
>> 10G     /exp4
>> [root at node233 ~]# du -hs /exp4
>> 10G     /exp4
>> [root at node235 ~]# du -hs /exp4
>> 10G     /exp4
>> I understand stripe 4 should hold 1/4 of the data on each brick, so why
>> is there 10G on each brick?
>> Can someone explain it? Thank you.
>> 
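[As a back-of-the-envelope check of the poster's expectation (not GlusterFS code), an even stripe over 4 bricks should put roughly a quarter of the 10 GiB file on each brick:]

```python
# Hypothetical arithmetic sketch: expected per-brick usage for an
# evenly distributed 4-way stripe of the dd-written file.
file_size = 10240 * 1024**2   # dd bs=1M count=10240 -> 10 GiB in bytes
stripe_count = 4              # bricks in the stripe set

per_brick = file_size // stripe_count
print(per_brick / 1024**3)    # 2.5 (GiB) expected per brick, not 10
```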
>
>The default stripe data layout conflicts with XFS default speculative
>preallocation behavior. XFS preallocates space beyond the end of files
>and the stripe translator continuously seeks past this space, making it
>permanent.
>
>You can address this by (1) enabling the cluster.stripe-coalesce
>translator option in gluster or (2) setting the allocsize mount option
>(e.g., allocsize=128k) in XFS. Note that using the latter option will
>increase the likelihood of fragmentation on the backend filesystem.
>
>Brian
>
>> 
>> 
>> 
>> 
>> 
>> 
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>> 
>
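[For reference, the two fixes Brian describes above could be applied roughly as follows. The volume name and brick device/mount point are taken from the poster's setup; treat this as a sketch, not verified commands.]

```shell
# Option 1: enable stripe coalescing so each stripe member stores
# only its own fraction of the file (option name from Brian's reply)
gluster volume set test-volume3 cluster.stripe-coalesce on

# Option 2: cap XFS speculative preallocation via the allocsize
# mount option on each brick (set at mount time; note the
# fragmentation caveat Brian mentions)
umount /exp4
mount -o allocsize=128k /dev/sdd1 /exp4
```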

