[Gluster-users] Gluster striping taking up too much space

Khawaja Shams kshams at usc.edu
Fri Jul 6 18:48:15 UTC 2012


 Hello,
We redid our setup with ext4 bricks instead of XFS, and it now seems to work fine.
It sounds like there may be a bug in the integration of XFS and GlusterFS.
Thanks.
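
For anyone who hits the same symptom, this is the quick check we used to compare
the apparent file size with the blocks actually consumed (same test file as in the
quoted thread below; adjust the mount point to your setup):

# write a 256MB test file of zeros (not sparse, since every block is written)
dd if=/dev/zero of=/mnt/home/testfile bs=16k count=16384

# apparent size (file length) as seen through the client mount
ls -lh /mnt/home/testfile

# blocks actually consumed; this was ~9.2GB with XFS bricks and should be
# in the ~256MB range now that the bricks are ext4
du -sh /mnt/home/testfile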


On Fri, Jul 6, 2012 at 11:03 AM, Khawaja Shams <kshams at usc.edu> wrote:

> Hello,
>   Thanks for the response. Does our volume create command look correct?
> 1) We are using XFS
> 2)  We are using:
>
> dd if=/dev/zero of=/mnt/home/testfile bs=16k count=16384
>
> to write a 256MB file filled with 0s. I don't think that is a sparse file.
> We also tried other methods of writing the file to the system, and each
> file is consistently 40 times larger than we expect it to be.  Should we
> try a different fs?
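
(For reference on the sparse-file question below: the dd above writes every
block, so it should not be sparse. A sparse file would only come from seeking
past the data instead of writing it, e.g. the following, where "sparsefile" is
just an illustrative name:

# writes all 256MB of zeros, so the file is not sparse
dd if=/dev/zero of=/mnt/home/testfile bs=16k count=16384

# writes only the final 16k block and leaves everything before it as a hole (sparse)
dd if=/dev/zero of=/mnt/home/sparsefile bs=16k count=1 seek=16383
)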
>
> 3) We haven't tried the cluster.stripe-coalesce option. We will
> investigate it now. Google didn't turn up any useful info on how to use
> it at first glance, but we will look a little further.
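
(Adding this for the archives, since the documentation was thin: volume options
in 3.3 are set with "gluster volume set", so presumably it would be something
like the following. If I understand correctly, it only affects how newly created
files are laid out, so existing test files would need to be rewritten.

# enable coalesced striping on the volume from the create command below
gluster volume set gluster-test cluster.stripe-coalesce on

# confirm the option shows up in the volume info output
gluster volume info gluster-test
)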
>
>
> Thank you.
>
> On Fri, Jul 6, 2012 at 9:13 AM, Jeff Darcy <jdarcy at redhat.com> wrote:
>
>> On 07/06/2012 11:29 AM, Khawaja Shams wrote:
>> > Hello,
>> >    We are trying to create a 40-brick striped Gluster volume across 4
>> > machines. For the purposes of our benchmarks, we are setting it up
>> > without any replication. However, files are taking up 40 times more
>> > storage space than we expect. Writing a 256MB file with dd results in
>> > 9.2GB of usage on the file system. The Gluster FAQ claims that ls
>> > doesn't report file space usage accurately and that du does:
>> >
>> > http://gluster.org/community/documentation//index.php/GlusterFS_Technical_FAQ#Stripe_behavior_not_working_as_expected
>> >
>> > However, in our case, ls shows the file to be 256MB, while du shows it
>> > to be 9.2GB. After looking at the individual drives, we also noticed
>> > that each drive holds 256MB of data, so the file seems to be getting
>> > replicated rather than striped. Here is how we create the volume:
>> >
>> > gluster volume create gluster-test stripe 40 transport tcp Gluster1:/data_f
>> > Gluster1:/data_g Gluster1:/data_h Gluster1:/data_i Gluster1:/data_j
>> > Gluster1:/data_k Gluster1:/data_l Gluster1:/data_m Gluster1:/data_n
>> > Gluster1:/data_o Gluster2:/data_f Gluster2:/data_g Gluster2:/data_h
>> > Gluster2:/data_i Gluster2:/data_j Gluster2:/data_k Gluster2:/data_l
>> > Gluster2:/data_m Gluster2:/data_n Gluster2:/data_o Gluster3:/data_f
>> > Gluster3:/data_g Gluster3:/data_h Gluster3:/data_i Gluster3:/data_j
>> > Gluster3:/data_k Gluster3:/data_l Gluster3:/data_m Gluster3:/data_n
>> > Gluster3:/data_o Gluster4:/data_f Gluster4:/data_g Gluster4:/data_h
>> > Gluster4:/data_i Gluster4:/data_j Gluster4:/data_k Gluster4:/data_l
>> > Gluster4:/data_m Gluster4:/data_n Gluster4:/data_o
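
(Incidentally, this is how we spot-checked the per-brick usage mentioned above,
assuming the test file lands at the root of each brick because /mnt/home is the
volume mount point:

# run on each of Gluster1 through Gluster4
for d in f g h i j k l m n o; do du -sh /data_$d/testfile; done
)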
>> >
>> > gluster --version
>> > glusterfs 3.3.0 built on Jul  2 2012 23:26:48
>> > Repository revision: git://git.gluster.com/glusterfs.git
>> > Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
>> > GlusterFS comes with ABSOLUTELY NO WARRANTY.
>> > You may redistribute copies of GlusterFS under the terms of the GNU
>> > General Public License.
>> >
>> > We do not have this issue if all the bricks are on the same machine.
>> > What are we doing wrong? Are there specific logs we should be looking at?
>>
>> Three questions come to mind.
>>
>> (1) What underlying local filesystem (e.g. XFS, ext4) are you using for
>> the bricks?
>>
>> (2) What exactly is the "dd" command you're using?  Block size is
>> particularly
>> important, also whether you're trying to write a sparse file.
>>
>> (3) Have you tried turning on the "cluster.stripe-coalesce" volume
>> option, and
>> did it make a difference?
>>
>
>