[Gluster-users] filling gluster cluster with large file doesn't crash the system?!
matth at geospiza.com
Wed Nov 10 17:21:40 UTC 2010
On Nov 10, 2010, at 7:17 AM, Craig Carl wrote:
> Matt -
> A couple of questions -
> What is your volume config? (`gluster volume info all`)
gluster> volume info all
Volume Name: gs-test
Number of Bricks: 2
> What is the hardware config for each storage server?
brick 1 = 141GB
brick 2 = 143GB
> What command did you run to create the test data?
# perl -e 'print rand while 1' > y.out &
> What process is still writing to the file?
same one as above.
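One way to investigate a file that seems to outgrow its bricks is to compare the file's apparent size (the write offset, which is what `ls -l` and the NFS client report) with the blocks actually allocated on disk (what `du` reports). A minimal sketch of that check in Python, assuming a local filesystem that supports sparse files; the 100 MiB offset and temp file are illustrative, not from this thread:

```python
import os
import tempfile

# Create a sparse file: seek far past the end and write one byte.
# Its apparent size (st_size) is large, but the blocks actually
# allocated (st_blocks * 512) can stay tiny. The same comparison on
# the 320GB file would show whether it really occupies 320GB of brick.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    f.seek(100 * 1024 * 1024)  # hypothetical 100 MiB offset
    f.write(b"x")

st = os.stat(path)
apparent = st.st_size           # what `ls -l` reports
on_disk = st.st_blocks * 512    # what `du` reports
print(f"apparent: {apparent} bytes, on disk: {on_disk} bytes")
os.unlink(path)
```

Running `du -h y.out` on a brick versus `ls -lh y.out` on the NFS client would show the same distinction.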
> Craig Carl
> Gluster, Inc.
> Cell - (408) 829-9953 (California, USA)
> Gtalk - craig.carl at gmail.com
> From: "Matt Hodson" <matth at geospiza.com>
> To: gluster-users at gluster.org
> Cc: "Jeff Kozlowski" <jeff at genesifter.net>
> Sent: Tuesday, November 9, 2010 10:46:04 AM
> Subject: Re: [Gluster-users] filling gluster cluster with large file
> doesn't crash the system?!
> I should also note that on this non-production test rig the block size
> on both bricks is 1KB (1024 bytes), so the theoretical file size limit is
> 16GB. So how, then, did I get a file of 200GB?
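For reference, the 16GB ceiling quoted above matches the classic ext2/ext3 indirect-block arithmetic for a 1 KiB block size. A back-of-the-envelope sketch (real filesystems may impose other caps):

```python
# With a 1 KiB block, each block holds 256 four-byte block pointers.
# An ext2/ext3 inode addresses 12 direct blocks, plus one single,
# one double, and one triple indirect block.
def ext3_max_file_size(block_size):
    ptrs = block_size // 4  # 4-byte block pointers per block
    blocks = 12 + ptrs + ptrs**2 + ptrs**3
    return blocks * block_size

size = ext3_max_file_size(1024)
print(size, size / 2**30)  # roughly 16 GiB
```

So a single 200GB (let alone 320GB) file should be impossible on these bricks, which points at the apparent size being a client-side illusion rather than real allocation.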
> On Nov 9, 2010, at 10:34 AM, Matt Hodson wrote:
> > craig et al,
> > I have a 2-brick distributed 283GB gluster cluster on CentOS 5. We
> > NFS-mounted the cluster from a 3rd machine and wrote random junk to
> > a file. I watched the file grow to 200GB on the cluster, where it
> > appeared to stop. However, the machine writing to the file still
> > lists the file as growing; it's now at over 320GB. What's going on?
> > -matt
> > -------
> > Matt Hodson
> > Scientific Customer Support, Geospiza
> > (206) 633-4403, Ext. 111
> > http://www.geospiza.com
> Gluster-users mailing list
> Gluster-users at gluster.org