[Gluster-users] filling gluster cluster with large file doesn't crash the system?!

Craig Carl craig at gluster.com
Fri Nov 12 05:44:09 UTC 2010


Matt, 
Based on your Gluster servers' configs, that file is bigger than the available disk space; obviously that isn't right. 

Can you send us the output of `stat y.out`, taken both from the Gluster mount point and from the backend brick on the server where Gluster created the file? 
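Something along these lines would work (the mount point path below is just an example, substitute yours; the brick paths are the ones from your volume info): 

# on the client, at the Gluster/NFS mount point: 
stat /mnt/gluster/y.out 

# on each storage server, directly against the brick, to see which one actually holds the file: 
stat /exp1/y.out     # 172.16.1.76 
stat /exp2/y.out     # 172.16.2.117 

A `df -h` for each brick filesystem alongside the stat output would help too. 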

I'm also going to try to reproduce the problem here on 3.1 and 3.1.1qa5. 

Thanks, 
Craig 

-- 
Craig Carl 



Gluster, Inc. 
Cell - (408) 829-9953 (California, USA) 
Gtalk - craig.carl at gmail.com 


From: "Matt Hodson" <matth at geospiza.com> 
To: "Craig Carl" <craig at gluster.com> 
Cc: "Jeff Kozlowski" <jeff at genesifter.net>, gluster-users at gluster.org 
Sent: Wednesday, November 10, 2010 9:21:40 AM 
Subject: Re: [Gluster-users] filling gluster cluster with large file doesn't crash the system?! 

Craig, 
inline... 



On Nov 10, 2010, at 7:17 AM, Craig Carl wrote: 

Matt - 
A couple of questions - 

What is your volume config? (`gluster volume info all`) 



gluster> volume info all 


Volume Name: gs-test 
Type: Distribute 
Status: Started 
Number of Bricks: 2 
Transport-type: tcp 
Bricks: 
Brick1: 172.16.1.76:/exp1 
Brick2: 172.16.2.117:/exp2 

What is the hardware config for each storage server? 



brick 1 = 141GB 
brick 2 = 143GB 

What command did you run to create the test data? 


# perl -e 'print rand while 1' > y.out & 

What process is still writing to the file? 



Same one as above. 
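The one-liner just prints random floats in an endless loop, so it will keep writing until we kill the background job, e.g. (assuming it's still job %1 in that shell): 

kill %1 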

Thanks, 
Craig 

-- 
Craig Carl 



Gluster, Inc. 
Cell - (408) 829-9953 (California, USA) 
Gtalk - craig.carl at gmail.com 



From: "Matt Hodson" <matth at geospiza.com> 
To: gluster-users at gluster.org 
Cc: "Jeff Kozlowski" <jeff at genesifter.net> 
Sent: Tuesday, November 9, 2010 10:46:04 AM 
Subject: Re: [Gluster-users] filling gluster cluster with large file doesn't crash the system?! 

I should also note that on this non-production test rig the block size 
on both bricks is 1 KB (1024 bytes), so the theoretical file size limit is 
16 GB. So how, then, did I get a file of 200 GB? 
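
For reference, the 16 GB figure assumes the bricks are ext3, where a file built 
from 1 KiB blocks tops out at 12 direct blocks plus single, double, and triple 
indirect blocks, with 256 four-byte block pointers per 1 KiB indirect block: 

echo $(( (12 + 256 + 256*256 + 256*256*256) * 1024 ))   # 17247252480 bytes, roughly 16 GiB 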
-matt 

On Nov 9, 2010, at 10:34 AM, Matt Hodson wrote: 

> Craig et al, 
> 
> I have a 2-brick, 283 GB distributed Gluster cluster on CentOS 5. We 
> NFS-mounted the cluster from a third machine and wrote random junk to 
> a file. I watched the file grow to 200 GB on the cluster, at which point it 
> appeared to stop. However, the machine writing to the file still 
> lists the file as growing; it's now at over 320 GB. What's going on? 
> 
> -matt 
> 
> ------- 
> Matt Hodson 
> Scientific Customer Support, Geospiza 
> (206) 633-4403, Ext. 111 
> http://www.geospiza.com 
> 
> 
> 
> 


_______________________________________________ 
Gluster-users mailing list 
Gluster-users at gluster.org 
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users 


