[Gluster-users] Distribute AFR Storage
Sean Davis
sdavis2 at mail.nih.gov
Sat Mar 28 10:51:55 UTC 2009
2009/3/28 Simon Liang <simonl at bigair.net.au>
> Well currently I've got 4 servers set up.
>          |- ServerA -AFR-> ServerB
> Client -|
>          |- ServerC -AFR-> ServerD
>
> I keep copying the same 1GB file to the storage, but it always writes to
> ServerC no matter what I do. Is there anything I can do to fix this? Should
> I add a scheduler?
>
My understanding is that distribute uses a hash of the file name to decide
where each file goes. So if you copy the same file (same name) over and over,
you get the same result every time: it lands on the same server, because the
hash is deterministic. Vikas or someone else can correct me if I have this
wrong.
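To make that concrete, here is a toy sketch in Python. It is not the real
GlusterFS distribute hash, and the subvolume names are made up to match the
diagram above; it only shows why a hash-based scheduler keeps picking the
same target for the same file name:

# Toy illustration of deterministic hash-based placement.
# Not the real GlusterFS distribute hash; the names are invented.
import zlib

SUBVOLUMES = ["afr-ab", "afr-cd"]    # the two replicated pairs

def placement(filename):
    """Hash the file name and map it onto one subvolume."""
    h = zlib.crc32(filename.encode("utf-8"))
    return SUBVOLUMES[h % len(SUBVOLUMES)]

print(placement("big-file-1GB.iso"))     # same answer on every run
print(placement("big-file-1GB.iso"))     # ... so one pair keeps filling up
print(placement("some-other-name.iso"))  # a different name may land elsewhere

Copying the file under a different name may be enough to make it hash to the
other pair. (There is also a short note on the stripe suggestion below the
quoted message.)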
Sean
>
>
>
> -----Original Message-----
> From: vikasgp at gmail.com on behalf of Vikas Gorur
> Sent: Sat 3/28/2009 3:53 PM
> To: Simon Liang
> Cc: gluster-users
> Subject: Re: [Gluster-users] Distribute AFR Storage
>
> 2009/3/28 Simon Liang <simonl at bigair.net.au>:
> > How can I configure it so it writes it evenly to both servers, keeping
> the
> > free disk space even?
>
> Distribute by default will distribute _files_ evenly among its
> subvolumes. However, if one of the files is larger than the disk space
> available on the node it gets scheduled to, then distribute can do
> nothing. If you regularly want to store big files which might not fit
> on one of your subvolumes, you might want to try the 'stripe'
> translator.
>
> Vikas
> --
> Engineer - Z Research
> http://gluster.com/
>
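On the 'stripe' suggestion in the quoted message: conceptually, stripe splits
one big file into fixed-size blocks and spreads those blocks across the
subvolumes, so no single node has to hold the whole file. A toy Python sketch
of that idea, with a made-up block size and made-up subvolume names (not real
GlusterFS code or defaults):

# Toy illustration of stripe-style placement of one large file.
# Block size and subvolume names are invented; this is not GlusterFS code.
BLOCK_SIZE = 128 * 1024              # hypothetical stripe block size in bytes
SUBVOLUMES = ["afr-ab", "afr-cd"]    # the two replicated pairs

def stripe_target(offset):
    """Return the subvolume holding the block at this byte offset."""
    block_index = offset // BLOCK_SIZE
    return SUBVOLUMES[block_index % len(SUBVOLUMES)]

# The byte ranges of a single large file end up spread over both pairs:
for offset in range(0, 4 * BLOCK_SIZE, BLOCK_SIZE):
    print(offset, "->", stripe_target(offset))

Whether stripe is worth it depends on the file sizes involved; for files that
fit comfortably on one subvolume, plain distribute already balances by file.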