[Gluster-users] Distribute translator and differing brick sizes

Raghavendra G raghavendra at zresearch.com
Tue Jan 27 17:10:20 UTC 2009


Hi,

On Tue, Jan 27, 2009 at 5:01 PM, Sean Davis <sdavis2 at mail.nih.gov> wrote:

>
>
> On Tue, Jan 27, 2009 at 1:23 AM, Raghavendra G <raghavendra at zresearch.com>wrote:
>
>> Hi,
>>
>> On Tue, Jan 27, 2009 at 3:27 AM, Sean Davis <sdavis2 at mail.nih.gov> wrote:
>>
>>> If I am putting together several volumes of varying sizes using
>>> distribute, what type of load balancing should I expect?  I understand
>>> hashing, and it sounds like once a disk fills it is no longer used, but
>>> can I use the ALU scheduler to cut things off before a disk becomes full,
>>> to allow for growth of directories and files?  How are people approaching
>>> this?
>>
>>
>> Distribute does not have any schedulers. The hashing is, as of now, static
>> in the sense that if a brick becomes full, further creation of files that
>> happen to hash to that node will fail. Future versions of distribute will
>> reschedule such files to different nodes.
>>
>>
>
> Thanks, Raghavendra.
>
> So, it sounds like Distribute is problematic for any inhomogeneous file
> system (where bricks are of different sizes) or for systems that are not
> meant as "archival" (that is, write once, read many).  I understand that for
> boatloads of small files, performance is improved over unify by using
> distribute, but it sounds like unify is currently the better option for my
> situation.
>
> Is it worthwhile pointing out these details on the wiki somewhere?  The
> website appears to suggest that unify/schedulers are "legacy" systems, which
> implies that they are inferior to, rather than an alternative to, Distribute.
> However, in my situation, it appears that Unify is the only viable solution.
>

It is mentioned under the "legacy" section in the sense that it will be
gradually phased out as Distribute evolves.
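
For reference, a unify volume along the lines you describe would look
roughly like the spec below. This is only a rough sketch: the hostnames,
volume names and the 10% threshold are made up, and option names can vary
between releases, so please check it against the docs for the version you
are running.

# clients for two bricks of different sizes, exported by glusterfsd
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1          # hypothetical host
  option remote-subvolume brick1
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host server2          # hypothetical host
  option remote-subvolume brick1
end-volume

# namespace volume required by unify
volume ns
  type protocol/client
  option transport-type tcp
  option remote-host server1
  option remote-subvolume ns-brick
end-volume

volume unify0
  type cluster/unify
  option namespace ns
  option scheduler alu
  # stop scheduling new files to a brick once its free space falls below 10%
  option alu.limits.min-free-disk 10%
  # consider free disk space first when choosing a brick
  option alu.order disk-usage:write-usage:read-usage
  subvolumes remote1 remote2
end-volume

With alu.limits.min-free-disk set, new files simply stop being scheduled to
a brick that crosses the threshold, while the space left on it remains
available for growth of the files and directories already there, which is
the sort of headroom you describe.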


>
>
> Thanks for the help.
>
> Sean
>
>
>
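
For contrast, a distribute setup over the same client volumes needs no
scheduler and no namespace; a minimal, purely illustrative spec would be
just:

volume dht0
  type cluster/distribute
  subvolumes remote1 remote2
end-volume

The trade-off, as described above, is that distribute has no schedulers,
so there is no ALU-style free-space threshold to set here today.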


-- 
Raghavendra G