[Gluster-users] Unify with unequal space available

Harald Stürzebecher haralds at cs.tu-berlin.de
Sat Oct 4 10:11:42 UTC 2008


Hi!

2008/10/4 Deian Chepishev <dchepishev at nexbrod.com>:
> Hi guys,
>
> I have one simple question.
>
> I have 2 storage machines.
>
> stor01 - 12 TB
> stor02 - 5 TB
>
> stor01 is almost full (~ 500G free)
> stor02 is empty
>
> If I create a unify volume from these two, how will unify handle the
> different space available on them?
> Is it going to do some smart checking, or will it just do round-robin
> and fill up the full brick?

All available schedulers seem to have a "min-free-disk" option to
limit disk usage. IMHO, keeping that at the default of 5% should
prevent a volume from getting filled completely. If you have a lot of
files that grow over time and very few files that shrink or get
deleted, it might help to move some of the growing files to the new
server before unifying the volumes.
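
For example, with the round-robin scheduler the limit is set under the
scheduler's prefix in the client volume spec; something along these
lines (a rough sketch from memory of the wiki examples, so please
verify the option names against the docs for your release):

  option scheduler rr
  # stop creating new files on a brick once less than 5% is free
  option rr.limits.min-free-disk 5%

AFAIR the other schedulers take the same limit under their own prefix
(e.g. alu.limits.min-free-disk).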

As your situation seems similar to mine (one volume partially filled,
adding an empty volume), I'll describe some of the things I found out:
I have a machine with local disks in "home use" as a backup storage
server, using unify with the "alu" scheduler to combine the available
space into one big volume. I started with one disk (ignoring the
warning about using unify with only one volume) and added two more
disks over the last few months. The scheduler seems to distribute the
files so that the available space is used as evenly as possible.
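
A minimal unify-with-alu setup like mine looks roughly like this
(volume names are just examples, and the alu options are the ones I
remember from the 1.3/1.4 wiki pages, so double-check them):

  volume storage
    type cluster/unify
    subvolumes storage0 storage1 storage2
    option namespace storage-ns
    option scheduler alu
    # schedule new files mainly by free disk space,
    # and never fill a brick beyond 95%
    option alu.order disk-usage
    option alu.limits.min-free-disk 5%
  end-volume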

After installing the second disk and running some tests, the usage
looked like this:
storage0 - 40%
storage1 - 2%

After copying some files to the volume, the usage looked like this:
storage0 - 50%
storage1 - 20%

Adding a third disk made it look something like this:
storage0 - 50%
storage1 - 20%
storage2 - 2%

And now, it looks like this:
storage0 - 58%
storage1 - 58%
storage2 - 29%
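
(Those percentages are simply the Use% that "df" reports for each
brick's backend export directory on the server, e.g. something like
"df -h /export/storage0 /export/storage1 /export/storage2" - the
paths are just examples.)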

I'd say: "works for me" :-)


AFAIK, redistribution of files is planned for version 1.5:
http://www.gluster.org/docs/index.php/GlusterFS_Roadmap#GlusterFS_1.5_-_High_Availability_.2F_Management


Regards,
Harald



