[Gluster-devel] df -kh not reporting correct value

Daniel van Ham Colchete daniel.colchete at gmail.com
Thu Jul 12 00:29:36 UTC 2007


On 7/11/07, DeeDee Park <deedee6905 at hotmail.com> wrote:
>
> Thanks, been very helpful. I'll look into the -n option to check out each
> brick.
> I worked with a developer before, and they said my config was all good when
> I was having problems. They probably have about 3 copies of my configs.
>
> I assume it is something like
> # glusterfs [all my other options] -n <brickname>
> and it will check out only that one brick.
>
> Can I then add 2 bricks, e.g.
> -n brick1 -n brick2
>
> to see the cumulative effects?
>

You can only specify one '-n' option. It names the brick that the GlusterFS
client will mount. The default is to mount the last brick defined in the volume
spec file.

Imagine the following case:

volume client1
        type protocol/client
        option transport-type tcp/client
        option remote-host 127.0.0.1
        option remote-subvolume brick1
end-volume

volume client2
        type protocol/client
        option transport-type tcp/client
        option remote-host 127.0.0.1
        option remote-subvolume brick2
end-volume

volume afr
        type cluster/afr
        subvolumes client1 client2
        option replicate *:2
        option self-heal on
end-volume

volume writebehind
        type performance/write-behind
        subvolumes afr
end-volume



If you run 'glusterfs (all options) -n client1 /mnt/gluster' you will mount only
the client1 brick there. If you run 'glusterfs (all options) -n client2
/mnt/gluster' you will mount only the client2 brick there. Then you can 'df -h'
each one and see how GlusterFS sees each brick before AFR gets involved.
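For example, something along these lines (the /mnt/brick1 and /mnt/brick2 mount
points are just placeholders, use whatever fits your setup):

glusterfs [all your other options] -n client1 /mnt/brick1
glusterfs [all your other options] -n client2 /mnt/brick2

# compare what each brick reports on its own
df -h /mnt/brick1 /mnt/brick2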

Now, if you run 'glusterfs (all options) -n afr /mnt/gluster' you will mount
both bricks, but now following the AFR rules and without the writebehind
translator active. Here you can run some benchmarks so that, later, you can
measure how much writebehind helps you.
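For instance (the mount point and the dd test file below are only illustrative):

glusterfs [all your other options] -n afr /mnt/gluster

# a very rough write test over AFR, with write-behind not in the chain
dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=100

# df now shows the AFR view of both bricks
df -h /mnt/gluster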

If you run just 'glusterfs (all options but -n) /mnt/gluster', the last brick in
the spec file (writebehind) will be mounted. Now you go through the whole chain
from beginning to end.
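That is simply:

glusterfs [all your other options] /mnt/gluster

and every request goes through writebehind -> afr -> client1/client2.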

It makes very little sense to run 'glusterfs -n brick1 -n brick2' because
GlusterFS does not know how to work with two top-level translators at the same
time. How would it know whether it should distribute files between the bricks or
replicate them?

GlusterFS can only connect to one brick. That brick, depending on its
translator logic, can connect to one or more other bricks and do whatever it
wants with them, but GlusterFS always needs a single starting point.

This translator design is very, very, very clever. I can't wait to see the
possibilities a compression translator would open up. Depending on how you mount
the chain you could:
1 - compress and uncompress the files at the servers, removing the compression
burden from the clients, or
2 - compress just before the protocol layer and uncompress just after it (as
sketched below), writing the files in plain form but making better use of the
interconnects, or
3 - have the clients compress and uncompress everything, with the servers
totally unaware of it, making for a more scalable setup.
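Just to make option 2 concrete, here is a rough sketch of how the client side
of such a chain could look. The 'performance/gzip' translator name is made up
(no compression translator exists yet), so take this only as an illustration of
where it would sit:

volume client1
        type protocol/client
        option transport-type tcp/client
        option remote-host 127.0.0.1
        option remote-subvolume brick1
end-volume

# hypothetical: compress just above the protocol layer so that only
# compressed data crosses the interconnect; the server side would
# uncompress right after protocol/server, before writing to disk
volume compress
        type performance/gzip
        subvolumes client1
end-volume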

The current documentation on what translators are, how you should think of
them, and how powerful this organization is, is a little thin on the wiki, but
as soon as you understand it you will love it too (as I do).

Best regards,
Daniel Colchete


