[Gluster-users] Deployment (round 2)

Harald Stürzebecher haralds at cs.tu-berlin.de
Sun Sep 21 08:16:26 UTC 2008


Hi!

2008/9/21 Paolo Supino <paolo.supino at gmail.com>:
> Hi Krishna
>
>   I'm running glusterfs servers and clients on all the nodes that are
> connected to the private network and I want to run glusterfs clients (only)
> on researcher computers that have only access to the head node from an
> external network.
>   I've attached an image of the final setup I want to reach ...
>   What I'm missing now: the head node doesn't iSCSI-mount the toaster and
> isn't a glusterfs server. The researcher's computer doesn't act as a
> glusterfs client ...

Thank you for posting the diagram.

If I were to create a setup like this, my first attempt at a
configuration would look something like this:

cluster nodes except head node:
>>>
# file: /etc/glusterfs/glusterfs-server.vol
volume brick
  type storage/posix
  option directory /data/export
end-volume

volume brick-ns
  type storage/posix
  option directory /data/export-ns
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.ip.brick.allow <private subnet>
  option auth.ip.brick-ns.allow <private subnet>
  subvolumes brick brick-ns
end-volume
>>>EOF

from http://www.gluster.org/docs/index.php/Aggregating_Three_Storage_Servers_with_Unify
Following the mailing list thread, IIRC you already have something like that.
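On each of those nodes the server side would then be started against
that file, e.g. (path as in the comment above; an init script would do
the same job):
>>>
# start the GlusterFS server daemon with the volfile above
glusterfsd -f /etc/glusterfs/glusterfs-server.vol
>>>EOF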

cluster head node:
>>>
# file: /etc/glusterfs/glusterfs-server.vol  (loaded by glusterfsd on the head node)
# "unify"ing the cluster nodes and the "toaster"
volume remote1
  type protocol/client
  option transport-type tcp/client
  option remote-host storage1.example.com
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp/client
  option remote-host storage2.example.com
  option remote-subvolume brick
end-volume

<add the other cluster nodes here>

volume remote35
  type protocol/client
  option transport-type tcp/client
  option remote-host storage35.example.com
  option remote-subvolume brick
end-volume

volume remote-ns
  type protocol/client
  option transport-type tcp/client
  option remote-host storage1.example.com
  option remote-subvolume brick-ns
end-volume

# local data
volume remote36
  type storage/posix
  option directory /data/export
end-volume

# storage on "toaster"
volume toaster1
  type storage/posix
  option directory /mnt/toaster1  # mounted from iSCSI
end-volume

volume toaster2
  type storage/posix
  option directory /mnt/toaster2  # mounted from iSCSI
end-volume

# unify everything together
volume unify0
  type cluster/unify
  option scheduler rr # round robin
  option namespace remote-ns
  subvolumes remote1 remote2 <add the other 32 nodes here> remote35 remote36 toaster1 toaster2
end-volume

# and now export that to the cluster nodes and faculty systems
volume all-data
  type protocol/server
  option transport-type tcp/server
  option auth.ip.unify0.allow <faculty subnet>  # should be limited to faculty systems
  option auth.ip.remote36.allow <private subnet>  # export to cluster nodes
  option auth.ip.toaster1.allow <private subnet>  # export to cluster nodes
  option auth.ip.toaster2.allow <private subnet>  # export to cluster nodes
  subvolumes unify0 remote36 toaster1 toaster2
end-volume
>>>EOF

bits and pieces copied from
http://www.gluster.org/docs/index.php/Aggregating_Three_Storage_Servers_with_Unify
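Before glusterfsd is started on the head node, the two LUNs from the
toaster have to be logged into and mounted at /mnt/toaster1 and
/mnt/toaster2. A rough open-iscsi sketch - the portal IP, target names
and device names below are made up, yours will differ:
>>>
# discover and log in to the toaster's iSCSI targets (open-iscsi)
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2008-09.example.com:toaster.lun1 -p 192.168.1.10 --login
iscsiadm -m node -T iqn.2008-09.example.com:toaster.lun2 -p 192.168.1.10 --login

# mount the resulting block devices where the volfile expects them
mount /dev/sdb1 /mnt/toaster1
mount /dev/sdc1 /mnt/toaster2
>>>EOF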

faculty systems acting as clients:
>>>
# file: /etc/glusterfs/glusterfs-client.vol
volume remote
  type protocol/client
  option transport-type tcp/client
  option remote-host <(external) IP of cluster head node>
  option remote-subvolume all-data
end-volume
>>>EOF

adapted from http://www.gluster.org/docs/index.php/NFS_Like_Standalone_Storage_Server
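With that file in place a faculty system would mount the volume with
something like this (the mount point is just an example):
>>>
# mount the unified volume exported by the head node
glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/cluster-data
>>>EOF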

cluster nodes acting as clients, including head node:
>>>
# file: /etc/glusterfs/glusterfs-client.vol
volume remote1
  type protocol/client
  option transport-type tcp/client
  option remote-host storage1.example.com
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp/client
  option remote-host storage2.example.com
  option remote-subvolume brick
end-volume

<add other 32 nodes here>

volume remote35
  type protocol/client
  option transport-type tcp/client
  option remote-host storage35.example.com
  option remote-subvolume brick
end-volume

volume remote36
  type protocol/client
  option transport-type tcp/client
  option remote-host headnode.example.com
  option remote-subvolume remote36  # exported by the head node above
end-volume

volume toaster1
  type protocol/client
  option transport-type tcp/client
  option remote-host headnode.example.com
  option remote-subvolume toaster1  # exported by the head node above
end-volume

volume toaster2
  type protocol/client
  option transport-type tcp/client
  option remote-host headnode.example.com
  option remote-subvolume toaster2  # exported by the head node above
end-volume

volume remote-ns
  type protocol/client
  option transport-type tcp/client
  option remote-host storage1.example.com
  option remote-subvolume brick-ns
end-volume

volume unify0  # the same as used by the head node
  type cluster/unify
  option scheduler rr # round robin
  option namespace remote-ns
  subvolumes remote1 remote2 <add the other cluster nodes here> remote35 remote36 toaster1 toaster2
end-volume
>>>EOF

adapted from http://www.gluster.org/docs/index.php/Aggregating_Three_Storage_Servers_with_Unify
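If the mount.glusterfs helper shipped with your version is installed,
the cluster nodes could also remount this at boot via fstab; a sketch,
with the mount point again just an example:
>>>
# /etc/fstab entry - the client volfile acts as the "device"
/etc/glusterfs/glusterfs-client.vol  /mnt/cluster-data  glusterfs  defaults  0  0
>>>EOF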

There are some things that could be done to improve performance, e.g.
- adding performance translators at the "right places" ;-)
- using the NUFA scheduler for unify, as all cluster nodes act as both clients and servers (see the sketch below).
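For example, the unify on each cluster node could be switched to NUFA
and wrapped in a few performance translators. This is only a sketch -
check the option names (especially nufa.local-volume-name) against the
docs for your version, and point the local volume at the brick that
lives on that particular node (remote1 is just the example for node 1):
>>>
# client volfile on a cluster node, replacing the plain rr unify above
volume unify0
  type cluster/unify
  option scheduler nufa
  option nufa.local-volume-name remote1  # the volume local to *this* node
  option namespace remote-ns
  subvolumes remote1 remote2 <add the other cluster nodes here> remote35 remote36 toaster1 toaster2
end-volume

# stack some performance translators on top of the unified volume
volume iothreads
  type performance/io-threads
  subvolumes unify0
end-volume

volume writebehind
  type performance/write-behind
  subvolumes iothreads
end-volume

volume readahead
  type performance/read-ahead
  subvolumes writebehind
end-volume
>>>EOF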

I believe that adding the faculty systems as servers could be done as
well, but I would not touch that topic as long as I would still call
the setup described above "complicated".
<loud thinking>
Some extensions to the volume description language might be nice:
- having remote(1..36) automatically expand to "remote1 remote2
remote3 .. remote36"
- some "if <regex matched against the hostname> replace <this
volume> with <that volume>" construct
- some loop structure for creating volume descriptions; copy/paste
usually leads to errors when editing at a later time and forgetting one
copy - imagine a setup unifying AFR'ed volumes in a 500+ node cluster
(a shell loop like the one sketched below can stand in for now)
</loud thinking>
Well, I should post that to the GlusterFS wishlist.
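Until something like that exists, the repetitive protocol/client
stanzas can at least be generated with a small shell loop instead of
copy/paste; a sketch assuming the storageN.example.com naming used
above, writing to a scratch file:
>>>
# generate the client volume stanzas for storage1..storage35
for i in $(seq 1 35); do
  cat <<STANZA
volume remote$i
  type protocol/client
  option transport-type tcp/client
  option remote-host storage$i.example.com
  option remote-subvolume brick
end-volume

STANZA
done > client-volumes.vol
>>>EOF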


Harald Stürzebecher



