[Gluster-users] Deployment (round 2)

Paolo Supino paolo.supino at gmail.com
Sat Sep 20 22:36:50 UTC 2008

Hi Krishna

 Let me elaborate on the setup I have: 36 nodes and a toaster (a Netapp
2020 filer), all connected to a private Gbit Ethernet switch (i.e. the
switch isn't connected to anything but the nodes and the toaster). A single
node of the 36 is multi-homed and is also connected to the faculty
network (the head node). The multi-homed head node also acts as a
PAT/PNAT/Masquerading gateway for all the nodes on the private network.
Researchers' computers only see the multi-homed node (on its outward-facing
interface) and don't see any of the nodes on the private network.
The glusterfs setup I built uses all 35 nodes that are connected only to the
private network as both servers and clients of the glusterfs volume.
Currently the head (multi-homed) node acts only as a client (this will
change once I make the toaster an iSCSI target).
  My final goal is to have the researchers' computers act as glusterfs
clients of the glusterfs volume I created on the private network (which
they can't see) ...
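For readers following along, a head-node masquerading setup like the one described is typically configured along these lines. This is a sketch under assumptions: the interface names (eth0 facing the faculty network, eth1 facing the private cluster) and the 192.168.0.0/24 subnet are placeholders, not taken from this thread.

```shell
# Assumed layout: eth0 = faculty-facing interface, eth1 = private cluster network.
# Enable IP forwarding on the head node.
sysctl -w net.ipv4.ip_forward=1
# Masquerade traffic leaving eth0 that originates from the private subnet,
# so the 35 cluster nodes can reach the outside world through the head node.
iptables -t nat -A POSTROUTING -o eth0 -s 192.168.0.0/24 -j MASQUERADE
```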


BTW: if anyone is getting confused by my calling the Netapp filer a
toaster (a nickname Netapp itself uses to show how simple their storage
systems are), let me know and I'll stop calling it a toaster, though
it's a lot of fun to toast things on it ;-)

Krishna Srinivas wrote:
> On Sun, Sep 21, 2008 at 1:46 AM, Paolo Supino <paolo.supino at gmail.com> wrote:
>> Hi Krishna
>>  I fully intend to mount the toaster's filesystems on the
>> head node using iSCSI (see my original post), but I have a problem: I
>> have 35 servers in the gluster filesystem that the researchers can't see
>> directly: the head node hides the private network with
>> PAT/PNAT/Masquerading (pick your favorite acronym), so the clients only
>> see the head node, not the gluster filesystem servers behind it.
>> The glusterfs clients would get a wrong view of the gluster filesystem ... I
>> could simply remove the PAT/PNAT/Masquerading (pick your favorite
>> acronym), but I'd rather not, because that adds systems-administration
>> overhead and breaks the rule of KISS.
> Ah OK, you have 35 storage servers apart from toaster.
> If I understand you correctly, you are planning to run a glusterfs
> client on the head node and re-export this mount point to the
> researchers' nodes?
> If yes, you could setup port forwarding on head node and avoid
> re-exporting completely so that researchers' nodes access the
> storage nodes directly.
> Regards
> Krishna
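Krishna's port-forwarding idea could be sketched as DNAT rules on the head node, one forwarded port per private storage server. Everything concrete here is an assumption, not from the thread: the interface names, the 192.168.0.x addressing, and the server port 6996 (the 1.x-era glusterfsd default; check the listen-port in your server volfiles).

```shell
# Hypothetical: expose the 35 private storage servers through the head node.
# Researchers' clients would connect to HEAD_IP:7001..7035; each port is
# rewritten to one private server's glusterfsd port.
for i in $(seq 1 35); do
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport $((7000 + i)) \
    -j DNAT --to-destination 192.168.0.$i:6996
done
# Allow the forwarded connections through to the private network.
iptables -A FORWARD -i eth0 -o eth1 -p tcp -d 192.168.0.0/24 --dport 6996 -j ACCEPT
```

The researchers' client volfiles would then have to name the head node's faculty-side address and the forwarded port for each subvolume, rather than the private addresses.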
>> --
>> TIA
>> Paolo
>> Krishna Srinivas wrote:
>>> Paolo,
>>> You could mount the toaster's partitions on the head node using iSCSI.
>>> Run a glusterfs server on the head node exporting the two partitions.
>>> Run glusterfs clients on the researchers' nodes.
>>> Krishna
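Krishna's suggestion above would translate, in the 1.x-era volfile format, into roughly the following server spec on the head node. This is a sketch under assumptions: the mount points /mnt/toaster-a and /mnt/toaster-b and the allow-all auth rules are placeholders, not from the thread.

```
volume brick-a
  type storage/posix
  option directory /mnt/toaster-a   # first iSCSI-backed partition (assumed path)
end-volume

volume brick-b
  type storage/posix
  option directory /mnt/toaster-b   # second iSCSI-backed partition (assumed path)
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes brick-a brick-b
  option auth.ip.brick-a.allow *    # tighten to the researchers' subnet in practice
  option auth.ip.brick-b.allow *
end-volume
```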
>>> 2008/9/18 Paolo Supino <paolo.supino at gmail.com>:
>>>> Hi
>>>>   now that I have a shiny new parallel filesystem :-) I want to take
>>>> it a step forward (the fun never ends ;-) ) ...
>>>>   A few words on my HPC cluster:
>>>> 1. The private network between the compute nodes, head and toaster
>>>> (Netapp FAS 2020) is Gigabit Ethernet.
>>>> 2. The toaster exports 2.1 and 5.1 TB volumes served over NFSv3 (ouch..)
>>>> 3. Only the head node is multi homed and connected to the faculty
>>>> network, where the researchers are ...
>>>>   What I thought of doing:
>>>> 1. Re-export the toaster's volumes using iSCSI.
>>>> 2. Mount the iSCSI exports on the head node and add them to the gluster
>>>> volume. This is pretty straightforward :-) and voilà, I have a uniform
>>>> 9.3 TB volume ...
>>>> 3. The last part is the tricky part that I still have to figure out:
>>>> let the researchers be gluster clients of this volume
>>>> without exposing the private network to the faculty network (I don't
>>>> want to NFS-export it)
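Step 1 of the plan above (consuming the filer over iSCSI from the head node) would look roughly like this with open-iscsi. The portal address and the target IQN are placeholders for illustration only; the filer's actual values come from its iSCSI configuration.

```shell
# Hypothetical open-iscsi session setup from the head node.
# 192.168.0.100 stands in for the filer's private address; the IQN is a placeholder.
iscsiadm -m discovery -t sendtargets -p 192.168.0.100
iscsiadm -m node -T iqn.1992-08.com.netapp:sn.XXXXXX -p 192.168.0.100 --login
# The new block device(s) can then be formatted, mounted, and exported
# through glusterfs alongside the 35 node bricks.
```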
>>>> --
>>>> Paolo
>>>> _______________________________________________
>>>> Gluster-users mailing list
>>>> Gluster-users at gluster.org
>>>> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
