[Gluster-users] HA NFS
Christopher Hawkins
chawkins at bplinux.com
Tue Jul 13 15:09:51 UTC 2010
Ah, I had no idea! I look forward to the correct answer because I am interested in the NFS translator as well.
----- "Layer7 Consultancy" <info at layer7.be> wrote:
> Hi Chris,
>
> Thanks for your answer, this was also my understanding of how things
> work when using the FUSE client (all storage nodes are described in
> the glusterfsd.vol file and thus the client has connection info for
> them).
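>
> For example, a minimal client vol file describing two replicated
> storage nodes would look roughly like this (hostnames and volume
> names are placeholders), which is how the client knows about every
> server directly:
>
>   volume remote1
>     type protocol/client
>     option transport-type tcp
>     option remote-host server1.example.com
>     option remote-subvolume brick1
>   end-volume
>
>   volume remote2
>     type protocol/client
>     option transport-type tcp
>     option remote-host server2.example.com
>     option remote-subvolume brick1
>   end-volume
>
>   volume replicated
>     type cluster/replicate
>     subvolumes remote1 remote2
>   end-volume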
>
> However, when using NFS the documentation states that one should
> connect to the 'management IP' and this also seems to be the only
> connection information that the client has.
> If this management IP is gone because that server went down, there is
> no way for the client to know that there are multiple other servers
> also serving the same content, so unless this virtual IP is taken over
> by another storage node, the client wouldn't know where to route the
> request.
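>
> In other words, an NFS client only ever knows about one address,
> something like this (IP and export name are made up):
>
>   mount -t nfs -o vers=3,tcp 192.168.1.100:/vmstore /mnt/vmstore
>
> so if that one address stops responding, the mount just hangs unless
> the address is moved to another node.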
>
> Can anyone confirm this?
>
> Cheers,
> Koen
>
>
> 2010/7/13 Christopher Hawkins <chawkins at bplinux.com>:
> > I can offer a little general information. My understanding is this:
> >
> > It is not like failover with a virtual IP. Instead, the gluster
> > clients connect to all storage servers at the same time. If one of
> > them becomes unavailable, the client can still reach the remaining
> > one(s). Locks are preserved on all remaining nodes. Writes are marked
> > (in the metadata) as having been completed on the remaining nodes, and
> > NOT completed on whichever node is down. On access, the file will be
> > healed once the downed node has returned. Or you can force healing of
> > all files when the node comes back, simply by accessing all files with
> > a 'find' command. See self healing in the wiki for more information on
> > this.
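> >
> > For example, something like this should stat every file on the mount
> > and so trigger self heal (the mount point is just a placeholder for
> > wherever the volume is mounted):
> >
> >   find /mnt/gluster -noleaf -print0 | xargs --null stat > /dev/null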
> >
> > I am not familiar with OpenQRM, so I don't know if or how that would
> > have to be tweaked for integration.
> >
> > Chris
> >
> > ----- "Layer7 Consultancy" <info at layer7.be> wrote:
> >
> >> Hi all,
> >>
> >> I am considering the built-in NFS functionality of Gluster to build a
> >> virtual server environment. The idea is to have 4 or 5 hosts (KVM or
> >> Xen) that all contain around 300GB of 15K rpm SAS storage in a RAID5
> >> array. On each of the host servers I would install a VM with the
> >> Gluster Platform and expose all of this storage through NFS to my
> >> OpenQRM installation, which would then host all the other VM's on the
> >> same servers.
> >> An alternative idea is to have the storage boxes separate from the VM
> >> hosts, but the basic idea stays the same I think.
> >>
> >> Now from what I understand, the NFS storage that is exposed to the
> >> clients is approached through the management IP of the first Gluster
> >> Platform server. My biggest question is what exactly happens when the
> >> first storage node goes down. Does the platform offer some kind of
> >> VRRP setup that fails over the IP to one of the other nodes? Is the
> >> lock information preserved and how does this all work internally?
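> >>
> >> (By a VRRP setup I mean something like a keepalived instance on each
> >> storage node floating a virtual IP between them, roughly like this;
> >> interface, router id and address are just placeholders:
> >>
> >>   vrrp_instance gluster_nfs {
> >>       state BACKUP
> >>       interface eth0
> >>       virtual_router_id 51
> >>       priority 100
> >>       virtual_ipaddress {
> >>           192.168.1.100
> >>       }
> >>   }
> >>
> >> with different priorities per node so one of them takes over the IP
> >> when the master disappears.)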
> >>
> >> Since I would be using KVM or Xen, it would in theory be possible to
> >> build the FUSE client on the host servers, though I am still in doubt
> >> on how OpenQRM will handle this. When choosing local storage, OpenQRM
> >> expects raw disks (I think) and creates LVM groups on these disks in
> >> order to allow snapshotting and backups. OpenQRM would also not know
> >> this is shared storage.
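> >>
> >> (Mounting the FUSE client on each host would presumably be as simple
> >> as something like the following, with the vol file path just an
> >> example:
> >>
> >>   glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/vmstore
> >>
> >> but I don't see how to present that mount to OpenQRM.)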
> >>
> >> Does anyone have some insight on a setup like this?
> >>
> >> Best regards,
> >> Koen
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users