[Gluster-devel] clarification regarding the "2 mirrored servers" example on the wiki

Krishna Srinivas krishna at zresearch.com
Thu Mar 20 19:42:01 UTC 2008


Hi Daniel,

You could do that setup, but a better approach would be to have the
glusterfs clients on the webservers talk directly to the server
processes on the storage servers (instead of re-exporting), just as
shown in the example link you gave.
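
For illustration, the client-side spec on each webserver might look
roughly like the following. The hostnames are the ones from your
earlier mail; the exported volume name "brick" is a placeholder, and
exact option names can vary between GlusterFS releases:

  # /etc/glusterfs/client.vol on each webserver (path is illustrative)
  # one protocol/client volume per storage server
  volume remote1
    type protocol/client
    option transport-type tcp/client
    option remote-host storage1.example.com
    option remote-subvolume brick   # must match the volume exported by the server
  end-volume

  volume remote2
    type protocol/client
    option transport-type tcp/client
    option remote-host storage2.example.com
    option remote-subvolume brick
  end-volume

  # cluster/afr mirrors writes to both servers and reads from whichever is up
  volume mirror
    type cluster/afr
    subvolumes remote1 remote2
  end-volume

Each storage server then only needs to run glusterfsd exporting its
local storage/posix volume over protocol/server. With cluster/afr doing
the mirroring on the client side, either storage server can drop out
and the webservers keep reading and writing against the surviving copy.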

Regards
Krishna

On Thu, Mar 20, 2008 at 8:59 PM, Daniel Maher <dma+gluster at witbe.net> wrote:
> On Thu, 20 Mar 2008 20:11:16 +0530 "Krishna Srinivas"
>  <krishna at zresearch.com> wrote:
>
>  > It can suggest option "a" if the glusterfs client is run either on
>  > storage1.example.com or storage2.example.com
>  >
>  > It can suggest option "b" if the glusterfs client is run on a
>  > different machine.
>  >
>  > How you interpret that example depends on where you run the client.
>
>  Thank you for the prompt response, Krishna.
>
>  Suppose I have four webservers and two back-end storage servers.  I
>  would like the two storage servers to be mirrors of each other, and
>  in turn, I would like the webservers to be able to interact with the
>  storage "cluster" (such as it is) in a read/write capacity.
>
>  Using Gluster, would the proper approach here be to set up both of the
>  storage servers as gluster servers / clients of each other (option "a"),
>  then export the mirror as a volume to my webservers (running the client
>  only)?  In this fashion I would hope to reach a position whereby one of
>  the storage servers could suddenly burn to the ground, but operations
>  would not be affected, since the volume would remain accessible.
>
>  As per the "simple high availability storage" wiki page, this would
>  seem to be the case - but I'm more than happy to hear any other
>  thoughts on the matter.
>
>  Thank you all.
>
>
>
>  --
>
>
> Daniel Maher <dma AT witbe.net>
>
>
>  _______________________________________________
>  Gluster-devel mailing list
>  Gluster-devel at nongnu.org
>  http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
