[Gluster-users] novice kind of question.. replication(raid)

RW gluster at tauceti.net
Fri Apr 16 21:57:40 UTC 2010


See my answers in the text.

> Robert, thank you ever so much for clarifying the picture,
> 
> but I still wonder why I need to, because to me that seems like kind of
> first-aid functionality
> that should be there in any network distributed fs..
> so I wonder, is it possible with glusterfs to get the following:
> 
> have the server (backend) running as a daemon on two (or any number of) boxes,
> have these server(s) on these box(es) watching over a local tree (folder),
> and basically have these servers (backends) syncing with each other,
> doing so only to ensure that the content of this tree is
> the same on all boxes

Puh... I don't know if I get you right, but to me it looks like
you're looking for a filesystem which requires central storage
(SAN), like GFS/GFS2 (Red Hat) or OCFS (Oracle Cluster File System).
GFS or GFS2 can also be used as a local filesystem. GFS/GFS2 is closer
to what you've described above.

> server_1   <->   server_2   <->   server_3
>     |                |                |
>     ^                ^                ^
> /watch_me        /watch_me        /watch_me
> 
> so no mounts: a process changes something in this local /watch_me on
> server_1,
> and server_1 propagates (obviously working through the logic) the change to
> the other servers, and vice versa
> 
> is it possible, maybe by introducing the client part of the config into
> glusterfsd.vol,
> to have it like this? without a client having to mount/configure
> replication?

Well, if I haven't missed something, then the short answer should be: no.
Since the glusterfsd daemons (backend) are only responsible for storing
the data locally (besides some other things, of course), you need a
mount point, because the magic of distribution/replication lies in the
client (configuration).
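
A quick illustration of that point (just a sketch; /some_folder is the
client mount and /opt/glusterfsbackend the backend directory, as in my
explanation further down, and picture.jpg is just an arbitrary file):

  # writes that go through the client mount get replicated:
  cp picture.jpg /some_folder/             # shows up on all bricks

  # writes directly into the backend directory do NOT get replicated:
  cp picture.jpg /opt/glusterfsbackend/    # stays local - don't do this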

But I can show you a configuration where (almost) no mount is needed,
although I doubt that it will help you. We're using GlusterFS where we have
a central CMS (content management system). On this CMS host we have a
GlusterFS mount which replicates the uploaded pictures to 8 other
hosts. On each of these 8 hosts glusterfsd is running, of course.
glusterfsd then stores these files locally on each host. The 8 hosts
run Apache webservers which deliver these pictures to the web browsers
out there. This scenario is very practical if you need to distribute
files from a central location to many other hosts. Important to note
here is that you really only read the files and do not modify them
(besides on the host which has the CMS, of course). Changes made directly
on the backends won't be replicated and you'll probably get strange results
over time.
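
To make that a bit more concrete, the client volfile on the CMS host is
roughly the glusterfs.vol from further down, just with more remotes.
A sketch (the hostnames and the web1..web8 volume names are made up here,
and each of the 8 hosts runs a glusterfsd.vol like the one below):

#########################################
# glusterfs.vol on the CMS host (sketch)
#########################################
volume web1
  type protocol/client
  option transport-type tcp
  option remote-host web1.example.com
  option remote-port 6996
  option remote-subvolume locks
end-volume

volume web2
  type protocol/client
  option transport-type tcp
  option remote-host web2.example.com
  option remote-port 6996
  option remote-subvolume locks
end-volume

# web3 ... web8 are defined exactly the same way, before the
# replicate volume below

volume replicate
  type cluster/replicate
  subvolumes web1 web2 web3 web4 web5 web6 web7 web8
end-volume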

> other than that glusterfs feels cool. the last two days I was fiddling with
> coda, but in the end it crashes way too often, at least Fedora's rpm does.
> yet there is (was) a problem with glusterfs for me too, in case anybody uses
> fedora:
> https://bugzilla.redhat.com/show_bug.cgi?id=555728

I've had problems on Gentoo until version 3.0.2. 3.0.2 was the
first version which worked quite well for us. There are still some issues
left, but I haven't tested 3.0.4 yet.

> ps. is it really as the docs say, that glusterfs won't work on slow and
> flaky networks? 1GbE at least?

I would definitely recommend 1GbE. If you need a filesystem for
slow and flaky networks (over WAN), maybe you should have a look at AFS
(http://en.wikipedia.org/wiki/Andrew_File_System). But it is more
complicated to set up, and I wouldn't compare GlusterFS and AFS
directly.

- Robert


> cheers
> 
> 
> On 16/04/10 15:01, RW wrote:
>>   
>>> many thanks Robert for your quick reply,
>>> I still probably am missing/misunderstanding the big picture here, what
>>> about this:
>>>
>>>          box a       <-->       box b
>>>         /dir_1                 /dir_1
>>>            ^                      ^
>>>    services locally       services locally
>>>  read/write to /dir_1   read/write to /dir_1
>>>     
>> This is basically the setup I described with my config files.
>> /dir_1 (or /some_folder in your former mail) is the client mount.
>> Everything you copy in there will be replicated to box a and
>> box b. It doesn't matter if you do the copy on box a or b.
>> But you need a different location for glusterfsd (the GlusterFS
>> daemon) to store the files locally. This could be /opt/glusterfsbackend
>> for example. You need this on both hosts and you need the mounts
>> (client) on both hosts.
>>
>>   
>>> - can all these local services/processes, whatever these might be,
>>> not know about mounting and all this stuff?
>>>     
>> You need to copy glusterfsd.vol to both hosts, e.g. to /etc/glusterfs/.
>> Then you start glusterfsd (on Gentoo this is "/etc/init.d/glusterfsd
>> start"). Now you should see a glusterfsd process on both hosts.
>> You also copy glusterfs.vol to both hosts. As you can see in my
>> /etc/fstab I supply the glusterfs.vol file as the filesystem
>> and glusterfs as the type. You then mount GlusterFS as you would
>> any other filesystem. If you now copy a file to /some_folder
>> on "box a" it will automatically be replicated to "box b" and after
>> that it will immediately be available on "box b". The replication
>> is done by the client (the mountpoint in your case, if that
>> helps to better understand it). The servers basically only provide the
>> backend services to store the data somewhere on a brick (host).
>> In my example above this was /opt/glusterfsbackend.
>>
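>> If it helps, on each host the whole thing boils down to roughly the
>> following (just a sketch; I'm using /opt/glusterfsbackend for the
>> backend and /some_folder for the client mount as described above, and
>> glusterfsd.vol would point its "option directory" at the backend dir):
>>
>>   # copy the volfiles onto the host
>>   cp glusterfsd.vol glusterfs.vol /etc/glusterfs/
>>
>>   # create the backend directory and the mountpoint
>>   mkdir -p /opt/glusterfsbackend /some_folder
>>
>>   # start the backend daemon (Gentoo init script, adjust for your distro)
>>   /etc/init.d/glusterfsd start
>>
>>   # mount the client - this uses the glusterfs.vol entry from /etc/fstab
>>   mount /some_folder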
>>   
>>> - and the servers between themselves make sure (resolve conflicts, etc.)
>>> that the content of dir_1 on both boxes is the same?
>>>     
>> Most of the time ;-) There are situations where conflicts can
>> occur, but in this basic setup they're rare. You have to monitor
>> the log files. But GlusterFS provides self healing, which means
>> that if a backend (host) goes down, the files generated on the
>> good host - while the bad host is down - will be copied to the failed
>> host once it is up again. But this will not happen immediately.
>> This is the "magic part" of GlusterFS ;-)
>>
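>> If I remember the docs right, you can also kick off the self healing
>> by hand once the failed host is back, simply by stat()ing every file
>> through the client mount - roughly like this (a sketch, the exact
>> incantation may differ between versions):
>>
>>   # walk the whole client mount once so every file gets looked at;
>>   # reading a file that is out of sync triggers its self heal
>>   find /some_folder -print0 | xargs -0 stat > /dev/null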
>>   
>>> - so whatever happens (locally) on box_a is replicated (through the
>>> "servers") to box_b and vice versa -
>>> is that possible with GlusterFS or do I need to be looking for something else?
>>>     
>> As long as you copy the files into the glusterfs mount (in your
>> case /some_folder), the files will be copied to "box b" if you
>> copy them on "box a", and vice versa.
>>
>>   
>>> and your configs, do both files, glusterfsd.vol and glusterfs.vol, go to
>>> both box_a & box_b?
>>>     
>> Yes.
>>
>>   
>>> does mount need to be executed on both boxes as well?
>>>     
>> Yes.
>>
>> - Robert
>>
>>
>>   
>>> thanks again Robert
>>>
>>>
>>>
>>> On 16/04/10 13:42, RW wrote:
>>>     
>>>> This is basically the config I'm using to replicate
>>>> a directory between two hosts (RAID 1 if you like ;-) ).
>>>> You need both the server and the client even if both are on the
>>>> same host:
>>>>
>>>> ##########################
>>>> # glusterfsd.vol (server):
>>>> ##########################
>>>> volume posix
>>>>   type storage/posix
>>>>   option directory /some_folder
>>>> end-volume
>>>>
>>>> volume locks
>>>>   type features/locks
>>>>   subvolumes posix
>>>> end-volume
>>>>
>>>> volume server
>>>>   type protocol/server
>>>>   option transport-type tcp
>>>>   option transport.socket.bind-address .......
>>>>   option transport.socket.listen-port 6996
>>>>   option auth.addr.locks.allow *
>>>>   subvolumes locks
>>>> end-volume
>>>>
>>>> #########################
>>>> # glusterfs.vol (client):
>>>> #########################
>>>> volume remote1
>>>>   type protocol/client
>>>>   option transport-type tcp
>>>>   option remote-host <ip_or_name_of_box_a>
>>>>   option remote-port 6996
>>>>   option remote-subvolume locks
>>>> end-volume
>>>>
>>>> volume remote2
>>>>   type protocol/client
>>>>   option transport-type tcp
>>>>   option remote-host <ip_or_name_of_box_b>
>>>>   option remote-port 6996
>>>>   option remote-subvolume locks
>>>> end-volume
>>>>
>>>> volume replicate
>>>>   type cluster/replicate
>>>>   # optional, but useful if the workload is mostly reads
>>>>   # !!!use different values on box a and box b!!!
>>>>   # option read-subvolume remote1
>>>>   # option read-subvolume remote2
>>>>   subvolumes remote1 remote2
>>>> end-volume
>>>>
>>>> #####################
>>>> # /etc/fstab
>>>> #####################
>>>> /etc/glusterfs/glusterfs.vol /some_folder  glusterfs  noatime  0  0
>>>>
>>>> "noatime" is optional of course. Depends on your needs.
>>>>
>>>> - Robert
>>>>
>>>>
>>>> On 04/16/10 14:18, pawel eljasz wrote:
>>>>   
>>>>       
>>>>> dear all, I just subscribed and started reading the docs,
>>>>> but I'm still not sure if I got the hang of it all.
>>>>> is GlusterFS for something simple like:
>>>>>
>>>>>     a box       <->       b box
>>>>> /some_folder          /some_folder
>>>>>
>>>>> so /some_folder on both boxes would contain same data
>>>>>
>>>>> if yes, then does setting up only the servers suffice? or is the client
>>>>> side needed too?
>>>>> can someone share a simplistic config that would work for the above
>>>>> simple design?
>>>>>
>>>>> cheers
>>>>>
>>>>>
>>>>>


