[Gluster-users] question of Replicate(GFS 2.0)

Io Noci ionoci at webchillaz.de
Thu Mar 5 19:00:37 UTC 2009


hi,
keep your config files as simple as possible for testing.
your first mentioned config file seems ok to me; the second one is too
overloaded for testing.
your debug logs from "DEBUG log for replicate" and "What was the matter
with my GFS 2.0" seem ok: just 2 warnings, the rest are plain debug messages.

please reduce the options to the minimum you need, then try the
following steps, sketched as a shell session after the list. !only if
you are in a testing environment, this will destroy all exported data!:

* stop all gluster stuff
* clear all posix filesystems using mkfs
* use your first mentioned server and client config
* mount rep1 with
 'glusterfs -f /etc/glusterfs/glusterfs.vol --volume-name=rep1 /data'
* write some files to /data and check for the existence and content of
the files on both nodes
* umount and redo the two steps above for rep2 and rep-ns
* if all seems to be ok, stop the gluster stuff again and clean up the
filesystems
* mount bricks to /data, write some files to /data
* check how the files get distributed and replicated
* stop one gluster service, remove the file in the posix filesystem of
that node, then start the gluster service again
* do a 'find /data -type f -print0 | xargs -0 head -c1' and check how
the posix filesystem on the stopped node gets populated again (reading
the first byte of every file forces replicate to self-heal the missing
copies)
* when you reach this step without any error your gfs seems ok
to me ;-)
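
a rough shell sketch of the whole sequence; the backend device names,
mkfs.ext3 and stopping the daemons via killall are my assumptions,
adjust them to your setup:

  ## !! destroys all exported data, testing only !!

  ## on both servers: stop gluster and re-create the backends
  server# killall glusterfsd              # assumption: no init script in use
  server# umount /data1 /data2 /export
  server# mkfs.ext3 /dev/sdb1             # hypothetical devices; repeat for
  server# mount /dev/sdb1 /data1          # /data2 and /export
  server# glusterfsd -f /etc/glusterfs/glusterfsd.vol

  ## on the client: test each replicate volume on its own
  client# glusterfs -f /etc/glusterfs/glusterfs.vol --volume-name=rep1 /data
  client# echo test > /data/file1         # check /data1/file1 on both servers
  client# umount /data                    # redo for rep2 and rep-ns

  ## finally mount the whole unify volume and test self-heal
  client# glusterfs -f /etc/glusterfs/glusterfs.vol /data
  client# touch /data/11 /data/22 /data/33 /data/44
  ## now stop glusterfsd on one node, remove one file from its posix
  ## backend, start glusterfsd again, then trigger self-heal:
  client# find /data -type f -print0 | xargs -0 head -c1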

hope it works out

Io Noci




eagleeyes wrote:
>  Thanks, but the line you mention was only missing from my mail, not
> from my server's config.
>  Is anything else wrong?
> Have you seen my mail "What was the matter with my GFS 2.0"? Its
> content is the DEBUG log I grabbed; could you help me?
>  
>     The DEBUG log looks like this:
>  
> 2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] rep-ns: no range check required for 'option metadata-lock-server-count 2'
> 2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] rep-ns: no range check required for 'option entry-lock-server-count 2'
> 2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] rep-ns: no range check required for 'option data-lock-server-count 2'
> 2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] rep2: no range check required for 'option metadata-lock-server-count 2'
> 2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] rep2: no range check required for 'option entry-lock-server-count 2'
> 2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] rep2: no range check required for 'option data-lock-server-count 2'
> 2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] rep1: no range check required for 'option metadata-lock-server-count 2'
> 2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] rep1: no range check required for 'option entry-lock-server-count 2'
> 2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] rep1: no range check required for 'option data-lock-server-count 2'
> 2009-03-04 16:16:47 D [client-protocol.c:6221:init] client1: setting transport-timeout to 5
> 2009-03-04 16:16:47 D [client-protocol.c:6235:init] client1: defaulting ping-timeout to 10
> 2009-03-04 16:16:47 D [transport.c:141:transport_load] transport: attempt to load file /lib/glusterfs/2.0.0rc2/transport/socket.so
> 2009-03-04 16:16:47 W [xlator.c:426:validate_xlator_volume_options] client1: option 'transport.socket.remote-port' is deprecated, preferred is 'remote-port', continuing with correction
> 2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] client1: no range check required for 'option remote-port 6996'
> 2009-03-04 16:16:47 D [transport.c:141:transport_load] transport: attempt to load file /lib/glusterfs/2.0.0rc2/transport/socket.so
> 2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] client1: no range check required for 'option remote-port 6996'
> 2009-03-04 16:16:47 D [xlator.c:595:xlator_init_rec] client1: Initialization done
>  
> " no range check required for " meant  what ? and  the
> option 'transport.socket.remote-port' is deprecated ?? 
> Why ? i modify configuration files  use options which   its own . 
>  
>  
> 2009-03-05
> ------------------------------------------------------------------------
> eagleeyes
> ------------------------------------------------------------------------
> *From:* Io Noci
> *Sent:* 2009-03-05  03:31:26
> *To:* eagleeyes
> *Cc:*
> *Subject:* Re: [Gluster-users] question of Replicate(GFS 2.0)
> see inline at volume rep-ns, perhaps that's all.
> eagleeyes wrote:
>>   Hello:
>>       I have a question about Replicate. When I use two servers and one
>> client, the configuration files are these:
>>     
>>      GFS server 1 and 2
>> glusterfsd.vol
>> =======================================================
>> volume posix1
>>   type storage/posix                    # POSIX FS translator
>>   option directory /data1        # Export this directory
>> end-volume
>>  
>> volume posix2
>>   type storage/posix                    # POSIX FS translator
>>   option directory /data2        # Export this directory
>> end-volume
>> ### Add POSIX record locking support to the storage brick
>> volume brick1
>>   type features/posix-locks
>>   #option mandatory-locks on          # enables mandatory locking on all files
>>   subvolumes posix1
>> end-volume
>>  
>> volume brick2
>>   type features/posix-locks
>>   #option mandatory-locks on          # enables mandatory locking on all files
>>   subvolumes posix2
>> end-volume
>>  
>> volume ns
>>   type storage/posix                    # POSIX FS translator
>>   option directory /export    # Export this directory
>> end-volume
>>  
>> volume name
>>   type features/posix-locks
>>   #option mandatory-locks on          # enables mandatory locking on all files
>>   subvolumes ns
>> end-volume
>>  
>> ### Add network serving capability to above brick.
>> volume server
>>   type protocol/server
>>   option transport-type tcp                 # For TCP/IP transport
>>   subvolumes    brick1 brick2 name
>>   option auth.addr.brick1.allow *               # access to "brick" volume
>>   option auth.addr.brick2.allow *               # access to "brick" volume
>>   option auth.addr.name.allow *               # access to "brick" volume
>> end-volume
>> =================================================================
>> GFS client
>> volume client1
>>   type protocol/client
>>   option transport-type tcp     # for TCP/IP transport
>>   option remote-host 172.20.92.249      # IP address of the remote brick
>>   option remote-subvolume brick1        # name of the remote volume
>> end-volume
>> ### Add client feature and attach to remote subvolume of server2
>> volume client2
>>   type protocol/client
>>   option transport-type tcp     # for TCP/IP transport
>>   option remote-host 172.20.92.249      # IP address of the remote brick
>>   option remote-subvolume brick2        # name of the remote volume
>> end-volume
>>  
>> volume client3
>>   type protocol/client
>>   option transport-type tcp     # for TCP/IP transport
>>   option remote-host 172.20.92.250      # IP address of the remote brick
>>   option remote-subvolume brick1        # name of the remote volume
>> end-volume
>> volume client4
>>   type protocol/client
>>   option transport-type tcp     # for TCP/IP transport
>>   option remote-host 172.20.92.250     # IP address of the remote brick
>>   option remote-subvolume brick2        # name of the remote volume
>> end-volume
>>  
>> volume ns1
>>   type protocol/client
>>   option transport-type tcp     # for TCP/IP transport
>>   option remote-host 172.20.92.249      # IP address of the remote brick
>>   option remote-subvolume name        # name of the remote volume
>> end-volume
>> volume ns2
>>   type protocol/client
>>   option transport-type tcp     # for TCP/IP transport
>>   option remote-host 172.20.92.250      # IP address of the remote brick
>>   option remote-subvolume name        # name of the remote volume
>> end-volume
>>  
>> ## Add replicate feature.
>> volume rep1
>>   type cluster/replicate
>>   subvolumes client1 client3
>> end-volume
>>  
>> volume rep2
>>   type cluster/replicate
>>   subvolumes client2 client4 
>> end-volume
>>  
>> volume rep-ns
>>   type cluster/replicate
> missing 'subvolumes ns1 ns2' here; see the corrected volume below.
>>  end-volume
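> for reference, the corrected volume, using the ns1 and ns2 client
> volumes defined above, would read:
>
>   volume rep-ns
>     type cluster/replicate
>     subvolumes ns1 ns2
>   end-volume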
>>  
>> volume bricks
>>   type cluster/unify
>>   option namespace rep-ns # this will not be storage child of unify.
>>   subvolumes rep1 rep2
>>   option self-heal background # foreground off # default is foreground
>>   option scheduler rr
>> end-volume
>> ========================================================================
>>     glusterfs -f /etc/glusterfs/glusterfs.vol /data
>>  
>> After mounting, I touched four files 11 22 33 44 in /data; because of
>> Replicate, all four files exist on both 92.249 and 92.250.
>> On the GFS client I ran echo "aaaaaaaaaaaaaaa" > 11, then on 92.249 I
>> ran rm -fr /data1/11, as if the file had been lost. After that the
>> client could not read 11 correctly: after an "ll -h" the file appeared
>> again on 92.249, but instead of the right "aaaaaaaaaaaaaaa" it held
>> garbage like "@@@@@@@@@@@"!
>> If I copy 11 from 92.250 to 92.249, the GFS client reads the file
>> "aaaaaaaaaaaaaaa" correctly. Is my configuration wrong? Why is the
>> file not healed correctly?
>>  
>>  
>>    
>> 2009-03-04
>> ------------------------------------------------------------------------
>> eagleeyes
>>  
>> 
>> 




