[Gluster-devel] about GlusterFS configuration

Onyx lists at bmail.be
Sun Nov 18 13:43:36 UTC 2007


We did some tests with latency and loss on an AFR link. Both have a
rather big impact on write performance. The write-behind translator
helps a lot in this situation.
You can easily test latency and loss on a link in the lab by using the
network emulation (netem) functionality of Linux, for example like this:

tc qdisc add dev eth0 root netem delay 10ms
tc qdisc change dev eth0 root netem loss 0.5%
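
You can remove the emulation again with "tc qdisc del dev eth0 root".

For reference, loading write-behind on top of an AFR volume is just a matter of
stacking translators in the client spec file. A minimal sketch (the volume
names, addresses and aggregate-size value below are placeholders, not taken
from our actual setup) could look like this:

volume remote1
  type protocol/client
  option transport-type tcp/client
  # first AFR peer (placeholder address)
  option remote-host 10.0.0.1
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp/client
  # second AFR peer (placeholder address)
  option remote-host 10.0.0.2
  option remote-subvolume brick
end-volume

volume afr0
  type cluster/afr
  subvolumes remote1 remote2
end-volume

volume wb
  type performance/write-behind
  # batch small writes into larger network writes before they hit the AFR link
  option aggregate-size 1MB
  subvolumes afr0
end-volume

As a rough guide for the WAN case discussed below: 1000 km of fibre adds about
5 ms of one-way propagation delay (roughly 10 ms round trip) before any loss or
queuing, so the 10 ms netem delay above is a realistic starting point.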



Felix Chu wrote:
> I am interested to see whether it is OK to build up a global clustered storage
> across different IDCs with a single namespace. That means the Gluster
> servers would be located in different IDCs.
>
> Next week I will organize more detailed info about the test environment and
> send it to you.
>
> Thanks again for your reply. This project is pretty good and we are happy to
> continue testing it.
>
> -----Original Message-----
> From: krishna.srinivas at gmail.com [mailto:krishna.srinivas at gmail.com] On
> Behalf Of Krishna Srinivas
> Sent: Friday, November 16, 2007 5:24 PM
> To: Felix Chu
> Cc: gluster-devel at nongnu.org
> Subject: Re: [Gluster-devel] about GlusterFS configuration
>
> Felix,
>
> Sometimes touch does not call open(), so a better way would be the "od -N1" command.
>
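> For example, to force an open() on every file under the mount point, something
> like this run from a client should do it (the mount path is just a placeholder):
>
> # read one byte of each file so every file gets open()ed and self-healed
> find /mnt/glusterfs -type f -exec od -N1 {} \; > /dev/null
>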
> Regarding your setup, can you give more details? How would GlusterFS be set up
> across 20 data centers? What would the link speed be between them?
>
> Krishna
>
> On Nov 16, 2007 2:38 PM, Felix Chu <felixchu at powerallnetworks.com> wrote:
>   
>> Hi Krishna,
>>
>> Thanks for your quick reply.
>>
>> About self-heal, does that mean that before the open() event is triggered, the
>> whole replication cluster will have one less replica than in the normal state?
>> If our goal is to bring the replication status back to normal (the same number
>> of replicas as usual), we need to trigger open() for all files stored in the
>> cluster file system, right? If so, the easiest way is to run "touch *" in the
>> clustered mount point, right?
>>
>> By the way, we will set up a testing environment to create a GlusterFS volume
>> across 20 data centres, with point-to-point fibre between each data centre. The
>> longest distance between two data centres is about 1000 km. Do you think
>> GlusterFS can be applied in this kind of environment? Is there any minimum
>> network quality required between storage servers and clients?
>>
>> Regards,
>> Felix
>>
>>
>> -----Original Message-----
>> From: krishna.zresearch at gmail.com [mailto:krishna.zresearch at gmail.com] On
>> Behalf Of Krishna Srinivas
>> Sent: Friday, November 16, 2007 4:19 PM
>> To: Felix Chu
>> Cc: gluster-devel at nongnu.org
>> Subject: Re: [Gluster-devel] about GlusterFS configuration
>>
>> On Nov 16, 2007 1:18 PM, Felix Chu <felixchu at powerallnetworks.com> wrote:
>>     
>>> Hi all,
>>>
>>> I am a new user of this GlusterFS project. I just started testing in a local
>>> environment with 3 server nodes and 2 client nodes.
>>>
>>> So far, it works fine and now I have two questions:
>>>
>>> 1. I cannot clearly understand the option related to "namespace". I find
>>> that most of the server conf files have separate "DS" and "NS" volumes;
>>> what is the purpose of this?
>>>
>>>       
>> The namespace is used:
>> * to assign inode numbers
>> * for readdir(): instead of reading the contents of all the subvolumes, unify
>> does readdir()s just from the NS volume.
>>
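>> For example, on the client side the DS volumes and the NS volume are tied
>> together with a cluster/unify translator, roughly like this (the subvolume
>> names here are placeholders for protocol/client volumes defined earlier in
>> the same spec file):
>>
>> volume mailspool-unify
>>   type cluster/unify
>>   # unify keeps the directory structure and inode numbers in the NS volume
>>   option namespace mailspool-ns
>>   option scheduler rr
>>   subvolumes mailspool-ds1 mailspool-ds2 mailspool-ds3
>> end-volume
>>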
>>> e.g. in
>>> http://www.gluster.org/docs/index.php/GlusterFS_High_Availability_Storage_with_GlusterFS
>>>
>>> there are "ds" and "ns" volumes in this config:
>>>
>>> volume mailspool-ds
>>>   type storage/posix
>>>   option directory /home/export/mailspool
>>> end-volume
>>>
>>> volume mailspool-ns
>>>   type storage/posix
>>>   option directory /home/export/mailspool-ns
>>> end-volume
>>>
>>> 2. In my testing environment, I applied the replication function to replicate
>>> from one server to the other 2 servers. Then I unplugged one of the servers.
>>> On the client side it was still OK to access the mount point. After a while, I
>>> brought the unplugged server up again and found that none of the data written
>>> during the outage period appears on this server. Are any steps required to
>>> sync data back to the newly recovered server?
>>>
>>>       
>> You need to open() a file to trigger self-heal for it.
>>
>> Krishna