[Gluster-users] Feedback and Questions on afr+unify

Prabhu Ramachandran prabhu at aero.iitb.ac.in
Thu Dec 18 12:04:54 UTC 2008


Hi,

Thanks for the response.

Krishna Srinivas wrote:
>>  - This one is very minor. It wasn't explicitly clear from the docs that to
>> use unify one needs (a) locking and (b) a namespace.  The place this is
>> mentioned is in "Understanding unify translator", which isn't the first
>> place a user would look.  It would be nice if this were mentioned somewhere
>> more prominent.
> 
> Unify needs a namespace; what do you mean by "locking" here?

The fact that I needed to turn the posix-locks feature on:

volume brick1
   type features/posix-locks
   option mandatory on          # enables mandatory locking on all files
   subvolumes posix1
end-volume

Without it I was running into problems.
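
(For the namespace side of things, my unify declaration looks roughly like
this; the brick names are simplified, and brick-ns and brick1..brick4 are
defined elsewhere in the spec file:)

volume unify1
   type cluster/unify
   option namespace brick-ns    # dedicated namespace volume
   option scheduler rr          # round-robin scheduler
   subvolumes brick1 brick2 brick3 brick4
end-volume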

>>  - There are a lot of options to choose from and without Anand's initial help
>> in person I would be lost trying to choose a scheduler.  It would be great
>> if there were some recommended solutions.  I understand the software is
>> rapidly growing but this would make life easier for new adopters.
> 
> True. We will give "cookbook" recommended setups in our documentation.

That would be great!

>> 2008-12-18 00:25:40 E [addr.c:117:gf_auth] auth/addr: client is bound to
>> port 59327 which is not privilaged
>> 2008-12-18 00:25:40 E [authenticate.c:193:gf_authenticate] auth: no
>> authentication module is interested in accepting remote-client
>> 10.24.1.4:59327
>> 2008-12-18 00:25:40 E [server-protocol.c:6842:mop_setvolume] server: Cannot
>> authenticate client from 10.24.1.4:59327
>>
>>  I worked around this problem by exposing the machine as a DMZ host from
>> the router, but this is not ideal.  Is there something I can do to fix this?
> 
> http://www.gluster.org/docs/index.php/GlusterFS_Translators_v1.3#Authenticate_modules
> 
> You can use "login" based authentication to get around this problem.

Thanks, yes, that would work, but for some reason I feel that a 
username/password is weaker than restricting access to an IP.
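
If I read that page right, the login setup would be roughly the following
(the username, password, and names here are made up):

# on the server:
volume server
   type protocol/server
   option transport-type tcp/server
   subvolumes brick1
   option auth.login.brick1.allow prabhu       # user allowed on this volume
   option auth.login.prabhu.password secret    # that user's password
end-volume

# and in the client spec:
volume client1
   type protocol/client
   option transport-type tcp/client
   option remote-host server.example.com
   option remote-subvolume brick1
   option username prabhu
   option password secret
end-volume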

>>  - What would happen if I changed the scheduler to something else?  Would
>> that hose the data?  I haven't moved all my data yet, so I can still
>> experiment.  I am not likely to tinker with the setup later on, though,
>> since it will contain important data.
> 
> Changing schedulers will not cause any problem.

Interesting, so any existing data on the disks will not be affected? 
How does that work?  Does this mean I can fill the disks a priori, before 
unifying them?

>>  - What would happen if I added another brick, say another disk to the
>> existing set on one of the machines?  Would it break the round-robin
>> scheduler that I am using?  I see from the FAQ that this should work with
>> the ALU scheduler, but will it work with RR?
> 
> ALU will be useful when you add servers later, i.e. it will see free
> disk space and schedule creation of new files there. RR will just
> round-robin.
> 
> You can experiment with the new "DHT" translator in place of unify.

OK, I can do that.  I see that DHT does not need the extra namespace disk, 
so I guess I can clear out that directory.
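
If I understand correctly, the unify volume would then be replaced by
something roughly like this (names simplified):

volume dht1
   type cluster/dht
   subvolumes part1 part2 part3 part4    # no namespace volume needed
end-volume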

> You can go with a more standard setup of
> unify->afr->client instead of afr->unify->client

Sorry, I am not sure what you mean by the above; the arrow directions 
aren't completely clear to me.  My understanding is that my current setup 
is unify->afr->client (I unify 4 partitions on one machine, AFR them 
across machines, and then mount that as a client), which is what you have 
mentioned above too.  I am confused now that you say I have 
afr->unify->client instead.
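
To be concrete, the client-side graph I have is roughly the following
(hosts and volume names simplified), i.e. unify at the bottom and AFR on
top, with the client mounting afr1:

# the unified volume exported by each machine
volume machine1
   type protocol/client
   option transport-type tcp/client
   option remote-host server1
   option remote-subvolume unify1    # unify of the 4 partitions there
end-volume

volume machine2
   type protocol/client
   option transport-type tcp/client
   option remote-host server2
   option remote-subvolume unify2
end-volume

# AFR across the machines; this is what gets mounted
volume afr1
   type cluster/afr
   subvolumes machine1 machine2
end-volume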

> When you have afr over unify and one of unify's subvolumes goes down, afr's
> self-heal will create the missing files, which you don't want to happen.

OK, so you are saying that I should simply switch to DHT, remove the 
namespace directory, and continue with the setup, and then this problem 
will not occur?

cheers,
prabhu



