[Gluster-users] loading 'features/posix-locks' on server side may help your application
Vu Tong Minh
vtminh at fpt.net
Wed Apr 8 04:31:40 UTC 2009
Hi,
After changing the configuration, performance is slow. I added
write-behind and read-ahead to my configuration, but performance became worse.
1,2,3's config:

volume storage1
  type protocol/client
  option transport-type tcp/client
  option remote-host 210.245.xxx.xxx
  option remote-subvolume locks
end-volume

volume writeback
  type performance/write-behind
  option cache-size 1MB
  option block-size 1MB
  subvolumes storage1
end-volume

volume readahead
  type performance/read-ahead
  option page-size 2MB
  option page-count 16
  subvolumes writeback
end-volume
4's config:

volume brick
  type storage/posix
  option directory /store
end-volume

volume locks
  type features/posix-locks
  subvolumes brick
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.addr.locks.allow *
  subvolumes locks
end-volume
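(For reference, a minimal sketch of how the slowdown could be quantified from a client: write and then read a large file through the mount point and report MB/s. The mount point /mnt/glusterfs and the 256 MB test size are illustrative assumptions, not taken from this thread.)

# Rough throughput check for a mounted GlusterFS client volume.
# MOUNT_POINT is an assumed example path; adjust to your actual mount.
import os
import time

MOUNT_POINT = "/mnt/glusterfs"
TEST_FILE = os.path.join(MOUNT_POINT, "throughput_test.bin")
BLOCK = b"\0" * (1024 * 1024)   # 1 MB blocks (same size as the write-behind block-size above)
BLOCKS = 256                    # 256 MB total

def write_test():
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(BLOCKS):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())    # make sure the data actually reaches the server
    return BLOCKS / (time.time() - start)

def read_test():
    start = time.time()
    with open(TEST_FILE, "rb") as f:
        while f.read(1024 * 1024):
            pass
    return BLOCKS / (time.time() - start)

if __name__ == "__main__":
    print("write: %.1f MB/s" % write_test())
    print("read:  %.1f MB/s" % read_test())
    os.remove(TEST_FILE)

Running it once with the write-behind/read-ahead translators loaded and once with plain protocol/client would show whether the performance translators are actually the cause of the slowdown.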
You can check this at: http://mirror-fpt-telecom.fpt.net/
Any suggestions?
Vikas Gorur wrote:
> 2009/4/1 Vu Tong Minh <vtminh at fpt.net>:
>
>> I tried to load posix_locks on node1 too, but I got an error:
>>
>
> Sorry, my bad. Got a little confused.
>
> The problem with your config is that you need to export the "locks"
> volume from the server and specify "locks" as the remote-subvolume on
> the client.
>
> So your configuration should be:
>
>
> 1,2,3's config:
> volume storage1
> type protocol/client
> option transport-type tcp/client
> option remote-host 210.245.xxx.xxx
> option remote-subvolume locks
> end-volume
>
> 4's config:
> volume brick
> type storage/posix
> option directory /store
> end-volume
>
> volume locks
> type features/posix-locks
> subvolumes brick
> end-volume
>
> volume server
> type protocol/server
> option transport-type tcp/server
> option auth.addr.locks.allow *
> subvolumes locks
> end-volume
>
>
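(Since the subject of this thread is loading features/posix-locks on the server side, a quick way to confirm that the exported "locks" volume is actually in the data path is to take a POSIX lock through the client mount. A minimal sketch follows; the mount point is again an assumed example path.)

# Check that POSIX (fcntl) locking works through the client mount,
# i.e. that the features/posix-locks volume exported by the server is in use.
# MOUNT_POINT is an assumed example path.
import fcntl
import os

MOUNT_POINT = "/mnt/glusterfs"
path = os.path.join(MOUNT_POINT, "lock_test")

with open(path, "w") as f:
    fcntl.lockf(f, fcntl.LOCK_EX)   # raises OSError if locking is not supported on this mount
    print("exclusive lock acquired")
    fcntl.lockf(f, fcntl.LOCK_UN)

os.remove(path)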